Next Article in Journal
Belief Entropy-Based MAGDM Algorithm Under Double Hierarchy Quantum-like Bayesian Networks and Its Application to Wastewater Reuse
Previous Article in Journal
Global Sensitivity and Mathematical Modeling for Zoonotic Lassa Virus Transmission and Disability in Critical Cases in the Light of Fractional Order Model
Previous Article in Special Issue
The Circle Group Heuristic to Improve the Efficiency of the Discrete Bacterial Memetic Evolutionary Algorithm Applied for TSP, TRP, and TSPTW
error_outline You can access the new MDPI.com website here. Explore and share your feedback with us.
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

MSCSO: A Modified Sand Cat Swarm Optimization for Global Optimization and Multilevel Thresholding Image Segmentation

1
College of Packaging Design Arts, Hunan University of Technology, Zhuzhou 412007, China
2
Graduate School of Government & Business, Yonsei University, Seoul 03722, Republic of Korea
3
College of Design, Hanyang University, Ansan 15588, Republic of Korea
*
Author to whom correspondence should be addressed.
Symmetry 2025, 17(11), 2012; https://doi.org/10.3390/sym17112012
Submission received: 28 September 2025 / Revised: 2 November 2025 / Accepted: 7 November 2025 / Published: 20 November 2025

Abstract

To address the limitations of the original Sand Cat Swarm Optimization (SCSO) algorithm—such as static strategy selection, insufficient population diversity, and coarse boundary handling—this paper proposes a multi-strategy enhanced version, namely the Modified Sand Cat Swarm Optimization (MSCSO). The algorithm improves performance through three core strategies: (1) an adaptive strategy selection mechanism that dynamically adapts to different optimization phases; (2) an adaptive crossover–mutation strategy inspired by differential evolution, in which mutation vectors are generated with the guidance of the global best solution and updated via binomial crossover, thereby enhancing both population diversity and local search capability; and (3) a boundary control mechanism guided by the global best solution, which repairs out-of-bound solutions by relocating them between the global best and the boundary, thus preserving useful search information and avoiding oscillation near the limits. To validate the performance of MSCSO, extensive experiments were conducted on the CEC2020 and CEC2022 benchmark suites under 10- and 20-dimensional scenarios, where MSCSO was compared with seven algorithms, including Particle Swarm Optimization (PSO) and Gray Wolf Optimizer (GWO). The results demonstrate that MSCSO consistently outperforms its competitors on unimodal, multimodal, and hybrid functions. Notably, MSCSO achieved the best Friedman ranking across all dimensions. Ablation studies further confirm that the three proposed strategies exhibit strong synergy, collectively accelerating convergence and enhancing stability. In addition, MSCSO was applied to multilevel threshold image segmentation, where Otsu’s criterion was adopted as the objective function and experiments were conducted on five benchmark images with 4–10 thresholds. The results show that MSCSO achieves superior segmentation quality, significantly outperforming the comparison algorithms. Overall, this study demonstrates that MSCSO effectively balances exploration and exploitation without increasing computational complexity, providing not only a powerful tool for global optimization but also a reliable technique for engineering tasks such as multilevel threshold image segmentation. These findings highlight its strong theoretical significance and promising application potential.

1. Introduction

Image segmentation constitutes a core process in the domains of computer vision and pattern recognition, dedicated to dividing an image into several distinct regions characterized by meaningful structural or semantic information, thereby facilitating the identification of salient objects within the scene [1,2]. Extensive applications of this method can be observed in domains such as medical diagnostics, remote sensing imagery, autonomous navigation, precision farming, and aeronautical research [3]. Generally, contemporary segmentation techniques are categorized into four primary groups: thresholding methods, region-growing procedures, clustering-based algorithms, and semantic segmentation networks built upon deep learning [4].
Within these segmentation paradigms, threshold-based methods are widely favored across academic and industrial settings for their ease of use, high processing speed, and stable effectiveness [5]. Threshold-based segmentation can be further classified into single-level and multi-level forms, depending on application requirements. The single-level approach separates the foreground and background with one threshold, while the multi-level variant applies multiple thresholds to produce more refined partitions. Since threshold selection critically determines segmentation accuracy and subsequent analytical reliability, efficient optimization of threshold values has become a central research topic. Among various criteria, Otsu’s method based on inter-class variance and Kapur’s entropy maximization technique are the most commonly used [6,7].
Multilevel thresholding can be regarded as a complex combinatorial optimization task. As the number of threshold levels increases, computational demands escalate rapidly, causing conventional techniques to struggle with both processing efficiency and solution accuracy [8,9]. Conversely, metaheuristic optimization methods inspired by the collective behaviors observed in natural organisms have demonstrated superior performance, as they are capable of producing high-quality segmentation thresholds within acceptable computational costs.
Over the past few years, a variety of bio-inspired computational methods have been introduced and proven effective in tackling segmentation problems. For example, Particle Swarm Optimization (PSO) [10], proposed based on the foraging behavior of bird flocks and fish schools, abstracts each candidate solution of an optimization problem as a “particle.” Each particle updates its velocity and position by tracking its personal best position and the global best position of the swarm while balancing exploration and exploitation through parameters such as inertia weight and acceleration coefficients. Another representative method is the Gray Wolf Optimizer (GWO), designed by Mirjalili et al. by simulating the hierarchical hunting behavior of gray wolves. In GWO, the α, β, and δ wolves guide the population search, and strategies such as encircling and hunting prey are used to update positions. This enables strong local exploitation ability, particularly in low-dimensional unimodal function optimization [11]. However, GWO exhibits limited exploration capability in high-dimensional and complex functions, where it often suffers from premature convergence and stagnation in later iterations, making it difficult to escape local optima [12,13]. The Whale Optimization Algorithm (WOA) represents another classical approach, inspired by the bubble-net hunting strategy of humpback whales. It employs three mechanisms—shrinking encircling, spiral updating, and random search—to achieve optimization. WOA provides a broad global search range and exhibits strong adaptability to nonlinear problems [14]. Nevertheless, it also suffers from clear performance deficiencies: the algorithm relies solely on random parameters to control the switching between search phases, lacking dynamic adaptation to the iterative process. As a result, it often shows low exploration efficiency in the early stage and insufficient exploitation accuracy in the later stage, with performance degradation becoming more pronounced in high-dimensional scenarios [15,16].
Global optimization approaches have been successfully employed in numerous disciplines. For instance, Yu et al. [17] presented an integrated forecasting approach that couples CEEMD with the Whale Optimization Algorithm and Kernel Extreme Learning Machine to achieve more accurate short-term wind power predictions. Inspired by the foraging, chasing, attacking, and food-storing behaviors of red-billed blue magpies, Fu et al. [18] developed the Red-Billed Blue Magpie Optimizer (RBMO) to address two-dimensional and three-dimensional UAV path planning challenges. Zhongda Lu et al. [19] designed an Improved Sparrow Search Algorithm (ISSA) to achieve a balanced optimization of environmental and economic objectives in isolated microgrid systems. Additionally, Kiana et al. [20] proposed a multi-objective Beluga Whale Optimization (MOBWO) algorithm specifically for solving feature selection problems.
With the advancement of research, numerous novel swarm intelligence algorithms have emerged, each inspired by diverse behaviors observed in nature. Examples include the Crested Porcupine Optimizer (CPO), which draws inspiration from the defensive behaviors of crested porcupines [21]; the Ant Colony Optimization (ACO) algorithm, inspired by the social foraging behavior of ants [22]; and the Bat Algorithm (BA), which mimics the echolocation-based hunting strategy of bats [23]. Similarly, the Nutcracker Optimizer (NOA) was developed based on Clark’s nutcracker’s ability to locate seeds, store them in caches, and later retrieve them using landmarks from different angles [24]. The Animated Oat Optimization (AOO) algorithm models the three unique behavioral patterns of wild oats [25], while the Snake Optimizer (SO) simulates snake foraging and reproductive behaviors [26]. PIMO, short for Projection-Iterative-Methods-based Optimizer, is a newly developed metaheuristic approach that draws its inspiration from projection iterative computation techniques [27].
Another representative approach is the Dung Beetle Optimizer (DBO), which simulates the rolling, navigating, foraging, and reproductive patterns observed in dung beetles [28]; the Marine Predators Algorithm (MPA), derived from the foraging strategies of marine predators [29]; and the Pathfinder Algorithm (PFA), motivated by the collective behavior of animals in locating optimal food sources or prey [30]. The Harris Hawks Optimization (HHO) algorithm simulates the cooperative predatory strategies of Harris hawks [31], whereas the Sparrow Search Algorithm (SSA) models the foraging and anti-predation behaviors of sparrows [32]. The Grasshopper Optimization Algorithm (GOA) was inspired by the life habits of grasshoppers [33], and the Remora Optimization Algorithm (ROA) reflects the foraging process of remoras attaching to different host species [34].
The Black Widow Optimization (BWO) method is founded on the specialized mating practices of black widow spiders [35]; the Golden Eagle Optimizer (GEO) is motivated by the spiral hunting maneuvers observed in golden eagles [36]; and the Starling Murmuration Optimizer (SMO) simulates the astonishing collective noise-making behavior of starlings [37]. Other emerging approaches include the Crayfish Optimization Algorithm (COA), which captures crayfish foraging, shelter-seeking, and competitive behaviors [38]; the Coyote Optimization Algorithm (COA), motivated by the social organization and lifestyle of North American coyotes [39]; and the Chimpanzee Optimization Algorithm (ChOA), which models the four cooperative behaviors of chimpanzees—attacking, driving, blocking, and chasing prey [40]. Finally, the Artificial Rabbits Optimization (ARO) algorithm was proposed based on rabbits’ survival strategies, including detour foraging and random hiding [41]; the Great Wall Construction Algorithm (GWCA), inspired by the competition and elimination mechanisms among workers during the construction of the Great Wall in ancient China [42]; the Cuckoo Catfish Optimizer (CCO), which simulates the searching, predation, and parasitic behaviors observed in cuckoo catfish [43]; and the Kirchhoff’s Law Algorithm (KLA), a novel optimization method inspired by electrical circuit laws, particularly Kirchhoff’s Current Law (KCL) [44].
Although existing swarm intelligence optimization algorithms each exhibit certain advantages, they commonly face the bottleneck of imbalanced exploration–exploitation trade-off when tackling complex optimization problems such as high-dimensional, multimodal, and non-convex tasks [45,46,47]. Most algorithms adopt static strategies to control the search process—for example, using fixed probabilities for strategy selection or relying on a single parameter to switch between exploration and exploitation phases. Such designs fail to dynamically adapt to the characteristics of the optimization problem (e.g., unimodal vs. multimodal, low-dimensional vs. high-dimensional) and the different requirements of iteration stages (strong exploration is needed in the early stage to cover the search space, while strong exploitation is needed in the later stage to refine the optimal solution). As a result, these algorithms are prone to missing the global optimum in the early stage and suffer from low convergence efficiency in the later stage due to poor strategy adaptability [48,49]. These shortcomings limit their performance in more complex real-world applications. For instance, in multilevel threshold image segmentation, as the number of thresholds increases, the search space expands exponentially, making it difficult for traditional algorithms to efficiently identify the global optimal threshold combination.
The Sand Cat Swarm Optimization (SCSO) algorithm [50], a recently proposed swarm intelligence approach, introduces a new perspective to this field. Inspired by the unique predatory behaviors of sand cats in desert environments, Seyyedabbasi et al. formulated SCSO as a two-stage optimization framework that abstracts the biological behavior of sand cats into “searching for prey” and “attacking prey.” This biologically inspired design enables SCSO to demonstrate a certain level of competitiveness in low-dimensional and simple function optimization. For example, in unimodal function optimization, its two-stage strategy allows relatively fast early convergence.
However, the original SCSO algorithm still suffers from several limitations: its exploration and exploitation strategies employ a fixed probability selection mechanism that cannot adapt dynamically during the optimization process, often causing the algorithm to miss the global optimum in early stages and exhibit slow convergence with low accuracy in later phases; the population diversity maintenance mechanism is relatively weak, making the algorithm prone to premature convergence in high-dimensional environments and unable to escape local optima; the boundary handling method is simplistic, as randomly resetting out-of-bounds solutions wastes valuable search information and adversely affects convergence efficiency and stability. These shortcomings restrict the performance of SCSO in high-dimensional complex optimization tasks and engineering applications such as image segmentation, highlighting the need for further improvements.
To address the limitations of the original SCSO and meet practical application requirements, this study proposes a Modified Sand Cat Swarm Optimization (MSCSO) algorithm, which establishes a more efficient swarm intelligence framework through three core enhancement strategies. First, an adaptive strategy selection mechanism is introduced. Unlike static probability assignments, this mechanism dynamically updates the application probabilities of the two search strategies (P1, P2) using adaptive formulas, enabling the algorithm to autonomously adjust to the most suitable strategy throughout the optimization process. This resolves the issue of insufficient exploration in the early stage and inefficient exploitation in the later stage. Second, a DE-inspired adaptive crossover–mutation strategy is integrated. After the standard position update of SCSO, a dynamic crossover probability is generated based on a normal distribution. Using three randomly selected individuals along with the global best solution, a mutation vector is constructed. A trial vector is then generated via binomial crossover, and the better individual is retained. This design enhances population diversity and local search ability while achieving a better balance between exploration and exploitation. Third, a global-best-guided boundary control mechanism is developed. By incorporating the current global best solution, when a candidate solution exceeds the boundary, it is no longer simply reset or reflected. Instead, the out-of-bounds component is repaired by relocating it to a random position between the global best solution Xbest and the corresponding boundary. This not only preserves valuable search information but also avoids boundary oscillation, thereby improving convergence efficiency.
The main contributions of this paper can be summarized as follows:
  • Algorithmic Innovation: To overcome the limitations of the original Sand Cat Swarm Optimization (SCSO)—namely static strategy selection, insufficient population diversity, and coarse boundary handling—this study proposes a Multi-Strategy Sand Cat Swarm Optimization (MSCSO) algorithm by integrating three core strategies. Specifically, an adaptive strategy selection mechanism is introduced to dynamically balance exploration and exploitation; a differential-evolution-inspired crossover–mutation strategy is designed to maintain population diversity; and a global-best-guided boundary control mechanism is constructed to preserve useful search information. These enhancements significantly improve the exploration ability, convergence efficiency, and stability of the algorithm when solving high-dimensional and complex optimization problems.
  • Comprehensive Performance Evaluation: Extensive experiments were conducted on the CEC2020 and CEC2022 international benchmark suites under 10- and 20-dimensional scenarios, covering unimodal, multimodal, and hybrid functions. MSCSO was compared with seven state-of-the-art algorithms using indicators such as mean fitness, standard deviation, convergence curves, and Friedman ranking. The results consistently verify the superior performance of MSCSO across different types of functions. Furthermore, ablation analyses were employed to evaluate the independent roles and the cooperative influence of the three strategies introduced in this work.
  • Engineering Application: MSCSO was applied to a practical engineering scenario involving multilevel threshold segmentation, using Otsu’s inter-class variance as the optimization criterion. Five standard images (baboon, camera, girl, lena, terrace) were tested with threshold levels ranging from 4 to 10. The resulting segmentation performance was quantified using PSNR, SSIM, FSIM, and objective function values. Results demonstrate that MSCSO achieves high-quality segmentation efficiently, highlighting both its effectiveness in complex image processing applications and its potential for solving practical engineering optimization challenges.
The remainder of this paper is organized as follows. Section 2 introduces the original Sand Cat Swarm Optimization (SCSO), including its biological inspiration and mathematical framework. Section 3 details the three improvement strategies of MSCSO: adaptive strategy selection, adaptive crossover–mutation, and global-best-guided boundary control. Section 4 presents global optimization experiments on the CEC2020 and CEC2022 benchmark suites, analyzing convergence, numerical results, and the contributions of each strategy through ablation studies. Section 5 applies MSCSO to multilevel threshold image segmentation, evaluating performance with PSNR, SSIM, FSIM, and visual results. Section 6 concludes the paper, summarizing the main findings, highlighting strengths and limitations, and discussing future research directions.

2. Sand Cat Swarm Optimization (SCSO)

The sand cat, a small feline species adapted to extreme desert habitats in Africa, possesses ear canals and eardrums that are several times larger than those of domestic cats. This distinctive auditory anatomy enables the production and perception of low-frequency sounds, allowing sand cats to detect subterranean prey such as insects and rodents, accurately localize them, and efficiently capture them [51,52].
Drawing on the behavioral patterns of sand cats, their hunting process can be conceptually divided into two main phases: prey searching and prey attacking. The Sand Cat Swarm Optimization (SCSO) algorithm takes inspiration from these predatory behaviors, with its framework explicitly modeled around these two stages. The corresponding mathematical formulations for each phase are presented below:

2.1. Initialize Population

In a dimensional optimization problem, each sand cat can be represented as a 1 × d i m vector, corresponding to a candidate solution. For the variable set X 1 , X 2 , , X d i m , each element X must lie within the search space bounds, i.e., between the lower bound l b and upper bound u b .
During the initialization phase, a N × d i m matrix is constructed, where N denotes the population size. In each iteration, the current best solution of the population is calculated. If the best solution obtained in the current iteration is better than that of the previous generation, it is saved as the global best solution; otherwise, no update is made. The initialization formula is given by [50]:
X = u b l b · r a n d 0 , 1 + l b ,

2.2. Search for Prey (Exploration Phase)

The position of each sand cat is denoted as X c i . The prey-searching process of sand cats relies on the emission of low-frequency sounds, and their perception range of these sounds is defined as r g , which is calculated as:
r g = S M S M · t T m a x
where S M = 2 represents the maximum auditory range of the sand cat, t is the current iteration number, and T m a x is the maximum number of iterations.
A parameter R is introduced to control the switching between the exploration and exploitation phases in SCSO. Its value is guided by r g and computed as [50,52]:
R = 2 · r g · r a n d 0 , 1 r g
During the prey-searching phase, each sand cat randomly searches for a new position within its perception range to avoid being trapped in local optima:
r = r g · r a n d 0 , 1
where r denotes the perception range of an individual sand cat. Based on this range, the sand cat estimates the potential location of the prey and updates its position using the candidate global best position X b e s t , its current position X c , and its perception range r as follows:
X c t + 1 = r · X b e s t t r a n d 0 , 1 · X c t

2.3. Attacking Prey (Exploitation Phase)

To clearly describe the predatory behavior of sand cats, the SCSO algorithm defines each sand cat’s perception range of low-frequency noise as a circular area. At each iteration, a roulette-wheel selection method is used to randomly determine an angular displacement φ , which specifies the movement direction of the sand cat within the circular area. The angle φ is randomly selected from 0° to 360°, normalized to the range [−1, 1]. This design assigns a different random movement direction to each sand cat, enhancing the stochasticity of the algorithm and effectively avoiding premature convergence [53,54]. The position update within the circular perception area is given by:
X r a n d t = r a n d 0 , 1 · X b e s t t X c t
X c t + 1 = X b e s t t r · X r a n d t · c o s φ
In summary, in the SCSO algorithm, the parameter R determines the switching between the exploration and exploitation phases. Given that the sand cat’s perception range r g varies from 2 to 0, the corresponding R value computed by Equation (3) is in the range [−4, 4]. When R > 1 , the algorithm is in the exploration phase, during which sand cats search for prey; when R 1 , the algorithm enters the exploitation phase, and sand cats attack the prey. The overall position update formula of SCSO can be expressed as:
X c t + 1 = r · X b e s t t r a n d 0 , 1 · X c t , i f   R > 1 X b e s t t r · X r a n d t · c o s φ , i f   R 1
The Algorithm 1 illustrates the step-by-step pseudocode of the SCSO algorithm.
Algorithm 1: The pseudo-code of the SCSO
1: Begin
2: Initialize: Set the initial values for the parameters r, rg, R.
3: Calculate the fitness of the objective function.
4:    While t < Tmax do
5:        For each search agent do
6:              Get a new angle value φ obtained by Roulette Wheel Select (−1 ≤ φ ≤ 1).
7:          If abs(R ≤ 1)
8:              Update the search agent’s position as specified by Equation (7).
9:          Else
10:            Update the search agent’s position as specified by Equation (5).
11:         End if
12:     End for
13:     t = t + 1
14:   End while
15:   return best solution
16: end

3. Proposed MSCSO

3.1. Adaptive Strategy Selection Mechanism

In the standard SCSO algorithm, the two search strategies—prey searching and prey attacking—are applied with fixed random probabilities. Such static allocation cannot adapt to the characteristics of different optimization problems nor dynamically adjust strategy preference according to the search phase. This can lead to inadequate exploration at the beginning, potentially preventing the algorithm from finding the global optimum, and inefficient exploitation in subsequent iterations, slowing down the convergence process.
In order to mitigate these limitations, this study implements an adaptive strategy selection scheme. The probabilities of assigning the two strategies to each member of the population are represented by P 1 and P 2 . The initial probabilities are set equally, i.e., P 1 = P 2 = 0.5 , meaning that at initialization, both strategies have an equal chance of being applied to update each individual. As illustrated in Figure 1, let r a n d denote a uniformly distributed random vector in the range [0, 1]. If the j t h element of r a n d is less than or equal to P 1 , the first strategy (prey searching) is applied to the j t h individual of the current population; otherwise, the second strategy (prey attacking) is applied.
After evaluating all newly generated trial vectors, the numbers of trial vectors successfully entering the next generation generated by the two strategies are recorded as n s 1 and n s 2 , and the numbers of trial vectors discarded are recorded as n f 1 and n f 2 . These counts are accumulated over a learning period (set to 50 generations in this study) and then used to update the strategy probabilities as follows:
P 1 = n s 1 · n s 2 + n f 2 n s 1 · n s 2 + n f 2 + n s 2 · n s 1 + n f 1 P 2 = 1 P 1
This expression represents the percentage of successful trial vectors generated by each strategy during the learning period. After the learning period, the probabilities of applying the two strategies are updated accordingly. All counters n s 1 , n s 2 , n f 1 and n f 2 are then reset to avoid potential side effects from the previous learning period. This adaptive process allows the algorithm to gradually evolve the most suitable search strategy for the problem at hand across different learning stages.

3.2. Adaptive Crossover and Mutation Strategy

To address the difficulty of balancing exploration and exploitation in the standard SCSO algorithm—where using only the random parameter R to control the search phase may lead to insufficient local refinement or premature convergence in later stages—this study introduces an adaptive crossover–mutation strategy inspired by Differential Evolution. This strategy is applied after the standard SCSO position update. For each individual, an adaptive crossover probability ( C R ) is generated based on an initial mean of 0.5. The C R values are drawn from a normal distribution N C R m , 0.1 and clipped to the range [0, 1]. Successful C R values that lead to improved offspring are recorded, and every 25 generations, the mean of these successful C R values is used to update C R m , allowing the crossover probability to adapt dynamically to the current search stage.
Next, guided mutation and crossover operations are performed. Three distinct individuals are randomly selected, and a mutant vector is generated by combining their differences with the global best solution. A trial vector is then produced via binomial crossover, ensuring at least one dimension is updated. The fitness of the trial vector is compared with the original individual, and the better one is retained while updating the global best. his design preserves the original biological search characteristics of SCSO while leveraging DE operators to enhance population diversity and local search capability, thereby achieving a better balance between exploration and exploitation. The corresponding formulas are:
X c t + 1 = X b e s t t + F · X 1 t X 2 t
X i , j t + 1 = X i t + 1 , i f   r a n d <   C R   o r   j = j r a n d X i , j t + 1 , o t h e r w i s e
where F is the mutation factor, controlling the scaling of the differential vector; in this study, F is randomly chosen from 0 , 1 to maintain moderate perturbation intensity. j r a n d = r a n d i d i m is a randomly selected dimension index, ensuring that at least one dimension is crossed. As show in Figure 2, X 1 and X 2 are randomly selected individuals used to generate diversity, and X b e s t t represents the global best position, guiding the search toward promising regions. The crossover probability C R follows a normal distribution N ( C R m , 0.1 ) , with the mean C R m initially set to 0.5 and dynamically updated every 25 iterations based on successful C R experiences. Once the normal distribution mean is recalculated, the record of successful C R values is cleared to avoid potential long-term accumulation effects. This allows the algorithm to learn an appropriate C R range suited to the current problem.

3.3. Global Optimum-Guided Boundary Control Mechanism

In handling out-of-bounds solution vectors, the standard SCSO algorithm typically employs simple random reset or mirror reflection. These approaches have two major drawbacks: (1) discarding all information of the out-of-bounds solution may result in the loss of valuable search direction, and (2) mechanical boundary mapping can cause repeated oscillation near the feasible domain boundaries, severely affecting convergence efficiency.
To overcome these limitations, this study designs a global-best-guided boundary handling mechanism. By introducing X b e s t (the best individual currently found) into the boundary constraint, the search process becomes more flexible, allowing the algorithm to perform finer searches in regions near the current optimum, thereby improving optimization efficiency and solution quality. This means that the search boundary is not only restricted by the fixed lower bound l b and upper bound u b , but also influenced by the current best solution, as illustrated in Figure 3.
When a position component X c d in the d t h dimension exceeds the bounds, instead of simply resetting it to the boundary or performing symmetric reflection, the component is repositioned randomly between the global best solution X best   d and the violated boundary. The mathematical expression is:
X c d = X best   d + α u b d X best d i f   X c d > u b d X best   d α X best   d l b d , i f   X c d < l b d X c d , otherwise
where α is a uniformly distributed random number in [ 0 ,   0.5 ] , i.e., α U 0 , 0.5 . This strategy ensures that out-of-bounds individuals are not only effectively pulled back into the feasible domain but also guided toward the region near the currently most promising solution X b e s t , thereby accelerating the convergence of the algorithm.
The MSCSO’s pseudocode is provided in Algorithm 2.
Algorithm 2: The pseudo-code of the MSCSO
1: Begin
2: Initialize: the relevant parameters r, rg, R, P1, P2.
3: Calculate the fitness of the objective function.
4:    While t < Tmax do
5:        For each search agent do
6:            Get a new angle value φ obtained by Roulette Wheel Select (−1 ≤ φ ≤ 1).
7:            If (randP1 )
8:               Update the search agent by Equation (7).
9:            Else
10:                Update the search agent by Equation (5).
11:          End if
12:          Update success/failure counts (ns1, ns2 and nf1, nf2).
13:          Adaptive Crossover and Mutation Strategy by Equation (11).
14:          Repair out-of-bounds individuals using Equation (12).
15:          Record successful CR values.
16:      End for
17:      Every 50 iterations:
18:          Update strategy selection probabilities (P1, P2) using Equation (9).
19:        Reset success/failure counts.
20:      Every 25 iterations:
21:          Update CRm based on successful CR values.
22:      t = t + 1
23:  End while
24:  return best solution
25: end

3.4. Analysis of Computational Complexity

The performance of an algorithm is crucial, but its time complexity should not be overlooked. In optimization tasks, an algorithm must not only deliver high-quality solutions but also ensure computational efficiency. Time complexity reflects how runtime scales with problem size, providing insight into performance on large-scale problems. For SCSO, the computational cost of parameter calculation is O ( N × d i m ) , where N is the population size and d i m is the problem dimension. Initialization also requires O ( N × d i m ) . During T iterations, position updates incur O ( T × N × d i m ) . Hence, the overall complexity of SCSO is O ( T × N × d i m ) . Since MSCSO only modifies the position update strategy without introducing extra factors, its complexity remains O ( T × N × d i m ) .

4. Experimental Results and Detailed Analyses

In this section, the proposed MSCSO algorithm is experimentally analyzed using the CEC 2020 [55] and CEC 2022 [56] benchmark suites. First, the parameter settings of all compared algorithms are detailed, and a qualitative analysis of MSCSO is provided. Subsequently, MSCSO is compared with seven other algorithms on the CEC2020 and CEC2022 test sets. To maintain fair comparisons and reduce randomness, all algorithms were run with a fixed population of 30 for 500 iterations, with 30 independent trials for each. Results are reported as mean (Ave) and standard deviation (Std), with the best results in bold. Experiments were conducted on a Windows 11 system with a 13th Gen Intel i5-13400 CPU (2.5 GHz) and 16 GB RAM using MATLAB 2024b.

4.1. Competitor Algorithms and Parameters Setting

In this section, the performance of the MSCSO algorithm is validated through comparative experiments with 7 state-of-the-art algorithms. The selected algorithms include Gray Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA), Particle Swarm Optimization (PSO), Holistic Swarm Optimization (HSO), Dung Beetle Optimizer (DBO), Secretary Bird Optimization Algorithm (SBOA), and the standard Sand Cat Swarm Optimization (SCSO), among others.
In the comparative experiments, all algorithm parameters are set according to the references. For convenience, Table 1 summarizes the parameter configurations of each algorithm and provides the corresponding references for further consultation.

4.2. Qualitative Analysis of MSCSO

In this part, we perform a qualitative study of the MSCSO algorithm. First, the population diversity is examined, as it is a key factor for efficient search space exploration. Then, we assess how well the algorithm balances exploration and exploitation, with initial iterations favoring exploration and later iterations focusing on exploitation. Experiments are conducted to validate these behaviors. Finally, ablation studies are carried out to evaluate the impact of each proposed improvement. Comprehensive explanations are provided below.

4.2.1. Analysis of the Population Diversity

Within optimization methods, the diversity of a population quantifies how varied the individuals are [59], where each individual generally corresponds to a candidate solution. A decline in population diversity can lead to premature convergence to a local optimum, thereby constraining the algorithm’s global search capability. On the other hand, preserving higher diversity facilitates exploration across broader areas of the solution space and improves the probability of locating the global optimum. This section evaluates the population diversity within the MSCSO algorithm, based on the formulation provided in Equation (13) [60,61].
I C t = i = 1 N d = 1 D x i d t c d t 2
In this context, I C t indicates the diversity of the population, N is the total number of individuals, D stands for the problem dimension, and x i d t specifies the value of the i t h individual along the d t h dimension at iteration t . The quantity c d t captures how widely the population is spread around its center of mass at iteration t , calculated via Equation (14).
c d t = 1 D i = 1 N x i d t
Figure 4 presents the experimental outcomes of the population diversity analysis for both algorithms. It is evident that in most cases, the MSCSO algorithm exhibits higher population diversity than the SCSO algorithm. While the population diversity of SCSO typically declines to a very low level within 150 generations, MSCSO demonstrates a more gradual reduction in diversity and is better able to preserve population diversity throughout the process.

4.2.2. Analysis of the Balance Between Exploration and Exploitation

Exploration and exploitation are critical elements in any optimization algorithm. Exploration refers to the algorithm’s ability to conduct a wide-ranging search across the solution space, seeking out diverse and potentially undiscovered regions that might contain the global optimum. Exploitation, conversely, entails an intensive search within the vicinity of known high-quality solutions, refining them to achieve greater accuracy. This process utilizes existing information to deepen search efforts in promising areas.
Excessive exploration may lead to inefficient use of computational resources by scanning the solution space broadly without focused improvement, thereby overlooking opportunities to enhance solutions in specific regions. On the other hand, overemphasis on exploitation can result in premature convergence to a local optimum, hindering the algorithm’s ability to locate superior solutions in other parts of the search space [53]. Maintaining a proper trade-off between exploration and exploitation is essential for the algorithm’s efficiency. Here, we investigate the exploration and exploitation characteristics of MSCSO, with quantitative assessments based on Equations (15) and (16) [21,61].
E x p l o r a t i o n % = D i v t D i v m a x × 100 %
E x p l o i t a t i o n % = D i v t D i v m a x D i v m a x × 100 %
where D i v t quantifies the diversity at iteration t , calculated according to Equation (17), and D i v m a x refers to the highest diversity value recorded throughout the iterative process.
D i v t = 1 D d = 1 D 1 N i = 1 N m e d i a n x d t x i d t
Figure 5 displays the experimental results. In the initial phases of the algorithm’s iterations, exploration accounts for a substantially larger proportion compared to exploitation. As the execution advances, the level of exploitation gradually rises while exploration decreases correspondingly. Toward the end of the process, the proportion of exploitation nears 100% (without fully reaching it), indicating that MSCSO achieves an effective balance between exploration and exploitation, and exhibits robust capability in both dimensions.

4.2.3. Effects Analysis of the Modifications

In order to evaluate both the individual impact and the synergistic interactions of the three improvement strategies—the adaptive strategy selection mechanism (S1), the adaptive crossover and mutation strategy (S2), and the global optimal-guided boundary control mechanism (S3)—on the baseline SCSO algorithm, this study conducted ablation experiments based on the CEC2022 benchmark test (dimension dim = 20). Four comparative variants were designed for this purpose: MSCSO1 with only S1, MSCSO2 with only S2, MSCSO3 with only S3, and the complete MSCSO integrating all strategies. For the experiments, the population size was fixed at 30, and the number of function evaluations was set to 10,000× dim. The outcomes are illustrated in Figure 6.
Figure 6 systematically compares the convergence performance of the original SCSO algorithm, three single-enhancement strategy variants of MSCSO (MSCSO1 with only the adaptive strategy selection mechanism, MSCSO2 with only the adaptive crossover-mutation strategy, and MSCSO3 with only the global optimal-guided boundary control mechanism), and the complete MSCSO, based on the CEC2022 benchmark test set (20 dimensions). From the convergence curves of the test functions, it can be clearly observed that the complete MSCSO consistently exhibits the best convergence performance throughout the entire iterative process: whether in multimodal complex functions or unimodal functions, its convergence speed is significantly faster than that of the original SCSO, and the fitness level it ultimately converges to is far superior to SCSO. Meanwhile, compared to the three variants with only a single strategy, the convergence curve of the complete MSCSO is smoother, without significant oscillations or stagnation, and it continuously approaches the global optimal solution.
In terms of the effects of the individual strategies, the three strategies each contribute differently to improving the algorithm’s performance: the adaptive crossover-mutation strategy (MSCSO2) effectively alleviates the premature convergence issue often observed in the original SCSO, helping to maintain population diversity; the global optimal-guided boundary control mechanism (MSCSO3) significantly enhances the convergence stability of the algorithm, reducing performance fluctuations caused by improper boundary handling; and the adaptive strategy selection mechanism (MSCSO1) accelerates the algorithm’s approach to the optimal solution region in the early iterations by dynamically balancing exploration and exploitation. By integrating these three strategies and leveraging their synergy, the complete MSCSO not only retains the advantages of each individual strategy but also compensates for their limitations, resulting in comprehensive improvements in convergence speed, convergence accuracy, and stability. This fully demonstrates the necessity and synergistic effectiveness of the proposed three enhancement strategies, laying the foundation for MSCSO’s reliable performance in high-dimensional complex optimization tasks.

4.3. Compare Using CEC 2020 and CEC 2022 Test Functions

In this subsection, the effectiveness of the proposed MSCSO algorithm is validated using 10- and 20-dimensional functions from the CEC 2020 and CEC 2022 benchmark suites. MSCSO is compared with seven state-of-the-art algorithms, and numerical results are summarized in Table 2, Table 3, Table 4 and Table 5. To visualize convergence behavior, partial convergence curves for all eight algorithms are presented in Figure 7, while Figure 8 shows boxplots to assess algorithm stability and reduce the impact of randomness.
Analysis of Figure 7 and Figure 8 and Table 2, Table 3, Table 4 and Table 5 indicates that MSCSO consistently outperforms competing algorithms in global optimization tasks across three key aspects: convergence speed, optimization accuracy, and stability. Regarding convergence efficiency, MSCSO rapidly occupies the optimal search region within the first 100 iterations, as evidenced by the steeper decline in fitness values compared to GWO, WOA, and SCSO. For example, on CEC2020-F1 (10D), MSCSO achieves a fitness value on the order of 1.0000 × 102 after 50 iterations, whereas SCSO remains at 4.4957 × 109, and GWO/WOA stay within 107–108. Even on high-dimensional complex functions such as CEC2022-F7 (20D), MSCSO avoids common issues of slow early exploration and late stagnation, maintaining a steady downward trend throughout iterations and achieving a final mean value of 3.9069 × 103, which is only 1/153 of SCSO (5.9957 × 108), demonstrating the effectiveness of its adaptive strategy selection and crossover-mutation mechanisms.
Regarding optimization accuracy and data stability, MSCSO achieves both lower mean values and smaller standard deviations across benchmark functions. For instance, in the 10D CEC2020 scenario, F1 exhibits a mean of 1.0000 × 102 with a standard deviation of 3.3027 × 10−12, and F8 shows a mean of 2.2035 × 103 with zero deviation, whereas SCSO presents much higher means and dispersed results (F8 mean 9.8862 × 104, Std 1.5961 × 105). In higher dimensions, this advantage is more pronounced: CEC2020-F3 (20D) shows MSCSO mean 1.4430 × 103, below SCSO (5.9061 × 103) and GWO (2.7145 × 103), with standard deviation 2.2443 × 102, half that of SCSO (4.5820 × 102). Boxplots in Figure 8 further confirm that MSCSO achieves concentrated distributions with no outliers, illustrating the effectiveness of the global optimum-guided boundary control mechanism in reducing performance fluctuations.
Finally, MSCSO maintains superiority across different function types. On unimodal functions (e.g., CEC2020-F9, CEC2022-F4), it demonstrates precise localization of the optimum with excellent local exploitation (e.g., F9 10D mean 2.2959 × 103, slightly lower than SBOA but with smaller deviation). On multimodal functions (e.g., CEC2020-F5, CEC2022-F6), MSCSO effectively preserves population diversity, avoiding local optima; for F5 10D, its mean 1.9012 × 103 is comparable to SBOA (1.9013 × 103) with a more favorable standard deviation (5.0393 × 10−1 vs. 4.7006 × 10−1). The convergence curves (Figure 7, F5 10D) also show that MSCSO continues to improve beyond iteration 150, whereas SCSO and GWO plateau early. These results demonstrate MSCSO’s strong adaptability to various optimization problems, providing a solid foundation for its subsequent application in multi-level threshold image segmentation tasks.

4.4. Statistical Analysis

Statistical analysis plays a vital role in algorithm optimization, enabling researchers to evaluate and contrast the efficacy of various algorithmic approaches. This facilitates the selection of the most suitable method for specific research challenges. The performance of the MSCSO algorithm is examined in this section using the Wilcoxon rank-sum and Friedman statistical tests, with detailed descriptions of the methods and outcomes.

4.4.1. Wilcoxon Rank-Sum Test Analysis

The Wilcoxon rank-sum test is utilized in this subsection to evaluate whether the observed performance variations in the MSCSO algorithm are significant, independent of normality assumptions. Compared with the standard t-test, this non-parametric method provides greater flexibility, particularly for datasets with skewed distributions or extreme values. The test statistic W is given by Equation (18) [61].
W = i = 1 n 1 R X i
where R X i is the rank of X i among all data points, and the test statistic U is computed according to Equation (19).
U = W n 1 n 1 + 1 2
For large datasets, the distribution of U is approximately normally distributed, as described in Equations (20) and (21).
μ U = n 1 n 2 2
σ U = n 1 n 2 n 1 + n 2 + 1 12
and the standardized statistic Z is calculated by Equation (22).
Z = U μ U σ U
The significance level was set at 0.05 to determine whether the results from each MSCSO run showed statistically significant differences compared to other algorithms. The null hypothesis ( H 0 ) assumes no difference between the algorithms. A p-value below 0.05 leads to rejection of H 0 , suggesting a significant performance discrepancy, whereas a p-value equal to or above 0.05 means H 0 is retained. The experimental outcomes, detailed in Table 6, demonstrate that MSCSO possesses significant advantages. In the table, the symbol “+” denotes that MSCSO outperforms the compared algorithm in the Wilcoxon rank-sum test, “=” indicates no substantial difference, and “−” means MSCSO is inferior.

4.4.2. Evaluation Using the Friedman Mean Rank Test

The Friedman test is used in this subsection to rank the MSCSO algorithm in comparison with other methods. As a nonparametric procedure, it effectively evaluates median differences across three or more matched datasets. It is well-suited for repeated measures or block designs and offers a robust alternative to ANOVA in cases where normality is violated. The corresponding test statistic is determined according to Equation (23) [59,61].
Q = 12 n k k + 1 j = 1 k R j 2 3 n k + 1
In this context, n indicates the total number of blocks, k is the number of groups, and R j refers to the sum of ranks for the j t h group. group. When both n and k are sufficiently large, Q approximately follows a χ 2 distribution with k 1 degrees of freedom.
The experimental results are summarized in Table 7 (Friedman mean-rank test) and visualized in Figure 9 (algorithm ranking distributions). Here, M.R denotes the average rank of each algorithm over 30 benchmark functions, and T.R represents the overall global ranking. As shown in Table 7 and Figure 9, MSCSO consistently achieves the top rank across all evaluated scenarios, demonstrating its superior optimization performance. Specifically, on the CEC2020 benchmark, MSCSO attains M.R = 1.40, T.R = 1 for 10D and M.R = 1.20, T.R = 1 for 20D; on the CEC2022 benchmark, the corresponding M.R values are 1.75 (10D) and 1.50 (20D), with T.R remaining 1 in both cases. In contrast, SCSO ranks last in all dimensions (M.R > 7.5, T.R = 8). These results confirm the robust superiority of MSCSO across different benchmark sets and problem dimensions.

4.5. Evaluation of Runtime Performance for MSCSO Versus SCSO

The results discussed above indicate that the improved MSCSO outperforms the original SCSO in overall effectiveness. In this section, we compare the computational time of both algorithms on the CEC2020 test set with dimension 20. For consistency, MSCSO and SCSO use the same parameter settings as detailed in earlier sections. The average runtime is computed over 30 independent executions. Figure 10 depicts the average time (in seconds) taken by each algorithm to process the test functions.
From the data distribution in Figure 10, it can be observed that on most test functions, the runtime of MSCSO is slightly higher than that of the original SCSO algorithm. For instance, on function F1, the average runtime of SCSO is approximately 0.33 s, while MSCSO reaches 0.35 s, an increase of 0.02 s. Additionally, the average computational time of SCSO ranges between 0.28 and 0.38 s, while that of MSCSO ranges between 0.30 and 0.42 s, indicating no substantial disparity. The slight increase in computational time is primarily attributed to the three enhancement strategies integrated into MSCSO: the three enhancements—Adaptive Strategy Selection Mechanism, Adaptive Crossover and Mutation Strategy, and Global Optimum-guided Boundary Control Mechanism—introduce additional computational operations. Nevertheless, theoretical analysis of the time complexity indicates that MSCSO does not incur higher-order complexity, and its overall computational complexity remains comparable to that of the original SCSO. Consequently, the observed difference in runtime between the two algorithms is well within acceptable limits.

5. MSCSO for Multilevel Thresholding Image Segmentation

Two commonly used strategies for optimal threshold selection in image segmentation are Otsu’s method and Kapur’s entropy technique. In Otsu’s approach, the variance between foreground and background varies with different threshold values, and the threshold that yields the maximum variance is considered optimal. Kapur’s method, on the other hand, evaluates the sum of entropies of the foreground and background regions derived from their probability distributions, selecting the threshold that maximizes this sum. Both methods operate directly on image pixels to determine the optimal threshold according to their respective rules [5,54,62]. In this study, the experiments employ Otsu’s method to determine the optimal threshold.
Otsu’s method is a criterion for selecting the optimal threshold based on maximizing the between-class variance. Let an image I have L gray levels, and let n i represent the count of pixels with gray level i . The total number of pixels in the image can then be expressed as [63]:
N = i = 0 L 1 n i
The probability that a randomly selected pixel has gray level i can be defined as:
P i = n i N ,   i = 0 , 1 , , L 1
where P i 0 ,   and   P 0 + P 1 + + P L 1 = 1 .
For k thresholds, if a gray level t is chosen as a threshold, the image is split into two parts: pixels with gray levels between 0 and t are treated as the object, while pixels with gray levels in [ t + 1 ,   L 1 ] are considered the background. Let ω 0 and μ 0 denote the proportion and mean gray level of the object pixels, respectively, and ω 1 and μ 1 denote the proportion and mean gray level of the background pixels, respectively. Let μ be the overall mean gray level of the image, and v be the between-class variance. The calculations of ω 0 , μ 0 , ω 1 , μ 1 , μ and v are given as follows [6,63]:
ω 0 = i = 0 t P i μ 0 = i = 0 t i P i ω 0 ω 1 = i = t + 1 L 1 P i μ 1 = i = t + 1 L 1 i P i ω 1 μ = i = 0 L 1 i P i ( t ) = ω 0 ( μ 0 μ ) 2 + ω 1 ( μ 1 μ ) 2 = ω 0 ω 1 ( μ 0 μ 1 ) 2
For multiple thresholds ( t 1 , t 2 , , t k ) , the between-class variance v ( t 1 , t 2 , , t k ) is computed as:
v ( t 1 , t 2 , , t k ) = ω 0 ω 1 ( μ 0 μ 1 ) 2 + ω 0 ω 2 ( μ 0 μ 2 ) 2 + + ω 0 ω k ( μ 0 μ k ) 2 + ω 1 ω 2 ( μ 1 μ 2 ) 2 + + ω 1 ω 3 ( μ 1 μ 3 ) 2 + + ω k 1 ω k ( μ k 1 μ k ) 2
where ω i and μ i are calculated as:
ω i 1 = i = t i 1 + 1 t i P i , 1 i k + 1 μ i 1 = i = t i 1 + 1 t i i P i ω i 1 , 1 i k + 1
The optimal thresholds T b e s t are determined by:
T b e s t = arg m a x 0 t 1 t 2 t k [ v ( t 1 , t 2 , , t k ) ]
In this study, the performance of the MSCSO algorithm for multilevel thresholding is evaluated through comparative experiments. Five improved swarm intelligence algorithms are selected as benchmarks. All algorithms were tested with a population size of 30 and a maximum of 50 iterations, using Otsu’s method as the evaluation criterion. Five benchmark images of diverse styles were used, and four threshold counts (4, 6, 8, 10) were considered, as shown in Figure 11. Each experiment was repeated independently 30 times, with the mean and standard deviation of objective function values recorded to evaluate performance consistency. Parameter settings were maintained consistent across all methods to ensure a fair comparison.

5.1. Evaluation Index

Image segmentation performance is typically measured using PSNR, SSIM, and FSIM. Higher PSNR and SSIM indicate superior image quality and lower distortion, while increased FSIM values denote improved feature accuracy and reduced errors. The formulas for these evaluation metrics are given below [5,63,64]:
The PSNR is defined as:
P S N R = 10 l o g 10 ( 255 2 M S E )
MSE denotes the mean squared error between the original image I and its segmented counterpart I . It can be calculated as follows:
M S E = 1 M N j = 1 M   k = 1 N [ I ( j , k ) I ( j , k ) ] 2
where ( M × N ) denotes the image size, I and I are the original and segmented images, respectively, I ( j , k ) is the gray level of the original image at pixel ( j , k ) , and I ( j , k ) is the gray level of the segmented image at pixel ( j , k ) .
The Structural Similarity Index (SSIM) evaluates the resemblance between two images by considering their luminance, contrast, and structural characteristics. For a pair of images I and I , SSIM is computed as follows [63]:
S S I M ( I , I ) = ( 2 μ I μ I + C 1 ) ( 2 σ I I + C 2 ) ( μ I 2 + μ I 2 + C 1 ) ( σ I 2 + σ I 2 + C 2 )
where μ I and μ I are the mean gray levels of I and I , σ I I is the covariance between I and I , σ I 2 and σ I 2 are the variances of I and I , respectively. Typically, C 1 = K 1 L and C 2 = K 2 L 2 , K 1 = 0.01 , K 2 = 0.03 , and L representing the maximum gray level.
Since human perception of image features is often focused on points with high phase consistency, FSIM uses phase congruency as the primary feature for assessing image quality. FSIM is defined as [63]:
F S I M ( I , I ) = x Ω S L ( x ) P C m ( x ) x Ω P C m ( x )
where x refers to a single pixel and Ω represents the complete image domain, P C m ( x ) = m a x ( P C 1 ( x ) , P C 2 ( x ) ) , P C 1 ( x ) and P C 2 ( x ) being the phase congruency of the original and segmented images, respectively.

5.2. Evaluation of Otsu Thresholding Outcomes Using the MSCSO Algorithm

For multilevel thresholding on the chosen five images, the MSCSO algorithm was applied using Otsu’s method as the objective function. The quality of thresholding is evaluated by the maximum value of the Otsu objective function and the metrics PSNR, FSIM, and SSIM. Higher values in these measures correspond to better segmentation performance. The results show that MSCSO outperforms other algorithms in all four evaluation criteria.
The optimal thresholds derived from the image histograms are shown in Table 8. Table 9 reports the average and standard deviation of the best fitness values for each algorithm when Otsu’s objective function is employed, along with the Friedman ranking outcomes. Table 10, Table 11 and Table 12 summarize the mean and standard deviation of PSNR, FSIM, and SSIM metrics obtained by each method, together with their respective Friedman ranks. Thresholds were calculated for four different quantities: 4, 6, 8, and 10.
Based on Table 9, Table 10, Table 11 and Table 12 (quantitative metrics under the Otsu objective function) and Figure 12 (Friedman average rankings of each metric under Otsu’s criterion), it can be seen that the MSCSO algorithm demonstrates comprehensive and stable superiority in multilevel threshold image segmentation tasks. Its performance advantages are fully validated across four core metrics: objective function fitness, Peak Signal-to-Noise Ratio (PSNR), Feature Similarity Index (FSIM), and Structural Similarity Index (SSIM), and these advantages remain consistent across different threshold numbers (4, 6, 8, 10) and benchmark images (baboon, camera, girl, lena, terrace).
From the perspective of objective function optimization, Table 9 (mean and standard deviation of the optimal fitness values) shows that MSCSO achieves the highest mean and the lowest standard deviation across all image–threshold combinations. Taking the baboon image as an example, with 4 thresholds MSCSO attains a mean of 3.3008 × 10³, higher than SCSO (3.2684 × 10³) and GWO (3.2983 × 10³), with a standard deviation of only 2.4484 × 10⁻², far below SCSO's 2.5178 × 10¹. When the threshold number increases to 10, MSCSO still leads with a mean of 3.4128 × 10³ compared with SBOA (3.3999 × 10³) and DBO (3.4017 × 10³), and its standard deviation of 2.4858 × 10⁰ is only about one-third of HSO's 7.2195 × 10⁰. This indicates that MSCSO can accurately locate the optimal thresholds under Otsu's criterion and produces highly consistent results across runs, providing a solid foundation for high-quality image segmentation.
Regarding the image segmentation quality metrics (PSNR, FSIM, SSIM), the data in Table 10, Table 11 and Table 12 are highly consistent with the Friedman rankings in Figure 12. In Table 10 (PSNR), MSCSO achieves the highest PSNR in most scenarios: for the camera image with 6 thresholds, its mean PSNR is 23.2635, exceeding SCSO (23.0151) and WOA (22.5434); for the girl image with 8 thresholds, it reaches 27.9327, clearly higher than GWO (27.3696) and HSO (25.1686). Table 11 (FSIM) and Table 12 (SSIM) show similar patterns; for example, for the lena image with 8 thresholds, MSCSO achieves the highest FSIM (0.9017) and a near-best SSIM (0.8391). Figure 12 further illustrates these advantages through the Friedman average rankings: MSCSO ranks first in fitness value, PSNR, FSIM, and SSIM, with average ranks markedly better than those of SCSO and HSO, demonstrating that its segmentation performance is statistically superior and that it better preserves image details and structural information.
Furthermore, regarding the impact of the threshold number on performance, as it increases from 4 to 10 (and segmentation complexity grows), MSCSO's advantage becomes more pronounced. For example, for the terrace image with 4 thresholds, MSCSO's SSIM mean is 0.8031, close to WOA (0.8030); when the threshold number increases to 10, MSCSO achieves 0.8804, clearly higher than SCSO (0.8422) and GWO (0.8234). This trend is also reflected in Figure 12: as the number of thresholds increases, the ranking gap between MSCSO and the other algorithms widens, confirming that its adaptive strategy and global-optimum guidance mechanism effectively balance exploration and exploitation in complex segmentation tasks, avoid local optima, and consistently produce high-quality segmentation results.

6. Conclusions

The Modified Sand Cat Swarm Optimization algorithm (MSCSO) proposed in this study effectively addresses the core limitations of the original SCSO algorithm in global optimization and multilevel threshold image segmentation. In global optimization experiments on the CEC2020 and CEC2022 benchmark sets, MSCSO performs strongly in both 10-dimensional and 20-dimensional scenarios owing to its adaptive strategy selection, adaptive crossover–mutation, and global-optimum-guided boundary control mechanisms. In terms of convergence speed, the algorithm approaches the optimal solution region quickly in the early iterations; for example, on CEC2020-F1 (10 dimensions) the fitness value reaches the order of 10² by iteration 50, far outperforming SCSO, which remains at the order of 10⁹. Regarding numerical accuracy and stability, MSCSO attains the best mean and smallest standard deviation on most test functions, and the Friedman test confirms that it ranks first overall in every dimension. The computational complexity remains O(T × N × dim), so the improvements introduce no additional computational burden.
In the context of multilevel thresholding for image segmentation, experimental results using Otsu's criterion as the objective function show that MSCSO consistently delivers outstanding performance across the five benchmark images with threshold counts ranging from 4 to 10. The data in Table 9, Table 10, Table 11 and Table 12 indicate that MSCSO outperforms comparison algorithms such as GWO, WOA, and SCSO on all four core metrics: objective function fitness, PSNR, FSIM, and SSIM. For instance, for the baboon image with 10 thresholds the mean fitness value is 3.4128 × 10³, and for the girl image with 8 thresholds the mean PSNR is 27.9327. The Friedman rankings in Figure 12 further confirm that MSCSO ranks first on all four metrics, and its advantage becomes even more pronounced as the number of thresholds (and hence segmentation complexity) increases, providing an efficient and reliable solution for practical image segmentation tasks.

Author Contributions

Conceptualization, X.Y. and Z.Z.; methodology, X.Y. and Z.Y.; software, X.Y. and Y.Z.; validation, X.Y. and Z.Z.; formal analysis, X.Y. and Y.Z.; investigation, X.Y. and Z.Y.; resources, X.Y. and Z.Z.; data curation, X.Y. and Z.Y.; writing—original draft preparation, X.Y. and Z.Y.; writing—review and editing, X.Y. and Z.Z.; visualization, X.Y. and Y.Z.; supervision, X.Y. and Z.Z.; funding acquisition, X.Y. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chen, L.; Bentley, P.; Mori, K.; Misawa, K.; Fujiwara, M.; Rueckert, D. DRINet for Medical Image Segmentation. IEEE Trans. Med. Imaging 2018, 37, 2453–2462. [Google Scholar] [CrossRef]
  2. LaLonde, R.; Xu, Z.; Irmakci, I.; Jain, S.; Bagci, U. Capsules for biomedical image segmentation. Med. Image Anal. 2020, 68, 101889. [Google Scholar] [CrossRef]
  3. Shi, Q.; Li, Y.; Di, H.; Wu, E. Self-Supervised Interactive Image Segmentation. IEEE Trans. Circuits Syst. Video Technol. 2023, 34, 6797–6808. [Google Scholar] [CrossRef]
  4. Chen, Y.; Wang, M.; Heidari, A.A.; Shi, B.; Hu, Z.; Zhang, Q.; Chen, H.; Mafarja, M.; Turabieh, H. Multi-threshold Image Segmentation using a Multi-strategy Shuffled Frog Leaping Algorithm. Expert Syst. Appl. 2022, 194, 116511. [Google Scholar] [CrossRef]
  5. Wang, J.; Bei, J.; Song, H.; Zhang, H.; Zhang, P. A whale optimization algorithm with combined mutation and removing similarity for global optimization and multilevel thresholding image segmentation. Appl. Soft Comput. 2023, 137, 110130. [Google Scholar] [CrossRef]
  6. Shi, J.; Chen, Y.; Wang, C.; Heidari, A.A.; Liu, L.; Chen, H.; Chen, X.; Sun, L. Multi-threshold image segmentation using new strategies enhanced whale optimization for lupus nephritis pathological images. Displays 2024, 84, 102799. [Google Scholar] [CrossRef]
  7. Fan, Q.; Ma, Y.; Wang, P.; Bai, F. Otsu Image Segmentation Based on a Fractional Order Moth–Flame Optimization Algorithm. Fractal Fract. 2024, 8, 87. [Google Scholar] [CrossRef]
  8. Guo, H.; Wang, J.G.; Liu, Y. Multi-threshold image segmentation algorithm based on Aquila optimization. Vis. Comput. 2023, 40, 2905–2932. [Google Scholar] [CrossRef]
  9. Zheng, J.; Gao, Y.; Zhang, H.; Lei, Y.; Zhang, J. OTSU Multi-Threshold Image Segmentation Based on Improved Particle Swarm Algorithm. Appl. Sci. 2022, 12, 11514. [Google Scholar] [CrossRef]
  10. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  11. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  12. Meidani, K.; Hemmasian, A.; Mirjalili, S.; Barati Farimani, A. Adaptive grey wolf optimizer. Neural Comput. Appl. 2022, 34, 7711–7731. [Google Scholar] [CrossRef]
  13. Rodríguez, A.; Camarena, O.; Cuevas, E.; Aranguren, I.; Valdivia-G, A.; Morales-Castañeda, B.; Zaldívar, D.; Pérez-Cisneros, M. Group-based synchronous-asynchronous Grey Wolf Optimizer. Appl. Math. Model. 2020, 93, 226–243. [Google Scholar] [CrossRef]
  14. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  15. Nadimi-Shahraki, M.H.; Zamani, H.; Mirjalili, S. Enhanced whale optimization algorithm for medical feature selection: A COVID-19 case study. Comput. Biol. Med. 2022, 148, 105858. [Google Scholar] [CrossRef]
  16. Sun, Y.; Chen, Y. Multi-population improved whale optimization algorithm for high dimensional optimization. Appl. Soft Comput. 2021, 112, 107854. [Google Scholar] [CrossRef]
  17. Ding, Y.; Chen, Z.; Zhang, H.; Wang, X.; Guo, Y.J.R.E. A short-term wind power prediction model based on CEEMD and WOA-KELM. Renew. Energy 2022, 189, 188–198. [Google Scholar] [CrossRef]
  18. Fu, S.; Li, K.; Huang, H.; Ma, C.; Fan, Q.; Zhu, Y. Red-billed blue magpie optimizer: A novel metaheuristic algorithm for 2D/3D UAV path planning and engineering design problems. Artif. Intell. Rev. 2024, 57, 134. [Google Scholar] [CrossRef]
  19. Lu, Z.; Yu, X.; Xu, F.; Jing, L.; Cheng, X. Multi-objective optimal scheduling of islanded microgrid based on ISSA. J. Renew. Sustain. Energy 2025, 17, 025301. [Google Scholar] [CrossRef]
  20. Kouhpah Esfahani, K.; Mohammad Hasani Zade, B.; Mansouri, N. Multi-objective feature selection algorithm using Beluga Whale Optimization. Chemom. Intell. Lab. Syst. 2024, 257, 105295. [Google Scholar] [CrossRef]
  21. Abdel-Basset, M.; Mohamed, R.; Abouhawwash, M. Crested Porcupine Optimizer: A new nature-inspired metaheuristic. Knowl. Based Syst. 2024, 284, 111257. [Google Scholar] [CrossRef]
  22. Dorigo, M.; Birattari, M.; Stützle, T. Ant Colony Optimization. Comput. Intell. Mag. IEEE 2006, 1, 28–39. [Google Scholar] [CrossRef]
  23. Yang, X.-S.; He, X. Bat algorithm: Literature review and applications. Int. J. Bio-Inspired Comput. 2013, 5, 141–149. [Google Scholar] [CrossRef]
  24. Abdel-Basset, M.; Mohamed, R.; Jameel, M.; Abouhawwash, M.J.K.-B.S. Nutcracker optimizer: A novel nature-inspired metaheuristic algorithm for global optimization and engineering design problems. Knowl. Based Syst. 2023, 262, 110248. [Google Scholar] [CrossRef]
  25. Wang, R.-B.; Hu, R.-B.; Geng, F.-D.; Xu, L.; Chu, S.-C.; Pan, J.-S.; Meng, Z.-Y.; Mirjalili, S. The Animated Oat Optimization Algorithm: A nature-inspired metaheuristic for engineering optimization and a case study on Wireless Sensor Networks. Knowl. Based Syst. 2025, 318, 113589. [Google Scholar] [CrossRef]
  26. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl. Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  27. Yu, D.; Ji, Y.; Xia, Y. Projection-Iterative-Methods-based Optimizer: A novel metaheuristic algorithm for continuous optimization problems and feature selection. Knowl. Based Syst. 2025, 326, 113978. [Google Scholar] [CrossRef]
  28. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2022, 79, 7305–7336. [Google Scholar] [CrossRef]
  29. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  30. Yapici, H.; Cetinkaya, N. A new meta-heuristic optimizer: Pathfinder algorithm. Appl. Soft Comput. 2019, 78, 545–568. [Google Scholar] [CrossRef]
  31. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  32. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  33. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper Optimisation Algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef]
  34. Jia, H.; Peng, X.; Lang, C. Remora optimization algorithm. Expert Syst. Appl. 2021, 185, 115665. [Google Scholar] [CrossRef]
  35. Hayyolalam, V.; Kazem, A.A.P. Black widow optimization algorithm: A novel meta-heuristic approach for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2020, 87, 103249. [Google Scholar] [CrossRef]
  36. Mohammadi-Balani, A.; Nayeri, M.D.; Azar, A.; Taghizadeh-Yazdi, M. Golden eagle optimizer: A nature-inspired metaheuristic algorithm. Comput. Ind. Eng. 2021, 152, 107050. [Google Scholar] [CrossRef]
  37. Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. Starling murmuration optimizer: A novel bio-inspired algorithm for global and engineering optimization. Comput. Methods Appl. Mech. Eng. 2022, 392, 114616. [Google Scholar] [CrossRef]
  38. Jia, H.; Rao, H.; Wen, C.; Mirjalili, S. Crayfish optimization algorithm. Artif. Intell. Rev. 2023, 56, 1919–1979. [Google Scholar] [CrossRef]
  39. Pierezan, J.; Coelho, L.D.S. Coyote Optimization Algorithm: A new metaheuristic for global optimization problems. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 2633–2640. [Google Scholar]
  40. Khishe, M.; Mosavi, M.R. Chimp optimization algorithm. Expert Syst. Appl. 2020, 149, 113338. [Google Scholar] [CrossRef]
  41. Wang, L.; Cao, Q.; Zhang, Z.; Mirjalili, S.; Zhao, W. Artificial rabbits optimization: A new bio-inspired meta-heuristic algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2022, 114, 105082. [Google Scholar] [CrossRef]
  42. Guan, Z.; Ren, C.; Niu, J.; Wang, P.; Shang, Y. Great Wall Construction Algorithm: A novel meta-heuristic algorithm for engineer problems. Expert Syst. Appl. 2023, 233, 120905. [Google Scholar] [CrossRef]
  43. Wang, T.-L.; Gu, S.-W.; Liu, R.-J.; Chen, L.-Q.; Wang, Z.; Zeng, Z.-Q. Cuckoo catfish optimizer: A new meta-heuristic optimization algorithm. Artif. Intell. Rev. 2025, 58, 326. [Google Scholar] [CrossRef]
  44. Ghasemi, M.; Khodadadi, N.; Trojovský, P.; Li, L.; Mansor, Z.; Abualigah, L.; Alharbi, A.H.; El-Kenawy, E.-S.M. Kirchhoff’s law algorithm (KLA): A novel physics-inspired non-parametric metaheuristic algorithm for optimization problems. Artif. Intell. Rev. 2025, 58, 325. [Google Scholar] [CrossRef]
  45. Wang, J.; Chen, Y.; Lu, C.; Heidari, A.A.; Wu, Z.; Chen, H. The status-based optimization: Algorithm and comprehensive performance analysis. Neurocomputing 2025, 647, 130603. [Google Scholar] [CrossRef]
  46. Manisha, N.; Kumar, P. An improved osprey optimization algorithm to analyse the steady state performance of stainless steel utensil manufacturing unit. Life Cycle Reliab. Saf. Eng. 2025, 1–18. [Google Scholar] [CrossRef]
  47. Fu, S.; Ma, C.; Li, K.; Xie, C.; Fan, Q.; Huang, H.; Xie, J.; Zhang, G.; Yu, M. Modified LSHADE-SPACMA with new mutation strategy and external archive mechanism for numerical optimization and point cloud registration. Artif. Intell. Rev. 2025, 58, 72. [Google Scholar] [CrossRef]
  48. Punia, P.; Raj, A.; Kumar, P. Enhanced zebra optimization algorithm for reliability redundancy allocation and engineering optimization problems. Clust. Comput. 2025, 28, 267. [Google Scholar] [CrossRef]
  49. Yu, M.; Xu, J.; Liang, W.; Qiu, Y.; Bao, S.; Tang, L. Improved multi-strategy adaptive Grey Wolf Optimization for practical engineering applications and high-dimensional problem solving. Artif. Intell. Rev. 2024, 57, 277. [Google Scholar] [CrossRef]
  50. Seyyedabbasi, A.; Kiani, F. Sand Cat swarm optimization: A nature-inspired algorithm to solve global optimization problems. Eng. Comput. 2022, 39, 2627–2651. [Google Scholar] [CrossRef]
  51. Anka, F.; Aghayev, N. Advances in Sand Cat Swarm Optimization: A Comprehensive Study. Arch. Comput. Methods Eng. 2025, 32, 2669–2712. [Google Scholar] [CrossRef]
  52. Jia, H.; Zhang, J.; Rao, H.; Abualigah, L. Improved sandcat swarm optimization algorithm for solving global optimum problems. Artif. Intell. Rev. 2024, 58, 5. [Google Scholar] [CrossRef]
  53. Mohammed, B.O.; Aghdasi, H.S.; Salehpour, P. Dhole optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Clust. Comput. 2025, 28, 430. [Google Scholar] [CrossRef]
  54. Ryalat, M.H.; Dorgham, O.; Tedmori, S.; Al-Rahamneh, Z.; Al-Najdawi, N.; Mirjalili, S. Harris hawks optimization for COVID-19 diagnosis based on multi-threshold image segmentation. Neural Comput. Appl. 2022, 35, 6855–6873. [Google Scholar] [CrossRef] [PubMed]
  55. Mohamed, A.W.; Hadi, A.A.; Mohamed, A.K.; Awad, N.H. Evaluating the performance of adaptive gainingsharing knowledge based algorithm on CEC 2020 benchmark problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8. [Google Scholar]
  56. Luo, W.; Lin, X.; Li, C.; Yang, S.; Shi, Y. Benchmark functions for CEC 2022 competition on seeking multiple optima in dynamic environments. arXiv 2022, arXiv:2201.00523. [Google Scholar] [CrossRef]
  57. Akbari, E.; Rahimnejad, A.; Gadsden, S.A. Holistic swarm optimization: A novel metaphor-less algorithm guided by whole population information for addressing exploration-exploitation dilemma. Comput. Methods Appl. Mech. Eng. 2025, 445, 118208. [Google Scholar] [CrossRef]
  58. Fu, Y.; Liu, D.; Chen, J.; He, L. Secretary bird optimization algorithm: A new metaheuristic for solving global optimization problems. Artif. Intell. Rev. 2024, 57, 123. [Google Scholar] [CrossRef]
  59. Ou, Y.; Qin, F.; Zhou, K.-Q.; Yin, P.-F.; Mo, L.-P.; Mohd Zain, A.J.S. An improved grey wolf optimizer with multi-strategies coverage in wireless sensor networks. Symmetry 2024, 16, 286. [Google Scholar] [CrossRef]
  60. Wang, W.-C.; Tian, W.-C.; Xu, D.-M.; Zang, H.-F. Arctic puffin optimization: A bio-inspired metaheuristic algorithm for solving engineering design optimization. Adv. Eng. Softw. 2024, 195, 103694. [Google Scholar] [CrossRef]
  61. Cao, L.; Wei, Q.J.B. SZOA: An Improved Synergistic Zebra Optimization Algorithm for Microgrid Scheduling and Management. Biomimetics 2025, 10, 664. [Google Scholar] [CrossRef]
  62. Suresh, S.; Lal, S. Multilevel thresholding based on Chaotic Darwinian Particle Swarm Optimization for segmentation of satellite images. Appl. Soft Comput. 2017, 55, 503–522. [Google Scholar] [CrossRef]
  63. Zhang, Y.; Wang, J.; Zhang, X.; Wang, B. ACPOA: An Adaptive Cooperative Pelican Optimization Algorithm for Global Optimization and Multilevel Thresholding Image Segmentation. Biomimetics 2025, 10, 596. [Google Scholar] [CrossRef]
  64. Jiang, Y.; Yeh, W.-C.; Hao, Z.; Yang, Z. A cooperative honey bee mating algorithm and its application in multi-threshold image segmentation. Inf. Sci. 2016, 369, 171–183. [Google Scholar] [CrossRef]
Figure 1. Adaptive strategy selection mechanism.
Figure 2. Adaptive crossover and mutation strategy diagram.
Figure 3. Global optimum-guided boundary control mechanism diagram.
Figure 4. Comparison of population diversity between the MSCSO and SCSO algorithms.
Figure 5. Evaluation of exploration and exploitation behaviors in the MSCSO algorithm.
Figure 6. Evaluation of the performance of different modification strategies.
Figure 7. Comparison of the convergence speed of different algorithms on the CEC2020 and CEC2022 test sets.
Figure 8. Boxplot analysis of different algorithms on the CEC2020 and CEC2022 test sets.
Figure 9. Distribution of the rankings of different algorithms.
Figure 10. MSCSO and SCSO average runtime on different functions.
Figure 11. The set of benchmark images.
Figure 12. Friedman average ranks of the different indicators under Otsu's criterion.
Table 1. Parameter settings of the comparison algorithms.
Algorithm | Parameter name | Parameter value | Reference
GWO | a | [0, 2] | [11]
WOA | r, l, A, b | [0, 1], [−1, 1], [0, 2], 1 | [14]
PSO | c1, c2, w | 1.49445, 1.49445, 0.9 | [10]
HSO | α | 3 | [57]
DBO | P_percent | 0.2 | [28]
SBOA | beta | 1.5 | [58]
SCSO | rg, R | [0, 2], [−4, 4] | [50]
MSCSO | rg, p1, p2, α | [0, 2], 0.5, 0.5, [0, 0.5] | —
Table 2. Results of various algorithms tested on the CEC 2020 benchmark (dim = 10).
ID | Metric | GWO | WOA | PSO | DBO | HSO | SBOA | SCSO | MSCSO
F1mean4.3880 × 1076.2236 × 1074.8085 × 1062.4440 × 1036.4775 × 1053.6841 × 1034.4957 × 1091.0000 × 102
std1.1500 × 1088.5355 × 1073.2053 × 1062.0435 × 1033.0620 × 1063.5670 × 1032.8857 × 1093.3027 × 10−12
F3mean1.6907 × 1032.3415 × 1031.7279 × 1031.7587 × 1032.0691 × 1031.4466 × 1032.7379 × 1031.3736 × 103
std3.7305 × 1023.5651 × 1022.9444 × 1023.4602 × 1022.8915 × 1022.0750 × 1022.3733 × 1021.3268 × 102
F4mean7.3676 × 1027.8866 × 1027.4520 × 1027.5027 × 1027.4329 × 1027.2655 × 1028.1447 × 1027.1987 × 102
std1.1266 × 1012.2219 × 1018.5220 × 1001.5254 × 1011.7445 × 1019.4822 × 1001.5378 × 1013.7483 × 100
F5mean1.9029 × 1031.9095 × 1031.9055 × 1031.9102 × 1031.9050 × 1031.9013 × 1031.4276 × 1051.9012 × 103
std9.1366 × 10−17.1904 × 1006.6783 × 1006.7168 × 1003.3746 × 1004.7006 × 10−19.1726 × 1045.0393 × 10−1
F6mean1.0587 × 1053.1089 × 1059.0703 × 1035.5242 × 1031.4101 × 1044.3710 × 1035.4185 × 1051.9651 × 103
std2.0447 × 1056.5574 × 1054.6269 × 1032.8112 × 1031.5395 × 1042.6103 × 1031.3510 × 1052.7315 × 102
F7mean1.6109 × 1031.6182 × 1031.6112 × 1031.6123 × 1031.6020 × 1031.6014 × 1031.6153 × 1031.6064 × 103
std1.5371 × 1011.9973 × 1011.2382 × 1011.4828 × 1013.3114 × 1003.0275 × 1001.2841 × 1018.0832 × 100
F8mean8.0184 × 1034.8229 × 1054.9202 × 1037.0593 × 1039.6886 × 1032.9799 × 1039.8862 × 1042.2035 × 103
std4.6045 × 1038.1404 × 1052.8261 × 1032.7092 × 1031.0047 × 1041.3330 × 1031.5961 × 1050.0000 × 100
F9mean2.3097 × 1032.3196 × 1032.3530 × 1032.3586 × 1032.3272 × 1032.2977 × 1032.6628 × 1032.2959 × 103
std7.0475 × 1001.4821 × 1011.6280 × 1022.7293 × 1011.0165 × 1021.8450 × 1011.1636 × 1020.0000 × 100
F10mean2.7474 × 1032.7872 × 1032.7200 × 1032.7725 × 1032.7324 × 1032.7103 × 1032.8230 × 1032.7074 × 103
std1.0655 × 1012.4706 × 1019.0511 × 1016.6901 × 1007.2111 × 1018.4151 × 1015.1876 × 1018.7430 × 101
Table 3. Results of various algorithms tested on the CEC 2020 benchmark (dim = 20).
ID | Metric | GWO | WOA | PSO | DBO | HSO | SBOA | SCSO | MSCSO
F1mean5.8929 × 1081.3222 × 1097.8858 × 1083.9155 × 1032.6512 × 1074.7188 × 1032.5655 × 10101.0000 × 100
std6.1118 × 1087.3775 × 1081.3243 × 1092.6242 × 1031.9422 × 1074.1637 × 1034.8521 × 1099.5206 × 10−5
F3mean2.7145 × 1034.3357 × 1033.4928 × 1032.6630 × 1033.5114 × 1032.0727 × 1035.9061 × 1031.4430 × 103
std4.9312 × 1024.5476 × 1024.3314 × 1023.9326 × 1027.3049 × 1024.1689 × 1024.5820 × 1022.2443 × 102
F4mean7.8974 × 1029.5346 × 1028.4760 × 1028.4646 × 1028.3263 × 1027.8075 × 1021.0020 × 1037.5010 × 102
std3.3782 × 1014.8186 × 1012.3099 × 1013.1759 × 1013.3357 × 1012.5556 × 1012.5976 × 1011.4143 × 101
F5mean1.9680 × 1032.6253 × 1031.9148 × 1031.9261 × 1031.9659 × 1031.9069 × 1031.0375 × 1051.9066 × 103
std2.5989 × 1021.7436 × 1034.5976 × 1001.2600 × 1012.2520 × 1022.4228 × 1001.1828 × 1053.3092 × 100
F6mean9.6341 × 1052.5858 × 1066.2541 × 1058.9403 × 1047.8430 × 1052.0310 × 1054.5941 × 1062.9448 × 104
std1.0002 × 1062.9534 × 1063.6379 × 1051.2407 × 1055.9909 × 1051.4284 × 1052.4635 × 1063.8392 × 104
F7mean1.8934 × 1031.8934 × 1031.8934 × 1031.8934 × 1031.8934 × 1031.8934 × 1031.8934 × 1031.8934 × 103
std2.0555 × 1022.0555 × 1022.0555 × 1022.0555 × 1022.0555 × 1022.0555 × 1022.0555 × 1022.0555 × 102
F8mean3.4487 × 1051.4770 × 1061.6047 × 1051.8922 × 1045.3959 × 1057.9433 × 1041.6147 × 1064.6928 × 103
std3.3413 × 1051.1702 × 1061.4323 × 1051.2989 × 1044.8905 × 1056.6005 × 1042.0493 × 1060.0000 × 100
F9mean3.6131 × 1035.0224 × 1033.0267 × 1033.5808 × 1032.5605 × 1032.3416 × 1035.2056 × 1032.3012 × 103
std1.5331 × 1031.6407 × 1031.4520 × 1039.4561 × 1028.2208 × 1022.1825 × 1028.0062 × 1020.0000 × 100
F10mean2.8771 × 1033.0204 × 1032.9212 × 1032.9166 × 1032.9946 × 1032.8403 × 1033.1563 × 1032.8432 × 103
std3.9278 × 1018.2473 × 1013.7574 × 1011.1113 × 1015.6073 × 1011.9022 × 1015.9998 × 1011.6167 × 101
Table 4. Results of various algorithms tested on the CEC 2022 benchmark (dim = 10).
ID | Metric | GWO | WOA | PSO | DBO | HSO | SBOA | SCSO | MSCSO
F1mean2.8671 × 1032.9037 × 1032.8742 × 1032.8683 × 1032.8699 × 1032.8660 × 1032.9098 × 1032.8686 × 103
std4.2194 × 1004.4644 × 1011.5493 × 1011.2767 × 1011.0278 × 1014.2531 × 1013.3198 × 1011.0191 × 101
F3mean4.3154 × 1024.6058 × 1024.2541 × 1024.5877 × 1024.2831 × 1024.1564 × 1028.1525 × 1024.0706 × 101
std2.3620 × 1018.5696 × 1013.1096 × 1012.5834 × 1014.1442 × 1012.6194 × 1012.3964 × 1021.2720 × 101
F4mean6.0214 × 1026.3803 × 1026.0236 × 1026.1772 × 1026.0999 × 1026.0137 × 1026.4948 × 1026.0008 × 101
std1.7940 × 1001.5345 × 1011.4289 × 1003.1911 × 1007.3533 × 1002.7991 × 1008.4861 × 1002.1676 × 10−1
F5mean8.1695 × 1028.4500 × 1028.2537 × 1028.3857 × 1028.3750 × 1028.1461 × 1028.4948 × 1028.1278 × 102
std8.4203 × 1001.1152 × 1018.0516 × 1005.7249 × 1009.7664 × 1005.4470 × 1006.8967 × 1004.2904 × 100
F6mean9.1475 × 1021.6642 × 1039.0448 × 1029.1698 × 1021.0035 × 1039.0573 × 1021.4339 × 1039.0162 × 102
std1.9581 × 1015.2120 × 1022.9969 × 1003.9431 × 1019.6855 × 1011.3552 × 1011.9164 × 1023.7690 × 100
F7mean6.3150 × 1035.7413 × 1038.1202 × 1033.5423 × 1034.6882 × 1034.0528 × 1031.2429 × 1071.8306 × 103
std2.4104 × 1032.8985 × 1035.5611 × 1031.7525 × 1032.2258 × 1031.8254 × 1031.8798 × 1072.0542 × 101
F8mean2.0359 × 1032.0931 × 1032.0254 × 1032.0757 × 1032.0401 × 1032.0264 × 1032.0855 × 1032.0127 × 103
std1.6928 × 1013.2845 × 1015.3447 × 1002.5795 × 1012.2018 × 1013.4098 × 1012.1390 × 1011.0825 × 101
F9mean2.2236 × 1032.2390 × 1032.2408 × 1032.2711 × 1032.2372 × 1032.2202 × 1032.2558 × 1032.2149 × 103
std5.9108 × 1001.3191 × 1014.1141 × 1016.3306 × 1012.9356 × 1016.4915 × 1003.6190 × 1011.0321 × 101
F10mean2.5956 × 1032.6064 × 1032.5348 × 1032.6644 × 1032.5409 × 1032.5293 × 1032.6702 × 1032.5342 × 103
std3.8893 × 1015.6602 × 1011.8459 × 1013.8821 × 1012.3533 × 1012.4583 × 10−64.3119 × 1012.6826 × 101
F11mean2.5964 × 1032.6589 × 1032.6036 × 1032.6183 × 1032.5540 × 1032.5638 × 1032.5894 × 1032.5948 × 103
std1.0055 × 1022.8128 × 1021.1969 × 1021.8321 × 1026.6520 × 1015.6502 × 1018.9112 × 1014.3229 × 101
F12mean2.8061 × 1032.9303 × 1032.7759 × 1033.0019 × 1032.8108 × 1032.7280 × 1033.3043 × 1032.6801 × 103
std1.6128 × 1021.6592 × 1021.4893 × 1021.9794 × 1021.8534 × 1021.7386 × 1024.3254 × 1021.2294 × 102
Table 5. Results of various algorithms tested on the CEC 2022 benchmark (dim = 20).
ID | Metric | GWO | WOA | PSO | DBO | HSO | SBOA | SCSO | MSCSO
F1mean1.5457 × 1043.0492 × 1046.4268 × 1037.4374 × 1033.7564 × 1044.8525 × 1034.4496 × 1046.7985 × 103
std5.0929 × 1039.4342 × 1033.0487 × 1033.5674 × 1031.2146 × 1042.3813 × 1031.4691 × 1049.0870 × 103
F3mean5.3056 × 1026.5400 × 1024.7312 × 1025.7409 × 1025.1120 × 1024.6089 × 1021.6204 × 1034.5520 × 102
std6.9008 × 1019.4503 × 1013.0598 × 1017.9325 × 1018.5735 × 1011.6374 × 1012.4155 × 1021.7508 × 101
F4mean6.0835 × 1026.7020 × 1026.1237 × 1026.3582 × 1026.3104 × 1026.0266 × 1026.8393 × 1026.0233 × 102
std5.0813 × 1009.7467 × 1005.2245 × 1004.7814 × 1001.1332 × 1013.5470 × 1009.8693 × 1002.2624 × 100
F5mean8.5880 × 1029.3200 × 1029.1644 × 1029.1788 × 1029.1141 × 1028.4498 × 1029.7314 × 1028.6619 × 102
std2.6090 × 1012.9548 × 1011.9882 × 1011.2440 × 1012.7227 × 1011.5292 × 1011.4501 × 1012.2987 × 101
F6mean1.4299 × 1034.4969 × 1031.0452 × 1031.1917 × 1032.2080 × 1031.0697 × 1033.5235 × 1031.1182 × 103
std3.5374 × 1021.5631 × 1031.2762 × 1023.2612 × 1026.1970 × 1022.2424 × 1025.2128 × 1023.0317 × 102
F7mean3.0093 × 1068.4507 × 1061.7050 × 1064.6478 × 1031.5019 × 1066.1443 × 1035.9957 × 1083.9069 × 103
std1.1673 × 1079.8651 × 1061.7858 × 1063.0753 × 1033.7864 × 1064.9488 × 1034.2815 × 1082.7484 × 103
F8mean2.0823 × 1032.2392 × 1032.1154 × 1032.1343 × 1032.1452 × 1032.0560 × 1032.2313 × 1032.0535 × 103
std4.1511 × 1016.4825 × 1015.5853 × 1013.1842 × 1014.7304 × 1012.0835 × 1015.1271 × 1013.8305 × 101
F9mean2.2645 × 1032.2757 × 1032.2942 × 1032.4887 × 1032.3188 × 1032.2329 × 1032.4819 × 1032.2262 × 103
std5.2438 × 1016.4710 × 1017.4204 × 1011.6135 × 1027.8860 × 1012.1871 × 1011.4296 × 1024.6711 × 100
F10mean2.5234 × 1032.5983 × 1032.4996 × 1032.7210 × 1032.5117 × 1032.4808 × 1032.7814 × 1032.4808 × 103
std2.2815 × 1014.3069 × 1012.2016 × 1018.7218 × 1014.3007 × 1017.3867 × 10−29.4266 × 1013.0433 × 10−9
F11mean3.5816 × 1035.0132 × 1033.9936 × 1033.5019 × 1033.0844 × 1032.8527 × 1035.3306 × 1032.5719 × 103
std8.3489 × 1021.1099 × 1031.0419 × 1037.4004 × 1029.7967 × 1025.0970 × 1021.8521 × 1031.4803 × 102
F12mean3.4442 × 1034.0416 × 1033.3228 × 1033.5884 × 1033.1495 × 1032.9275 × 1037.5410 × 1032.9007 × 103
std2.6046 × 1021.0393 × 1032.2248 × 1021.8335 × 1022.0225 × 1028.8859 × 1016.2908 × 1026.4446 × 101
Table 6. Results for various algorithms on the CEC2020 and CEC2022.
Statistical Results | GWO | WOA | PSO | DBO | HSO | SBOA | SCSO
CEC2020 dim = 10 (+/=/−) | 10/0/0 | 10/0/0 | 8/0/2 | 10/0/0 | 8/0/2 | 5/0/5 | 10/0/0
CEC2020 dim = 20 (+/=/−) | 10/0/0 | 10/0/0 | 9/0/1 | 10/0/0 | 9/0/1 | 7/0/3 | 10/0/0
CEC2022 dim = 10 (+/=/−) | 11/0/1 | 11/0/1 | 12/0/0 | 12/0/0 | 9/0/3 | 8/0/4 | 12/0/0
CEC2022 dim = 20 (+/=/−) | 11/0/1 | 12/0/0 | 9/0/3 | 10/0/2 | 12/0/0 | 6/0/6 | 12/0/0
Table 7. Friedman mean rank test result.
Algorithms | CEC2020 dim = 10 (M.R / T.R) | CEC2020 dim = 20 (M.R / T.R) | CEC2022 dim = 10 (M.R / T.R) | CEC2022 dim = 20 (M.R / T.R)
GWO | 3.90 / 3 | 4.40 / 4 | 4.00 / 3 | 3.92 / 3
WOA | 6.80 / 7 | 7.10 / 7 | 6.58 / 7 | 6.83 / 7
PSO | 4.50 / 5 | 4.40 / 4 | 4.33 / 4 | 4.17 / 4
HSO | 5.50 / 6 | 4.20 / 3 | 5.42 / 6 | 5.17 / 6
DBO | 4.30 / 4 | 4.80 / 6 | 4.33 / 4 | 4.83 / 5
SBOA | 1.70 / 2 | 2.00 / 2 | 1.83 / 2 | 1.75 / 2
SCSO | 7.90 / 8 | 7.90 / 8 | 7.75 / 8 | 7.83 / 8
MSCSO | 1.40 / 1 | 1.20 / 1 | 1.75 / 1 | 1.50 / 1
Table 8. MSCSO results of multilevel threshold segmentation with Otsu as the objective function.
Images | TH = 4 | TH = 6 | TH = 8 | TH = 10
(For each benchmark image — baboon, camera, girl, lena, and terrace — the table shows the corresponding segmentation results at each threshold count; the image panels are not reproducible in this text version.)
Table 9. Average and Std of optimal fitness values based on Otsu's objective function.
Image | Threshold | Metric | GWO | WOA | PSO | HSO | DBO | SBOA | SCSO | MSCSO
baboon4Mean3.2983 × 1033.2983 × 1033.2986 × 1033.2711 × 1033.2986 × 1033.2942 × 1033.2684 × 1033.3008 × 103
Std3.1427 × 1004.2060 × 1001.5065 × 1001.6118 × 1002.5430 × 1007.7727 × 1002.5178 × 1012.4484 × 10−2
6Mean3.3673 × 1033.3676 × 1033.3650 × 1033.3366 × 1033.3651 × 1033.3588 × 1033.3425 × 1033.3715 × 103
Std4.5329 × 1002.9040 × 1003.7410 × 1001.4136 × 1015.0908 × 1009.4753 × 1001.9185 × 1011.2680 × 100
8Mean3.3931 × 1033.3924 × 1033.3921 × 1033.3724 × 1033.3876 × 1033.3856 × 1033.3784 × 1033.3993 × 103
Std5.5682 × 1007.2975 × 1003.4047 × 1009.2411 × 1005.6281 × 1007.7790 × 1001.2990 × 1011.4460 × 100
10Mean3.4077 × 1033.4072 × 1033.4061 × 1033.3886 × 1033.4017 × 1033.3999 × 1033.3983 × 1033.4128 × 103
Std4.0462 × 1004.6348 × 1002.9208 × 1007.2195 × 1005.2162 × 1006.6549 × 1008.4710 × 1002.4858 × 100
camera4Mean4.5976 × 1034.5975 × 1034.5983 × 1034.5825 × 1034.5980 × 1034.5971 × 1034.5896 × 1034.5996 × 103
Std1.9392 × 1002.8636 × 1001.6086 × 1009.2280 × 1001.7868 × 1003.7272 × 1001.0917 × 1011.2750 × 100
6Mean4.6447 × 1034.6460 × 1034.6446 × 1034.6194 × 1034.6417 × 1034.6395 × 1034.6262 × 1034.6503 × 103
Std6.2083 × 1004.5761 × 1005.0428 × 1008.8192 × 1007.0480 × 1008.6832 × 1001.2539 × 1012.6629 × 100
8Mean4.6615 × 1034.6615 × 1034.6630 × 1034.6392 × 1034.6567 × 1034.6582 × 1034.6431 × 1034.6682 × 103
Std4.8112 × 1005.2227 × 1002.7445 × 1007.5798 × 1006.3475 × 1006.4824 × 1009.5342 × 1001.8004 × 100
10Mean4.6728 × 1034.6731 × 1034.6731 × 1034.6534 × 1034.6677 × 1034.6703 × 1034.6610 × 1034.6784 × 103
Std3.6036 × 1004.4533 × 1002.5797 × 1007.9311 × 1004.3540 × 1005.7830 × 1009.2597 × 1001.7677 × 100
girl4Mean2.5316 × 1032.5318 × 1032.5324 × 1032.5087 × 1032.5331 × 1032.5267 × 1032.5050 × 1032.5339 × 103
Std4.4695 × 1003.1792 × 1001.1285 × 1001.3877 × 1011.4090 × 1008.9770 × 1002.1861 × 1014.0686 × 10−2
6Mean2.5819 × 1032.5789 × 1032.5807 × 1032.5543 × 1032.5799 × 1032.5760 × 1032.5622 × 1032.5842 × 103
Std2.7033 × 1008.6018 × 1001.7195 × 1001.3276 × 1013.9981 × 1006.7079 × 1001.2084 × 1013.4014 × 10−1
8Mean2.6012 × 1032.5990 × 1032.5995 × 1032.5758 × 1032.5982 × 1032.5947 × 1032.5855 × 1032.6052 × 103
Std3.6377 × 1005.2335 × 1002.5113 × 1001.0593 × 1014.3676 × 1008.8709 × 1001.2574 × 1011.7622 × 100
10Mean2.6123 × 1032.6105 × 1032.6097 × 1032.5048 × 1032.6078 × 1032.6072 × 1032.5960 × 1032.6158 × 103
Std3.3637 × 1003.2344 × 1002.1345 × 1004.7327 × 1023.2476 × 1004.1583 × 1001.4214 × 1011.5477 × 100
lena4Mean3.6843 × 1033.6836 × 1033.6837 × 1033.6320 × 1033.6838 × 1033.6780 × 1033.6418 × 1033.6860 × 103
Std1.8930 × 1003.5328 × 1001.2814 × 1002.3122 × 1014.5884 × 1008.2677 × 1003.1509 × 1014.3001 × 10−2
6Mean3.7566 × 1033.7579 × 1033.7590 × 1033.7201 × 1033.7553 × 1033.7490 × 1033.7329 × 1033.7652 × 103
Std7.6457 × 1006.7404 × 1003.6049 × 1001.7075 × 1018.9648 × 1001.0357 × 1011.4175 × 1011.6612 × 100
8Mean3.7864 × 1033.7874 × 1033.7869 × 1033.7565 × 1033.7831 × 1033.7798 × 1033.7750 × 1033.7943 × 103
Std6.6123 × 1007.2506 × 1002.4840 × 1001.2181 × 1016.1400 × 1008.9102 × 1009.8967 × 1001.3204 × 100
10Mean3.8034 × 1033.8046 × 1033.8021 × 1033.7827 × 1033.7981 × 1033.7979 × 1033.7925 × 1033.8106 × 103
Std5.5708 × 1004.3305 × 1002.4777 × 1007.7391 × 1006.9951 × 1005.4004 × 1009.9263 × 1002.3488 × 100
terrace4Mean2.6389 × 1032.6370 × 1032.6382 × 1032.5905 × 1032.6388 × 1032.6318 × 1032.6106 × 1032.6401 × 103
Std1.6459 × 1004.5132 × 1001.2851 × 1002.2708 × 1011.5242 × 1006.1413 × 1002.7287 × 1011.3138 × 10−1
6Mean2.6990 × 1032.6975 × 1032.6959 × 1032.6619 × 1032.6961 × 1032.6885 × 1032.6835 × 1032.7018 × 103
Std3.8927 × 1006.2845 × 1002.7906 × 1001.4787 × 1014.0348 × 1001.0536 × 1011.4741 × 1017.0304 × 10−1
8Mean2.7250 × 1032.7243 × 1032.7205 × 1032.6903 × 1032.7191 × 1032.7138 × 1032.7133 × 1032.7280 × 103
Std3.6809 × 1004.1210 × 1003.2517 × 1001.1711 × 1015.3826 × 1007.2941 × 1009.9086 × 1001.2013 × 100
10Mean2.7369 × 1032.7377 × 1032.7331 × 1032.7066 × 1032.7314 × 1032.7299 × 1032.7276 × 1032.7406 × 103
Std3.2300 × 1003.0153 × 1003.4795 × 1001.3629 × 1014.7525 × 1004.8287 × 1006.5568 × 1001.9212 × 100
Friedman-Rank | 2.96 | 3.14 | 4.43 | 7.80 | 4.46 | 5.68 | 6.19 | 1.34
Final-Rank | 2 | 3 | 4 | 8 | 5 | 6 | 7 | 1
Table 10. Average and Std of PSNR for all images using Otsu's thresholding.
Image | Threshold | Metric | GWO | WOA | PSO | HSO | DBO | SBOA | SCSO | MSCSO
baboon2Mean18.1493 18.094318.171717.223818.165718.098918.273418.2262
Std2.4632 × 10−13.1731 × 10−12.7347 × 10−18.3640 × 10−12.7588 × 10−16.0652 × 10−13.9427 × 10−14.9721 × 10−2
4Mean21.0541 21.2598 20.9815 19.8701 21.0987 20.8654 20.7941 21.4333
Std5.4823 × 10−15.7954 × 10−15.8647 × 10−11.2208 × 1006.5979 × 10−17.2557 × 10−16.2612 × 10−12.7845 × 10−1
6Mean22.8148 23.1583 23.0259 21.5881 22.6541 22.5612 22.4388 23.4989
Std8.2707 × 10−16.9992 × 10−17.1504 × 10−11.0152 × 1008.4882 × 10−18.6153 × 10−16.3636 × 10−14.5074 × 10−1
8Mean24.4564 24.7115 24.2802 22.7812 23.9586 23.7621 23.9450 25.1252
Std7.4490 × 10−17.4582 × 10−17.5730 × 10−18.4034 × 10−18.6857 × 10−19.8084 × 10−17.9694 × 10−14.6661 × 10−1
camera2Mean18.2597 18.4130 18.5500 18.0111 18.4525 18.4869 18.6410 19.1343
Std6.3266 × 10−18.6138 × 10−18.9226 × 10−18.9386 × 10−19.7644 × 10−19.6432 × 10−17.9624 × 10−19.3539 × 10−1
4Mean21.2387 21.1183 21.3751 19.9234 21.3509 20.9897 21.7080 21.7106
Std9.3205 × 10−17.4111 × 10−17.4068 × 10−11.3281 × 1001.0164 × 1001.3884 × 1008.9964 × 10−15.7799 × 10−1
6Mean22.6372 22.5434 22.8017 20.8691 22.4460 21.7798 23.0151 23.2635
Std6.6296 × 10−17.9386 × 10−16.1803 × 10−11.7336 × 1001.0734 × 1001.2566 × 1001.0280 × 1005.3965 × 10−1
8Mean23.5856 23.5680 23.7264 22.4513 23.3367 23.7922 24.412124.2878
Std8.8366 × 10−15.3047 × 10−16.4547 × 10−11.6745 × 1009.6464 × 10−19.1741 × 10−11.2476 × 1007.8722 × 10−1
girl2Mean21.8882 22.117521.8843 20.9221 22.0069 21.6510 20.8480 21.9640
Std3.2139 × 10−13.2828 × 10−13.4111 × 10−18.5984 × 10−11.8573 × 10−17.8795 × 10−18.1443 × 10−16.5585 × 10−2
4Mean24.2736 24.0871 24.1995 22.9906 24.1209 23.9443 23.4992 24.5621
Std3.8378 × 10−17.4912 × 10−14.5558 × 10−11.0190 × 1005.6258 × 10−16.4298 × 10−17.3040 × 10−11.8616 × 10−1
6Mean26.2464 25.6922 25.8408 24.2435 25.7716 25.4427 24.9562 26.3249
Std2.8721 × 10−17.4048 × 10−15.8331 × 10−19.8457 × 10−16.4598 × 10−19.6108 × 10−19.2155 × 10−13.0305 × 10−1
8Mean27.3696 27.2215 27.0257 25.1686 26.9006 26.6982 26.2697 27.9327
Std5.5225 × 10−15.2545 × 10−15.0270 × 10−11.9271 × 1007.2372 × 10−17.1855 × 10−11.1809 × 1002.9163 × 10−1
lena2Mean19.0558 19.0656 19.0498 18.2694 19.0497 18.9810 18.5814 19.1231
Std6.2836 × 10−21.3902 × 10−17.5194 × 10−24.1615 × 10−11.1244 × 10−11.8626 × 10−13.9381 × 10−13.3234 × 10−2
4Mean21.5308 21.4246 21.5493 20.2585 21.4757 21.1892 20.9956 21.8206
Std3.0723 × 10−13.5129 × 10−11.8371 × 10−15.6273 × 10−13.1467 × 10−14.8674 × 10−14.8229 × 10−17.7365 × 10−2
6Mean23.0169 23.1588 23.0158 21.5529 22.9357 22.5956 22.7203 23.5328
Std4.6594 × 10−15.4389 × 10−12.0905 × 10−16.4711 × 10−15.2028 × 10−14.7264 × 10−15.1071 × 10−13.2119 × 10−1
8Mean24.3093 24.5339 24.1683 23.0027 23.9436 23.8923 23.9362 24.9913
Std5.6197 × 10−15.2161 × 10−14.9449 × 10−15.8358 × 10−17.1771 × 10−16.4628 × 10−16.5717 × 10−13.9356 × 10−1
terrace2Mean21.4377 21.3895 21.4309 20.2474 21.4338 21.2170 20.7574 21.4770
Std6.5134 × 10−21.5414 × 10−15.5599 × 10−25.3123 × 10−16.3804 × 10−22.0603 × 10−16.4343 × 10−11.6228 × 10−2
4Mean23.8533 23.7831 23.6730 22.2001 23.6964 23.2841 23.3043 23.9996
Std1.8198 × 10−13.2268 × 10−11.7236 × 10−16.2391 × 10−12.3722 × 10−15.2395 × 10−16.3563 × 10−14.4719 × 10−2
6Mean25.5876 25.5256 25.2076 23.3454 25.1551 24.7918 25.0352 25.8201
Std2.6546 × 10−13.3328 × 10−13.0039 × 10−16.1125 × 10−14.4735 × 10−15.5392 × 10−16.2222 × 10−11.0151 × 10−1
8Mean26.7888 26.8566 26.3455 24.2945 26.2112 26.0471 26.1963 27.1649
Std3.8171 × 10−13.4331 × 10−13.3763 × 10−18.1477 × 10−14.8694 × 10−14.4162 × 10−15.4517 × 10−12.9713 × 10−1
Friedman-Rank | 3.13 | 3.15 | 4.46 | 7.79 | 4.42 | 5.75 | 5.59 | 1.72
Final-Rank | 2 | 3 | 5 | 8 | 4 | 7 | 6 | 1
Table 11. Average and Std of FSIM for all images using Otsu's thresholding.
Image | Threshold | Metric | GWO | WOA | PSO | HSO | DBO | SBOA | SCSO | MSCSO
baboon2Mean0.8193 0.8168 0.8186 0.7970 0.8190 0.8195 0.83100.8200
Std7.9078 × 10−39.0823 × 10−37.2026 × 10−32.6721 × 10−26.3435 × 10−31.5335 × 10−21.2046 × 10−21.1694 × 10−3
4Mean0.8795 0.8857 0.8795 0.8554 0.8811 0.8775 0.8884 0.8885
Std1.3059 × 10−21.6028 × 10−21.5204 × 10−23.0226 × 10−21.7986 × 10−21.7706 × 10−21.6117 × 10−27.3631 × 10−3
6Mean0.9065 0.9149 0.9118 0.8875 0.9059 0.9037 0.9158 0.9183
Std1.7822 × 10−21.7320 × 10−21.6011 × 10−22.3706 × 10−21.8938 × 10−21.8351 × 10−27.7040 × 10−39.5189 × 10−3
8Mean0.9289 0.9324 0.9255 0.9044 0.9211 0.9178 0.9348 0.9378
Std1.4688 × 10−21.4702 × 10−21.5180 × 10−21.7288 × 10−21.7722 × 10−21.9880 × 10−21.1638 × 10−21.0115 × 10−2
camera2Mean0.83620.8322 0.8322 0.8237 0.8306 0.8296 0.8281 0.8350
Std5.5615 × 10−37.4732 × 10−38.3852 × 10−31.3337 × 10−29.2505 × 10−31.1045 × 10−21.1249 × 10−26.4365 × 10−3
4Mean0.8668 0.8654 0.8687 0.8495 0.8688 0.8672 0.8676 0.8764
Std1.1986 × 10−29.5460 × 10−38.1242 × 10−31.4000 × 10−29.5678 × 10−31.3377 × 10−21.1645 × 10−23.6356 × 10−3
6Mean0.8888 0.8858 0.8919 0.8661 0.8851 0.8814 0.8852 0.9012
Std1.2282 × 10−21.3215 × 10−26.7373 × 10−31.8027 × 10−21.2012 × 10−21.3511 × 10−21.4638 × 10−24.7442 × 10−3
8Mean0.9035 0.9034 0.9047 0.8792 0.8968 0.9038 0.9068 0.9145
Std8.1086 × 10−37.9409 × 10−37.3303 × 10−32.0437 × 10−21.1984 × 10−21.0948 × 10−21.7344 × 10−26.5259 × 10−3
girl2Mean0.8266 0.8273 0.8277 0.8015 0.8290 0.8193 0.8222 0.8295
Std8.9031 × 10−33.2786 × 10−34.5167 × 10−31.5506 × 10−23.5448 × 10−31.3906 × 10−21.1189 × 10−21.2128 × 10−3
4Mean0.8678 0.8657 0.8651 0.8375 0.8688 0.8626 0.8613 0.8707
Std6.2928 × 10−38.7873 × 10−35.2682 × 10−31.6192 × 10−27.5647 × 10−31.0471 × 10−29.9539 × 10−35.6149 × 10−3
6Mean0.8977 0.8939 0.8927 0.8568 0.8934 0.8875 0.8866 0.9031
Std5.9864 × 10−37.6644 × 10−36.0988 × 10−31.5806 × 10−29.3914 × 10−31.3337 × 10−29.9004 × 10−32.2408 × 10−3
8Mean0.9172 0.9128 0.9093 0.8757 0.9092 0.9078 0.9033 0.9253
Std7.2324 × 10−39.3164 × 10−35.6939 × 10−32.9587 × 10−27.9273 × 10−39.7624 × 10−31.2396 × 10−24.1178 × 10−3
lena2Mean0.78080.7789 0.7805 0.7579 0.7797 0.7772 0.7772 0.7800
Std2.5648 × 10−33.8374 × 10−33.2323 × 10−31.3570 × 10−23.2034 × 10−35.3796 × 10−37.8314 × 10−38.5238 × 10−4
4Mean0.8422 0.8389 0.8400 0.8117 0.8396 0.8318 0.8317 0.8477
Std1.0526 × 10−21.0287 × 10−27.5120 × 10−31.4147 × 10−29.8559 × 10−31.0996 × 10−21.2691 × 10−24.2731 × 10−3
6Mean0.8702 0.8707 0.8684 0.8382 0.8666 0.8595 0.8681 0.8798
Std1.0823 × 10−21.0910 × 10−26.7903 × 10−31.6313 × 10−21.0742 × 10−21.2991 × 10−21.1450 × 10−26.9970 × 10−3
8Mean0.8896 0.8928 0.8867 0.8636 0.8824 0.8840 0.8869 0.9017
Std1.0664 × 10−21.0166 × 10−29.0087 × 10−31.3550 × 10−21.1613 × 10−21.1906 × 10−21.1436 × 10−26.5633 × 10−3
terrace2Mean0.8439 0.8416 0.8420 0.7979 0.8440 0.8380 0.8296 0.8446
Std2.7842 × 10−36.8370 × 10−33.9368 × 10−31.7583 × 10−33.6297 × 10−38.2088 × 10−31.8775 × 10−21.2077 × 10−3
4Mean0.9007 0.9007 0.8956 0.8569 0.8995 0.8885 0.8899 0.9031
Std6.7720 × 10−38.4686 × 10−38.2243 × 10−31.8544 × 10−25.9973 × 10−31.2359 × 10−21.3325 × 10−23.0479 × 10−3
6Mean0.9286 0.9280 0.9228 0.8833 0.9218 0.9113 0.9207 0.9345
Std9.1006 × 10−38.0817 × 10−38.1806 × 10−31.7819 × 10−21.0646 × 10−21.3426 × 10−21.0509 × 10−25.6119 × 10−3
8Mean0.9424 0.9444 0.9376 0.9026 0.9357 0.9294 0.9367 0.9505
Std7.0904 × 10−38.2804 × 10−39.3181 × 10−31.9703 × 10−21.1057 × 10−21.1628 × 10−29.7298 × 10−35.3017 × 10−3
Friedman-Rank | 3.40 | 3.36 | 4.44 | 7.82 | 4.17 | 5.48 | 5.00 | 2.34
Final-Rank | 3 | 2 | 5 | 8 | 4 | 7 | 6 | 1
Table 12. Average and Std of SSIM for all images using Otsu's thresholding.
Image | Threshold | Metric | GWO | WOA | PSO | HSO | DBO | SBOA | SCSO | MSCSO
baboon2Mean0.7215 0.7191 0.7221 0.6797 0.7236 0.7212 0.74960.7250
Std1.2018 × 10−21.4208 × 10−21.3165 × 10−24.3808 × 10−21.3463 × 10−22.9537 × 10−21.8746 × 10−22.2735 × 10−3
4Mean0.8226 0.8320 0.8182 0.7807 0.8265 0.8161 0.84650.8373
Std2.2007 × 10−22.1130 × 10−22.3925 × 10−25.1256 × 10−22.6923 × 10−22.8652 × 10−21.9259 × 10−21.2086 × 10−2
6Mean0.8681 0.8806 0.8727 0.8358 0.8643 0.8610 0.8835 0.8901
Std2.6426 × 10−21.9415 × 10−22.0808 × 10−23.1318 × 10−22.8367 × 10−22.7604 × 10−21.5313 × 10−29.4263 × 10−3
8Mean0.8989 0.9067 0.8925 0.8636 0.8906 0.8819 0.9111 0.9158
Std1.9733 × 10−21.6164 × 10−21.8861 × 10−22.5397 × 10−22.1829 × 10−22.6449 × 10−21.3117 × 10−21.1468 × 10−2
camera2Mean0.6936 0.7007 0.7041 0.6943 0.7020 0.7029 0.75240.7100
Std2.9273 × 10−23.4456 × 10−23.8064 × 10−24.8552 × 10−24.1184 × 10−24.1970 × 10−23.4306 × 10−23.7574 × 10−2
4Mean0.7790 0.7780 0.7843 0.7402 0.7845 0.7763 0.81560.7989
Std3.3822 × 10−22.4747 × 10−23.0084 × 10−25.9913 × 10−23.4600 × 10−24.6916 × 10−23.0387 × 10−21.7071 × 10−2
6Mean0.8153 0.8148 0.8233 0.7736 0.8111 0.7955 0.84280.8336
Std2.5998 × 10−22.5184 × 10−22.1201 × 10−25.9942 × 10−23.4345 × 10−23.7676 × 10−23.1287 × 10−21.5109 × 10−2
8Mean0.8406 0.8396 0.8388 0.8088 0.8321 0.8431 0.8564 0.8673
Std2.3883 × 10−21.5080 × 10−21.8847 × 10−25.3411 × 10−22.9464 × 10−23.0283 × 10−21.8401 × 10−22.9894 × 10−2
girl2Mean0.7112 0.7117 0.7139 0.6713 0.7150 0.7006 0.72370.7142
Std1.2799 × 10−24.6222 × 10−35.2230 × 10−32.9352 × 10−24.4213 × 10−32.4665 × 10−21.4235 × 10−21.1010 × 10−3
4Mean0.7567 0.7630 0.7499 0.7218 0.7656 0.7535 0.77100.7633
Std1.7069 × 10−21.8842 × 10−21.2658 × 10−22.9799 × 10−21.9040 × 10−22.0523 × 10−21.7734 × 10−21.7245 × 10−2
6Mean0.7965 0.8028 0.7888 0.7438 0.8000 0.7845 0.81920.8061
Std1.1988 × 10−21.6167 × 10−21.7080 × 10−22.8151 × 10−22.1020 × 10−22.2669 × 10−21.5650 × 10−21.0810 × 10−2
8Mean0.8292 0.8245 0.8134 0.7683 0.8289 0.8178 0.8384 0.8393
Std1.5177 × 10−21.9972 × 10−21.4413 × 10−24.2396 × 10−22.0191 × 10−22.2467 × 10−21.8135 × 10−21.9169 × 10−2
lena2Mean0.6750 0.6741 0.6748 0.6550 0.6740 0.6729 0.69180.6756
Std2.0541 × 10−32.8705 × 10−32.2106 × 10−31.8605 × 10−25.4465 × 10−36.6859 × 10−31.4727 × 10−26.8122 × 10−4
4Mean0.7519 0.7458 0.7463 0.7130 0.7516 0.7348 0.76450.7558
Std1.8542 × 10−21.4344 × 10−21.3653 × 10−22.2234 × 10−21.2706 × 10−21.8211 × 10−22.0902 × 10−27.3688 × 10−3
6Mean0.7895 0.7973 0.7831 0.7575 0.7923 0.7748 0.80980.8021
Std1.7543 × 10−22.1054 × 10−21.0209 × 10−23.1114 × 10−22.4139 × 10−22.2130 × 10−22.0492 × 10−21.8228 × 10−2
8Mean0.8234 0.8306 0.8161 0.7927 0.8134 0.8158 0.84220.8391
Std2.1206 × 10−22.2532 × 10−22.4364 × 10−22.6356 × 10−22.5657 × 10−22.4842 × 10−21.7016 × 10−21.9242 × 10−2
terrace2Mean0.7180 0.7157 0.7151 0.6502 0.7178 0.7101 0.72660.7189
Std6.1246 × 10−39.7951 × 10−38.1491 × 10−33.5896 × 10−29.2031 × 10−31.9561 × 10−21.8131 × 10−22.6326 × 10−3
4Mean0.8000 0.8030 0.7944 0.7389 0.7999 0.7831 0.81590.8031
Std1.1367 × 10−21.4760 × 10−21.6856 × 10−23.1845 × 10−21.5835 × 10−22.5196 × 10−21.3461 × 10−27.0059 × 10−3
6Mean0.8473 0.8480 0.8397 0.7762 0.8406 0.8223 0.8549 0.8650
Std1.7424 × 10−21.5870 × 10−21.8075 × 10−22.7075 × 10−22.0203 × 10−22.7091 × 10−21.3566 × 10−21.3493 × 10−2
8Mean0.8732 0.8800 0.8610 0.8087 0.8680 0.8506 0.89190.8804
Std1.6628 × 10−21.7667 × 10−21.9411 × 10−23.8607 × 10−22.5826 × 10−22.4051 × 10−21.4321 × 10−21.6522 × 10−2
Friedman-Rank | 4.03 | 3.95 | 4.93 | 7.69 | 4.27 | 5.46 | 3.51 | 2.16
Final-Rank | 4 | 3 | 6 | 8 | 5 | 7 | 2 | 1