Article

ACPOA: An Adaptive Cooperative Pelican Optimization Algorithm for Global Optimization and Multilevel Thresholding Image Segmentation

South Korea College of Design, Hanyang University, Ansan 15588, Republic of Korea
*
Author to whom correspondence should be addressed.
Biomimetics 2025, 10(9), 596; https://doi.org/10.3390/biomimetics10090596
Submission received: 12 August 2025 / Revised: 1 September 2025 / Accepted: 2 September 2025 / Published: 6 September 2025

Abstract

Multi-threshold image segmentation plays an irreplaceable role in extracting discriminative structural information from complex images. It is one of the core technologies for achieving accurate target detection and regional analysis, and its segmentation accuracy directly affects the analysis quality and decision reliability in key fields such as medical imaging, remote sensing interpretation, and industrial inspection. However, most existing image segmentation algorithms suffer from slow convergence speeds and low solution accuracy. Therefore, this paper proposes an Adaptive Cooperative Pelican Optimization Algorithm (ACPOA), an improved version of the Pelican Optimization Algorithm (POA), and applies it to global optimization and multilevel threshold image segmentation tasks. ACPOA integrates three innovative strategies: the elite pool mutation strategy guides the population toward high-quality regions by constructing an elite pool composed of the three individuals with the best fitness, effectively preventing the premature loss of population diversity; the adaptive cooperative mechanism enhances search efficiency in high-dimensional spaces by dynamically allocating subgroups and dimensions and performing specialized updates to achieve division of labor and global information sharing; and the hybrid boundary handling technique adopts a probabilistic hybrid approach to deal with boundary violations, balancing exploitation, exploration, and diversity while retaining more useful search information. Comparative experiments with eight advanced algorithms on the CEC2017 and CEC2022 benchmark test suites validate the superior optimization performance of ACPOA. Moreover, when applied to multilevel threshold image segmentation tasks, ACPOA demonstrates better accuracy, stability, and efficiency in solving practical problems, providing an effective solution for complex optimization challenges.

1. Introduction

Image segmentation is a technique that decomposes a digital image into multiple subregions with specific features and identifies key targets. It plays a fundamental role in the fields of computer vision and pattern recognition [1,2]. With technological advancements, this technique has found significant applications in various domains such as medical image analysis, satellite remote sensing, autonomous driving, precision agriculture, and aerospace engineering. Currently, mainstream image segmentation methods can be broadly categorized into four types: threshold-based segmentation techniques, region-growing methods, clustering-based algorithms, and deep learning-based semantic segmentation networks [3,4,5].
Among these techniques, threshold segmentation has become one of the most widely adopted foundational methods in both industry and academia due to its simplicity, high computational efficiency, and reliable results [6]. Depending on specific application needs, threshold segmentation can be further divided into two forms: single-threshold segmentation, which separates foreground from background using a single threshold, and multilevel threshold segmentation, which requires determining a set of thresholds to achieve fine-grained image partitioning. Since the choice of thresholds directly affects segmentation quality and ultimately impacts the accuracy of subsequent analysis and recognition, efficiently obtaining the optimal thresholds has become a core research focus in this area. At present, Otsu’s maximum between-class variance method and Kapur’s maximum entropy method are the two most commonly used criteria for threshold selection [7,8,9].
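Otsu's criterion mentioned above selects thresholds that maximize the between-class variance of the gray-level histogram. A minimal NumPy sketch of the multilevel objective (function name and histogram conventions are illustrative, not taken from the paper):

```python
import numpy as np

def otsu_between_class_variance(hist, thresholds):
    """Between-class variance of a 256-bin grayscale histogram for an
    ascending list of thresholds (Otsu's criterion, to be maximized).
    Illustrative sketch; class k covers gray levels edges[k]..edges[k+1]-1."""
    p = hist / hist.sum()                       # normalized histogram
    levels = np.arange(len(p))
    edges = [0] + list(thresholds) + [len(p)]   # class boundaries
    mu_total = (p * levels).sum()               # global mean gray level
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                      # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w  # class mean
            var += w * (mu - mu_total) ** 2
    return var
```

A metaheuristic such as ACPOA would then search the threshold vector that maximizes this value, which is exactly the combinatorial problem described below.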
The multilevel threshold segmentation problem is essentially a complex combinatorial optimization task. In practical applications, achieving more refined segmentation often requires setting multiple thresholds, which poses significant challenges for traditional optimization methods in terms of both computational efficiency and solution quality [10,11]. In contrast, metaheuristic algorithms inspired by the collective intelligence behaviors observed in nature have demonstrated notable advantages. These algorithms can provide high-quality solutions to multilevel threshold segmentation problems within a reasonable amount of time.
Bionics, as a bridge connecting the wisdom of natural organisms with engineering innovation, provides abundant inspiration for the design of metaheuristic algorithms. In nature, the cooperative, adaptive, and optimizing capabilities that biological populations have developed through long-term evolution serve as crucial sources for advancing algorithmic performance. For instance, ant colonies efficiently search for food paths through pheromone communication, inspiring the Ant Colony Optimization algorithm [12]; similarly, the hierarchical cooperation of gray wolves during prey hunting has given rise to the Grey Wolf Optimizer [13]. These natural behaviors closely align with the core requirements of complex optimization problems, thereby offering an effective tool for their solution.
In recent years, various bio-inspired intelligent algorithms have been successfully applied in this domain. These include the Particle Swarm Optimization (PSO) algorithm inspired by the foraging behavior of bird and fish swarms [14]; the Ant Colony Optimization (ACO) algorithm based on the social foraging behavior of ants [12]; the Grey Wolf Optimizer (GWO) that mimics the social hierarchy and hunting strategy of grey wolves [13]; the Whale Optimization Algorithm (WOA), inspired by the hunting behavior of humpback whales involving search, encircling, and spiral updating phases [15]; the Dung Beetle Optimizer (DBO), which simulates behaviors such as rolling, dancing, foraging, stealing, and breeding in dung beetles [16]; the Secretary Bird Optimization Algorithm (SBOA), based on the survival behaviors of secretary birds [17]; and the Crested Porcupine Optimizer (CPO), inspired by various defense strategies of crested porcupines [18]. By simulating the intelligent behaviors of different biological populations, these algorithms have offered novel and effective approaches to solving the multilevel threshold image segmentation problem.
Bhandari et al. improved the Artificial Bee Colony (ABC) algorithm and applied it to satellite image segmentation, using Kapur, Otsu, and Tsallis functions as objective criteria to determine the optimal thresholds with ABC [19]. To enhance the real-time performance of image segmentation, Huang et al. introduced the Fruit Fly Optimization Algorithm (FOA) into Otsu-based segmentation, developing the FOA-OTSU segmentation algorithm [3]. Lin Lan proposed a novel improved African Vulture Optimization Algorithm, OLAVOA, for multilevel threshold segmentation in medical imaging [20]. Mohamed employed a hybrid approach based on the Whale Optimization Algorithm (WOA) and Moth-Flame Optimization (MFO) to determine the optimal multilevel thresholds for image segmentation tasks [21]. Abdalla introduced a WOA-based method for liver segmentation in MRI images, which extracts different clusters from abdominal images to facilitate the segmentation process [22]. Mookiah et al. proposed an Enhanced Sine Cosine Algorithm (ESCA) for determining optimal thresholds in color image segmentation [23]. Aranguren et al. developed a multilevel threshold segmentation method based on LSHADE, specifically for MRI brain imaging, and tested the proposed method using three sets of reference images [24]. Baby Resma presented a novel multilevel thresholding algorithm using the metaheuristic Krill Herd Optimization (KHO) algorithm to address the image segmentation problem. The optimal thresholds were determined by maximizing the Kapur or Otsu objective function using the KHO technique, which significantly reduced the computational time required to obtain optimal multilevel thresholds [25]. Dikshit Chauhan proposed an Artificial Electric Field Optimization (AEFO) algorithm with crossover for multilevel image segmentation [26]. Fan et al. [27] addressed the shortcomings of the traditional Moth-Flame Optimization (MFO)-based Otsu image segmentation algorithm, such as low segmentation accuracy, slow convergence, and susceptibility to local optima, by proposing a fractional-order MFO-based Otsu segmentation algorithm. The approach leverages the memory and heritability properties of fractional-order calculus to control moth position updates. An adaptive fractional order adjusts the update mechanism based on the moth's position, thereby improving convergence speed. The improved MFO algorithm is combined with the two-dimensional Otsu method, employing its discrete matrix to optimize the objective function [27].
As one of the most widely applied threshold-based methods for fine-grained image partitioning, multi-threshold segmentation is essentially a complex combinatorial optimization problem. With the increase in the number of thresholds (e.g., eight-level thresholding), the search space expands exponentially, rendering traditional optimization approaches (such as exhaustive search) computationally infeasible. Metaheuristic algorithms have thus become the mainstream solution, yet existing methods still face trade-offs: Particle Swarm Optimization (PSO) and Grey Wolf Optimizer (GWO) exhibit strong global exploration capability but weak local exploitation in high-dimensional threshold spaces [14]; Whale Optimization Algorithm (WOA) and Dung Beetle Optimizer (DBO) improve stability but suffer from slow convergence when handling boundary optima [15].
The Pelican Optimization Algorithm (POA) demonstrates unique advantages in multi-threshold segmentation: its simple two-phase (exploration–exploitation) framework naturally aligns with the "threshold searching–refinement" process, while its limited number of tunable parameters reduces the complexity of threshold calibration [28]. However, POA's inherent limitations—slow convergence and low boundary accuracy—directly constrain its segmentation performance. Consequently, combining multi-threshold segmentation with POA emerges as a natural research focus: leveraging POA's simplicity while addressing its drawbacks can effectively bridge the gap in efficient and accurate threshold optimization for complex images.
However, POA also has certain limitations when addressing complex optimization tasks, including slow convergence speed, low convergence accuracy, and a tendency to get trapped in local optima [5,29,30], which leads to low precision when applied to multilevel threshold image segmentation. Accordingly, many researchers have proposed improvements to POA. For example, SeyedDavoud introduced an Improved Pelican Optimization Algorithm (IPOA) that integrates three motion strategies with a predefined knowledge-sharing factor to more accurately describe the stochastic foraging behavior of pelicans, while employing a Dimension-based Hunting Learning (DHL) strategy to preserve population diversity [31]. Hao-Ming Song proposed another variant of POA that incorporates chaotic disturbance factors and basic mathematical functions: ten different chaotic disturbance factors are introduced in the exploration phase, and the best-performing scheme is then combined with six different basic functions in the exploitation phase to enhance optimization performance [32]. To address the limitations of simple strategies and susceptibility to local optima in three-dimensional UAV path planning, Guiliang Zhou developed an improved Lévy-based POA (LPOA) [33]. Furthermore, by integrating iterative chaotic mapping with refracted opposition-based learning, nonlinear inertia weight factors, Lévy flight, and an adaptive t-distribution mutation strategy, another multi-strategy improved POA (IPOA) was proposed for UAV path planning in urban environments [34]. Although these enhancements have improved POA to some extent, the algorithm still suffers from slow convergence and remains prone to premature entrapment in local optima.
To address these issues, this paper proposes an improved Adaptive Cooperative Pelican Optimization Algorithm (ACPOA). Based on the standard POA, the proposed algorithm integrates three innovative strategies: the elite pool mutation strategy guides population evolution by constructing an elite pool, thereby enhancing global exploration capability and local exploitation accuracy; the adaptive cooperative mechanism improves search efficiency in high-dimensional spaces through subgroup-based division and collaboration; and the hybrid boundary handling technique effectively preserves boundary information and improves search performance near optimal boundary solutions.
The main contributions of this paper are as follows:
(1)
Proposal of the improved algorithm ACPOA: By integrating three innovative strategies into the Pelican Optimization Algorithm (POA), an Adaptive Cooperative Pelican Optimization Algorithm (ACPOA) is proposed. These strategies—namely the elite pool mutation strategy, adaptive cooperative mechanism, and hybrid boundary handling technique—effectively enhance the algorithm’s global exploration ability, local exploitation accuracy, high-dimensional search efficiency, and boundary processing capability. Together, they address the limitations of standard POA, such as premature convergence to local optima and rapid loss of population diversity in complex optimization problems;
(2)
Validation of optimization performance: The performance of ACPOA is evaluated on the CEC2017 and CEC2022 benchmark test suites, in comparison with eight state-of-the-art algorithms (such as Particle Swarm Optimization (PSO), Grey Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA), etc.). Statistical indicators including mean, standard deviation, and ranking are used to comprehensively demonstrate the superiority of ACPOA in solving optimization problems of various dimensions and function types, particularly in terms of convergence speed, solution accuracy, and stability;
(3)
Application to multilevel threshold image segmentation: ACPOA is applied to multilevel threshold image segmentation tasks using Otsu’s method as the objective function. Segmentation is performed at 2, 4, 6, and 8 threshold levels on five benchmark images. The results, evaluated by Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Feature Similarity Index (FSIM), show that ACPOA outperforms the comparison algorithms, confirming its effectiveness in practical image processing applications.
The remainder of this paper is organized as follows: Section 2 describes the original POA and the improved ACPOA. Section 3 validates the effectiveness of ACPOA using the CEC2017 and CEC2022 test suites and analyzes its performance. Section 4 applies the ACPOA to multi-threshold image segmentation. Section 5 concludes the study and outlines directions for future work.

2. Pelican Optimization Algorithm and the Proposed Methodology

2.1. Pelican Optimization Algorithm

2.1.1. Inspiration of POA

Pelicans are large, social birds characterized by their long beaks and large throat pouches, which are used to catch and store prey. They typically weigh between 2.75 and 15 kg, stand 1.06 to 1.83 m tall, and have a wingspan ranging from 0.5 to 3 m. Pelicans primarily feed on fish, but occasionally hunt frogs, turtles, and crustaceans. They usually hunt in groups, diving from heights of 10 to 20 m to drive fish into shallow waters before capturing them. During hunting, a pelican’s beak scoops in a large volume of water, after which the excess water is expelled and the prey is swallowed [28,35]. This efficient hunting strategy demonstrates the pelican’s intelligence and cooperative behavior, which inspired the design of the Pelican Optimization Algorithm (POA). By simulating the pelican’s hunting behavior, POA achieves efficient solutions to complex optimization problems.

2.1.2. Mathematical Model of POA

The POA is a population-based optimization algorithm, where each pelican represents a member of the population. In population-based algorithms, each member acts as a candidate solution, with its position in the search space corresponding to a potential solution to the optimization problem. Each population member suggests variable values for the optimization task based on its position in the search space. During the initialization phase, the positions of population members are randomly generated within the problem’s lower and upper bounds using Equation (1), which lays the foundation for the subsequent optimization process:
X = lb + rand × (ub − lb)        (1)
where X represents a candidate solution, ub and lb denote the upper and lower bounds of the problem, respectively, and rand is a random number in the range [0, 1].
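The initialization of Equation (1), X = lb + rand × (ub − lb), can be sketched in NumPy as follows (the function name and vectorized bounds are illustrative):

```python
import numpy as np

def initialize_population(n, dim, lb, ub, rng=None):
    """Randomly place n candidate solutions within [lb, ub] per
    Equation (1): X = lb + rand * (ub - lb). lb and ub are length-dim
    arrays; each entry of the (n, dim) result is drawn independently."""
    rng = np.random.default_rng() if rng is None else rng
    return lb + rng.random((n, dim)) * (ub - lb)
```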
POA simulates the behaviors and strategies of pelicans during hunting and attacking prey to update candidate solutions. This search strategy is modeled in two stages: movement toward the prey (exploration phase) and spreading wings on the water surface (exploitation phase).
(1)
Exploration Phase: Movement Toward the Prey
In the first phase, pelicans identify the prey’s position and move toward that region. This behavior is modeled as the core strategy of POA, enabling the algorithm to scan and explore different areas within the search space, thereby enhancing its global exploration capability. A key feature of POA is that the prey’s position is randomly generated within the search space, which further increases the diversity and accuracy when exploring the solution space. The above concept and the pelican’s movement toward the prey are mathematically modeled by Equation (2):
X1(t + 1) = { X(t) + rand × (P − I × X(t)),   if f(P) < f(X(t))
            { X(t) + rand × (X(t) − P),        if f(P) ≥ f(X(t))        (2)
where X1 denotes the pelican's new state based on the first phase, P is the prey's position, and f(·) is the objective function representing the fitness value. I is a randomly chosen integer equal to 1 or 2, selected independently for each iteration and for each member.
(2)
Exploitation Phase: Spreading Wings on the Water Surface
In the second phase, after reaching the water surface, pelicans spread their wings to drive fish toward shallow waters and then collect prey using their throat pouches. This strategy enables pelicans to capture more fish within the target area. In the Pelican Optimization Algorithm (POA), this behavior is modeled as the local search process, allowing the algorithm to converge toward better solutions within the search space. This phase enhances POA’s local search and exploitation capabilities, enabling it to find improved solutions near promising regions. Mathematically, the algorithm achieves this by examining points in the vicinity of each pelican’s position to iteratively converge toward better solutions. This hunting behavior is modeled by Equation (3), which provides theoretical support for the algorithm’s local optimization:
X2(t + 1) = X(t) + R × (1 − t/T) × (2 × rand − 1) × X(t)        (3)
where X2 denotes the pelican's new state based on the second phase, R is a constant equal to 0.2, t is the current iteration number, and T is the maximum number of iterations.
During both phases, if the objective function value improves at the new position, the pelican’s position is updated; otherwise, the new position is rejected. This type of update, known as an “effective update,” prevents the algorithm from moving to non-optimal regions. This process is modeled by Equation (4):
X(t + 1) = { Xs(t + 1),   if f(Xs(t + 1)) < f(X(t))
           { X(t),         if f(Xs(t + 1)) ≥ f(X(t))        (4)
where Xs denotes the pelican's new state in each phase, with s representing the phase and taking values 1 or 2.
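The two search phases and the greedy "effective update" described by Equations (2)–(4) can be sketched as a single POA iteration in NumPy (a minimal sketch for minimization; the function name and per-member random numbers are illustrative, and out-of-bound components are truncated to the bounds, matching the standard POA handling noted in Section 2.2.3):

```python
import numpy as np

def poa_step(X, f, lb, ub, t, T, R=0.2, rng=None):
    """One iteration of standard POA over population X (shape (n, dim)),
    minimizing objective f. Applies Eq. (2) (move toward prey), Eq. (3)
    (wing-spreading local search), and the greedy acceptance of Eq. (4)."""
    rng = np.random.default_rng() if rng is None else rng
    n, dim = X.shape
    prey = lb + rng.random(dim) * (ub - lb)   # prey generated at random
    f_prey = f(prey)
    for i in range(n):
        fx = f(X[i])
        # Phase 1: movement toward the prey (exploration), Eq. (2)
        I = rng.integers(1, 3)                # I is 1 or 2
        if f_prey < fx:
            cand = X[i] + rng.random(dim) * (prey - I * X[i])
        else:
            cand = X[i] + rng.random(dim) * (X[i] - prey)
        cand = np.clip(cand, lb, ub)          # simple truncation
        if f(cand) < fx:                      # effective update, Eq. (4)
            X[i], fx = cand, f(cand)
        # Phase 2: spreading wings on the water surface (exploitation), Eq. (3)
        cand = X[i] + R * (1 - t / T) * (2 * rng.random(dim) - 1) * X[i]
        cand = np.clip(cand, lb, ub)
        if f(cand) < fx:                      # effective update, Eq. (4)
            X[i] = cand
    return X
```

Because Equation (4) only accepts improving moves, the best fitness in the population is non-increasing across iterations.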

2.2. Adaptive Cooperative Pelican Optimization Algorithm

2.2.1. Elite-Pool Mutation Strategy

In the foraging phase, the standard POA approaches the global optimum only through simple random perturbations. This mechanism tends to cause rapid loss of population diversity and often results in premature convergence to local optima when optimizing complex multimodal functions. To address this issue, this paper proposes an elite pool mutation strategy. The strategy constructs an elite pool consisting of the three individuals with the best fitness. With a 10% probability, the mean of the elite individuals is used as the guiding target; with a 90% probability, a single elite individual is randomly selected as the guiding target. The core updating formula is given by:
X1_{i,j}(t + 1) = { X_{i,j}(t) + rand × (P − I × X_{i,j}(t)),          if f(P) < f(X(t))
                  { Elite_j(t) + rand × (Elite_j(t) − X_{i,j}(t)),     if f(P) ≥ f(X(t))        (5)
where the elite target Elite is generated by Equation (6):
Elite = { (1/3) × Σ_{k=1}^{3} X_k,        if rand < 0.1
        { X_k,  k ~ U{1, 2, 3},           otherwise        (6)
where X_k (k = 1, 2, 3) denote the global best, second-best, and third-best solutions, respectively.
By introducing the elite pool mutation strategy, the algorithm not only retains guidance from the best individuals in the population but also maintains diversity through the random selection mechanism. This effectively enhances the local exploitation ability while avoiding premature convergence, thereby significantly improving the algorithm’s global search performance in optimizing complex multimodal functions.
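The elite-target selection of Equation (6) can be sketched as follows (a minimal sketch; the function name and the assumption that the population array is pre-sorted by fitness, best first, are illustrative):

```python
import numpy as np

def elite_target(sorted_pop, rng=None):
    """Elite target per Equation (6): with probability 0.1 return the mean
    of the three best solutions; otherwise return one of them chosen
    uniformly at random. sorted_pop: (>=3, dim) array, best solution first."""
    rng = np.random.default_rng() if rng is None else rng
    top3 = sorted_pop[:3]                  # the elite pool
    if rng.random() < 0.1:
        return top3.mean(axis=0)           # mean of elite individuals
    return top3[rng.integers(0, 3)]        # one random elite individual
```

The 90/10 split keeps strong guidance from individual elites while the occasional mean target smooths the search direction, which is how the strategy preserves diversity.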

2.2.2. Adaptive Cooperative Mechanism

The standard Pelican Optimization Algorithm (POA) employs a unified search strategy to update all dimensions of the solution vector simultaneously. This single approach struggles to adapt to the heterogeneous characteristics of different dimensions in high-dimensional optimization problems, which may lead to low search efficiency and premature convergence. To overcome this limitation, we propose an innovative adaptive cooperative mechanism that realizes specialized division of labor in the optimization process through multiple subgroups working collaboratively. This mechanism consists of two core components:
Subgroup-Dimension Allocation: Using a roulette wheel selection method, the search space is dynamically partitioned. As shown in Figure 1, the probability p s , d of assigning dimension d to the specialized subgroup s is defined as:
p_{s,d} = 1/S (initial value),   S = min(4, dim)        (7)
where S denotes the number of subgroups, and the probability matrix p of size S × dim guides the allocation process.
Specialized Dimension Update: Each subgroup updates only the dimensions it is responsible for, employing the standard POA two-stage update process as follows:
X1_{i,ds}(t + 1) = { X_{i,ds}(t) + rand × (P − I × X_{i,ds}(t)),             if f(P) < f(X(t))
                   { Elite_{ds}(t) + rand × (Elite_{ds}(t) − X_{i,ds}(t)),   if f(P) ≥ f(X(t))        (8)
X2_{i,ds}(t + 1) = X_{i,ds}(t) + R × (1 − t/T) × (2 × rand − 1) × X_{i,ds}(t)        (9)
where ds represents the set of dimensions assigned to subgroup s.
This mechanism constructs a cooperative ecosystem where subgroups specialize in searching their assigned dimensions while sharing global information through the elite solution E l i t e d s . This approach effectively balances specialization and collaboration, making it particularly suitable for high-dimensional problems requiring diverse search strategies.
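The subgroup-dimension allocation of Equation (7) can be sketched as follows (names are illustrative; with the equal initial probabilities 1/S the roulette wheel reduces to a uniform draw, but the probability matrix is kept explicit so it could be adapted during the run):

```python
import numpy as np

def allocate_dimensions(dim, rng=None):
    """Roulette-wheel assignment of each dimension to one of
    S = min(4, dim) subgroups, starting from equal probabilities 1/S
    (Equation (7)). Returns a list of S arrays of dimension indices."""
    rng = np.random.default_rng() if rng is None else rng
    S = min(4, dim)
    p = np.full((S, dim), 1.0 / S)         # probability matrix, S x dim
    # Roulette-wheel draw per dimension using its probability column
    owner = np.array([rng.choice(S, p=p[:, d] / p[:, d].sum())
                      for d in range(dim)])
    return [np.where(owner == s)[0] for s in range(S)]
```

Each subgroup then applies Equations (8) and (9) only to its own index set, while all subgroups read the shared elite solution.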

2.2.3. Hybrid Boundary Handling

The standard POA uses a simple boundary truncation method to handle out-of-bound individuals. This approach leads to the loss of gradient information near the boundaries, resulting in low search efficiency around boundary optima. To address this issue, this paper proposes a Hybrid Boundary Handling (HBH) technique that employs a probabilistic mixture repair strategy for out-of-bound individuals. As shown in Figure 2, with a 40% probability, the individual moves toward the global best solution; with another 40% probability, a mirror reflection is applied; and with the remaining 20% probability, the position is randomly reset:
X_{i,j} = { X_{best,j} + N(0, 0.1) × (ub_j(t) − X_{best,j}),   if P ≤ 0.4
          { 2 × ub_j(t) − X_{i,j},                              if 0.4 < P ≤ 0.8
          { lb_j(t) + U(0, 1) × (ub_j(t) − lb_j(t)),            if P > 0.8        (10)
where P is a uniformly distributed random number between 0 and 1, N(0, 0.1) is a Gaussian random number, and X_{best,j} is the j-th component of the global best solution.
This strategy offers multiple advantages by balancing exploitation (elite guidance), exploration (reflection), and diversity (random reset) in a 2:2:1 ratio. Moreover, it intelligently selects the repair method based on the degree of boundary violation and the optimization phase. Compared to the simple truncation method used in standard POA, HBH preserves more original search direction information, thereby improving boundary search efficiency.
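The repair rule of Equation (10) can be sketched per component as follows (a sketch only: the exact form of the elite-guidance branch is an assumption based on the description of moving toward the global best, the mirror branch reflects off whichever bound was violated, and the final clip is a safeguard for repairs that would still land outside the bounds):

```python
import numpy as np

def hybrid_boundary_repair(x, x_best, lb, ub, rng=None):
    """Hybrid Boundary Handling (Equation (10)): for each out-of-bound
    component, with p <= 0.4 take a Gaussian step toward the global best,
    with 0.4 < p <= 0.8 mirror-reflect off the violated bound, and with
    p > 0.8 reset uniformly inside [lb, ub]. In-bound components are kept."""
    rng = np.random.default_rng() if rng is None else rng
    x = x.copy()
    for j in range(len(x)):
        if lb[j] <= x[j] <= ub[j]:
            continue                               # no violation
        p = rng.random()
        if p <= 0.4:                               # elite guidance
            x[j] = x_best[j] + rng.normal(0.0, 0.1) * (ub[j] - x_best[j])
        elif p <= 0.8:                             # mirror reflection
            x[j] = 2 * ub[j] - x[j] if x[j] > ub[j] else 2 * lb[j] - x[j]
        else:                                      # random reset
            x[j] = lb[j] + rng.random() * (ub[j] - lb[j])
        x[j] = np.clip(x[j], lb[j], ub[j])         # safeguard
    return x
```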
The pseudocode for ACPOA is depicted as Algorithm 1 and the flowchart of ACPOA is shown in Figure 3.
Algorithm 1 Pseudo-Code of ACPOA.
1:  Initialize settings (population size N, dim, ub, lb) and the maximum number of iterations T
2:  Initialize the elite pool with the top 3 individuals sorted by fitness
3:  for each subgroup s = 1 : S do
4:      Determine the members of subgroup s and let ds be the set of dimensions assigned to it
5:      for t = 1 : T do
6:          Generate the position of the prey at random
7:          for i = 1 : N do
8:              Phase 1: Moving toward the prey (exploration phase)
9:              Calculate the new status of the pelican using Equation (8)
10:             Update the population member using Equation (4)
11:             Handle boundaries using Equation (10)
12:             Phase 2: Winging on the water surface (exploitation phase)
13:             Calculate the new status of the pelican using Equation (9)
14:             Update the population member using Equation (4)
15:             Handle boundaries using Equation (10)
16:         end for
17:         Update the best candidate solution
18:     end for
19: end for
20: Output the best candidate solution Xbest

3. Experimental Results and Analysis

3.1. Test Function and Compare Algorithms Parameter Settings

This section evaluates the performance of the proposed ACPOA using the challenging numerical optimization benchmark test suites CEC2017 [36] and CEC2022 [37], and compares it with other algorithms. The comparison algorithms include Velocity Pausing Particle Swarm Optimization (VPPSO) [38], Improved multi-strategy adaptive Grey Wolf Optimizer (IAGWO) [39], Multi-population Evolution Whale Optimization Algorithm (MEWOA) [40], Crayfish Optimization Algorithm (COA) [41], Dung Beetle Optimizer (DBO) [16], Gold Rush Optimizer [42], Crested Porcupine Optimizer (CPO) [18], Artificial Rabbits Optimization (ARO) [43], and Pelican Optimization Algorithm (POA) [28]. The parameter settings for the algorithms are detailed in Table 1. To ensure fairness and eliminate randomness effects, all algorithms were configured with the same parameters: a population size of 30 and a maximum of 500 iterations. Each algorithm was independently executed 30 times. The experimental results report the mean, standard deviation (Std), and ranking (Rank), with the best results highlighted in bold.
All experiments were conducted under the following computational environment: Windows 10 operating system, equipped with a 13th generation Intel (R) Core (TM) i5-13400 processor (2.5 GHz), 16 GB of RAM, and MATLAB 2024b as the software platform. This unified experimental setup and statistical methodology ensure the reliability and comparability of the results.

3.2. Ablation Experiment

To evaluate the independent contributions and synergistic effects of the three enhancement strategies—Elite-pool mutation strategy, Adaptive cooperative mechanism, and Hybrid boundary handling—an ablation study is conducted using the CEC2017 benchmark test set with dimension D = 30 . Four comparative variants are designed: POA_EP (integrating only the Elite-pool mutation strategy), POA_AC (integrating only the Adaptive cooperative mechanism), POA_HB (integrating only the Hybrid boundary handling), and ACPOA (incorporating all three strategies). The impact of each strategy is assessed through convergence curves (Figure 4) and average rankings (Figure 5).
From Figure 4, it can be observed that the original POA converges slowly on most functions and is prone to local optima; for instance, in F12, the fitness value stagnates after 200 iterations. In contrast, POA_EP consistently outperforms the baseline, reaching stability on F1 after 100 iterations and reducing the final fitness of F12 by approximately 30%, demonstrating the guiding effect of the elite-pool mutation strategy on the search direction. POA_AC shows a clear advantage in high-dimensional functions such as F20, where after 150 iterations its fitness is lower than that of POA_EP, effectively addressing the dimensional coupling issue in the original POA. POA_HB performs better on boundary-sensitive functions such as F8, reducing the final fitness by about 25% after 250 iterations compared with the original POA, thereby effectively preserving boundary search information. ACPOA achieves the best performance across all functions: in F1, it approaches the global optimum within 80 iterations; in F12, no stagnation is observed; and in F24, the convergence is the smoothest. These results confirm that the three strategies synergistically form a complete search framework of “global exploration—local exploitation—boundary optimization.”
Figure 5 presents the average rankings of all algorithmic variants based on the Friedman test, where a smaller rank indicates better overall performance. The original POA records the worst average rank of 4.73. POA_HB achieves an average rank of 3.33, about a 30% improvement over the original POA, though its benefits are limited to boundary optimization. POA_AC ranks slightly better at 3.17, showing stronger adaptability in high-dimensional problems. POA_EP achieves an average rank of 2.23, a 53% improvement over the original POA, and stands out as the most effective single strategy. Finally, ACPOA attains the best rank of 1.53, a 68% improvement over the original POA, significantly outperforming all single-strategy variants. This demonstrates that the three strategies do not merely add up but instead form a closed-loop optimization mechanism of “elite guidance—dimensional cooperation—boundary preservation”, thereby jointly enhancing algorithmic performance.

3.3. Assessing Performance with CEC2017 and CEC2022 Test Suite

In this subsection, the performance of ACPOA is evaluated using the CEC2017 (dimension = 30) and CEC2022 (dimensions = 10/20) benchmark suites. The experimental statistical results are presented in Table 2, Table 3 and Table 4, which detail the mean (Mean) and standard deviation (Std) achieved by each algorithm. The best-performing results are highlighted in boldface. Additionally, convergence curves are illustrated in Figure 6.
The experimental results on the CEC2017 (dim = 30), CEC2022 (dim = 10), and CEC2022 (dim = 20) benchmark suites (Table 2, Table 3 and Table 4 and Figure 6) fully demonstrate the superior performance of ACPOA.
For the CEC2017 (30-dimensional) benchmark suite, the data in Table 2 show that ACPOA achieves the best mean and standard deviation values on most functions. For example, on function F1, ACPOA attains a mean value of 3.9708 × 103, significantly lower than POA’s 1.6176 × 1010 and VPPSO’s 1.1375 × 107, with a smaller standard deviation, indicating both strong optimization capability and stability on unimodal high-dimensional problems. On function F9, ACPOA achieves a mean of 1.4125 × 103, outperforming DBO’s 6.9243 × 103 and others, demonstrating its advantage in handling multimodal problems. According to the Friedman ranking, ACPOA ranks first with an average rank of 1.47, leading all other algorithms. The convergence curves for this benchmark suite (e.g., F1, F5, F10) shown in Figure 6 indicate that ACPOA’s fitness decreases rapidly in the early iterations and stabilizes sooner, whereas MEWOA, MELGWO, and others converge more slowly. Specifically, on function F1, ACPOA approaches the optimum within 100 iterations while other algorithms remain in higher-value regions, confirming its fast convergence capability.
For the CEC2022 (10-dimensional) benchmark suite, Table 3 reveals a clear advantage of ACPOA. On function F3, ACPOA achieves the theoretical optimum of 6.0000 × 102, comparable to MELGWO and CPO, but with the smallest standard deviation of 1.2467 × 10−3, indicating higher convergence precision. On function F6, ACPOA’s mean value is 1.8339 × 103, close to CPO’s, but with a standard deviation only 24.5% of CPO’s, demonstrating better stability. With an average rank of 2.08, ACPOA ranks first overall. The convergence curve for CEC2022-F1 (dim = 10) in Figure 6 shows that ACPOA’s convergence speed significantly exceeds that of POA and MEWOA, with a more stable trajectory.
For the CEC2022 (20-dimensional) benchmark suite, Table 4 demonstrates the strong scalability of ACPOA in medium-to-high-dimensional scenarios. On function F1, ACPOA achieves a mean value of 2.2912 × 103, only 23.1% of POA’s result and 6.1% of WOA’s, with a lower standard deviation. On function F10, the mean value reaches 2.4814 × 103, significantly lower than those of PSO and GWO, highlighting the effectiveness of elite pool-based information sharing. With an average rank of 1.42, ACPOA again ranks first. The convergence curve for CEC2022-F1 (dim = 20) in Figure 6 further confirms ACPOA’s rapid and stable convergence, which can be attributed to the subpopulation–dimension dynamic allocation mechanism that improves high-dimensional search efficiency.
On CEC2022 (10D vs. 20D), ACPOA’s average rank improves by 0.66 (from 2.08 to 1.42), while POA’s worsens by 0.75 (from 7.42 to 8.17) (Table 3 and Table 4). This indicates that ACPOA’s adaptive cooperative mechanism reduces dimensional coupling: the subgroup–dimension allocation (S = min(4, D)) ensures each subgroup focuses on no more than roughly D/4 dimensions, avoiding the curse of dimensionality (Figure 6, CEC2022-F1 (20D) convergence curve: ACPOA’s final fitness of 2.2912 × 103 is 74.5% lower than POA’s 8.9836 × 103).
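The subgroup–dimension allocation S = min(4, D) described above can be sketched as follows; the round-robin partitioning rule and the function name are our illustrative assumptions, since the text specifies only the number of subgroups:

```python
# Illustrative sketch of the subgroup-dimension allocation S = min(4, D).
# The round-robin assignment is an assumption; the paper specifies the
# number of subgroups, not the exact partitioning rule.

def allocate_dimensions(D, max_subgroups=4):
    """Split D dimensions among S = min(max_subgroups, D) subgroups."""
    S = min(max_subgroups, D)
    groups = [[] for _ in range(S)]
    for d in range(D):            # dimension d goes to subgroup d mod S
        groups[d % S].append(d)
    return groups

groups = allocate_dimensions(20)  # four subgroups of five dimensions each
```

For D = 20 this yields four subgroups of five dimensions each, so no subgroup ever searches more than D/4 dimensions at once.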
For unimodal functions (e.g., CEC2017-F1), ACPOA’s mean fitness is 3.9708 × 103 (POA: 1.6176 × 1010) due to elite pool mutation accelerating convergence; for multimodal functions (e.g., CEC2017-F9), ACPOA’s mean (1.4125 × 103) outperforms DBO (6.9243 × 103) because adaptive cooperation maintains diversity; for composite functions (e.g., CEC2017-F30), ACPOA’s low std (1.3243 × 104) reflects hybrid boundary handling preserving boundary information (Table 2).
Moreover, Figure 6 illustrates ACPOA’s “global exploration followed by local exploitation” convergence pattern. For example, in CEC2017-F11, the algorithm initially conducts broad global exploration, then gradually focuses on promising regions for refinement, ultimately converging to superior solutions. This behavior results from the balance between exploration and exploitation achieved through the elite pool-based mutation and adaptive cooperative mechanisms. Overall, ACPOA demonstrates outstanding convergence speed, accuracy, and stability across benchmark suites with different dimensionalities.
The rank distribution in Figure 7 further verifies the robustness of ACPOA. In the CEC2017 (dim = 30) suite, ACPOA ranks first on 16 out of 30 functions, with particularly strong performance on high-dimensional multimodal functions such as F12 and F15. In the CEC2022 (dim = 10/20) suites, its rankings are consistently concentrated in the top 1–2 positions, whereas algorithms like POA and COA exhibit large rank fluctuations. This indicates that ACPOA is more adaptable to different types of functions and less affected by specific problem characteristics.
As shown in Figure 8, ACPOA achieves the lowest average ranks across all three benchmark scenarios (1.47, 2.08, and 1.42), with a significant lead over the second-best algorithms. This consistent advantage is attributed to the integration of elite pool mutation, adaptive cooperation, and hybrid boundary handling strategies, which effectively enhance the algorithm’s generality and robustness across varying dimensionalities and problem complexities.
In summary, ACPOA demonstrates superior convergence speed, optimization accuracy, and stability across various benchmark functions, which can be attributed to the integration of three key improvement strategies. First, the elite pool mutation strategy constructs an elite pool composed of the top three individuals with the best fitness values. This enhances global exploration and improves local exploitation accuracy, while preventing premature loss of population diversity—thus reducing the risk of getting trapped in local optima when optimizing complex multimodal functions. Second, the adaptive cooperative learning mechanism employs subpopulation–dimension dynamic allocation and specialized updating to enable task specialization and global information sharing. This significantly improves the search efficiency in high-dimensional spaces and allows the algorithm to better adapt to the varying characteristics of different dimensionalities. Third, the hybrid boundary handling technique maintains population stability while preserving valuable search information. Compared to simple truncation methods, it enables more effective exploration near boundary-optimal regions.
The synergy of these three strategies collectively underpins ACPOA’s performance advantage, which is comprehensively validated through statistical results, convergence curves, and ranking distributions.

3.4. Convergence Behavior Analysis

Figure 9 systematically illustrates the performance enhancement of ACPOA over the standard POA through the average fitness history curves, search trajectories, convergence curves, and position variation visualizations.
In the average fitness history curves (second column), ACPOA consistently maintains lower fitness values than POA throughout the iterations, with the performance gap gradually widening. For instance, in CEC2017-F25, ACPOA’s fitness value drops to less than 50% of that of POA within the first 100 iterations and continues to approach the global optimum in subsequent steps. This improvement is largely attributed to the elite pool mutation strategy, which—with a 10% probability of using the elite mean and 90% probability of randomly selecting elite individuals—enhances the directionality of the population toward the global optimum and avoids the slow convergence issue caused by random perturbations in POA.
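A minimal sketch of this 10%/90% selection rule (NumPy-based; the function name and the elite-pool layout are our illustrative assumptions, not the paper’s implementation):

```python
import numpy as np

def elite_pool_guide(elite_pool, rng):
    """Pick a guidance vector: the elite mean with probability 0.1,
    otherwise one of the elites chosen uniformly at random (0.9)."""
    if rng.random() < 0.1:
        return elite_pool.mean(axis=0)                 # elite mean (10%)
    return elite_pool[rng.integers(len(elite_pool))]   # random elite (90%)

rng = np.random.default_rng(0)
elites = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 3.0]])  # top-3 individuals
guide = elite_pool_guide(elites, rng)
```

The guide vector can then bias the mutation of each pelican toward high-quality regions instead of a purely random perturbation.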
The search trajectories (third column) reveal ACPOA’s characteristic of “broad early exploration followed by fine-tuned exploitation.” As shown in CEC2017-F11, the algorithm initially explores a wide range along the first dimension (exploration phase) and later focuses on minor adjustments within promising regions (exploitation phase). This dynamic balance results from the adaptive cooperative learning mechanism, where dimension-specific updates by subgroups preserve diversity during global exploration and improve precision in local exploitation. In contrast, POA’s unified update strategy often leads to premature stagnation in local areas.
The convergence curves (fourth column) further confirm ACPOA’s fast convergence ability. For functions such as CEC2017-F1 and F10, ACPOA exhibits steeper curves, reaching stable values with fewer iterations. For example, in F1, it converges near the global optimum within 200 iterations, whereas POA requires more than 300 iterations and still yields a higher final fitness value. This advantage stems from the hybrid boundary handling technique, which incorporates a 40% tendency toward elites, 40% mirror reflection, and 20% random reinitialization. This design reduces interference from boundary-violating individuals, preserves useful information, and accelerates convergence.
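The 40/40/20 repair rule can be sketched as below; the exact form of the “tendency toward elites” step is our assumption, implemented here as a random point between the elite and the clipped position:

```python
import numpy as np

def handle_boundary(x, lb, ub, elite, rng):
    """Repair out-of-bounds components with a probabilistic 40/40/20 mix:
    pull toward an elite (40%), mirror-reflect off the violated bound (40%),
    or reinitialize at random within the bounds (20%)."""
    x = x.copy()
    for d in np.where((x < lb) | (x > ub))[0]:
        r = rng.random()
        if r < 0.4:      # 40%: land between the elite and the clipped position
            clipped = min(max(x[d], lb), ub)
            x[d] = elite[d] + rng.random() * (clipped - elite[d])
        elif r < 0.8:    # 40%: mirror reflection off the violated bound
            x[d] = 2 * lb - x[d] if x[d] < lb else 2 * ub - x[d]
        else:            # 20%: random reinitialization
            x[d] = lb + rng.random() * (ub - lb)
    return np.clip(x, lb, ub)  # final clip guards extreme reflections

rng = np.random.default_rng(1)
repaired = handle_boundary(np.array([-0.3, 0.5, 1.7]), 0.0, 1.0,
                           elite=np.array([0.2, 0.2, 0.2]), rng=rng)
```

Unlike simple truncation, only the violating components are repaired, so in-bounds information in the rest of the vector is preserved.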
The position variation visualizations (fifth column) indicate that ACPOA’s search paths are more targeted. Initially, solutions are uniformly distributed across the search space (ensuring exploration breadth), then gradually converge toward the region near the global optimum (improving exploitation accuracy). For instance, in F11, the position distribution transitions from scattered to concentrated, whereas POA exhibits irregular aggregation patterns that often lead to premature convergence. These results validate the synergistic effects of the three core strategies: the elite pool provides high-quality guidance, the adaptive cooperative mechanism optimizes search roles, and the hybrid boundary technique maintains population stability—all working together to enable ACPOA’s superior performance on complex optimization problems.

3.5. Computing Time Analysis

Computation time is a key indicator of the practicality of optimization algorithms. In scenarios such as image segmentation and engineering optimization, where real-time performance is crucial, algorithmic efficiency directly determines application value. Since the preceding experiments have already confirmed ACPOA’s overall optimization superiority over the traditional POA, this section provides a more detailed comparison of their computational costs, with a particular focus on execution time. To ensure fairness, all algorithmic parameters were standardized: the population size was set to 30, the maximum number of iterations was fixed at 500, and each algorithm was independently executed 30 times. Figure 10 illustrates the average computation time (in seconds) of the compared algorithms when solving the CEC2017 test functions (D = 30).
The figure presents the average running time of each algorithm (in seconds), with ACPOA recording 0.253 s. In comparison, algorithms such as VPPSO (0.599 s) and MELGWO (0.378 s) incur much higher costs due to their complex population interaction mechanisms. CPO (0.219 s) shows a runtime close to that of ACPOA, reflecting a similar trade-off between computational overhead and performance improvement. Algorithms such as COA (0.181 s), DBO (0.190 s), GRO (0.148 s), ARO (0.118 s), and POA (0.194 s) exhibit shorter runtimes owing to their relatively streamlined processes or efficient resource utilization.
Overall, although ACPOA is not the most time-efficient algorithm, it achieves significant improvements in optimization accuracy and convergence through strategies such as elite-pool mutation and adaptive cooperation. In practical scenarios where solution quality is prioritized, the additional computational cost remains acceptable. Therefore, the choice of algorithm should be made by balancing execution efficiency with optimization effectiveness.

3.6. Time Complexity Analysis

Time complexity is a key metric for evaluating the efficiency of optimization algorithms, reflecting how computational cost grows with problem scale (e.g., population size N, number of dimensions dim, maximum iterations T). This section analyzes the time complexity of ACPOA and the nine comparison algorithms (VPPSO, MELGWO, MEWOA, COA, DBO, GRO, CPO, ARO, POA) based on their core search mechanisms; the results are shown in Table 5.
All algorithms share the same asymptotic time complexity, O(T × N × dim), because they are all population-based iterative optimizers. The difference lies in the constant factor hidden by the big-O notation: ACPOA introduces a small constant overhead due to its three improvement strategies, but this overhead is negligible compared with the dominant term T × N × dim, which is consistent with the average computation time results in Section 3.5.

4. Experimental Results for Multilevel Thresholding

In image threshold segmentation, two commonly used methods for determining the optimal threshold are the Otsu method and the Kapur entropy method. For different thresholds, the variance between the target and background regions varies; the optimal threshold $T_{best}$ is the one that maximizes this variance. The Kapur method instead computes the sum of the entropies of the segmented target and background regions from their probability distributions and selects the threshold that maximizes this sum. Both methods take the image pixels as input and determine the optimal threshold according to their respective criteria [6,44,45]. In this study, we adopt the Otsu method to find the optimal threshold.
The Otsu method determines the optimal threshold based on the between-class variance criterion. Suppose the image $I$ contains a total of $L$ gray levels, and the number of pixels with gray level $i$ is $n_i$. Then the total number of pixels in the image is:
$$N = \sum_{i=0}^{L-1} n_i$$
The probability of a pixel having gray level $i$ is:
$$P_i = \frac{n_i}{N}, \quad i = 0, 1, \ldots, L-1$$
where $P_i \ge 0$ and $P_0 + P_1 + \cdots + P_{L-1} = 1$.
Let the number of thresholds be $k$. If a gray value $t$ is chosen as the threshold, the image is divided into two regions: pixels with gray levels in $[0, t]$ form the foreground (target), and those in $[t+1, L-1]$ form the background. Let the proportion of target pixels in the image be $\omega_0$ with average gray level $\mu_0$, the proportion of background pixels be $\omega_1$ with average gray level $\mu_1$, the total mean gray level of the image be $\mu$, and the between-class variance be $v$. These quantities are given by [46]:
$$\omega_0 = \sum_{i=0}^{t} P_i, \qquad \mu_0 = \frac{\sum_{i=0}^{t} i P_i}{\omega_0}$$
$$\omega_1 = \sum_{i=t+1}^{L-1} P_i, \qquad \mu_1 = \frac{\sum_{i=t+1}^{L-1} i P_i}{\omega_1}$$
$$\mu = \sum_{i=0}^{L-1} i P_i$$
$$v(t) = \omega_0 (\mu_0 - \mu)^2 + \omega_1 (\mu_1 - \mu)^2 = \omega_0 \omega_1 (\mu_0 - \mu_1)^2$$
The optimal threshold $t_{best}$ is then calculated as:
$$t_{best} = \arg\max_{0 \le t \le L-1} v(t)$$
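Following the criterion above, a minimal single-threshold Otsu search over a 256-bin histogram can be written as below (an exhaustive-search sketch for illustration, not the paper’s ACPOA-driven search):

```python
import numpy as np

def otsu_threshold(hist):
    """Exhaustive single-threshold Otsu search over a gray-level histogram,
    maximizing the between-class variance w0 * w1 * (mu0 - mu1)^2."""
    p = hist / hist.sum()                 # gray-level probabilities P_i
    levels = np.arange(len(p))
    best_t, best_v = 0, -1.0
    for t in range(len(p) - 1):
        w0, w1 = p[:t + 1].sum(), p[t + 1:].sum()
        if w0 == 0 or w1 == 0:            # skip degenerate splits
            continue
        mu0 = (levels[:t + 1] * p[:t + 1]).sum() / w0
        mu1 = (levels[t + 1:] * p[t + 1:]).sum() / w1
        v = w0 * w1 * (mu0 - mu1) ** 2    # between-class variance v(t)
        if v > best_v:
            best_t, best_v = t, v
    return best_t

hist = np.zeros(256)
hist[10], hist[200] = 100, 100            # two well-separated clusters
t = otsu_threshold(hist)
```

For a bimodal histogram like the one above, any threshold separating the two clusters maximizes v(t), and the exhaustive search returns the first such value.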
By extension, the between-class variance for $k$ thresholds is:
$$v(t_1, t_2, \ldots, t_k) = \sum_{0 \le i < j \le k} \omega_i \omega_j (\mu_i - \mu_j)^2 = \omega_0\omega_1(\mu_0-\mu_1)^2 + \omega_0\omega_2(\mu_0-\mu_2)^2 + \cdots + \omega_{k-1}\omega_k(\mu_{k-1}-\mu_k)^2$$
where the class probabilities $\omega_{i-1}$ and class means $\mu_{i-1}$ of the $k+1$ classes are given by (with the convention $t_0 = -1$ and $t_{k+1} = L-1$):
$$\omega_{i-1} = \sum_{j=t_{i-1}+1}^{t_i} P_j, \qquad \mu_{i-1} = \frac{1}{\omega_{i-1}} \sum_{j=t_{i-1}+1}^{t_i} j P_j, \qquad 1 \le i \le k+1$$
Let the optimal threshold set based on Equation (14) be denoted $T_{best}$; it is computed as:
$$T_{best} = \arg\max_{0 \le t_1 \le t_2 \le \cdots \le t_k \le L-1} v(t_1, t_2, \ldots, t_k)$$
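For k thresholds, the between-class variance above is exactly the fitness that a metaheuristic such as ACPOA maximizes. A histogram-based sketch follows; the class-boundary conventions and function name are our assumptions:

```python
import numpy as np

def multilevel_otsu_fitness(thresholds, hist):
    """Between-class variance for k thresholds: the pairwise sum
    w_i * w_j * (mu_i - mu_j)^2 over all k+1 classes of the histogram."""
    p = hist / hist.sum()
    levels = np.arange(len(p))
    edges = [0] + sorted(int(t) for t in thresholds) + [len(p) - 1]
    w, mu = [], []
    for i in range(len(edges) - 1):
        lo = 0 if i == 0 else edges[i] + 1   # class 0 is [0, t1]; class i is (t_i, t_{i+1}]
        hi = edges[i + 1] + 1
        wi = p[lo:hi].sum()
        w.append(wi)
        mu.append((levels[lo:hi] * p[lo:hi]).sum() / wi if wi > 0 else 0.0)
    return sum(w[i] * w[j] * (mu[i] - mu[j]) ** 2
               for i in range(len(w)) for j in range(i + 1, len(w)))
```

Each candidate solution of the optimizer is a threshold vector, and this function scores it directly from the image histogram.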
In this study, comparative experiments were conducted to validate the performance of the ACPOA algorithm in multi-level thresholding. Five improved swarm intelligence algorithms were selected as baselines. All algorithms were configured with a unified population size of N = 30 and a maximum number of iterations T = 25. The Otsu method was employed as the objective function. Experiments were carried out on five benchmark images with varying styles, using four different numbers of thresholds (2, 4, 6, and 8), as illustrated in Figure 11. Each set of experiments was independently run 30 times. The mean (Mean) and standard deviation (Std) of the objective function values were recorded to evaluate the accuracy and stability of each algorithm. To ensure fairness in comparison, all algorithm parameters were kept consistent with those described earlier.

4.1. Evaluation Index

The evaluation metrics used to assess image segmentation performance include Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Feature Similarity Index Measure (FSIM). Higher values of PSNR and SSIM indicate lower distortion and better visual quality of the segmented image. Similarly, a higher FSIM value corresponds to lower error rates. The formulas for these evaluation metrics are as follows [6,47]:
The PSNR is calculated as:
$$\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{255^2}{\mathrm{MSE}}\right)$$
where MSE is the mean squared error between the original image $I$ and the segmented image $I'$, defined as:
$$\mathrm{MSE} = \frac{1}{MN} \sum_{j=1}^{M} \sum_{k=1}^{N} \left[ I(j,k) - I'(j,k) \right]^2$$
where $M \times N$ is the image size, and $I(j,k)$ and $I'(j,k)$ are the pixel intensities at position $(j,k)$ in the original and segmented grayscale images, respectively.
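The PSNR and MSE definitions above translate directly into code; a minimal sketch for 8-bit grayscale images:

```python
import numpy as np

def psnr(original, segmented):
    """Peak Signal-to-Noise Ratio (dB) for 8-bit grayscale images,
    following PSNR = 10 * log10(255^2 / MSE)."""
    diff = original.astype(float) - segmented.astype(float)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

a = np.zeros((4, 4))
b = np.full((4, 4), 5.0)   # constant error of 5 gray levels -> MSE = 25
value = psnr(a, b)
```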
SSIM evaluates the similarity between two images by comparing their luminance, contrast, and structural information. Given two images $I$ and $I'$, SSIM is defined as:
$$\mathrm{SSIM}(I, I') = \frac{(2\mu_I \mu_{I'} + C_1)(2\sigma_{II'} + C_2)}{(\mu_I^2 + \mu_{I'}^2 + C_1)(\sigma_I^2 + \sigma_{I'}^2 + C_2)}$$
where $\mu_I$ and $\mu_{I'}$ are the mean pixel intensities of $I$ and $I'$, respectively; $\sigma_{II'}$ is the covariance between $I$ and $I'$; and $\sigma_I^2$ and $\sigma_{I'}^2$ are their variances. The constants $C_1$ and $C_2$ stabilize the division and are typically defined as $C_1 = (K_1 L)^2$ and $C_2 = (K_2 L)^2$, where $K_1 = 0.01$, $K_2 = 0.03$, and $L$ is the maximum grayscale intensity value.
The Structural Similarity Index (SSIM) ranges from −1 to 1, reaching its maximum value of 1 when two images are identical. This index is grounded in human visual perception and decomposes image quality into three independent components: luminance, contrast, and structural information. Luminance is estimated using local means, contrast is characterized by standard deviation, and structure similarity is measured via covariance. By evaluating these three factors separately, SSIM effectively quantifies the level of distortion in an image. A higher SSIM value, closer to 1, indicates better structural preservation, which reflects superior segmentation performance in image processing tasks [11,48].
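For illustration, the SSIM formula can be evaluated over whole-image statistics as below; note that practical SSIM implementations average the index over local windows, so this global variant is only a sketch:

```python
import numpy as np

def global_ssim(I1, I2, L=255, K1=0.01, K2=0.03):
    """Whole-image SSIM per the formula above. Assumes the standard
    stabilizing constants C1 = (K1*L)^2 and C2 = (K2*L)^2."""
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mu1, mu2 = I1.mean(), I2.mean()
    var1, var2 = I1.var(), I2.var()
    cov = ((I1 - mu1) * (I2 - mu2)).mean()
    return ((2 * mu1 * mu2 + C1) * (2 * cov + C2)) / \
           ((mu1 ** 2 + mu2 ** 2 + C1) * (var1 + var2 + C2))

img = np.arange(16.0).reshape(4, 4)
perfect = global_ssim(img, img)   # identical images give SSIM = 1
```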
Since perceptually significant image features are typically located at points of high phase congruency, FSIM adopts phase congruency as the primary feature for evaluating image quality. As phase congruency is insensitive to contrast, a factor that often influences perceived image quality, FSIM introduces the image gradient as a complementary secondary feature [6].
Let $e_n$ and $o_n$ represent the even-symmetric and odd-symmetric filters at scale $n$, respectively, which form an orthogonal pair. At a point $x$, each pair yields a response vector $[e_n(x), o_n(x)]$, and the amplitude at scale $n$ is $A_n(x) = \sqrt{e_n(x)^2 + o_n(x)^2}$. Letting $E(x) = \sqrt{\left(\sum_n e_n(x)\right)^2 + \left(\sum_n o_n(x)\right)^2}$, the phase congruency ($PC$) of a one-dimensional signal is defined as:
$$PC(x) = \frac{E(x)}{\varepsilon + \sum_n A_n(x)}$$
where ε is a small positive constant to avoid division by zero.
For quadrature filters, the LogGabor filter is commonly used. In the frequency domain, the transfer function of a LogGabor filter is defined as:
$$G(\omega) = \exp\!\left(-\frac{\left(\log(\omega/\omega_0)\right)^2}{2\sigma_r^2}\right)$$
where $\omega_0$ is the center frequency of the filter, and $\sigma_r$ controls the bandwidth.
To extend the one-dimensional LogGabor filter into two dimensions, a spreading function is applied in the angular direction. When a Gaussian function is used as the spreading function, the transfer function of the 2D LogGabor filter is given by:
$$G_2(\omega, \theta_j) = \exp\!\left(-\frac{\left(\log(\omega/\omega_0)\right)^2}{2\sigma_r^2}\right) \times \exp\!\left(-\frac{(\theta - \theta_j)^2}{2\sigma_\theta^2}\right)$$
where $\theta_j = j\pi/J$, $j = 0, 1, \ldots, J-1$, denotes the orientation angles of the filters, $J$ is the total number of orientations, and $\sigma_\theta$ is the angular bandwidth.
By adjusting $\omega_0$ and $\theta_j$ and convolving the 2D filter $G_2$ with the input image, each pixel yields a response $[e_{n,\theta_j}(x), o_{n,\theta_j}(x)]$. The amplitude in direction $\theta_j$ is computed as:
$$A_{n,\theta_j}(x) = \sqrt{e_{n,\theta_j}(x)^2 + o_{n,\theta_j}(x)^2}$$
The directional energy in direction $\theta_j$ is defined as:
$$E_{\theta_j}(x) = \sqrt{F_{\theta_j}(x)^2 + H_{\theta_j}(x)^2}$$
where $F_{\theta_j}(x) = \sum_n e_{n,\theta_j}(x)$ and $H_{\theta_j}(x) = \sum_n o_{n,\theta_j}(x)$. Consequently, the 2D phase congruency at pixel $x$ is computed as:
$$PC_{2D}(x) = \frac{\sum_j E_{\theta_j}(x)}{\varepsilon + \sum_n \sum_j A_{n,\theta_j}(x)}$$
The gradient can be represented using convolution kernels; the three most commonly used operators for computing gradients are the Sobel, Prewitt, and Scharr operators. The gradient magnitude (GM) of an image is defined as $G = \sqrt{G_x^2 + G_y^2}$, where $G_x$ and $G_y$ are the gradients computed in the horizontal and vertical directions, respectively.
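The gradient magnitude G = sqrt(Gx² + Gy²) with Sobel kernels can be computed as in this sketch (a naive “valid” convolution written out for clarity, not an optimized implementation):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def conv2_valid(img, k):
    """'valid' 2D correlation with a 3x3 kernel (no padding)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

def gradient_magnitude(img):
    """G = sqrt(Gx^2 + Gy^2) with Sobel kernels."""
    gx = conv2_valid(img, SOBEL_X)   # horizontal gradient
    gy = conv2_valid(img, SOBEL_Y)   # vertical gradient
    return np.sqrt(gx ** 2 + gy ** 2)

img = np.zeros((5, 6))
img[:, 3:] = 1.0                     # vertical step edge
gm = gradient_magnitude(img)
```

The response is nonzero only in the columns whose 3×3 window straddles the step edge.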
Let $PC_1$ and $PC_2$ be the phase congruency maps extracted from the two images, and $G_1$ and $G_2$ their corresponding gradient magnitudes. The similarity components $S_{PC}(x)$ and $S_{GM}(x)$ are calculated as:
$$S_{PC}(x) = \frac{2\, PC_1(x)\, PC_2(x) + T_1}{PC_1^2(x) + PC_2^2(x) + T_1}$$
$$S_{GM}(x) = \frac{2\, G_1(x)\, G_2(x) + T_2}{G_1^2(x) + G_2^2(x) + T_2}$$
where $T_1$ and $T_2$ are small positive constants introduced for numerical stability, and $G_1$ and $G_2$ correspond to the gradient magnitudes of the original and segmented images, respectively.
Combining $S_{PC}(x)$ and $S_{GM}(x)$ yields the similarity measure $S_L(x)$ between the reference image and the distorted image:
$$S_L(x) = [S_{PC}(x)]^{\alpha}\, [S_{GM}(x)]^{\beta}$$
where $\alpha$ and $\beta$ are adjustment parameters, typically set to 1.
The Feature Similarity Index (FSIM) between the reference image $I$ and the distorted image $I'$ is calculated as:
$$\mathrm{FSIM}(I, I') = \frac{\sum_{x \in \Omega} S_L(x)\, PC_m(x)}{\sum_{x \in \Omega} PC_m(x)}$$
where $x$ denotes a pixel location, $\Omega$ the entire image domain, and $PC_m(x) = \max(PC_1(x), PC_2(x))$, with $PC_1(x)$ and $PC_2(x)$ the phase congruency values of the original and segmented images, respectively.
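Given precomputed phase-congruency and gradient-magnitude maps, the FSIM combination above reduces to a few lines. The default constants T1 = 0.85 and T2 = 160 below are commonly used values from the FSIM literature, not specified in this paper:

```python
import numpy as np

def fsim_from_maps(pc1, pc2, g1, g2, T1=0.85, T2=160.0, alpha=1.0, beta=1.0):
    """Combine phase-congruency maps (pc1, pc2) and gradient-magnitude maps
    (g1, g2) into FSIM via S_PC, S_GM and the PC_m-weighted average."""
    s_pc = (2 * pc1 * pc2 + T1) / (pc1 ** 2 + pc2 ** 2 + T1)
    s_gm = (2 * g1 * g2 + T2) / (g1 ** 2 + g2 ** 2 + T2)
    s_l = (s_pc ** alpha) * (s_gm ** beta)   # S_L(x)
    pc_m = np.maximum(pc1, pc2)              # PC_m(x) weighting map
    return float((s_l * pc_m).sum() / pc_m.sum())

rng = np.random.default_rng(2)
pc = rng.random((8, 8))
g = rng.random((8, 8)) * 100.0
score = fsim_from_maps(pc, pc, g, g)         # identical maps give FSIM = 1
```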

4.2. Analysis of Otsu Results Based on ACPOA

Using the ACPOA algorithm with Otsu’s method as the objective function, multilevel thresholding segmentation was performed on five selected images. The effectiveness of the thresholding algorithm is reflected by the maximum value of the Otsu objective function, as well as the PSNR, FSIM, and SSIM metrics. Higher values of these three evaluation indicators indicate better image segmentation performance.
The results demonstrate that the proposed ACPOA algorithm outperforms other algorithms in terms of the objective function value, PSNR, FSIM, and SSIM. Table 6 presents the distribution of the optimally selected thresholds on the histograms. Table 7 shows the mean and standard deviation of the best fitness values obtained by each algorithm using Otsu’s objective function, along with the average ranking results for each algorithm. Table 8, Table 9 and Table 10 report the mean and standard deviation of PSNR, FSIM, and SSIM values obtained by each algorithm, respectively, as well as their average ranks. Optimal thresholds were searched using Otsu’s objective function with threshold numbers set to 2, 4, 6, and 8.
Figure 12 displays the Otsu fitness value curves of different algorithms. These curves allow for a comparison of the convergence speed and the final fitness values achieved by various algorithms. As observed in the figure, the standard POA and other algorithms show a relatively small gap compared to ACPOA when the threshold is set to 2. However, as the threshold increases, the advantage of ACPOA becomes increasingly evident, with its fitness value rising most significantly and eventually stabilizing at a higher level. Compared to the curves of other algorithms, the ACPOA algorithm not only converges faster, rapidly improving the fitness value with fewer threshold increments, but also achieves the highest final fitness value. This indicates that as the threshold increases, the proposed ACPOA algorithm performs better in optimizing the Otsu fitness function, demonstrating the best fitness value performance.
Table 7 presents the mean and standard deviation of the best fitness values obtained by different algorithms for multilevel thresholding (2, 4, 6, and 8 levels) using Otsu’s objective function on five images (baboon, bank, camera, face, and lena). The results show that ACPOA demonstrates superior performance in almost all cases. For instance, in the 8-level thresholding of the baboon image, ACPOA achieved a fitness mean of 3.40 × 103, slightly higher than POA’s 3.39 × 103. More importantly, its standard deviation was only 1.70 × 10−1, significantly lower than POA’s 2.12 × 100, indicating that ACPOA not only finds better threshold combinations but also exhibits greater stability across repeated experiments. Similarly, for the 6-level thresholding of the camera image, ACPOA’s mean fitness value was 4.65 × 103, equal to that of POA. However, the standard deviation for ACPOA (7.40 × 10−2) was markedly lower than POA’s 2.72, demonstrating that the elite pool mutation strategy effectively guides the population toward high-quality regions, thereby reducing performance fluctuations caused by random disturbances. As shown in Figure 13, ACPOA ranks first with an average rank of 2.25, significantly outperforming POA, which ranks second at 4.17. This advantage is especially pronounced at higher threshold levels (6 and 8), indicating that ACPOA maintains efficient optimization capability even in complex threshold spaces.
Table 8 records the Peak Signal-to-Noise Ratio (PSNR) results of image segmentation from different algorithms. PSNR reflects the distortion level between the segmented and original images, with higher values indicating better performance. ACPOA demonstrates significant advantages across all scenarios. For example, in the 8-level thresholding of the bank image, ACPOA achieves a mean PSNR of 25.2639, outperforming POA (24.7618), MELGWO (24.6335), and VPPSO (24.7295). Moreover, its standard deviation (5.18 × 10−2) is the lowest, indicating segmentation results closer to the original image with better detail preservation. In the 6-level thresholding of the face image, ACPOA attains a PSNR of 22.5815, superior to POA’s 22.4368 and MEWOA’s 21.8273. As shown in Figure 14, ACPOA ranks first with an average rank of 2.22, significantly ahead of MEWOA (6.87) and standard POA (4.57). This advantage is especially notable at higher threshold levels (6 and 8), highlighting ACPOA’s ability to maintain efficient optimization in complex threshold spaces. This success is attributed to the adaptive cooperative mechanism, which, through specialized subgroup division of labor, precisely locates the optimal solutions in high-dimensional threshold spaces and mitigates the search inefficiency caused by dimensional coupling. Additionally, ACPOA’s PSNR values in 8-level thresholding are generally higher than those in lower-level scenarios, with its superiority increasing as the number of thresholds grows, confirming its adaptability in complex segmentation tasks.
Table 9 presents the Feature Similarity Index (FSIM) results, which evaluate how well the segmented image retains the original image’s features such as edges and textures. Higher FSIM values indicate better feature consistency. ACPOA stands out in this regard. For instance, in the 8-level thresholding of the lena image, ACPOA achieves a mean FSIM of 0.8840, considerably higher than POA (0.8741), VPPSO (0.8706), and MELGWO (0.8685). This reflects the effectiveness of the hybrid boundary handling strategy in preserving more meaningful feature information during threshold search. In the 6-level thresholding of the baboon image, ACPOA’s FSIM reaches 0.8861, slightly surpassing POA’s 0.8859, with a standard deviation (2.54 × 10−3) only 23% of POA’s, demonstrating greater stability in feature retention across repeated experiments. Similarly, as shown in Figure 15, ACPOA ranks first with an average rank of 3.17. Notably, ACPOA’s advantage is more pronounced in texture-rich images such as baboon and lena. This is attributed to the adaptive cooperative mechanism dynamically allocating dimensions so that subgroups focus on capturing thresholds corresponding to different features, thereby enhancing feature similarity.
Table 10 reports the Structural Similarity Index (SSIM), which measures the consistency between the segmented and original images in terms of luminance, contrast, and structure. Values closer to 1 indicate better structural preservation. ACPOA shows a significant advantage on this metric. For example, in the 8-level thresholding of the face image, ACPOA achieves a mean SSIM of 0.8576, notably higher than POA’s 0.8416, MELGWO’s 0.8386, and MEWOA’s 0.8172, indicating better maintenance of the overall image structure. In the 6-level thresholding of the bank image, ACPOA’s SSIM is 0.8152, outperforming POA’s 0.8103 and COA’s 0.8041. As illustrated in Figure 16, ACPOA ranks first with an average rank of 3.39, far ahead of standard POA’s 5.07. This superiority is attributed to the elite pool mutation strategy, which integrates optimal individual information to guide the algorithm’s focus on threshold optimization in structurally critical regions. Additionally, ACPOA’s SSIM values remain stable across all threshold levels, with standard deviations generally lower than those of other algorithms. This reflects its effective balance between exploration and exploitation, preventing structural information loss caused by premature convergence.
Collectively, these results consistently demonstrate that ACPOA outperforms the comparative algorithms across multilevel thresholding image segmentation tasks. Its performance advantage arises from the synergistic effect of three key improvement strategies: the elite pool mutation strategy enhances the optimization accuracy of the Otsu objective function by guiding with optimal individuals; the adaptive cooperative mechanism improves search efficiency in high-dimensional threshold spaces through specialized subgroup division and global information sharing, resulting in better PSNR, FSIM, and SSIM scores; and the hybrid boundary handling strategy reduces ineffective fluctuations in threshold search, leading to smaller standard deviations and greater algorithm stability.
Whether dealing with simple images (such as bank) or complex ones (such as baboon), and whether performing low-level (2-level) or high-level (8-level) thresholding, ACPOA consistently produces superior results, fully validating its practicality and superiority in multilevel image segmentation.

5. Summary and Prospect

The proposed ACPOA algorithm achieves comprehensive performance improvements through three core enhancement strategies. First, the elite pool mutation strategy constructs a guidance mechanism based on the top three individuals with the best fitness values. This not only broadens the scope of global exploration but also enhances local exploitation precision, effectively preventing premature loss of population diversity and reducing the risk of getting trapped in local optima in complex multimodal functions. Second, the adaptive cooperative learning mechanism dynamically allocates subgroups and dimensions to specialized updates, enabling efficient search in high-dimensional spaces and addressing the adaptability shortcomings of standard algorithms in such problems. Third, the hybrid boundary handling technique significantly outperforms traditional truncation methods in optimization efficiency near boundary regions by preserving effective search information and maintaining population stability.
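The elite pool mutation idea described above can be illustrated with a short NumPy sketch; the scale factor F and the uniform pull toward a random elite are illustrative assumptions, not the paper's exact update rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def elite_pool_mutation(pop, fitness, F=0.5):
    """Sketch of the elite-pool guidance: pull each individual toward a
    randomly chosen member of the pool formed by the three fittest solutions
    (minimization assumed). F and the uniform step are illustrative choices."""
    elite = pop[np.argsort(fitness)[:3]]           # elite pool: top-3 by fitness
    guides = elite[rng.integers(0, 3, len(pop))]   # one random elite per individual
    step = F * rng.random((len(pop), 1))           # per-individual pull strength
    return pop + step * (guides - pop)
```

Because each individual is drawn toward one of three distinct guides rather than a single global best, the population converges on high-quality regions without collapsing onto one point, which is the diversity-preservation effect the strategy targets.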
On both the CEC2017 and CEC2022 benchmark test suites, ACPOA achieves comprehensive performance improvements through the integration of its three core strategies. Specifically, on CEC2017 (30 dimensions) and CEC2022 (10 and 20 dimensions), ACPOA attains average rankings of 1.47, 2.08, and 1.42, respectively, ranking first in all cases. Notably, ACPOA secures the top position on 16 of the 30 functions in CEC2017 and consistently maintains a top-2 ranking on CEC2022, significantly outperforming the original POA and competing algorithms such as VPPSO, MELGWO, and COA, whose average rankings range from 4.17 to 8.07. Statistical analyses, convergence curves, and rank distributions all confirm ACPOA’s superiority in convergence speed, optimization accuracy, and stability.
When applied to the multilevel threshold image segmentation task, ACPOA also demonstrates outstanding practical performance. For example, in the 8-level threshold segmentation of the “bank” image, ACPOA achieves a PSNR of 25.2639 dB (an improvement of 0.5 dB over POA), an FSIM of 0.9089 (+0.0054), and an SSIM of 0.8556 (+0.0063). Across five benchmark images and four threshold levels, ACPOA surpasses the competing algorithms in PSNR, FSIM, and SSIM in 80% of the scenarios, fully demonstrating the synergistic effectiveness of its three enhancement strategies and laying a solid foundation for subsequent engineering applications.
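The PSNR figures quoted here follow the standard definition over the original image and the segmented reconstruction; a minimal sketch assuming 8-bit data:

```python
import numpy as np

def psnr(original, segmented, peak=255.0):
    """PSNR in dB between the original image and its segmented reconstruction;
    higher values mean the segmentation preserves more of the image detail."""
    diff = original.astype(np.float64) - segmented.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Identical images give infinite PSNR, and the worst-case uniform error of 255 grey levels gives exactly 0 dB, which bounds the scale the reported 25.2639 dB sits on.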
Future research can further advance along the following directions: (1) designing an adaptive mechanism to dynamically adjust the elite pool size and update probability according to different types of optimization problems (e.g., unimodal/multimodal, low/high dimensional), thereby enabling the elite pool mutation strategy to flexibly balance exploration and exploitation; (2) extending the adaptive cooperative learning mechanism to broader applications, such as developing subgroup-based cooperative Pareto optimal search strategies for multi-objective optimization, enhancing the algorithm’s capability to handle conflicting objectives; (3) tailoring the hybrid boundary handling technique to specific real-world problems—such as the boundary ambiguity in medical images or energy constraints in wireless sensor networks—to improve practical applicability.
Additionally, integrating ACPOA with deep learning methods could be explored, for example by leveraging neural networks to predict fitness trends of elite pool individuals, further accelerating convergence and broadening the algorithm’s applicability and performance advantages across diverse domains.

Author Contributions

Conceptualization, Y.Z. and J.W.; methodology, Y.Z. and J.W.; software, Y.Z. and J.W.; validation, X.Z. and B.W.; formal analysis, X.Z. and B.W.; investigation, X.Z. and B.W.; resources, Y.Z. and J.W.; data curation, Y.Z. and J.W.; writing-original draft preparation, Y.Z. and J.W.; writing-review and editing, X.Z. and B.W.; visualization, X.Z.; supervision, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All data in this paper are included in the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Chen, L.; Bentley, P.; Mori, K.; Misawa, K.; Fujiwara, M.; Rueckert, D. DRINet for Medical Image Segmentation. IEEE Trans. Med. Imaging 2018, 37, 2453–2462.
2. LaLonde, R.; Xu, Z.; Irmakci, I.; Jain, S.; Bagci, U. Capsules for biomedical image segmentation. Med. Image Anal. 2020, 68, 101889.
3. Huang, C.; Li, X.; Wen, Y. An OTSU image segmentation based on fruitfly optimization algorithm. Alex. Eng. J. 2020, 60, 183–188.
4. Li, Y.; Feng, X. A multiscale image segmentation method. Pattern Recognit. 2016, 52, 332–345.
5. Mohan, Y.; Yadav, R.K.; Manjul, M. EECRPOA: Energy efficient clustered routing using pelican optimization algorithm in WSNs. Wirel. Netw. 2025, 31, 3743–3769.
6. Wang, J.; Bei, J.; Song, H.; Zhang, H.; Zhang, P. A whale optimization algorithm with combined mutation and removing similarity for global optimization and multilevel thresholding image segmentation. Appl. Soft Comput. 2023, 137, 110130.
7. Abd Elaziz, M.; Bhattacharyya, S.; Lu, S. Swarm selection method for multilevel thresholding image segmentation. Expert Syst. Appl. 2019, 138, 112818.
8. Ben Ishak, A. A two-dimensional multilevel thresholding method for image segmentation. Appl. Soft Comput. 2016, 52, 306–322.
9. Chen, Y.; Wang, M.; Heidari, A.A.; Shi, B.; Hu, Z.; Zhang, Q.; Chen, H.; Mafarja, M.; Turabieh, H. Multi-threshold Image Segmentation using a Multi-strategy Shuffled Frog Leaping Algorithm. Expert Syst. Appl. 2022, 194, 116511.
10. Guo, H.; Wang, J.G.; Liu, Y. Multi-threshold image segmentation algorithm based on Aquila optimization. Vis. Comput. 2023, 40, 2905–2932.
11. Zheng, J.; Gao, Y.; Zhang, H.; Lei, Y.; Zhang, J. OTSU Multi-Threshold Image Segmentation Based on Improved Particle Swarm Algorithm. Appl. Sci. 2022, 12, 11514.
12. Dorigo, M.; Birattari, M.; Stützle, T. Ant Colony Optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39.
13. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
14. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
15. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
16. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2022, 79, 7305–7336.
17. Fu, Y.; Liu, D.; Chen, J.; He, L. Secretary bird optimization algorithm: A new metaheuristic for solving global optimization problems. Artif. Intell. Rev. 2024, 57, 123.
18. Abdel-Basset, M.; Mohamed, R.; Abouhawwash, M. Crested Porcupine Optimizer: A new nature-inspired metaheuristic. Knowl.-Based Syst. 2024, 284, 111257.
19. Bhandari, A.K.; Kumar, A.; Singh, G.K. Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur’s, Otsu and Tsallis functions. Expert Syst. Appl. 2015, 42, 1573–1601.
20. Lan, L.; Wang, S. Improved African vultures optimization algorithm for medical image segmentation. Multimed. Tools Appl. 2023, 83, 45241–45290.
21. Aziz, M.A.E.; Ewees, A.A.; Hassanien, A.E. Whale Optimization Algorithm and Moth-Flame Optimization for multilevel thresholding image segmentation. Expert Syst. Appl. 2017, 83, 242–256.
22. Mostafa, A.; Hassanien, A.E.; Houseni, M.; Hefny, H. Liver segmentation in MRI images based on whale optimization algorithm. Multimed. Tools Appl. 2017, 76, 24931–24954.
23. Mookiah, S.; Parasuraman, K.; Kumar Chandar, S. Color image segmentation based on improved sine cosine optimization algorithm. Soft Comput. 2022, 26, 13193–13203.
24. Aranguren, I.; Valdivia, A.; Morales-Castañeda, B.; Oliva, D.; Abd Elaziz, M.; Perez-Cisneros, M. Improving the segmentation of magnetic resonance brain images using the LSHADE optimization algorithm. Biomed. Signal Process. Control 2020, 64, 102259.
25. Baby Resma, K.P.; Nair, M.S. Multilevel thresholding for image segmentation using Krill Herd Optimization algorithm. J. King Saud Univ.-Comput. Inf. Sci. 2018, 33, 528–541.
26. Chauhan, D.; Yadav, A. A crossover-based optimization algorithm for multilevel image segmentation. Soft Comput. 2023, 1–33.
27. Fan, Q.; Ma, Y.; Wang, P.; Bai, F. Otsu Image Segmentation Based on a Fractional Order Moth–Flame Optimization Algorithm. Fractal Fract. 2024, 8, 87.
28. Trojovský, P.; Dehghani, M. Pelican Optimization Algorithm: A Novel Nature-Inspired Algorithm for Engineering Applications. Sensors 2022, 22, 855.
29. Eluri, R.K.; Devarakonda, N. Chaotic Binary Pelican Optimization Algorithm for Feature Selection. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2023, 31, 497–530.
30. Ge, X.; Li, C.; Li, Y.; Yi, C.; Fu, H. A hyperchaotic map with distance-increasing pairs of coexisting attractors and its application in the pelican optimization algorithm. Chaos Solitons Fractals 2023, 173, 113636.
31. SeyedGarmroudi, S.; Kayakutlu, G.; Kayalica, M.O.; Çolak, Ü. Improved Pelican optimization algorithm for solving load dispatch problems. Energy 2023, 289, 129811.
32. Song, H.-M.; Xing, C.; Wang, J.-S.; Wang, Y.-C.; Liu, Y.; Zhu, J.-H.; Hou, J.-N. Improved pelican optimization algorithm with chaotic interference factor and elementary mathematical function. Soft Comput. 2023, 27, 10607.
33. Zhou, G.; Lü, S.; Mao, L.; Xu, K.; Bao, T.; Bao, X. Path Planning of UAV Using Levy Pelican Optimization Algorithm in Mountain Environment. Appl. Artif. Intell. 2024, 38, 2368343.
34. Qiu, S.; Dai, J.; Zhao, D. Path Planning of an Unmanned Aerial Vehicle Based on a Multi-Strategy Improved Pelican Optimization Algorithm. Biomimetics 2024, 9, 647.
35. Louchart, A.; Tourment, N.; Carrier, J. The earliest known pelican reveals 30 million years of evolutionary stasis in beak morphology. J. Ornithol. 2011, 152, 15–20.
36. Wu, G.; Mallipeddi, R.; Suganthan, P. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition and Special Session on Constrained Single Objective Real-Parameter Optimization; Nanyang Technological University: Singapore, 2016.
37. Luo, W.; Lin, X.; Li, C.; Yang, S.; Shi, Y. Benchmark functions for CEC 2022 competition on seeking multiple optima in dynamic environments. arXiv 2022, arXiv:2201.00523.
38. Shami, T.M.; Mirjalili, S.; Al-Eryani, Y.; Daoudi, K.; Izadi, S.; Abualigah, L. Velocity pausing particle swarm optimization: A novel variant for global optimization. Neural Comput. Appl. 2023, 35, 9193–9223.
39. Yu, M.; Xu, J.; Liang, W.; Qiu, Y.; Bao, S.; Tang, L. Improved multi-strategy adaptive Grey Wolf Optimization for practical engineering applications and high-dimensional problem solving. Artif. Intell. Rev. 2024, 57, 277.
40. Shen, Y.; Zhang, C.; Soleimanian Gharehchopogh, F.; Mirjalili, S. An improved whale optimization algorithm based on multi-population evolution for global optimization and engineering design problems. Expert Syst. Appl. 2023, 215, 119269.
41. Jia, H.; Rao, H.; Wen, C.; Mirjalili, S. Crayfish optimization algorithm. Artif. Intell. Rev. 2023, 56, 1919–1979.
42. Zolf, K. Gold rush optimizer: A new population-based metaheuristic algorithm. Oper. Res. Decis. 2023, 33, 113–150.
43. Wang, L.; Cao, Q.; Zhang, Z.; Mirjalili, S.; Zhao, W. Artificial rabbits optimization: A new bio-inspired meta-heuristic algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2022, 114, 105082.
44. Ryalat, M.H.; Dorgham, O.; Tedmori, S.; Al-Rahamneh, Z.; Al-Najdawi, N.; Mirjalili, S. Harris hawks optimization for COVID-19 diagnosis based on multi-threshold image segmentation. Neural Comput. Appl. 2022, 35, 6855–6873.
45. Suresh, S.; Lal, S. Multilevel thresholding based on Chaotic Darwinian Particle Swarm Optimization for segmentation of satellite images. Appl. Soft Comput. 2017, 55, 503–522.
46. Shi, J.; Chen, Y.; Wang, C.; Heidari, A.A.; Liu, L.; Chen, H.; Chen, X.; Sun, L. Multi-threshold image segmentation using new strategies enhanced whale optimization for lupus nephritis pathological images. Displays 2024, 84, 102799.
47. Jiang, Y.; Yeh, W.-C.; Hao, Z.; Yang, Z. A cooperative honey bee mating algorithm and its application in multi-threshold image segmentation. Inf. Sci. 2016, 369, 171–183.
48. Dong, Y.; Li, M.; Zhou, M. Multi-Threshold Image Segmentation Based on the Improved Dragonfly Algorithm. Mathematics 2024, 12, 854.
Figure 1. Adaptive cooperative mechanism.
Figure 2. Hybrid boundary handling.
Figure 3. Flowchart of ACPOA.
Figure 4. Convergence curves of POA improved with different strategies.
Figure 5. The average ranking of POA improved with different strategies.
Figure 6. Convergence curves obtained by different algorithms.
Figure 7. Ranking distribution of different algorithms.
Figure 8. Average rank ranking.
Figure 9. Convergence behavior of ACPOA and POA.
Figure 10. Average computation time of different algorithms.
Figure 11. The set of benchmark images.
Figure 12. Otsu fitness value curves of different algorithms.
Figure 13. Average rank ranking of the fitness values in Otsu.
Figure 14. Average rank of PSNR in Otsu.
Figure 15. Average rank of FSIM in Otsu.
Figure 16. Average rank of SSIM in Otsu.
Table 1. Comparison algorithm parameter settings.
Algorithm | Name of the Parameter | Value of the Parameter
VPPSO | α, N1, N2 | 0.3, 0.15, 0.15
IAGWO | a | [0, 2]
MEWOA | b, r, l, α | 1, [0, 1], [−1, 1], [0, 2]
COA | C1, C2, μ, σ | 0.2, 3, 25, 3
DBO | Ppercent | 0.2
GRO | r1, r2, r3, m | [0, 1], [0, 1], [0, 1], [0, 1]
CPO | α, Nmin, Tf, T | 0.1, 80, 0.5, 2
ARO | n1, n2 | N(0, 1), N(0, 1)
POA | I, R | {1, 2}, 0.2
ACPOA | I, R, P | {1, 2}, 0.2, [0, 1]
Table 2. Results obtained by different algorithms on CEC2017 test suite (dim = 30).
F | Metric | VPPSO | MELGWO | MEWOA | COA | DBO | GRO | CPO | ARO | POA | ACPOA
F1Mean1.1375 × 1071.8308 × 1095.7291 × 1098.6894 × 1083.1419 × 1081.0261 × 1087.6019 × 1053.1496 × 1071.6176 × 10103.9708 × 103
Std3.4628 × 1071.5588 × 1092.6310 × 1098.1337 × 1082.6897 × 1086.9240 × 1076.0890 × 1052.4351 × 1075.0407 × 1094.6275 × 103
F2Mean1.2711 × 10233.4443 × 10316.6795 × 10332.2143 × 10271.4254 × 10336.7416 × 10245.8416 × 10207.6964 × 10221.2596 × 10339.8896 × 1022
Std5.3032 × 10231.3276 × 10322.5667 × 10348.4364 × 10276.6217 × 10331.4303 × 10252.9370 × 10212.7150 × 10233.7946 × 10335.4168 × 1023
F3Mean4.9827 × 1044.3420 × 1047.6186 × 1041.2071 × 1058.9042 × 1046.4340 × 1046.5215 × 1045.8754 × 1044.2778 × 1043.5586 × 104
Std1.2072 × 1049.8805 × 1039.7287 × 1033.7089 × 1041.4271 × 1041.2942 × 1041.1539 × 1049.9056 × 1038.5261 × 1039.9549 × 103
F4Mean5.3082 × 1026.2890 × 1029.0888 × 1026.0387 × 1026.9545 × 1025.5125 × 1025.2024 × 1025.4133 × 1022.4250 × 1034.8022 × 102
Std3.7175 × 1011.2765 × 1022.5649 × 1028.7124 × 1011.6924 × 1022.4724 × 1011.5100 × 1012.8898 × 1011.3609 × 1032.9142 × 101
F5Mean6.5220 × 1026.6833 × 1028.0535 × 1027.4572 × 1027.4933 × 1026.0438 × 1026.9201 × 1026.3877 × 1027.6623 × 1025.9121 × 102
Std2.7897 × 1013.6359 × 1014.1191 × 1016.2452 × 1014.3479 × 1011.9339 × 1011.3670 × 1013.4135 × 1013.6758 × 1011.6589 × 101
F6Mean6.3577 × 1026.4762 × 1026.6788 × 1026.5519 × 1026.4706 × 1026.0683 × 1026.0191 × 1026.1521 × 1026.6142 × 1026.0018 × 102
Std1.1280 × 1011.0425 × 1018.6949 × 1001.0440 × 1011.1190 × 1012.0843 × 1001.0067 × 1007.4614 × 1006.0952 × 1002.1633 × 10−1
F7Mean9.4449 × 1021.0007 × 1031.2296 × 1031.2441 × 1031.0351 × 1038.5168 × 1029.4051 × 1029.5214 × 1021.2676 × 1038.2402 × 102
Std6.0901 × 1015.4809 × 1011.0337 × 1021.0238 × 1029.5025 × 1012.3671 × 1012.4305 × 1018.0706 × 1015.7617 × 1012.5279 × 101
F8Mean9.2337 × 1029.4264 × 1021.0340 × 1039.8202 × 1021.0328 × 1039.0508 × 1029.8309 × 1029.0285 × 1029.9329 × 1028.9669 × 102
Std2.9430 × 1013.0132 × 1013.1492 × 1013.2143 × 1016.2159 × 1011.9836 × 1011.5220 × 1012.5415 × 1012.2220 × 1011.7811 × 101
F9Mean3.6677 × 1034.0001 × 1038.1401 × 1037.5663 × 1036.9243 × 1031.2713 × 1031.3021 × 1032.8463 × 1035.7281 × 1031.4125 × 103
Std1.2226 × 1037.8868 × 1021.3441 × 1031.9048 × 1031.7899 × 1032.2074 × 1023.2902 × 1028.6433 × 1027.4255 × 1024.5988 × 102
F10Mean4.9128 × 1035.0444 × 1037.2141 × 1036.1829 × 1036.8688 × 1035.8291 × 1037.5081 × 1034.4758 × 1035.3002 × 1034.2705 × 103
Std7.4816 × 1025.2892 × 1027.7563 × 1028.8919 × 1021.1659 × 1034.7016 × 1023.8199 × 1025.5755 × 1024.4992 × 1025.6168 × 102
F11Mean1.4462 × 1031.4731 × 1033.2492 × 1031.7790 × 1031.8311 × 1031.3434 × 1031.2784 × 1031.3483 × 1032.3659 × 1031.2021 × 103
Std1.6075 × 1024.0029 × 1021.2634 × 1034.8034 × 1025.4897 × 1027.3209 × 1012.7920 × 1011.1555 × 1021.1036 × 1032.6893 × 101
F12Mean2.9636 × 1074.5530 × 1073.4326 × 1081.2786 × 1076.6984 × 1074.3046 × 1061.2139 × 1063.7488 × 1061.3619 × 1091.3642 × 106
Std2.5541 × 1074.8988 × 1073.1841 × 1088.0430 × 1068.1928 × 1073.4547 × 1067.1367 × 1052.5990 × 1061.4644 × 1091.1044 × 106
F13Mean9.3463 × 1041.5714 × 1052.0549 × 1073.3740 × 1055.0478 × 1061.0275 × 1052.0402 × 1041.7617 × 1041.4489 × 1089.4868 × 103
Std5.3504 × 1044.0772 × 1053.6758 × 1076.6836 × 1057.5956 × 1061.4033 × 1051.1504 × 1041.4142 × 1044.6875 × 1088.6454 × 103
F14Mean1.1363 × 1052.3274 × 1058.3826 × 1055.3779 × 1052.8492 × 1053.0620 × 1042.3852 × 1039.2270 × 1042.9205 × 1048.9124 × 104
Std1.3570 × 1052.9271 × 1058.0865 × 1059.9421 × 1053.3388 × 1053.2617 × 1041.3412 × 1031.5947 × 1053.8543 × 1041.0446 × 105
F15Mean3.8031 × 1042.3062 × 1044.6878 × 1062.8996 × 1047.0496 × 1042.5453 × 1045.1066 × 1034.5295 × 1034.5049 × 1041.9796 × 103
Std1.9962 × 1041.3341 × 1041.3037 × 1072.9043 × 1047.5036 × 1041.6858 × 1043.7134 × 1032.6816 × 1033.1998 × 1046.7979 × 102
F16Mean2.9233 × 1032.8686 × 1033.5036 × 1032.9869 × 1033.5284 × 1032.5235 × 1033.0865 × 1032.6728 × 1033.0648 × 1032.3041 × 103
Std4.2665 × 1023.4690 × 1024.0548 × 1023.8779 × 1024.1437 × 1021.9900 × 1021.8006 × 1022.5857 × 1022.9187 × 1021.6933 × 102
F17Mean2.2173 × 1032.2770 × 1032.4988 × 1032.2903 × 1032.6191 × 1031.9040 × 1032.0582 × 1032.1663 × 1032.2947 × 1031.9909 × 103
Std1.6317 × 1022.4677 × 1022.3346 × 1022.0758 × 1022.5341 × 1029.7707 × 1011.3675 × 1022.1816 × 1022.2626 × 1021.5693 × 102
F18Mean1.3513 × 1061.1908 × 1066.7675 × 1063.0641 × 1064.7491 × 1064.6233 × 1051.5521 × 1053.9553 × 1053.4603 × 1052.0732 × 105
Std1.7307 × 1061.1436 × 1067.0456 × 1063.5154 × 1065.8993 × 1063.5555 × 1051.4850 × 1054.2848 × 1053.8950 × 1052.1897 × 105
F19Mean2.0797 × 1062.7473 × 1055.0617 × 1062.1012 × 1045.7832 × 1063.2846 × 1045.8061 × 1039.6496 × 1031.1777 × 1063.8020 × 103
Std1.3157 × 1061.0500 × 1066.8168 × 1062.2412 × 1041.1055 × 1075.6650 × 1043.6581 × 1036.2807 × 1031.1486 × 1062.5795 × 103
F20Mean2.4563 × 1032.6103 × 1032.7958 × 1032.7211 × 1032.6765 × 1032.3393 × 1032.4494 × 1032.4309 × 1032.4947 × 1032.2506 × 103
Std1.8709 × 1021.7798 × 1021.9587 × 1022.4077 × 1021.8525 × 1028.8035 × 1011.2807 × 1021.7563 × 1021.3419 × 1021.1985 × 102
F21Mean2.4318 × 1032.4564 × 1032.5681 × 1032.4696 × 1032.5673 × 1032.3989 × 1032.4850 × 1032.4107 × 1032.5529 × 1032.3926 × 103
Std3.6844 × 1013.3051 × 1013.9999 × 1013.7966 × 1014.3237 × 1011.9409 × 1011.4974 × 1012.9969 × 1014.3677 × 1011.9150 × 101
F22Mean3.7881 × 1035.1707 × 1036.5027 × 1034.0338 × 1035.2794 × 1032.3565 × 1032.3098 × 1032.3472 × 1035.9579 × 1033.8288 × 103
Std2.0541 × 1031.8701 × 1032.7016 × 1032.4042 × 1032.3332 × 1031.7858 × 1012.5448 × 1004.5840 × 1011.6821 × 1031.5900 × 103
F23Mean2.8124 × 1032.8197 × 1032.9730 × 1032.8643 × 1032.9853 × 1032.7521 × 1032.8459 × 1032.7860 × 1033.0295 × 1032.7482 × 103
Std3.5110 × 1014.4436 × 1017.0886 × 1016.5980 × 1017.3775 × 1011.7045 × 1011.9810 × 1013.8880 × 1017.2357 × 1011.9297 × 101
F24Mean2.9629 × 1032.9717 × 1033.1026 × 1033.0250 × 1033.1508 × 1032.9169 × 1033.0174 × 1032.9578 × 1033.1927 × 1032.9552 × 103
Std3.0418 × 1015.0854 × 1015.4099 × 1018.9759 × 1018.5100 × 1011.5911 × 1012.0416 × 1013.7326 × 1017.4481 × 1013.7275 × 101
F25Mean2.9537 × 1032.9894 × 1033.1396 × 1032.9660 × 1032.9781 × 1032.9344 × 1032.9150 × 1032.9704 × 1033.2801 × 1032.8925 × 103
Std2.3983 × 1015.1357 × 1019.5714 × 1013.0666 × 1011.4119 × 1022.0018 × 1011.6907 × 1012.4124 × 1012.2383 × 1021.2564 × 101
F26Mean4.8078 × 1036.1398 × 1036.7816 × 1036.0131 × 1036.9286 × 1034.2501 × 1034.9561 × 1035.1848 × 1037.2540 × 1034.5868 × 103
Std1.2737 × 1037.6364 × 1021.6242 × 1031.8671 × 1031.0119 × 1036.6665 × 1021.2473 × 1039.8753 × 1021.4921 × 1036.7159 × 102
F27Mean3.2980 × 1033.2939 × 1033.3729 × 1033.2864 × 1033.3443 × 1033.2621 × 1033.2746 × 1033.2790 × 1033.3929 × 1033.2244 × 103
Std4.4947 × 1014.7424 × 1019.8957 × 1014.8187 × 1017.8180 × 1011.4906 × 1011.2546 × 1012.8122 × 1019.9678 × 1011.3185 × 101
F28Mean3.3128 × 1033.4574 × 1033.5863 × 1033.3904 × 1033.6925 × 1033.3070 × 1033.2864 × 1033.3538 × 1034.0754 × 1033.2095 × 103
Std3.0797 × 1011.3534 × 1021.0354 × 1021.0071 × 1027.2494 × 1022.3882 × 1011.8118 × 1014.5569 × 1014.6386 × 1023.2677 × 101
F29Mean4.3276 × 1034.3871 × 1034.9068 × 1034.1970 × 1034.4064 × 1033.8027 × 1033.9421 × 1033.9766 × 1034.5631 × 1033.6825 × 103
Std2.9691 × 1023.4271 × 1025.0121 × 1023.0088 × 1023.3855 × 1022.0339 × 1021.3313 × 1022.0729 × 1023.1988 × 1021.3905 × 102
F30Mean7.5835 × 1063.8007 × 1064.2896 × 1078.4434 × 1051.8720 × 1064.5062 × 1051.0977 × 1058.5338 × 1041.2651 × 1071.6616 × 104
Std4.8053 × 1063.2224 × 1063.5696 × 1078.6091 × 1052.5307 × 1063.4936 × 1055.1837 × 1041.0351 × 1051.0297 × 1071.3243 × 104
Mean Rank 4.87 6.03 9.30 6.80 8.03 3.13 3.73 3.57 8.07 1.47
Friedman5 6 10 7 8 2 4 3 9 1
Table 3. Results obtained by different algorithms on CEC2022 test suite (dim = 10).
F | Metric | VPPSO | MELGWO | MEWOA | COA | DBO | GRO | CPO | ARO | POA | ACPOA
F1Mean3.4657 × 1023.0429 × 1021.9358 × 1033.9057 × 1032.0106 × 1034.8105 × 1023.9349 × 1025.0408 × 1027.9300 × 1023.0003 × 102
Std1.0999 × 1021.3128 × 1011.7070 × 1032.7774 × 1031.7332 × 1033.7233 × 1028.9270 × 1013.4324 × 1029.1688 × 1028.3926 × 10−2
F2Mean4.1114 × 1024.1037 × 1024.3779 × 1024.1615 × 1024.3807 × 1024.0268 × 1024.0064 × 1024.0507 × 1024.2054 × 1024.0118 × 102
Std1.7082 × 1011.6896 × 1013.4958 × 1012.6395 × 1013.6601 × 1013.3673 × 1001.4780 × 1001.2343 × 1012.6738 × 1011.9302 × 100
F3Mean6.0734 × 1026.0792 × 1026.2528 × 1026.0749 × 1026.1192 × 1026.0004 × 1026.0000 × 1026.0015 × 1026.2215 × 1026.0000 × 102
Std6.1923 × 1007.6987 × 1001.0269 × 1019.6619 × 1007.4745 × 1002.2921 × 10−23.1990 × 10−32.8397 × 10−11.1809 × 1011.2467 × 10−3
F4Mean8.1767 × 1028.1575 × 1028.2971 × 1028.2968 × 1028.3440 × 1028.0968 × 1028.2145 × 1028.1669 × 1028.1901 × 1028.1775 × 102
Std5.7903 × 1006.8940 × 1008.1023 × 1005.7547 × 1001.2651 × 1012.8398 × 1005.1475 × 1006.8492 × 1005.9016 × 1005.4649 × 100
F5Mean9.1339 × 1029.7621 × 1021.1846 × 1031.0600 × 1031.0028 × 1039.0006 × 1029.0000 × 1029.1070 × 1021.0933 × 1039.0275 × 102
Std2.1344 × 1019.0952 × 1012.0945 × 1022.0665 × 1021.2525 × 1021.1693 × 10−16.5088 × 10−41.9006 × 1011.1774 × 1025.3010 × 100
F6Mean4.4722 × 1033.7368 × 1031.6579 × 1044.7699 × 1035.2819 × 1032.8071 × 1031.8282 × 1032.7588 × 1032.8474 × 1031.8409 × 103
Std2.2332 × 1032.1388 × 1031.5632 × 1041.8412 × 1032.2808 × 1039.9904 × 1021.3051 × 1011.1952 × 1031.3497 × 1031.1364 × 102
F7Mean2.0402 × 1032.0412 × 1032.0550 × 1032.0266 × 1032.0415 × 1032.0146 × 1032.0111 × 1032.0143 × 1032.0394 × 1032.0017 × 103
Std1.3565 × 1012.2983 × 1012.1811 × 1012.3171 × 1012.3737 × 1018.4558 × 1005.3486 × 1009.4758 × 1001.5977 × 1015.0237 × 100
F8Mean2.2250 × 1032.2285 × 1032.2303 × 1032.2253 × 1032.2378 × 1032.2211 × 1032.2175 × 1032.2191 × 1032.2222 × 1032.2135 × 103
Std4.5283 × 1002.3298 × 1015.3590 × 1009.5938 × 1003.0783 × 1015.4061 × 1005.6260 × 1004.9851 × 1008.1068 × 1009.5202 × 100
F9Mean2.5325 × 1032.5308 × 1032.5615 × 1032.5342 × 1032.5620 × 1032.5293 × 1032.5293 × 1032.5294 × 1032.5447 × 1032.5293 × 103
Std4.8412 × 1007.0711 × 1004.0100 × 1012.6826 × 1014.9916 × 1011.9516 × 10−25.2841 × 10−34.4776 × 10−12.0188 × 1015.6355 × 10−10
F10Mean2.5452 × 1032.5907 × 1032.5304 × 1032.5757 × 1032.5437 × 1032.5076 × 1032.5157 × 1032.5156 × 1032.5398 × 1032.5076 × 103
Std5.9916 × 1011.6575 × 1025.4505 × 1011.3449 × 1026.1479 × 1012.7629 × 1013.9638 × 1013.9109 × 1016.0838 × 1015.8674 × 101
F11Mean2.7651 × 1032.7579 × 1032.7460 × 1032.7806 × 1032.8091 × 1032.6049 × 1032.6233 × 1032.6225 × 1032.7512 × 1032.6795 × 103
Std1.7757 × 1021.7065 × 1021.1913 × 1021.7163 × 1022.0082 × 1022.1820 × 1018.9763 × 1015.2027 × 1011.4235 × 1021.1109 × 102
F12Mean2.8640 × 1032.8658 × 1032.8650 × 1032.8697 × 1032.8766 × 1032.8646 × 1032.8653 × 1032.8679 × 1032.8723 × 1032.8646 × 103
Std1.1965 × 1003.9116 × 1001.7109 × 1001.3017 × 1011.8227 × 1017.1820 × 10−17.5474 × 10−14.4122 × 1001.8535 × 1011.3917 × 100
Mean Rank 5.17 5.33 9.00 6.92 8.67 3.42 2.92 4.08 7.42 2.08
Friedman5 6 10 7 9 3 2 4 8 1
Table 4. Results obtained by different algorithms on CEC2022 test suite (dim = 20).
F | Metric | VPPSO | MELGWO | MEWOA | COA | DBO | GRO | CPO | ARO | POA | ACPOA
F1Mean6.2969 × 1035.6957 × 1031.6995 × 1044.8599 × 1043.8080 × 1041.1417 × 1041.2658 × 1041.4647 × 1048.9836 × 1032.2912 × 103
Std2.7746 × 1032.7018 × 1034.4983 × 1031.3467 × 1041.1222 × 1043.5870 × 1033.2792 × 1033.9141 × 1032.8026 × 1031.3883 × 103
F2Mean4.8107 × 1025.1462 × 1025.8912 × 1025.0038 × 1025.0458 × 1024.6271 × 1024.5915 × 1024.8339 × 1026.4422 × 1024.4991 × 102
Std2.8896 × 1013.9989 × 1016.5907 × 1013.6146 × 1018.0071 × 1011.0024 × 1011.0142 × 1013.2460 × 1011.0716 × 1021.9784 × 101
F3Mean6.3019 × 1026.2784 × 1026.5362 × 1026.3468 × 1026.3436 × 1026.0167 × 1026.0033 × 1026.0539 × 1026.5430 × 1026.0001 × 102
Std1.1186 × 1011.1766 × 1011.1987 × 1011.7618 × 1011.0589 × 1015.1073 × 10−11.3581 × 10−14.3495 × 1009.3883 × 1002.1218 × 10−2
F4Mean8.5977 × 1028.6492 × 1029.1100 × 1028.8575 × 1029.1892 × 1028.4402 × 1029.0038 × 1028.5377 × 1028.8068 × 1028.6319 × 102
Std1.5268 × 1011.5199 × 1011.5716 × 1011.5254 × 1012.4063 × 1017.8358 × 1001.3335 × 1011.7456 × 1011.2753 × 1011.9148 × 101
F5Mean1.5191 × 1031.6320 × 1032.8787 × 1032.6005 × 1032.1874 × 1039.2266 × 1029.2261 × 1021.2945 × 1032.2097 × 1031.2438 × 103
Std3.6595 × 1023.6887 × 1025.0999 × 1025.9181 × 1025.2628 × 1021.2432 × 1013.9376 × 1012.7261 × 1021.7774 × 1022.4311 × 102
F6Mean4.7761 × 1036.4213 × 1039.5032 × 1068.6735 × 1031.4453 × 1067.3043 × 1042.9895 × 1043.3147 × 1031.0228 × 1062.5973 × 103
Std3.4746 × 1035.7911 × 1031.1634 × 1077.0119 × 1034.7778 × 1061.2657 × 1052.8984 × 1041.5890 × 1031.8377 × 1069.2865 × 102
F7Mean2.1021 × 1032.1284 × 1032.1466 × 1032.1606 × 1032.1469 × 1032.0507 × 1032.0629 × 1032.0632 × 1032.1151 × 1032.0297 × 103
Std4.0420 × 1016.7217 × 1013.4365 × 1011.0040 × 1026.1833 × 1011.0990 × 1019.8261 × 1003.1257 × 1013.8444 × 1011.2374 × 101
F8Mean2.2661 × 1032.2762 × 1032.2812 × 1032.2748 × 1032.3359 × 1032.2296 × 1032.2315 × 1032.2323 × 1032.2642 × 1032.2220 × 103
Std6.4708 × 1015.8535 × 1016.3978 × 1015.3474 × 1019.5727 × 1012.2382 × 1001.8118 × 1003.0063 × 1015.3938 × 1011.3001 × 100
F9Mean2.5098 × 1032.5006 × 1032.5600 × 1032.4822 × 1032.5094 × 1032.4835 × 1032.4819 × 1032.4865 × 1032.5495 × 1032.4808 × 103
Std1.9572 × 1011.7239 × 1014.1348 × 1012.7193 × 1002.6095 × 1011.1662 × 1007.1576 × 10−13.4248 × 1003.1790 × 1012.6827 × 10−4
F10Mean3.0863 × 1033.9147 × 1033.6152 × 1033.7418 × 1033.4571 × 1032.5383 × 1032.5387 × 1032.6418 × 1033.5533 × 1032.4802 × 103
Std9.1275 × 1026.4224 × 1021.3826 × 1031.4204 × 1031.0735 × 1036.3597 × 1017.7304 × 1012.0358 × 1021.0430 × 1038.7741 × 101
F11Mean2.9600 × 1033.1076 × 1033.8688 × 1033.3376 × 1033.1834 × 1033.0163 × 1032.9152 × 1032.9490 × 1034.8660 × 1032.9200 × 103
Std2.5409 × 1022.2789 × 1026.4080 × 1027.5569 × 1025.1257 × 1028.0592 × 1017.3399 × 1016.8034 × 1018.7819 × 1024.0680 × 101
F12Mean2.9879 × 1032.9828 × 1033.0129 × 1032.9995 × 1033.0316 × 1032.9682 × 1032.9886 × 1032.9829 × 1033.0570 × 1032.9520 × 103
Std4.6786 × 1012.7982 × 1014.3618 × 1013.7041 × 1014.5918 × 1011.1385 × 1011.0707 × 1012.0398 × 1016.2539 × 1011.0873 × 101
Mean Rank 4.42 5.75 9.00 7.25 7.67 3.25 4.17 3.92 8.17 1.42
Friedman5 6 10 7 8 2 4 3 9 1
Table 5. Time complexity comparison of different algorithms.
| Algorithm | Time Complexity | Complexity Analysis |
|---|---|---|
| VPPSO | O(T × N × dim) | The velocity-pause mechanism adds a constant-time judgment (O(1)) per particle, and the reverse-learning step adds O(N) work per iteration; the overall complexity is dominated by the iteration count (T), population size (N), and dimension (dim) |
| MELGWO | O(T × N × dim) | The adaptive strategy introduces a constant-time parameter adjustment (O(1)), and the grey-wolf hierarchy update is O(N) (sorting), so the overall complexity is O(T × (N × dim + N)) = O(T × N × dim) |
| MEWOA | O(T × N × dim) | The moulting behavior requires a constant-time boundary judgment (O(1)), and the foraging update is O(N × dim), so the overall complexity is O(T × N × dim) |
| COA | O(T × N × dim) | The moulting behavior requires a constant-time boundary judgment (O(1)), and the foraging update is O(N × dim), so the overall complexity is O(T × N × dim) |
| DBO | O(T × N × dim) | The stealing behavior adds a constant-time random selection (O(1)) per individual, and the dominant term is still O(T × N × dim) |
| GRO | O(T × N × dim) | The prospecting-direction update is O(N × dim), and the digging-depth adjustment is O(1), so the complexity is O(T × N × dim) |
| CPO | O(T × N × dim) | The quill-defense mechanism adds a constant-time distance calculation (O(1)), and the overall complexity is dominated by the iteration and population-dimension updates |
| ARO | O(T × N × dim) | The hiding behavior requires a constant-time position randomization (O(1)), and the foraging update is O(N × dim), so the complexity is O(T × N × dim) |
| POA | O(T × N × dim) | The two-phase update (exploration + exploitation) is O(N × dim) per iteration, and no additional high-complexity operations are introduced |
| ACPOA | O(T × N × dim) | 1. Elite pool mutation: selecting the top 3 individuals requires an O(N) pass (negligible for the fixed pool size); 2. Adaptive cooperative mechanism: subgroup-dimension allocation uses roulette-wheel selection, O(dim) per subgroup with S = min(4, dim) subgroups (a constant); 3. Hybrid boundary handling: probabilistic repair is O(1) per individual. The dominant term remains O(T × N × dim), consistent with the baseline POA |
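To make the per-iteration costs in the ACPOA row concrete, the following is an illustrative sketch of the elite-pool selection and the probabilistic hybrid boundary repair. This is our own minimal reconstruction, not the authors' code; `p_clip` and all function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elite_pool(pop, fitness, k=3):
    """Pick the k best individuals (a full sort is shown for clarity;
    a single O(N) scan suffices for a fixed k such as 3)."""
    idx = np.argsort(fitness)[:k]
    return pop[idx]

def hybrid_boundary(x, lb, ub, p_clip=0.5):
    """Probabilistic repair of boundary violations: with probability p_clip
    clamp the violating component to the bound, otherwise resample it
    uniformly inside [lb, ub] - O(dim) work per individual."""
    out = x.copy()
    low, high = out < lb, out > ub
    for mask, bound in ((low, lb), (high, ub)):
        clip = mask & (rng.random(x.shape) < p_clip)
        resample = mask & ~clip
        out[clip] = bound                                  # exploitation: keep the bound
        out[resample] = rng.uniform(lb, ub, x.shape)[resample]  # exploration: re-randomize
    return out

x = np.array([-2.0, 0.3, 1.8])
print(hybrid_boundary(x, lb=-1.0, ub=1.0))  # all components back inside [-1, 1]
```

Both operations are dominated by the O(N × dim) position update, so the asymptotic cost matches the baseline POA.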
Table 6. ACPOA results of multi-level threshold segmentation with Otsu as the objective function.
[Segmentation results for the baboon, bank, camera, face, and lena test images at threshold counts 2, 4, 6, and 8; the result images are omitted here.]
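The segmentations in Table 6 are obtained by maximizing Otsu's between-class variance over the candidate threshold vector. A compact sketch of that objective for a gray-level histogram (our own illustration of the standard multilevel Otsu criterion; the paper's exact formulation may differ in detail):

```python
import numpy as np

def otsu_objective(hist, thresholds):
    """Between-class variance of Otsu's criterion for a gray-level histogram
    split at the given thresholds; the optimizer maximizes this value."""
    p = hist / hist.sum()                      # normalized histogram
    levels = np.arange(len(p))
    mu_total = (p * levels).sum()              # global mean gray level
    edges = [0, *sorted(thresholds), len(p)]
    var_between = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                     # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w   # class mean
            var_between += w * (mu - mu_total) ** 2
    return var_between

# bimodal toy histogram: a threshold between the two modes scores highest
hist = np.array([10, 30, 10, 0, 0, 10, 30, 10], dtype=float)
print(otsu_objective(hist, [4]) > otsu_objective(hist, [2]))  # True
```

In the multilevel setting, ACPOA searches the space of threshold vectors with this function as the fitness.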
Table 7. Mean and Std of the optimal fitness values with Otsu as the objective function.
| Image | Threshold | Metric | VPPSO | MELGWO | MEWOA | COA | DBO | GRO | CPO | ARO | POA | ACPOA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| baboon | 2 | Mean | 3.02 × 10^3 | 3.02 × 10^3 | 3.02 × 10^3 | 3.02 × 10^3 | 3.02 × 10^3 | 3.02 × 10^3 | 3.02 × 10^3 | 3.02 × 10^3 | 3.02 × 10^3 | 3.02 × 10^3 |
| | | Std | 8.35 × 10^−3 | 1.16 × 10^−1 | 3.86 × 10^−1 | 1.14 × 10^0 | 3.48 × 10^−2 | 5.19 × 10^−1 | 8.03 × 10^−1 | 1.23 × 10^−1 | 1.85 × 10^−12 | 1.85 × 10^−12 |
| | 4 | Mean | 3.30 × 10^3 | 3.30 × 10^3 | 3.29 × 10^3 | 3.29 × 10^3 | 3.30 × 10^3 | 3.30 × 10^3 | 3.29 × 10^3 | 3.30 × 10^3 | 3.30 × 10^3 | 3.30 × 10^3 |
| | | Std | 8.82 × 10^−1 | 4.13 × 10^0 | 4.01 × 10^0 | 1.15 × 10^1 | 2.44 × 10^0 | 3.83 × 10^0 | 5.80 × 10^0 | 3.09 × 10^0 | 1.19 × 10^0 | 1.50 × 10^−2 |
| | 6 | Mean | 3.37 × 10^3 | 3.37 × 10^3 | 3.36 × 10^3 | 3.36 × 10^3 | 3.36 × 10^3 | 3.36 × 10^3 | 3.36 × 10^3 | 3.36 × 10^3 | 3.37 × 10^3 | 3.37 × 10^3 |
| | | Std | 3.35 × 10^0 | 3.69 × 10^0 | 1.05 × 10^1 | 1.05 × 10^1 | 5.16 × 10^0 | 5.09 × 10^0 | 4.53 × 10^0 | 4.31 × 10^0 | 2.32 × 10^0 | 1.03 × 10^−1 |
| | 8 | Mean | 3.40 × 10^3 | 3.39 × 10^3 | 3.38 × 10^3 | 3.38 × 10^3 | 3.39 × 10^3 | 3.39 × 10^3 | 3.39 × 10^3 | 3.39 × 10^3 | 3.40 × 10^3 | 3.40 × 10^3 |
| | | Std | 3.14 × 10^0 | 7.06 × 10^0 | 1.03 × 10^1 | 6.94 × 10^0 | 5.14 × 10^0 | 3.29 × 10^0 | 3.73 × 10^0 | 3.79 × 10^0 | 2.12 × 10^0 | 1.70 × 10^−1 |
| bank | 2 | Mean | 3.34 × 10^3 | 3.34 × 10^3 | 3.34 × 10^3 | 3.34 × 10^3 | 3.34 × 10^3 | 3.34 × 10^3 | 3.34 × 10^3 | 3.34 × 10^3 | 3.34 × 10^3 | 3.34 × 10^3 |
| | | Std | 0.00 × 10^0 | 6.34 × 10^−1 | 4.20 × 10^−1 | 2.08 × 10^0 | 3.50 × 10^−2 | 1.99 × 10^−1 | 5.75 × 10^−1 | 2.06 × 10^−1 | 0.00 × 10^0 | 0.00 × 10^0 |
| | 4 | Mean | 3.60 × 10^3 | 3.60 × 10^3 | 3.59 × 10^3 | 3.59 × 10^3 | 3.60 × 10^3 | 3.60 × 10^3 | 3.59 × 10^3 | 3.60 × 10^3 | 3.60 × 10^3 | 3.60 × 10^3 |
| | | Std | 6.66 × 10^−1 | 2.90 × 10^0 | 1.35 × 10^1 | 7.87 × 10^0 | 2.84 × 10^0 | 4.29 × 10^0 | 3.47 × 10^0 | 2.25 × 10^0 | 9.81 × 10^−1 | 3.81 × 10^−2 |
| | 6 | Mean | 3.68 × 10^3 | 3.67 × 10^3 | 3.66 × 10^3 | 3.67 × 10^3 | 3.67 × 10^3 | 3.67 × 10^3 | 3.67 × 10^3 | 3.67 × 10^3 | 3.68 × 10^3 | 3.68 × 10^3 |
| | | Std | 3.48 × 10^0 | 6.42 × 10^0 | 1.20 × 10^1 | 9.13 × 10^0 | 7.59 × 10^0 | 5.03 × 10^0 | 5.65 × 10^0 | 3.18 × 10^0 | 3.01 × 10^0 | 1.33 × 10^−1 |
| | 8 | Mean | 3.70 × 10^3 | 3.70 × 10^3 | 3.70 × 10^3 | 3.70 × 10^3 | 3.70 × 10^3 | 3.70 × 10^3 | 3.70 × 10^3 | 3.70 × 10^3 | 3.71 × 10^3 | 3.71 × 10^3 |
| | | Std | 4.05 × 10^0 | 7.51 × 10^0 | 6.21 × 10^0 | 8.53 × 10^0 | 6.76 × 10^0 | 4.20 × 10^0 | 3.61 × 10^0 | 2.97 × 10^0 | 3.10 × 10^0 | 3.12 × 10^−1 |
| camera | 2 | Mean | 4.48 × 10^3 | 4.48 × 10^3 | 4.48 × 10^3 | 4.48 × 10^3 | 4.48 × 10^3 | 4.48 × 10^3 | 4.48 × 10^3 | 4.48 × 10^3 | 4.48 × 10^3 | 4.48 × 10^3 |
| | | Std | 2.66 × 10^−2 | 1.30 × 10^−1 | 7.61 × 10^−2 | 1.14 × 10^0 | 1.99 × 10^−2 | 0.00 × 10^0 | 2.88 × 10^−1 | 7.52 × 10^−2 | 7.10 × 10^−2 | 0.00 × 10^0 |
| | 4 | Mean | 4.60 × 10^3 | 4.60 × 10^3 | 4.60 × 10^3 | 4.60 × 10^3 | 4.60 × 10^3 | 4.60 × 10^3 | 4.60 × 10^3 | 4.60 × 10^3 | 4.60 × 10^3 | 4.60 × 10^3 |
| | | Std | 1.13 × 10^0 | 1.54 × 10^0 | 3.41 × 10^0 | 2.19 × 10^0 | 2.36 × 10^0 | 2.02 × 10^0 | 2.15 × 10^0 | 1.62 × 10^0 | 9.80 × 10^−1 | 1.34 × 10^−2 |
| | 6 | Mean | 4.65 × 10^3 | 4.65 × 10^3 | 4.64 × 10^3 | 4.64 × 10^3 | 4.64 × 10^3 | 4.64 × 10^3 | 4.64 × 10^3 | 4.64 × 10^3 | 4.65 × 10^3 | 4.65 × 10^3 |
| | | Std | 4.49 × 10^0 | 5.42 × 10^0 | 6.58 × 10^0 | 8.38 × 10^0 | 5.78 × 10^0 | 3.92 × 10^0 | 4.71 × 10^0 | 3.20 × 10^0 | 2.72 × 10^0 | 7.40 × 10^−2 |
| | 8 | Mean | 4.67 × 10^3 | 4.66 × 10^3 | 4.66 × 10^3 | 4.66 × 10^3 | 4.66 × 10^3 | 4.66 × 10^3 | 4.66 × 10^3 | 4.66 × 10^3 | 4.66 × 10^3 | 4.67 × 10^3 |
| | | Std | 2.85 × 10^0 | 3.66 × 10^0 | 5.40 × 10^0 | 4.91 × 10^0 | 4.75 × 10^0 | 3.14 × 10^0 | 2.73 × 10^0 | 2.98 × 10^0 | 2.43 × 10^0 | 4.05 × 10^−1 |
| face | 2 | Mean | 1.91 × 10^3 | 1.91 × 10^3 | 1.91 × 10^3 | 1.91 × 10^3 | 1.91 × 10^3 | 1.91 × 10^3 | 1.91 × 10^3 | 1.91 × 10^3 | 1.91 × 10^3 | 1.91 × 10^3 |
| | | Std | 1.68 × 10^−1 | 5.84 × 10^−1 | 3.55 × 10^−1 | 7.11 × 10^−1 | 9.91 × 10^−2 | 1.56 × 10^−1 | 6.32 × 10^−1 | 3.14 × 10^−1 | 1.52 × 10^−2 | 6.94 × 10^−13 |
| | 4 | Mean | 2.12 × 10^3 | 2.12 × 10^3 | 2.12 × 10^3 | 2.12 × 10^3 | 2.12 × 10^3 | 2.12 × 10^3 | 2.11 × 10^3 | 2.12 × 10^3 | 2.12 × 10^3 | 2.12 × 10^3 |
| | | Std | 6.95 × 10^−1 | 3.51 × 10^0 | 6.15 × 10^0 | 7.79 × 10^0 | 2.75 × 10^0 | 3.25 × 10^0 | 3.98 × 10^0 | 2.52 × 10^0 | 1.75 × 10^0 | 5.48 × 10^−2 |
| | 6 | Mean | 2.18 × 10^3 | 2.18 × 10^3 | 2.17 × 10^3 | 2.17 × 10^3 | 2.17 × 10^3 | 2.17 × 10^3 | 2.17 × 10^3 | 2.18 × 10^3 | 2.18 × 10^3 | 2.18 × 10^3 |
| | | Std | 2.60 × 10^0 | 4.89 × 10^0 | 8.35 × 10^0 | 7.89 × 10^0 | 9.25 × 10^0 | 3.79 × 10^0 | 4.20 × 10^0 | 3.29 × 10^0 | 3.23 × 10^0 | 1.44 × 10^−1 |
| | 8 | Mean | 2.21 × 10^3 | 2.20 × 10^3 | 2.19 × 10^3 | 2.20 × 10^3 | 2.20 × 10^3 | 2.20 × 10^3 | 2.20 × 10^3 | 2.20 × 10^3 | 2.20 × 10^3 | 2.21 × 10^3 |
| | | Std | 3.18 × 10^0 | 4.26 × 10^0 | 9.80 × 10^0 | 7.21 × 10^0 | 6.61 × 10^0 | 3.41 × 10^0 | 3.71 × 10^0 | 2.64 × 10^0 | 3.02 × 10^0 | 2.61 × 10^−1 |
| lena | 2 | Mean | 3.30 × 10^3 | 3.30 × 10^3 | 3.30 × 10^3 | 3.30 × 10^3 | 3.30 × 10^3 | 3.30 × 10^3 | 3.30 × 10^3 | 3.30 × 10^3 | 3.30 × 10^3 | 3.30 × 10^3 |
| | | Std | 1.10 × 10^−1 | 8.64 × 10^−1 | 2.98 × 10^−1 | 1.16 × 10^0 | 2.08 × 10^−1 | 2.86 × 10^−1 | 1.85 × 10^−12 | 3.65 × 10^−1 | 7.88 × 10^−1 | 1.85 × 10^−12 |
| | 4 | Mean | 3.69 × 10^3 | 3.68 × 10^3 | 3.67 × 10^3 | 3.68 × 10^3 | 3.69 × 10^3 | 3.68 × 10^3 | 3.68 × 10^3 | 3.68 × 10^3 | 3.69 × 10^3 | 3.69 × 10^3 |
| | | Std | 8.52 × 10^−1 | 3.66 × 10^0 | 1.56 × 10^1 | 1.63 × 10^1 | 1.13 × 10^0 | 4.48 × 10^0 | 4.94 × 10^0 | 3.05 × 10^0 | 8.54 × 10^−1 | 2.28 × 10^−2 |
| | 6 | Mean | 3.76 × 10^3 | 3.76 × 10^3 | 3.75 × 10^3 | 3.75 × 10^3 | 3.75 × 10^3 | 3.75 × 10^3 | 3.75 × 10^3 | 3.76 × 10^3 | 3.76 × 10^3 | 3.77 × 10^3 |
| | | Std | 4.63 × 10^0 | 6.52 × 10^0 | 1.22 × 10^1 | 1.29 × 10^1 | 1.05 × 10^1 | 5.26 × 10^0 | 5.89 × 10^0 | 4.02 × 10^0 | 2.74 × 10^0 | 7.58 × 10^−2 |
| | 8 | Mean | 3.79 × 10^3 | 3.79 × 10^3 | 3.78 × 10^3 | 3.78 × 10^3 | 3.79 × 10^3 | 3.79 × 10^3 | 3.78 × 10^3 | 3.79 × 10^3 | 3.79 × 10^3 | 3.80 × 10^3 |
| | | Std | 2.86 × 10^0 | 5.84 × 10^0 | 6.92 × 10^0 | 5.82 × 10^0 | 5.44 × 10^0 | 4.83 × 10^0 | 4.26 × 10^0 | 2.39 × 10^0 | 2.05 × 10^0 | 2.44 × 10^−1 |
| Average rank | | | 2.93 | 4.36 | 7.52 | 6.95 | 5.86 | 7.03 | 7.96 | 5.96 | 4.17 | 2.25 |
| Rank | | | 2 | 4 | 9 | 7 | 5 | 8 | 10 | 6 | 3 | 1 |
Table 8. Mean and Std of all test images for PSNR in Otsu.
| Image | Threshold | Metric | VPPSO | MELGWO | MEWOA | COA | DBO | GRO | CPO | ARO | POA | ACPOA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| baboon | 2 | Mean | 13.3365 | 13.3290 | 13.3249 | 13.3267 | 13.3344 | 13.3200 | 13.3101 | 13.3422 | 13.3364 | 13.3364 |
| | | Std | 7.73 × 10^−4 | 3.38 × 10^−2 | 5.42 × 10^−2 | 5.79 × 10^−2 | 1.17 × 10^−2 | 5.23 × 10^−2 | 8.01 × 10^−2 | 3.88 × 10^−2 | 9.03 × 10^−15 | 9.03 × 10^−15 |
| | 4 | Mean | 0.7202 | 0.7256 | 0.7277 | 17.9880 | 18.1284 | 18.0846 | 17.8792 | 18.1788 | 18.2473 | 18.1973 |
| | | Std | 7.92 × 10^−3 | 8.22 × 10^−3 | 2.06 × 10^−2 | 5.15 × 10^−1 | 3.07 × 10^−1 | 5.48 × 10^−1 | 5.67 × 10^−1 | 3.72 × 10^−1 | 1.66 × 10^−1 | 3.48 × 10^−2 |
| | 6 | Mean | 21.2406 | 21.0986 | 21.0529 | 20.9320 | 20.6578 | 21.0330 | 20.7988 | 21.1048 | 21.3365 | 21.3841 |
| | | Std | 5.05 × 10^−1 | 5.77 × 10^−1 | 9.22 × 10^−1 | 8.76 × 10^−1 | 7.55 × 10^−1 | 6.08 × 10^−1 | 7.65 × 10^−1 | 5.21 × 10^−1 | 4.31 × 10^−1 | 1.21 × 10^−1 |
| | 8 | Mean | 23.0891 | 23.0674 | 22.6065 | 22.4727 | 22.7880 | 22.9362 | 22.7874 | 23.1520 | 23.2291 | 23.7296 |
| | | Std | 8.13 × 10^−1 | 7.10 × 10^−1 | 7.74 × 10^−1 | 1.02 × 10^0 | 6.94 × 10^−1 | 7.77 × 10^−1 | 7.45 × 10^−1 | 4.39 × 10^−1 | 6.86 × 10^−1 | 1.51 × 10^−1 |
| bank | 2 | Mean | 16.1237 | 16.1124 | 16.1299 | 16.1213 | 16.1230 | 16.1192 | 16.1282 | 16.1109 | 16.1237 | 16.1237 |
| | | Std | 1.08 × 10^−14 | 3.07 × 10^−2 | 4.33 × 10^−2 | 7.95 × 10^−2 | 6.99 × 10^−3 | 3.96 × 10^−2 | 6.46 × 10^−2 | 3.36 × 10^−2 | 1.08 × 10^−14 | 1.08 × 10^−14 |
| | 4 | Mean | 20.1953 | 20.1507 | 19.9025 | 20.0087 | 20.1336 | 20.1161 | 20.0426 | 20.0276 | 20.1821 | 20.2465 |
| | | Std | 7.94 × 10^−2 | 1.27 × 10^−1 | 3.50 × 10^−1 | 2.25 × 10^−1 | 1.13 × 10^−1 | 1.64 × 10^−1 | 1.94 × 10^−1 | 1.67 × 10^−1 | 9.39 × 10^−2 | 1.25 × 10^−2 |
| | 6 | Mean | 22.9213 | 22.8241 | 22.3843 | 22.3978 | 22.6451 | 22.5867 | 22.5365 | 22.6517 | 22.8934 | 23.0945 |
| | | Std | 1.84 × 10^−1 | 2.36 × 10^−1 | 5.38 × 10^−1 | 3.73 × 10^−1 | 3.76 × 10^−1 | 3.10 × 10^−1 | 2.99 × 10^−1 | 2.21 × 10^−1 | 1.82 × 10^−1 | 2.99 × 10^−2 |
| | 8 | Mean | 24.7295 | 24.6335 | 24.1407 | 24.0948 | 24.2443 | 24.2816 | 24.2610 | 24.4456 | 24.7618 | 25.2639 |
| | | Std | 3.15 × 10^−1 | 5.40 × 10^−1 | 4.30 × 10^−1 | 6.28 × 10^−1 | 4.71 × 10^−1 | 3.66 × 10^−1 | 2.87 × 10^−1 | 2.66 × 10^−1 | 3.08 × 10^−1 | 5.18 × 10^−2 |
| camera | 2 | Mean | 15.0255 | 15.0097 | 15.0406 | 14.9997 | 15.0432 | 15.0081 | 15.0181 | 15.0084 | 15.0526 | 15.0526 |
| | | Std | 3.88 × 10^−2 | 6.10 × 10^−2 | 2.62 × 10^−2 | 1.63 × 10^−1 | 3.07 × 10^−2 | 6.20 × 10^−2 | 1.05 × 10^−1 | 7.48 × 10^−2 | 0.00 × 10^0 | 0.00 × 10^0 |
| | 4 | Mean | 18.7913 | 18.2094 | 18.3714 | 18.4111 | 18.3811 | 18.6016 | 18.5922 | 18.7677 | 19.5536 | 19.8549 |
| | | Std | 8.56 × 10^−1 | 6.30 × 10^−1 | 1.03 × 10^0 | 9.71 × 10^−1 | 8.88 × 10^−1 | 7.56 × 10^−1 | 9.21 × 10^−1 | 8.74 × 10^−1 | 5.00 × 10^−1 | 3.70 × 10^−2 |
| | 6 | Mean | 21.2128 | 21.2606 | 20.7990 | 20.6149 | 21.4165 | 21.0631 | 21.1030 | 21.3045 | 21.5659 | 21.8909 |
| | | Std | 8.96 × 10^−1 | 8.21 × 10^−1 | 1.17 × 10^0 | 1.17 × 10^0 | 7.76 × 10^−1 | 8.61 × 10^−1 | 1.07 × 10^0 | 8.14 × 10^−1 | 5.25 × 10^−1 | 5.73 × 10^−2 |
| | 8 | Mean | 22.8667 | 22.7975 | 22.5461 | 22.4843 | 22.5379 | 22.7340 | 22.6390 | 22.7709 | 22.7929 | 22.9852 |
| | | Std | 5.64 × 10^−1 | 7.86 × 10^−1 | 1.08 × 10^0 | 9.80 × 10^−1 | 7.58 × 10^−1 | 7.87 × 10^−1 | 7.56 × 10^−1 | 8.73 × 10^−1 | 7.20 × 10^−1 | 1.72 × 10^−1 |
| face | 2 | Mean | 14.3031 | 14.2966 | 14.3254 | 14.3093 | 14.3079 | 14.2966 | 14.2850 | 14.2922 | 14.3202 | 14.3225 |
| | | Std | 6.25 × 10^−2 | 1.08 × 10^−1 | 5.65 × 10^−2 | 1.00 × 10^−1 | 4.26 × 10^−2 | 8.22 × 10^−2 | 1.43 × 10^−1 | 8.78 × 10^−2 | 1.24 × 10^−2 | 1.81 × 10^−15 |
| | 4 | Mean | 19.6645 | 19.6634 | 19.4970 | 19.5810 | 19.6764 | 19.5600 | 19.3900 | 19.6174 | 19.6275 | 19.7572 |
| | | Std | 1.19 × 10^−1 | 1.28 × 10^−1 | 2.90 × 10^−1 | 2.94 × 10^−1 | 1.69 × 10^−1 | 2.28 × 10^−1 | 3.61 × 10^−1 | 1.93 × 10^−1 | 1.72 × 10^−1 | 2.59 × 10^−2 |
| | 6 | Mean | 22.3993 | 22.3944 | 21.8273 | 22.0686 | 22.0867 | 22.0759 | 22.0064 | 22.1338 | 22.4368 | 22.5815 |
| | | Std | 2.76 × 10^−1 | 3.13 × 10^−1 | 5.47 × 10^−1 | 4.27 × 10^−1 | 4.88 × 10^−1 | 4.05 × 10^−1 | 4.08 × 10^−1 | 3.71 × 10^−1 | 2.85 × 10^−1 | 5.48 × 10^−2 |
| | 8 | Mean | 24.3443 | 24.1981 | 23.5769 | 23.8893 | 23.8315 | 23.7570 | 23.8162 | 24.1000 | 24.3424 | 24.9131 |
| | | Std | 4.00 × 10^−1 | 4.09 × 10^−1 | 6.92 × 10^−1 | 6.04 × 10^−1 | 5.97 × 10^−1 | 3.11 × 10^−1 | 4.07 × 10^−1 | 3.05 × 10^−1 | 3.17 × 10^−1 | 1.00 × 10^−1 |
| lena | 2 | Mean | 14.9888 | 14.9666 | 14.9883 | 14.9503 | 14.9957 | 14.9672 | 14.9481 | 14.9729 | 14.9860 | 14.9910 |
| | | Std | 4.00 × 10^−2 | 5.50 × 10^−2 | 4.76 × 10^−2 | 7.25 × 10^−2 | 3.88 × 10^−2 | 5.27 × 10^−2 | 8.02 × 10^−2 | 4.41 × 10^−2 | 3.90 × 10^−2 | 3.89 × 10^−2 |
| | 4 | Mean | 19.0903 | 19.0530 | 18.9173 | 18.9320 | 19.0882 | 18.9664 | 18.9328 | 19.0154 | 19.0863 | 19.1352 |
| | | Std | 5.02 × 10^−2 | 6.16 × 10^−2 | 2.76 × 10^−1 | 2.98 × 10^−1 | 5.71 × 10^−2 | 1.14 × 10^−1 | 1.31 × 10^−1 | 9.60 × 10^−2 | 7.08 × 10^−2 | 2.94 × 10^−2 |
| | 6 | Mean | 21.6002 | 21.5676 | 21.0365 | 21.0846 | 21.3631 | 21.3089 | 21.2994 | 21.4763 | 21.5880 | 21.8932 |
| | | Std | 2.75 × 10^−1 | 2.92 × 10^−1 | 5.29 × 10^−1 | 5.58 × 10^−1 | 4.43 × 10^−1 | 3.07 × 10^−1 | 3.18 × 10^−1 | 2.24 × 10^−1 | 1.78 × 10^−1 | 4.51 × 10^−2 |
| | 8 | Mean | 23.0934 | 22.9918 | 22.8177 | 22.8242 | 22.9642 | 22.9932 | 22.8443 | 23.0844 | 23.2527 | 23.7823 |
| | | Std | 3.14 × 10^−1 | 3.08 × 10^−1 | 4.35 × 10^−1 | 4.15 × 10^−1 | 4.52 × 10^−1 | 3.83 × 10^−1 | 4.11 × 10^−1 | 3.40 × 10^−1 | 2.67 × 10^−1 | 1.41 × 10^−1 |
| Average rank | | | 4.45 | 5.04 | 6.87 | 7.00 | 5.87 | 6.65 | 7.09 | 5.25 | 4.57 | 2.22 |
| Rank | | | 2 | 4 | 8 | 9 | 6 | 7 | 10 | 5 | 3 | 1 |
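The PSNR values reported for Table 8 compare each original image against its segmented reconstruction; higher is better. A minimal sketch of the standard PSNR computation for 8-bit images (our own illustration):

```python
import numpy as np

def psnr(original, segmented, peak=255.0):
    """Peak signal-to-noise ratio in dB between the original image and the
    image reconstructed from the segmentation thresholds."""
    mse = np.mean((original.astype(float) - segmented.astype(float)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.full((4, 4), 100, dtype=np.uint8)
b = np.full((4, 4), 110, dtype=np.uint8)
print(round(psnr(a, b), 2))  # MSE = 100, so 10*log10(255^2/100) ≈ 28.13
```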
Table 9. Mean and Std of all test images for FSIM in Otsu.
| Image | Threshold | Metric | VPPSO | MELGWO | MEWOA | COA | DBO | GRO | CPO | ARO | POA | ACPOA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| baboon | 2 | Mean | 0.6871 | 0.6869 | 0.6864 | 0.6867 | 0.6871 | 0.6864 | 0.6863 | 0.6867 | 0.6871 | 0.6871 |
| | | Std | 1.11 × 10^−4 | 1.30 × 10^−3 | 3.09 × 10^−3 | 2.25 × 10^−3 | 2.16 × 10^−4 | 3.04 × 10^−3 | 4.35 × 10^−3 | 1.41 × 10^−3 | 0.00 × 10^0 | 0.00 × 10^0 |
| | 4 | Mean | 0.8189 | 0.8200 | 0.8229 | 0.8142 | 0.8186 | 0.8169 | 0.8142 | 0.8188 | 0.8217 | 0.8233 |
| | | Std | 3.86 × 10^−3 | 4.61 × 10^−3 | 1.08 × 10^−2 | 1.31 × 10^−2 | 7.39 × 10^−3 | 1.30 × 10^−2 | 1.54 × 10^−2 | 9.51 × 10^−3 | 4.46 × 10^−3 | 1.05 × 10^−3 |
| | 6 | Mean | 0.8842 | 0.8798 | 0.8833 | 0.8792 | 0.8703 | 0.8799 | 0.8762 | 0.8824 | 0.8859 | 0.8861 |
| | | Std | 1.22 × 10^−2 | 1.45 × 10^−2 | 2.33 × 10^−2 | 2.28 × 10^−2 | 1.88 × 10^−2 | 1.61 × 10^−2 | 2.08 × 10^−2 | 1.29 × 10^−2 | 1.11 × 10^−2 | 2.54 × 10^−3 |
| | 8 | Mean | 0.9113 | 0.9121 | 0.9051 | 0.9027 | 0.9071 | 0.9101 | 0.9077 | 0.9150 | 0.9158 | 0.9223 |
| | | Std | 1.66 × 10^−2 | 1.65 × 10^−2 | 1.73 × 10^−2 | 2.16 × 10^−2 | 1.65 × 10^−2 | 1.82 × 10^−2 | 1.76 × 10^−2 | 1.27 × 10^−2 | 1.65 × 10^−2 | 3.85 × 10^−3 |
| bank | 2 | Mean | 0.7530 | 0.7529 | 0.7529 | 0.7525 | 0.7529 | 0.7528 | 0.7529 | 0.7529 | 0.7530 | 0.7530 |
| | | Std | 1.13 × 10^−16 | 9.80 × 10^−4 | 1.00 × 10^−3 | 2.09 × 10^−3 | 1.92 × 10^−4 | 5.77 × 10^−4 | 1.15 × 10^−3 | 6.41 × 10^−4 | 1.13 × 10^−16 | 1.13 × 10^−16 |
| | 4 | Mean | 0.8344 | 0.8337 | 0.8340 | 0.8308 | 0.8342 | 0.8342 | 0.8320 | 0.8332 | 0.8343 | 0.8352 |
| | | Std | 1.62 × 10^−3 | 2.75 × 10^−3 | 8.66 × 10^−3 | 7.64 × 10^−3 | 3.55 × 10^−3 | 4.77 × 10^−3 | 4.82 × 10^−3 | 4.02 × 10^−3 | 2.11 × 10^−3 | 4.20 × 10^−4 |
| | 6 | Mean | 0.8785 | 0.8760 | 0.8717 | 0.8738 | 0.8770 | 0.8761 | 0.8736 | 0.8749 | 0.8775 | 0.8816 |
| | | Std | 3.60 × 10^−3 | 6.54 × 10^−3 | 8.52 × 10^−3 | 7.52 × 10^−3 | 5.84 × 10^−3 | 5.17 × 10^−3 | 5.85 × 10^−3 | 4.77 × 10^−3 | 3.29 × 10^−3 | 4.36 × 10^−4 |
| | 8 | Mean | 0.9017 | 0.9011 | 0.8971 | 0.8957 | 0.8989 | 0.8994 | 0.8977 | 0.9024 | 0.9035 | 0.9089 |
| | | Std | 5.38 × 10^−3 | 7.32 × 10^−3 | 9.08 × 10^−3 | 8.24 × 10^−3 | 6.83 × 10^−3 | 5.65 × 10^−3 | 6.74 × 10^−3 | 5.00 × 10^−3 | 3.96 × 10^−3 | 2.20 × 10^−3 |
| camera | 2 | Mean | 0.7662 | 0.7663 | 0.7661 | 0.7661 | 0.7662 | 0.7662 | 0.7656 | 0.7660 | 0.7661 | 0.7661 |
| | | Std | 5.42 × 10^−4 | 4.69 × 10^−4 | 2.85 × 10^−4 | 8.36 × 10^−4 | 2.14 × 10^−4 | 7.33 × 10^−4 | 1.60 × 10^−3 | 1.09 × 10^−3 | 0.00 × 10^0 | 0.00 × 10^0 |
| | 4 | Mean | 0.8342 | 0.8332 | 0.8290 | 0.8307 | 0.8327 | 0.8280 | 0.8297 | 0.8317 | 0.8302 | 0.8326 |
| | | Std | 5.69 × 10^−3 | 8.65 × 10^−3 | 9.18 × 10^−3 | 9.55 × 10^−3 | 7.27 × 10^−3 | 9.30 × 10^−3 | 8.87 × 10^−3 | 8.28 × 10^−3 | 6.16 × 10^−3 | 8.65 × 10^−4 |
| | 6 | Mean | 0.8697 | 0.8654 | 0.8629 | 0.8628 | 0.8667 | 0.8625 | 0.8663 | 0.8677 | 0.8718 | 0.8781 |
| | | Std | 8.02 × 10^−3 | 1.24 × 10^−2 | 1.12 × 10^−2 | 9.41 × 10^−3 | 1.00 × 10^−2 | 1.19 × 10^−2 | 1.13 × 10^−2 | 9.31 × 10^−3 | 7.72 × 10^−3 | 1.12 × 10^−3 |
| | 8 | Mean | 0.8932 | 0.8929 | 0.8844 | 0.8835 | 0.8879 | 0.8898 | 0.8879 | 0.8906 | 0.8920 | 0.9016 |
| | | Std | 8.31 × 10^−3 | 1.02 × 10^−2 | 1.51 × 10^−2 | 1.05 × 10^−2 | 1.02 × 10^−2 | 9.53 × 10^−3 | 1.02 × 10^−2 | 1.10 × 10^−2 | 8.85 × 10^−3 | 2.47 × 10^−3 |
| face | 2 | Mean | 0.6045 | 0.6048 | 0.6049 | 0.6049 | 0.6047 | 0.6043 | 0.6042 | 0.6044 | 0.6048 | 0.6049 |
| | | Std | 7.08 × 10^−4 | 9.85 × 10^−4 | 4.26 × 10^−4 | 9.39 × 10^−4 | 6.80 × 10^−4 | 9.15 × 10^−4 | 1.49 × 10^−3 | 1.12 × 10^−3 | 1.74 × 10^−4 | 2.26 × 10^−16 |
| | 4 | Mean | 0.7532 | 0.7518 | 0.7486 | 0.7510 | 0.7536 | 0.7490 | 0.7471 | 0.7517 | 0.7518 | 0.7542 |
| | | Std | 2.32 × 10^−3 | 4.77 × 10^−3 | 6.66 × 10^−3 | 7.39 × 10^−3 | 3.88 × 10^−3 | 7.27 × 10^−3 | 8.49 × 10^−3 | 5.73 × 10^−3 | 5.02 × 10^−3 | 8.56 × 10^−4 |
| | 6 | Mean | 0.8384 | 0.8339 | 0.8181 | 0.8225 | 0.8273 | 0.8240 | 0.8214 | 0.8283 | 0.8358 | 0.8435 |
| | | Std | 6.14 × 10^−3 | 1.02 × 10^−2 | 1.23 × 10^−2 | 1.40 × 10^−2 | 1.47 × 10^−2 | 9.81 × 10^−3 | 9.43 × 10^−3 | 8.92 × 10^−3 | 5.58 × 10^−3 | 1.40 × 10^−3 |
| | 8 | Mean | 0.8819 | 0.8738 | 0.8552 | 0.8669 | 0.8655 | 0.8613 | 0.8645 | 0.8726 | 0.8779 | 0.8950 |
| | | Std | 7.98 × 10^−3 | 9.89 × 10^−3 | 1.76 × 10^−2 | 1.46 × 10^−2 | 1.51 × 10^−2 | 9.46 × 10^−3 | 1.11 × 10^−2 | 9.08 × 10^−3 | 7.67 × 10^−3 | 1.53 × 10^−3 |
| lena | 2 | Mean | 0.6711 | 0.6709 | 0.6710 | 0.6692 | 0.6707 | 0.6707 | 0.6690 | 0.6703 | 0.6713 | 0.6713 |
| | | Std | 1.23 × 10^−3 | 1.33 × 10^−3 | 1.52 × 10^−3 | 4.85 × 10^−3 | 1.90 × 10^−3 | 1.57 × 10^−3 | 3.50 × 10^−3 | 2.52 × 10^−3 | 1.09 × 10^−3 | 1.05 × 10^−3 |
| | 4 | Mean | 0.7810 | 0.7793 | 0.7752 | 0.7768 | 0.7806 | 0.7791 | 0.7782 | 0.7789 | 0.7810 | 0.7797 |
| | | Std | 1.91 × 10^−3 | 3.09 × 10^−3 | 8.27 × 10^−3 | 9.51 × 10^−3 | 1.80 × 10^−3 | 4.11 × 10^−3 | 5.29 × 10^−3 | 3.54 × 10^−3 | 2.36 × 10^−3 | 6.78 × 10^−4 |
| | 6 | Mean | 0.8414 | 0.8412 | 0.8319 | 0.8336 | 0.8358 | 0.8354 | 0.8340 | 0.8398 | 0.8439 | 0.8510 |
| | | Std | 9.05 × 10^−3 | 7.91 × 10^−3 | 1.46 × 10^−2 | 1.38 × 10^−2 | 1.14 × 10^−2 | 8.81 × 10^−3 | 1.03 × 10^−2 | 8.12 × 10^−3 | 7.25 × 10^−3 | 1.82 × 10^−3 |
| | 8 | Mean | 0.8706 | 0.8685 | 0.8657 | 0.8647 | 0.8681 | 0.8681 | 0.8652 | 0.8694 | 0.8741 | 0.8840 |
| | | Std | 6.79 × 10^−3 | 9.28 × 10^−3 | 1.04 × 10^−2 | 9.02 × 10^−3 | 1.15 × 10^−2 | 1.07 × 10^−2 | 8.96 × 10^−3 | 8.30 × 10^−3 | 6.64 × 10^−3 | 2.60 × 10^−3 |
| Average rank | | | 4.58 | 5.29 | 6.54 | 6.65 | 5.86 | 6.08 | 6.79 | 5.34 | 4.70 | 3.17 |
| Rank | | | 2 | 4 | 8 | 9 | 6 | 7 | 10 | 5 | 3 | 1 |
Table 10. Mean and Std of all test images for SSIM in Otsu.
| Image | Threshold | Metric | VPPSO | MELGWO | MEWOA | COA | DBO | GRO | CPO | ARO | POA | ACPOA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| baboon | 2 | Mean | 0.7202 | 0.7256 | 0.7277 | 0.4679 | 0.4686 | 0.4674 | 0.4666 | 0.4689 | 0.4688 | 0.4688 |
| | | Std | 7.92 × 10^−3 | 8.22 × 10^−3 | 2.06 × 10^−2 | 4.14 × 10^−3 | 1.01 × 10^−3 | 3.48 × 10^−3 | 5.66 × 10^−3 | 2.88 × 10^−3 | 2.85 × 10^−16 | 2.82 × 10^−16 |
| | 4 | Mean | 0.7213 | 0.7171 | 0.7271 | 0.7136 | 0.7201 | 0.7185 | 0.7097 | 0.7231 | 0.7261 | 0.7239 |
| | | Std | 1.27 × 10^−2 | 1.39 × 10^−2 | 1.15 × 10^−2 | 2.49 × 10^−2 | 1.51 × 10^−2 | 2.45 × 10^−2 | 2.86 × 10^−2 | 1.66 × 10^−2 | 8.40 × 10^−3 | 1.63 × 10^−3 |
| | 6 | Mean | 0.8291 | 0.8238 | 0.8276 | 0.8197 | 0.8107 | 0.8236 | 0.8164 | 0.8268 | 0.8328 | 0.8352 |
| | | Std | 1.73 × 10^−2 | 2.12 × 10^−2 | 3.55 × 10^−2 | 3.49 × 10^−2 | 2.99 × 10^−2 | 2.35 × 10^−2 | 3.22 × 10^−2 | 1.85 × 10^−2 | 1.68 × 10^−2 | 4.33 × 10^−3 |
| | 8 | Mean | 0.8727 | 0.8756 | 0.8648 | 0.8596 | 0.8662 | 0.8692 | 0.8683 | 0.8786 | 0.8794 | 0.8917 |
| | | Std | 2.25 × 10^−2 | 2.05 × 10^−2 | 2.32 × 10^−2 | 3.08 × 10^−2 | 2.20 × 10^−2 | 2.37 × 10^−2 | 2.28 × 10^−2 | 1.53 × 10^−2 | 2.03 × 10^−2 | 4.40 × 10^−3 |
| bank | 2 | Mean | 0.6361 | 0.6362 | 0.6366 | 0.6352 | 0.6363 | 0.6361 | 0.6356 | 0.6363 | 0.6364 | 0.6364 |
| | | Std | 4.52 × 10^−4 | 5.94 × 10^−4 | 1.29 × 10^−3 | 4.17 × 10^−3 | 2.14 × 10^−4 | 1.38 × 10^−3 | 2.56 × 10^−3 | 1.06 × 10^−3 | 0.00 × 10^0 | 0.00 × 10^0 |
| | 4 | Mean | 0.7428 | 0.7415 | 0.7441 | 0.7402 | 0.7433 | 0.7462 | 0.7421 | 0.7422 | 0.7434 | 0.7453 |
| | | Std | 3.58 × 10^−3 | 5.22 × 10^−3 | 1.27 × 10^−2 | 1.22 × 10^−2 | 6.60 × 10^−3 | 8.64 × 10^−3 | 8.81 × 10^−3 | 7.24 × 10^−3 | 3.65 × 10^−3 | 4.46 × 10^−4 |
| | 6 | Mean | 0.8105 | 0.8074 | 0.8040 | 0.8066 | 0.8091 | 0.8089 | 0.8041 | 0.8044 | 0.8103 | 0.8152 |
| | | Std | 8.60 × 10^−3 | 1.06 × 10^−2 | 1.37 × 10^−2 | 1.41 × 10^−2 | 1.15 × 10^−2 | 1.13 × 10^−2 | 1.25 × 10^−2 | 9.54 × 10^−3 | 7.16 × 10^−3 | 1.44 × 10^−3 |
| | 8 | Mean | 0.8436 | 0.8457 | 0.8413 | 0.8368 | 0.8432 | 0.8438 | 0.8404 | 0.8467 | 0.8493 | 0.8556 |
| | | Std | 9.99 × 10^−3 | 1.05 × 10^−2 | 1.46 × 10^−2 | 1.14 × 10^−2 | 1.07 × 10^−2 | 1.00 × 10^−2 | 1.29 × 10^−2 | 7.47 × 10^−3 | 5.74 × 10^−3 | 3.02 × 10^−3 |
| camera | 2 | Mean | 0.6364 | 0.6363 | 0.6359 | 0.6195 | 0.6203 | 0.6198 | 0.6197 | 0.6198 | 0.6204 | 0.6404 |
| | | Std | 1.13 × 10^−16 | 1.06 × 10^−3 | 1.82 × 10^−3 | 2.83 × 10^−3 | 3.32 × 10^−4 | 6.50 × 10^−4 | 1.01 × 10^−3 | 7.13 × 10^−4 | 1.13 × 10^−16 | 0.00 × 10^0 |
| | 4 | Mean | 0.7137 | 0.6917 | 0.6947 | 0.6995 | 0.6948 | 0.7056 | 0.6965 | 0.7087 | 0.7445 | 0.7583 |
| | | Std | 3.60 × 10^−2 | 2.74 × 10^−2 | 4.70 × 10^−2 | 4.17 × 10^−2 | 3.87 × 10^−2 | 3.54 × 10^−2 | 4.65 × 10^−2 | 3.85 × 10^−2 | 2.30 × 10^−2 | 2.48 × 10^−3 |
| | 6 | Mean | 0.7803 | 0.7787 | 0.7641 | 0.7610 | 0.7826 | 0.7737 | 0.7784 | 0.7841 | 0.7948 | 0.8036 |
| | | Std | 2.71 × 10^−2 | 2.89 × 10^−2 | 4.07 × 10^−2 | 4.24 × 10^−2 | 3.13 × 10^−2 | 3.57 × 10^−2 | 4.02 × 10^−2 | 2.97 × 10^−2 | 2.06 × 10^−2 | 2.41 × 10^−3 |
| | 8 | Mean | 0.8201 | 0.8181 | 0.8119 | 0.8131 | 0.8088 | 0.8182 | 0.8145 | 0.8193 | 0.8191 | 0.8300 |
| | | Std | 1.81 × 10^−2 | 2.42 × 10^−2 | 3.55 × 10^−2 | 3.09 × 10^−2 | 3.26 × 10^−2 | 2.52 × 10^−2 | 3.27 × 10^−2 | 3.08 × 10^−2 | 2.39 × 10^−2 | 4.78 × 10^−3 |
| face | 2 | Mean | 0.5228 | 0.5231 | 0.5238 | 0.5236 | 0.5233 | 0.5226 | 0.5225 | 0.5227 | 0.5237 | 0.5238 |
| | | Std | 1.56 × 10^−3 | 2.42 × 10^−3 | 9.58 × 10^−4 | 2.58 × 10^−3 | 1.44 × 10^−3 | 2.19 × 10^−3 | 3.49 × 10^−3 | 2.40 × 10^−3 | 4.43 × 10^−4 | 2.26 × 10^−16 |
| | 4 | Mean | 0.7050 | 0.7046 | 0.7006 | 0.7046 | 0.7071 | 0.7006 | 0.6984 | 0.7045 | 0.7031 | 0.7074 |
| | | Std | 4.39 × 10^−3 | 9.13 × 10^−3 | 9.47 × 10^−3 | 1.11 × 10^−2 | 8.23 × 10^−3 | 1.36 × 10^−2 | 1.62 × 10^−2 | 1.10 × 10^−2 | 9.48 × 10^−3 | 1.59 × 10^−3 |
| | 6 | Mean | 0.7952 | 0.7908 | 0.7763 | 0.7816 | 0.7859 | 0.7810 | 0.7800 | 0.7842 | 0.7923 | 0.7999 |
| | | Std | 6.61 × 10^−3 | 1.12 × 10^−2 | 1.32 × 10^−2 | 1.68 × 10^−2 | 1.42 × 10^−2 | 1.22 × 10^−2 | 1.30 × 10^−2 | 1.23 × 10^−2 | 6.52 × 10^−3 | 1.91 × 10^−3 |
| | 8 | Mean | 0.8449 | 0.8386 | 0.8172 | 0.8307 | 0.8294 | 0.8236 | 0.8272 | 0.8359 | 0.8416 | 0.8576 |
| | | Std | 8.22 × 10^−3 | 1.08 × 10^−2 | 1.93 × 10^−2 | 1.55 × 10^−2 | 1.60 × 10^−2 | 1.09 × 10^−2 | 1.47 × 10^−2 | 1.13 × 10^−2 | 9.32 × 10^−3 | 2.57 × 10^−3 |
| lena | 2 | Mean | 0.5452 | 0.5443 | 0.5449 | 0.5412 | 0.5447 | 0.5443 | 0.5406 | 0.5434 | 0.5455 | 0.5456 |
| | | Std | 2.58 × 10^−3 | 3.19 × 10^−3 | 3.30 × 10^−3 | 8.93 × 10^−3 | 3.45 × 10^−3 | 2.96 × 10^−3 | 6.38 × 10^−3 | 4.58 × 10^−3 | 2.24 × 10^−3 | 2.17 × 10^−3 |
| | 4 | Mean | 0.6756 | 0.6745 | 0.6727 | 0.6725 | 0.6752 | 0.6741 | 0.6744 | 0.6737 | 0.6754 | 0.6756 |
| | | Std | 1.39 × 10^−3 | 3.63 × 10^−3 | 8.96 × 10^−3 | 1.07 × 10^−2 | 2.25 × 10^−3 | 4.73 × 10^−3 | 6.48 × 10^−3 | 4.47 × 10^−3 | 1.62 × 10^−3 | 7.13 × 10^−4 |
| | 6 | Mean | 0.7468 | 0.7484 | 0.7409 | 0.7414 | 0.7426 | 0.7462 | 0.7398 | 0.7500 | 0.7535 | 0.7596 |
| | | Std | 1.23 × 10^−2 | 1.25 × 10^−2 | 2.29 × 10^−2 | 2.19 × 10^−2 | 1.75 × 10^−2 | 1.56 × 10^−2 | 1.49 × 10^−2 | 1.24 × 10^−2 | 1.00 × 10^−2 | 3.15 × 10^−3 |
| | 8 | Mean | 0.7860 | 0.7898 | 0.7914 | 0.7843 | 0.7871 | 0.7911 | 0.7896 | 0.7958 | 0.7936 | 0.8103 |
| | | Std | 1.24 × 10^−2 | 1.82 × 10^−2 | 2.29 × 10^−2 | 1.76 × 10^−2 | 2.06 × 10^−2 | 1.98 × 10^−2 | 2.17 × 10^−2 | 1.91 × 10^−2 | 1.48 × 10^−2 | 8.58 × 10^−3 |
| Average rank | | | 5.15 | 5.21 | 5.97 | 6.40 | 5.63 | 6.07 | 6.72 | 5.41 | 5.07 | 3.39 |
| Rank | | | 3 | 4 | 7 | 9 | 6 | 8 | 10 | 5 | 2 | 1 |
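The SSIM values reported for Table 10 measure structural similarity between the original and the segmented image (1 = identical). As a rough illustration, here is a global single-window SSIM, a simplification of the sliding-window SSIM typically used in segmentation studies (our own sketch; the paper's exact windowing may differ):

```python
import numpy as np

def ssim_global(x, y, peak=255.0):
    """Single-window SSIM: means, variances, and covariance are computed
    over the whole image rather than over local sliding windows."""
    x, y = x.astype(float), y.astype(float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2   # standard stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = np.arange(16, dtype=float).reshape(4, 4)
print(ssim_global(img, img))  # identical images give an SSIM of 1
```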

Share and Cite

Zhang, Y.; Wang, J.; Zhang, X.; Wang, B. ACPOA: An Adaptive Cooperative Pelican Optimization Algorithm for Global Optimization and Multilevel Thresholding Image Segmentation. Biomimetics 2025, 10, 596. https://doi.org/10.3390/biomimetics10090596
