Article

Multi-Threshold Image Segmentation Based on the Hybrid Strategy Improved Dingo Optimization Algorithm

College of Design, Hanyang University, Ansan 15588, Gyeonggi-do, Republic of Korea
*
Author to whom correspondence should be addressed.
Biomimetics 2026, 11(1), 52; https://doi.org/10.3390/biomimetics11010052
Submission received: 8 December 2025 / Revised: 26 December 2025 / Accepted: 5 January 2026 / Published: 8 January 2026
(This article belongs to the Special Issue Bio-Inspired Machine Learning and Evolutionary Computing)

Abstract

This study proposes a Hybrid Strategy Improved Dingo Optimization Algorithm (HSIDOA), designed to address the limitations of the standard DOA in complex optimization tasks, including its tendency to fall into local optima, slow convergence speed, and inefficient boundary search. The HSIDOA integrates a quadratic interpolation search strategy, a horizontal crossover search strategy, and a centroid-based opposition learning boundary-handling mechanism. By enhancing local exploitation, global exploration, and out-of-bounds correction, the algorithm forms an optimization framework that excels in convergence accuracy, speed, and stability. On the CEC2017 (30-dimensional) and CEC2022 (10/20-dimensional) benchmark suites, the HSIDOA achieves significantly superior performance in terms of average fitness, standard deviation, convergence rate, and Friedman test rankings, outperforming seven mainstream algorithms including MLPSO, MELGWO, MHWOA, ALA, HO, RIME, and DOA. The results demonstrate strong robustness and scalability across different dimensional settings. Furthermore, HSIDOA is applied to multi-level threshold image segmentation, where Otsu’s maximum between-class variance is used as the objective function, and PSNR, SSIM, and FSIM serve as evaluation metrics. Experimental results show that HSIDOA consistently achieves the best segmentation quality across four threshold levels (4, 6, 8, and 10 levels). Its convergence curves exhibit rapid decline and early stabilization, with stability surpassing all comparison algorithms. In summary, HSIDOA delivers comprehensive improvements in global exploration capability, local exploitation precision, convergence speed, and high-dimensional robustness. It provides an efficient, stable, and versatile optimization method suitable for both complex numerical optimization and image segmentation tasks.

1. Introduction

In the rapid development of artificial intelligence, computer vision, and engineering optimization, the scale and complexity of optimization problems continue to grow. Traditional deterministic optimization methods—due to their reliance on gradient information and their tendency to become trapped in local optima—are increasingly unable to meet the demands of high-dimensional, nonlinear, and multimodal problem settings [1,2]. As stochastic optimization methods inspired by natural phenomena or biological behaviors, metaheuristic algorithms have emerged as essential tools for solving complex optimization problems. Owing to their gradient-free nature, strong global search capability, and high robustness [3,4], they have been widely applied to numerical optimization, image segmentation [5], path planning [6], neural network parameter tuning [7], resource scheduling [8], and many other real-world domains.
Multi-level threshold image segmentation, as a fundamental task in computer vision, classifies image pixels into multiple categories by setting several gray-level thresholds. This approach preserves structural characteristics and fine details of the image more comprehensively, making it indispensable in critical applications such as medical image analysis, remote sensing image processing, and industrial defect detection [9,10]. However, the core challenge of multi-threshold image segmentation lies in determining the optimal combination of thresholds. The objective functions used in this process—such as maximum between-class variance and entropy—typically exhibit high dimensionality and strong multimodality. Traditional methods based on enumeration or iterative search suffer from low computational efficiency, underscoring the need for high-performance optimization algorithms [11,12]. Meanwhile, increasing image resolution and texture complexity place higher demands on segmentation accuracy, convergence speed, and algorithmic stability. Balancing global exploration and local exploitation thus becomes a key challenge for enhancing the performance of multi-level threshold image segmentation.
Existing metaheuristic algorithms still exhibit several common limitations in numerical optimization and image segmentation tasks. Some algorithms experience rapid loss of population diversity in the later stages of iteration, making them prone to premature convergence and local optima [13,14]. Others suffer from insufficient local exploitation capability, preventing them from effectively refining the neighborhood of the optimal solution. In addition, certain algorithms adopt overly simplistic boundary-handling mechanisms for out-of-bounds solutions, which may lead to the loss of valuable information and reduce search efficiency in boundary regions [7,15]. Therefore, developing a novel metaheuristic algorithm that simultaneously balances global exploration and local exploitation, ensures both convergence accuracy and speed, and maintains stability and adaptability holds significant theoretical importance and practical value for solving complex optimization problems efficiently.
Since the concept of metaheuristic algorithms was introduced, hundreds of optimization methods inspired by various natural phenomena have been proposed by researchers worldwide, forming a rich algorithmic ecosystem. Among the early classical algorithms, Particle Swarm Optimization (PSO) simulates the foraging behavior of bird flocks, where individuals share information with the group to guide the search. However, PSO often suffers from slow convergence in later iterations and is prone to being trapped in local optima [16]. The Grey Wolf Optimizer (GWO), inspired by the encircling, hunting, and attacking behaviors of grey wolves, performs well in low-dimensional optimization but experiences a significant decline in search efficiency as dimensionality increases [17,18]. The Whale Optimization Algorithm (WOA), based on the bubble-net foraging strategy of humpback whales, offers strong global exploration capability but exhibits limited local exploitation performance [19,20].
According to the No Free Lunch (NFL) theorem [21], no single optimization algorithm can achieve superior performance over all possible optimization problems. This theorem implies that an algorithm that performs well on a specific class of problems may exhibit inferior performance on other problem types. Consequently, designing problem-oriented or hybrid optimization algorithms has become a widely accepted and effective strategy to improve optimization performance in practical applications. To overcome the limitations of classical algorithms, researchers have proposed various improvements through hybrid strategies, parameter adaptation, and enhanced local search mechanisms. For example, the Multi-Layer Particle Swarm Optimization algorithm (MLPSO) adopts a hierarchical clustering mechanism to increase population diversity, thereby improving performance on multimodal problems [22]. The Memory Evolutionary Operator and Local Search-based GWO (MELGWO) incorporates memory mechanisms and stochastic local search, significantly strengthening local exploitation capability [23]. The Multi-Strategy Hybrid Whale Optimization Algorithm (MHWOA) integrates multiple search strategies to achieve a better balance between global exploration and local exploitation [24]. Although these enhanced algorithms improve optimization performance in specific scenarios, they still present several limitations. Some methods focus on mitigating a single performance deficiency and struggle to address multidimensional performance requirements comprehensively; others introduce overly complex strategies, resulting in substantially increased computational overhead and reduced practical efficiency; additionally, certain algorithms exhibit weak adaptability, leading to unstable performance across problems of different dimensionalities and characteristics [25,26,27]. Nevertheless, despite their successes across various applications, traditional intelligent optimization methods still suffer from several inherent drawbacks, such as susceptibility to local optima, limited convergence precision, and insufficient information-sharing mechanisms within the population. These issues further weaken their capability to explore the solution space effectively.
The Dingo Optimization Algorithm (DOA), introduced in 2021, is a novel metaheuristic algorithm inspired by the group attack, pursuit, scavenging, and survival behaviors of Australian dingoes. It establishes a balanced framework between global exploration and local exploitation [28]. Although the DOA performs well in its initial studies, it exhibits noticeable limitations when addressing high-dimensional and complex optimization problems: its scavenging behavior generates new solutions in a simplistic manner, lacking the ability to exploit nonlinear interactions among high-quality solutions; population updating relies heavily on individual behaviors with insufficient information exchange, leading to rapid loss of diversity and a tendency to fall into local optima; and its straightforward boundary-handling mechanism often discards useful information and reduces search efficiency [29,30]. These shortcomings restrict the applicability of the DOA in real-world multi-threshold image segmentation tasks, highlighting the need for hybrid strategies to enhance its overall performance.
To address the weaknesses of the DOA—including inadequate local exploitation, rapid decline in population diversity, and low boundary search efficiency—this study proposes a Hybrid Strategy Improved Dingo Optimization Algorithm (HSIDOA). The algorithm incorporates three key mechanisms for systematic enhancement. First, a quadratic interpolation search strategy is employed to construct an interpolation model among high-quality solutions, enabling deeper exploration of the nonlinear structure of the solution space and thus strengthening local exploitation. Second, a horizontal crossover search strategy is introduced to facilitate information exchange among individuals at the population level, effectively maintaining diversity and improving global exploration capability. Finally, a centroid-based opposition learning boundary-handling strategy is designed, which reconstructs out-of-bounds solutions via reflection around the population centroid rather than simple truncation, thereby preserving valuable information and significantly improving search efficiency near the boundaries. Through these multidimensional hybrid improvements, HSIDOA aims to comprehensively enhance the performance of DOA when addressing high-dimensional and complex optimization problems.
The main contributions of this study are as follows:
(1)
A hybrid improved Dingo Optimization Algorithm (HSIDOA) integrating three enhancement strategies is proposed to address the core limitations of the standard DOA, including weak local exploitation, rapid loss of population diversity, and inefficient boundary handling. This provides a new solution framework for tackling complex optimization problems.
(2)
Extensive experiments on the CEC2017 and CEC2022 benchmark suites are conducted to comprehensively validate the superiority of HSIDOA.
(3)
HSIDOA is successfully applied to multi-threshold image segmentation tasks, demonstrating its practical effectiveness in real engineering applications and offering new technical support for advancing image segmentation methods.
The remainder of this paper is organized as follows: Section 2 introduces the fundamental principles of DOA and presents the three enhancement strategies as well as the overall framework of the proposed HSIDOA. Section 3 compares HSIDOA with several state-of-the-art optimization algorithms on the CEC benchmark tests. Section 4 applies HSIDOA to multi-threshold image segmentation and evaluates its practical performance. Section 5 concludes the study and discusses potential directions for future research.

2. Dingo Optimization Algorithm and the Proposed HSIDOA

2.1. Dingo Optimization Algorithm (DOA)

The Dingo Optimization Algorithm (DOA) is a novel metaheuristic algorithm proposed in 2021, inspired by the social behaviors of Australian dingoes; Figure 1 illustrates the Australian dingo that inspired it. The algorithm simulates their hunting strategies, such as pursuit attacks, group-based coordination, and scavenging behavior, to achieve a balance between global exploration and local exploitation. Its mathematical formulation is as follows [28,29]:
Strategy 1: Group Attack
Australian dingoes typically hunt in groups. They can identify the prey’s position and coordinate to encircle it. This cooperative attack behavior is modeled by Equation (1) [28,31]:
$X_i(t+1) = \beta_1 \sum_{k=1}^{n_a} \dfrac{\varphi_k(t) - X_i(t)}{n_a} - X_{gbest}(t)$
where $X_i(t+1)$ denotes the updated position of the $i$-th search agent (dingo); $n_a$ is an integer randomly generated within the interval $[2, Pop/2]$, where $Pop$ is the total population size; $\varphi_k(t) \subset X$ denotes a randomly selected subset of search agents participating in the cooperative group attack (rather than the entire population), where $X$ is the randomly initialized dingo population; $X_i(t)$ is the current search agent; $X_{gbest}(t)$ is the best solution found in the previous iteration; and $\beta_1$ is a scaling factor uniformly sampled from $[-2, 2]$ that controls the magnitude and direction of the dingoes' movement trajectory [28,29].
Strategy 2: Pursuit
Australian dingoes often chase small prey individually until capture. This pursuit behavior is modeled by Equation (2) [28,29]:
$X_i(t+1) = X_{gbest}(t) + \beta_1 e^{\beta_2} \left( X_{r_1}(t) - X_i(t) \right)$
where $\beta_2$ is a random number uniformly generated within the interval $[-1, 1]$, and $X_{r_1}$ is a search agent randomly selected from the population, with index $r_1 \in \{1, \ldots, Pop\}$ and $r_1 \neq i$.
Strategy 3: Scavenging
Scavenging behavior refers to dingoes discovering and consuming carrion while randomly roaming their habitat. This behavior is modeled by Equation (3) [28,29]:
$X_i(t+1) = \dfrac{1}{2} \left[ e^{\beta_2} X_{r_1}(t) - (-1)^{\sigma} X_i(t) \right]$
where $X_{r_1}(t)$ is the randomly selected $r_1$-th search agent; $X_i(t)$ is the current search agent, with $i \neq r_1$; and $\sigma$ is a randomly generated binary number.
Strategy 4: Survival Rate of Australian Dingoes
Australian dingoes face the risk of extinction primarily due to illegal hunting. In the Dingo Optimization Algorithm, the survival rate of each dingo is defined by Equation (4) [28,29]:
$\mathrm{survival}(i) = \dfrac{fitness_{\max} - fitness(i)}{fitness_{\max} - fitness_{\min}}$
where $fitness_{\max}$ and $fitness_{\min}$ are the worst and best fitness values in the current generation, respectively, and $fitness(i)$ is the current fitness value of the $i$-th search agent. The survival vector in Equation (4) contains normalized fitness values within the interval [0, 1]. For agents with low survival rates (e.g., $\mathrm{survival}(i) \le 0.3$), the position is updated according to Equation (5):
$X_i(t) = X_{gbest} + \dfrac{1}{2} \left[ X_{r_1}(t) - (-1)^{\sigma} X_{r_2}(t) \right]$
where $X_i(t)$ is the search agent with a low survival rate that requires updating; $X_{r_1}(t)$ and $X_{r_2}(t)$ are the selected $r_1$-th and $r_2$-th search agents, respectively; $X_{gbest}$ is the best search agent found in the previous iteration; and $\sigma$ is a randomly generated binary number.
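To make the interaction of the four strategies concrete, the following minimal NumPy sketch performs one DOA iteration using the equations above. It is an illustrative reading of the published formulas, not the authors' reference code: the function name doa_step, the argument layout, and the small constant guarding against division by zero are our own choices, and fitness re-evaluation and greedy selection bookkeeping are omitted.

import numpy as np

def doa_step(X, fitness, X_gbest, P=0.5, Q=0.7):
    # One illustrative DOA iteration implementing Equations (1)-(5).
    # X: (Pop, dim) population; fitness: (Pop,) objective values (minimization).
    Pop, dim = X.shape
    X_new = X.copy()
    for i in range(Pop):
        beta1 = np.random.uniform(-2, 2)   # scaling factor beta_1 in [-2, 2]
        beta2 = np.random.uniform(-1, 1)   # coefficient beta_2 in [-1, 1]
        others = [k for k in range(Pop) if k != i]
        if np.random.rand() < P:                       # hunting branch
            if np.random.rand() < Q:                   # Strategy 1: group attack, Eq. (1)
                na = np.random.randint(2, max(3, Pop // 2 + 1))
                idx = np.random.choice(Pop, size=na, replace=False)
                X_new[i] = beta1 * np.sum(X[idx] - X[i], axis=0) / na - X_gbest
            else:                                      # Strategy 2: pursuit, Eq. (2)
                r1 = np.random.choice(others)
                X_new[i] = X_gbest + beta1 * np.exp(beta2) * (X[r1] - X[i])
        else:                                          # Strategy 3: scavenging, Eq. (3)
            r1 = np.random.choice(others)
            sigma = np.random.randint(2)               # random binary number
            X_new[i] = 0.5 * (np.exp(beta2) * X[r1] - (-1) ** sigma * X[i])
        # Strategy 4: survival, Eqs. (4)-(5); low-survival agents are relocated
        survival = (fitness.max() - fitness[i]) / (fitness.max() - fitness.min() + 1e-12)
        if survival <= 0.3:
            r1, r2 = np.random.choice(Pop, size=2, replace=False)
            sigma = np.random.randint(2)
            X_new[i] = X_gbest + 0.5 * (X[r1] - (-1) ** sigma * X[r2])
    return X_new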

2.2. Hybrid Strategy Improved Dingo Optimization Algorithm (HSIDOA)

To address the limitations of the standard Dingo Optimization Algorithm (DOA), such as susceptibility to local optima in later iterations, insufficient convergence accuracy, and inefficient handling of out-of-bounds solutions, this study proposes a Hybrid Strategy Improved Dingo Optimization Algorithm (HSIDOA). HSIDOA integrates a quadratic interpolation search strategy, a horizontal crossover search strategy, and a centroid-based opposition learning boundary-handling strategy, enhancing the algorithm along three dimensions: local exploitation, global exploration, and boundary constraint management. This framework aims to achieve a balance between search accuracy and convergence speed.

2.2.1. Quadratic Interpolation Search Strategy

In the standard DOA, the scavenging behavior generates new solutions solely based on the linear combination of random individuals, lacking the ability to exploit nonlinear relationships among individuals. This limitation weakens local exploitation, particularly in complex high-dimensional problems, often causing the algorithm to miss the neighborhood of optimal solutions. To address this, HSIDOA introduces a quadratic interpolation search strategy, which constructs a quadratic interpolation model based on the global best solution and randomly selected high-quality solutions. This allows precise targeting of local optimum regions and strengthens the algorithm’s local exploitation capability.
The quadratic interpolation search strategy focuses on uncovering the nonlinear guidance value of high-quality solutions in the population. The implementation logic is as follows [32]: during the scavenging phase (when a random probability ≥ 0.5), two dingo individuals different from the current agent (denoted $X_p$ and $X_q$) are randomly selected. Simultaneously, the global best solution $X_{gbest}$ and the current individual $X_i$ are extracted. The corresponding fitness values $f_p$, $f_q$, and $f_{gbest}$ are then obtained to construct a quadratic interpolation function, fitting the distribution characteristics of high-quality solutions. Finally, the optimal interpolation point is calculated using the interpolation formula, which serves as the new solution for the scavenging behavior, enabling precise exploration of high-quality local regions.
The core computation of the quadratic interpolation is expressed as [32]:
$X_{interp}(j) = 0.5 \cdot \dfrac{\left(X_p(j)^2 - X_q(j)^2\right) f_{gbest} + \left(X_q(j)^2 - X_i(j)^2\right) f_p + \left(X_i(j)^2 - X_p(j)^2\right) f_q}{\left(X_p(j) - X_q(j)\right) f_{gbest} + \left(X_q(j) - X_i(j)\right) f_p + \left(X_i(j) - X_p(j)\right) f_q}$
where $j$ is the dimension index ($j = 1, 2, \ldots, dim$), and $X_{interp}(j)$ denotes the interpolation result in the $j$-th dimension.
To prevent numerical singularities when the denominator approaches zero, a singular value handling mechanism is introduced:
$X_{i,j}(t+1) = \begin{cases} X_{interp}(j), & |\mathrm{denominator}(j)| \ge \varepsilon \\ X_{gbest}(j), & |\mathrm{denominator}(j)| < \varepsilon \end{cases}$
$\mathrm{denominator}(j) = \left(X_p(j) - X_q(j)\right) f_{gbest} + \left(X_q(j) - X_i(j)\right) f_p + \left(X_i(j) - X_p(j)\right) f_q$
Here, $\varepsilon$ is a small constant, and $X_{gbest}(j)$ is the $j$-th dimension of the global best solution, ensuring that singular dimensions still move toward high-quality regions.
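A minimal NumPy sketch of Equations (6) and (7) for one individual is given below; the function name and the default tolerance eps are illustrative choices under our reading of the formulas, not part of the published implementation.

import numpy as np

def quadratic_interpolation(X_p, X_q, X_i, X_gbest, f_p, f_q, f_gbest, eps=1e-12):
    # Dimension-wise quadratic interpolation point, Equations (6)-(7).
    # X_* are 1-D position vectors; f_* are the corresponding scalar fitness values.
    num = ((X_p**2 - X_q**2) * f_gbest
           + (X_q**2 - X_i**2) * f_p
           + (X_i**2 - X_p**2) * f_q)
    den = ((X_p - X_q) * f_gbest
           + (X_q - X_i) * f_p
           + (X_i - X_p) * f_q)
    safe_den = np.where(np.abs(den) < eps, 1.0, den)       # avoid division by ~0
    # Singular dimensions fall back to the global best, per Equation (7)
    return np.where(np.abs(den) < eps, X_gbest, 0.5 * num / safe_den)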

2.2.2. Horizontal Crossover Search Strategy

In the standard DOA, population updates rely solely on individual-level hunting or scavenging behaviors, lacking a population-level information exchange mechanism. This results in a rapid loss of diversity and a tendency to fall into local optima in later iterations. To address this, HSIDOA introduces a horizontal crossover search strategy, which facilitates complementary information exchange through nonlinear crossover between pairs of individuals, enhancing global exploration while preserving high-quality solutions.
The horizontal crossover search strategy focuses on improving the population’s global exploration capability and information-sharing efficiency. The implementation logic is as follows [33]: After individual updates in each iteration, the population indices are randomly shuffled, and individuals are paired as parents (avoiding fixed pairings that may limit search scope). Then, a nonlinear crossover formula is applied to each parent pair to generate offspring, introducing random weights and shift coefficients to balance exploration and exploitation. Finally, only offspring with fitness superior to their parents are retained, preventing inferior solutions from entering the population, while the global best solution is updated to ensure high-quality solutions are preserved [33].
$X_{child1} = \lambda_1 \cdot X_{parent1} + (1 - \lambda_1) \cdot X_{parent2} + \mu_1 \cdot \left(X_{parent1} - X_{parent2}\right)$
$X_{child2} = \lambda_2 \cdot X_{parent2} + (1 - \lambda_2) \cdot X_{parent1} + \mu_2 \cdot \left(X_{parent2} - X_{parent1}\right)$
where $\lambda_1, \lambda_2 \sim U(0, 1)$ are dimension-level random weights; $\mu_1, \mu_2 \sim U(-1, 1)$ are dimension-level shift coefficients; $X_{parent1}$ and $X_{parent2}$ are the parent individuals; and $X_{child1}$ and $X_{child2}$ are the offspring individuals. This formula nonlinearly combines parental information, preserving high-quality traits from the parents while introducing random perturbations to enhance population diversity.
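A minimal sketch of Equation (8) is shown below, assuming each parent is a 1-D NumPy position vector; the names are illustrative.

import numpy as np

def horizontal_crossover(parent1, parent2):
    # Offspring pair of Equation (8) with dimension-level random coefficients.
    dim = parent1.shape[0]
    lam1, lam2 = np.random.rand(dim), np.random.rand(dim)                    # lambda ~ U(0, 1)
    mu1, mu2 = np.random.uniform(-1, 1, dim), np.random.uniform(-1, 1, dim)  # mu ~ U(-1, 1)
    child1 = lam1 * parent1 + (1 - lam1) * parent2 + mu1 * (parent1 - parent2)
    child2 = lam2 * parent2 + (1 - lam2) * parent1 + mu2 * (parent2 - parent1)
    return child1, child2

As described above, an offspring replaces its parent only when its fitness is better, so the crossover step can maintain diversity without degrading the population.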

2.2.3. Centroid-Based Opposition Learning Boundary-Handling Strategy

In the standard DOA, out-of-bounds solutions are handled using a simple boundary truncation approach (directly resetting values to the boundary), which completely discards the information contained in the out-of-bounds solutions. This often causes the population to converge toward the boundaries, reducing search efficiency. To address this, HSIDOA introduces a centroid-based opposition learning boundary-handling strategy, which reconstructs out-of-bounds solutions through centroid reflection. This approach preserves the directional information of the solutions while ensuring their feasibility, thereby enhancing the search capability in boundary regions.
The centroid-based opposition learning boundary-handling strategy focuses on efficiently utilizing the potential information of out-of-bounds solutions. The implementation logic is as follows: first, compute the centroid of the current population (i.e., the mean of all individuals in each dimension), which serves as the global reference point of the population. Next, identify all out-of-bounds dimension values (values below the lower bound $lb_j$ or above the upper bound $ub_j$), and reconstruct these dimensions using the centroid-based reflection formula. Finally, apply a secondary boundary truncation to ensure that all reconstructed solutions fall within the feasible range.
The core computation of centroid-based reflection is given by:
$X_{i,j} = \begin{cases} 2 \cdot X_{centroid}(j) - X_{i,j}, & X_{i,j} < lb_j \ \text{or} \ X_{i,j} > ub_j \\ X_{i,j}, & \text{otherwise} \end{cases}$
where $X_{centroid}(j) = \frac{1}{Pop} \sum_{k=1}^{Pop} X_{k,j}$ is the centroid of the $j$-th dimension of the population ($Pop$ denotes the population size); $X_{i,j}$ is the $j$-th dimension of the $i$-th individual; and $lb_j$ and $ub_j$ are the lower and upper bounds of the $j$-th dimension.
This strategy pulls out-of-bounds solutions back into the feasible region while preserving their directional features relative to the population centroid, avoiding information loss caused by simple truncation and enhancing the algorithm’s ability to explore high-quality solutions near boundaries.
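The strategy vectorizes over the whole population in a few lines, as the following illustrative NumPy sketch shows (names are ours; lb and ub may be scalars or per-dimension vectors):

import numpy as np

def centroid_reflection(X, lb, ub):
    # Centroid-based opposition repair of Equation (9) applied to the population.
    centroid = X.mean(axis=0)                                    # per-dimension centroid
    out_of_bounds = (X < lb) | (X > ub)
    X_repaired = np.where(out_of_bounds, 2.0 * centroid - X, X)  # reflect through centroid
    return np.clip(X_repaired, lb, ub)                           # secondary truncation keeps feasibility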
In summary, the pseudocode of the proposed HSIDOA is presented in Algorithm 1.
Algorithm 1: The pseudo-code of the HSIDOA
1: Initialize parameters.
2:  P = 0.5, probability of choosing the hunting or scavenging strategy.
3:  Q = 0.7, probability of Strategy 1 (group attack) versus Strategy 2 (pursuit).
4: Initialize population X.
5: while t < T do
6:    β1 = −2 + 4·rand (β1 ∈ [−2, 2]), β2 = −1 + 2·rand (β2 ∈ [−1, 1])
7:   for i = 1 : N do
8:     if rand < P then
9:      if rand < Q then
10:       Strategy 1: update X_i(t+1) by Equation (1).
11:      else
12:       Strategy 2: update X_i(t+1) by Equation (2).
13:      end if
14:     else
15:      Strategy 3: update X_i(t+1) by Equations (6) and (7) (quadratic interpolation).
16:     end if
17:     % Survival mechanism
18:     if survival(i) ≤ 0.3 then
19:      Survival update: update X_i(t) by Equation (5).
20:     end if
21:     Update X_i by Equation (8) (horizontal crossover strategy).
22:     Perform boundary checking and handling using Equation (9).
23:   end for
24:   t = t + 1
25: end while
26: return the best solution X_gbest.
It should be noted that HSIDOA is a stochastic optimization algorithm. Random numbers are generated in several stages of the algorithm, including:
(1)
The random selection of search agents in the group attack and pursuit strategies (Equations (1)–(3));
(2)
The generation of random coefficients and binary variables controlling movement directions and survival decisions (Equations (2), (3) and (5));
(3)
The random selection of individuals and weights in the quadratic interpolation strategy (Equations (6) and (7)); and
(4)
The random weights and shift coefficients used in the horizontal crossover strategy (Equation (8)).
These stochastic components ensure population diversity and enhance global exploration capability.

2.3. Complexity Analysis of HSIDOA

In the complexity analysis of HSIDOA, let N denote the population size, D the dimension of the optimization problem, and T the maximum number of iterations. The initialization stage of HSIDOA, including population generation and initial fitness calculation, has a time complexity of O ( N × D ) , which is identical to that of the standard DOA and does not introduce additional computational overhead. During each iteration, HSIDOA retains the core behavioral logic of DOA (hunting, survival mechanism) with a complexity of O ( N × D ) , and integrates three enhanced strategies: the quadratic interpolation search strategy involves vectorized calculation of interpolation parameters for each individual, with a per-iteration complexity of O ( N × D ) ; the horizontal crossover strategy implements pairwise nonlinear crossover for the population, with a complexity of O ( N × D ) after vectorization optimization; the centroid reflection boundary control strategy completes boundary adjustment through vectorized judgement and reflection calculation, with a per-iteration complexity of O ( N × D ) . Although multiple enhanced strategies are added, all strategies adopt vectorized operations to avoid nested loops, so the total complexity of each iteration of HSIDOA remains O ( N × D ) . Combined with T iterations, the overall time complexity of HSIDOA is O ( T × N × D ) , which is consistent with the standard DOA. This means HSIDOA achieves the improvement of optimization precision and convergence speed without increasing the algorithm’s computational complexity, ensuring its applicability in high-dimensional optimization scenarios.

3. Numerical Experiments of CEC2017 and CEC2022

3.1. Comparative Methods and Parameter Settings

In this subsection, the proposed HSIDOA is evaluated on two challenging numerical optimization benchmarks, CEC2017 [34] and CEC2022 [35], and compared against several advanced optimizers: Multi-Layer Particle Swarm Optimization (MLPSO) [22]; the Memory, Evolutionary operator, and Local search based improved Grey Wolf Optimizer (MELGWO) [23]; the Multi-Strategy Hybrid Whale Optimization Algorithm (MHWOA) [24]; the Artificial Lemming Algorithm (ALA) [36]; Hippopotamus Optimization (HO) [37]; the Rime optimization algorithm (RIME) [38]; and the standard Dingo Optimization Algorithm (DOA) [28]. The parameter settings of all algorithms are summarized in Table 1.

3.2. Ablation Study Assessment

The performance evaluation was conducted using an ablation experimental design to quantitatively analyze the contribution of each improvement strategy to the DOA algorithm. Experiments were performed on the 30-dimensional CEC2017 benchmark function set. The control group consisted of the original DOA algorithm, while the experimental groups included three variants with individual strategies applied sequentially: DOA-QI (Quadratic Interpolation Search Strategy), DOA-HC (Horizontal Crossover Search Strategy), and DOA-CR (Centroid-Based Opposition Learning Boundary-Handling Strategy), as well as the HSIDOA algorithm incorporating all strategies. Detailed performance comparison data are provided in Table 2, and the trend visualization results are presented in Figure 2 and Figure 3.
The quantitative results in Table 2 indicate that HSIDOA, incorporating all three improvement strategies, significantly outperforms the original DOA and its single-strategy variants (DOA-QI, DOA-HC, and DOA-CR) on most CEC2017 (30D) test functions. Taking representative functions as examples, the mean of F1 decreases from 4.9148E+10 (DOA) to 3.4022E+03 (HSIDOA); F3 drops from 6.9500E+04 to 3.5588E+03; and F12 reduces from 6.8128E+09 to 3.3247E+05. These results demonstrate that HSIDOA can substantially lower the objective values across multiple typical test functions, achieving performance improvements of several orders of magnitude.
In terms of standard deviation, HSIDOA also exhibits markedly higher stability. For instance, the standard deviation of F1 decreases from 9.1464E+09 (DOA) to 4.3862E+03, and that of F3 declines from 9.2972E+03 to 2.7549E+03. The simultaneous improvement in both mean and variance indicates that the integration of the three strategies not only enhances solution accuracy but also strengthens the algorithm’s stability across multiple independent runs, endowing HSIDOA with greater robustness in complex optimization tasks.
The convergence curves in Figure 2 visually illustrate the iterative optimization process of each algorithm. For representative functions such as F1, F5, and F10, the convergence curve of HSIDOA consistently remains at the bottom, indicating superior performance. Moreover, HSIDOA demonstrates faster early-stage convergence, quickly approaching the optimal solution within the initial iterations, whereas the original DOA curve stays higher, converging slowly and showing signs of stagnation. For example, on F10, the HSIDOA fitness value decreases to approximately 5.0E+03 after 50 iterations, while DOA remains above 8.0E+03. After 500 iterations, HSIDOA achieves a fitness improvement of about 40% compared to DOA. Among the three single-strategy variants, DOA-HC and DOA-QI outperform DOA-CR, particularly in the mid-to-late iterations. DOA-HC maintains search momentum through population-level information exchange, while DOA-QI accelerates convergence by precisely exploiting local optima. HSIDOA effectively integrates these advantages, achieving a balance between convergence speed and accuracy.
The average ranking results in Figure 3 further validate the effectiveness of the improvement strategies. HSIDOA attains the best overall performance with an average rank of 1.03, significantly outperforming the original DOA, which ranks 4.90, across the 30 benchmark functions. DOA-HC (2.37) and DOA-QI (2.93) show similar average rankings, both surpassing DOA-CR (3.77), indicating that the horizontal crossover search strategy enhances population diversity and the quadratic interpolation search strategy strengthens local exploitation, which are the main contributors to performance improvement. Although DOA-CR ranks relatively lower, it still outperforms the original DOA, demonstrating that the centroid-based opposition learning boundary-handling strategy effectively preserves out-of-bounds solution information and improves boundary search efficiency, thus providing complementary performance gains.
By comparing DOA-QI, DOA-HC, DOA-CR, and HSIDOA, it can be observed that quadratic interpolation (DOA-QI) and horizontal crossover (DOA-HC) provide significant benefits across most test functions, whereas centroid-based opposition learning (DOA-CR) achieves relatively modest improvements in certain functions. Overall, DOA-QI enhances local exploitation, DOA-HC maintains population diversity, and DOA-CR strengthens boundary handling. When these strategies are synergistically combined, HSIDOA achieves an optimal balance between global exploration and local exploitation, resulting in the best overall performance across the majority of benchmark functions.

3.3. Performance Assessment Using CEC2017 and CEC2022 Benchmarks

To further verify the generality and superiority of the HSIDOA, comprehensive comparisons were conducted on three benchmark settings: CEC2017 (30-dimensional) and CEC2022 (10-dimensional and 20-dimensional). HSIDOA was compared against seven mainstream optimization algorithms, including MLPSO and MELGWO. All algorithms were evaluated under a unified experimental setup: a population size of 30, a maximum of 500 iterations, and 30 independent runs per algorithm, ensuring fairness and reliability of the comparisons. All simulations were executed in the same computational environment: Windows 10, an Intel Core i5-13400 (13th Gen) processor at 2.5 GHz, and 16 GB of system memory, using MATLAB R2024b. The results are presented in Table 3, Table 4, Table 5 and Table 6 and illustrated in Figure 4.
As shown in Table 3, HSIDOA demonstrates a remarkable advantage in 30-dimensional complex optimization problems. For the unimodal function F1, HSIDOA achieves an average fitness value of 5.2966E+03, which is significantly lower than DOA (4.5105E+10), MHWOA (5.4819E+10), and other comparative algorithms, and even outperforms well-performing ALA (2.8014E+06) and RIME (3.6284E+06), representing an improvement in optimization precision of over 500 times. For the multimodal function F9, HSIDOA attains an average fitness of 1.6487E+03, only 18.2% of DOA’s value (9.0304E+03), and lower than MELGWO (3.8466E+03) and RIME (3.0110E+03), illustrating its strong capability to locate global optima in complex multimodal landscapes. In addition, HSIDOA achieves the smallest standard deviations on most functions, such as F6 (Std = 4.3639E+00), outperforming all other algorithms, indicating stable optimization performance even in high-dimensional scenarios.
Table 4 (10 dim) and Table 5 (20 dim) further show that HSIDOA performs outstandingly in low- to medium-dimensional optimization problems. On the 10-dimensional F1 function, HSIDOA reaches an average fitness value of 3.0000E+02, achieving the best performance alongside ALA and RIME, with an extremely low standard deviation of 5.9585E−07, far below other algorithms, demonstrating precise convergence. For the 20-dimensional F5 function, HSIDOA attains an average fitness of 9.8782E+02, significantly outperforming DOA (2.7412E+03), MLPSO (3.9613E+03), and MELGWO (1.6082E+03), and is the only algorithm with an average fitness below 1.0E+03. Notably, as the dimensionality increases from 10 dim to 20 dim, most algorithms exhibit substantial performance degradation (e.g., MHWOA’s F6 average fitness rises from 2.2239E+06 to 2.0072E+09), whereas HSIDOA shows the minimal decline, highlighting its strong dimensional adaptability.
The convergence curves in Figure 4 intuitively illustrate the dynamic optimization characteristics of each algorithm. For functions such as CEC2017-F5 and F9, the HSIDOA consistently maintains the lowest convergence curves and exhibits the fastest convergence speed: during the early iterations (first 50), the fitness values rapidly decrease and approach the optimal solutions, while in the middle iterations (100–200), the curves stabilize, avoiding the stagnation commonly observed in other algorithms during later stages. For example, in the CEC2017-F12 function, the HSIDOA achieves stable convergence by iteration 200, whereas the DOA, MHWOA, and other algorithms continue to decrease slowly, ultimately converging to values significantly higher than those of the HSIDOA.
The convergence advantage of the HSIDOA is also evident in the 10D and 20D functions of the CEC2022 benchmark. In the 10-dimensional F3 function, its convergence curve closely tracks the optimal value with near-zero standard deviation. In the 20-dimensional F7 function, the HSIDOA maintains a convergence speed comparable to the 10D scenario, while algorithms such as MLPSO and DOA exhibit noticeable upward shifts in their convergence curves, demonstrating the HSIDOA’s stable convergence across different dimensionalities. Compared with all other algorithms, the HSIDOA achieves dual optimization in convergence speed and accuracy through local precise search via quadratic interpolation and global information exchange via horizontal crossover, exhibiting superior dynamic optimization performance across unimodal, multimodal, and high-dimensional functions.
In summary, the quantitative results in Table 3, Table 4 and Table 5 and the dynamic convergence behaviors in Figure 4 collectively indicate that the HSIDOA consistently outperforms other algorithms across benchmarks of varying dimensionality, category, and complexity. The synergistic integration of multiple strategies effectively enhances the algorithm's global exploration capability, local exploitation precision, and intelligent boundary handling. Consequently, the HSIDOA demonstrates significant advantages in convergence speed, search stability, and high-dimensional scalability, confirming its potential as a powerful optimization framework for tackling complex real-world problems.

3.4. Runtime Comparison Between the DOA and HSIDOA

To accurately evaluate the computational efficiency of the HSIDOA after incorporating the three enhancement strategies, this section quantitatively compares the average runtime of the HSIDOA and the original DOA on the CEC2017 benchmark function set (30 dimensions). All tests were performed under the same hardware (Intel Core i5-13400 processor, 16 GB RAM) and software (MATLAB R2024b) environment to ensure a fair and reliable comparison. The experimental results are illustrated in Figure 5.
From the experimental results, it can be observed that the overall runtime of the HSIDOA is on the same order of magnitude as that of the DOA. Only for the F1 function does the HSIDOA exhibit a slightly shorter runtime (0.0714 s) than the DOA (0.0740 s). For the remaining 29 functions, the runtime of the HSIDOA increases slightly, but the increments remain within a reasonable range; for most functions the increase does not exceed 10%. For example, on F3 the HSIDOA runs in 0.0814 s, only about 1.9% more than the DOA's 0.0799 s; on F5 it runs in 0.0928 s, about 3.6% higher than the DOA's 0.0896 s. Both are well within the tolerance of computational overhead in engineering applications. Even for complex multimodal functions (e.g., F19 and F30), the runtime increase of the HSIDOA does not exceed 16%. Specifically, for F19 the HSIDOA requires 0.4071 s, a 12.5% increase over the DOA's 0.3618 s; for F30 it runs in 0.4864 s, 15.8% higher than the DOA's 0.4201 s.
It is worth noting that the slight increase in the HSIDOA’s runtime is a reasonable cost for introducing three strategies: quadratic interpolation search, horizontal crossover search, and centroid opposition-based learning boundary handling. In terms of performance gains, the HSIDOA achieves order-of-magnitude improvements in optimization accuracy, convergence speed, and stability. For instance, the mean fitness value on the F1 function decreases from 4.9148E+10 with DOA to 3.4022E+03 with the HSIDOA, and on the F3 function from 6.9500E+04 to 3.5588E+03. This trade-off—accepting a slight increase in computational time in exchange for significant performance improvements—has substantial engineering value.
From a complexity perspective, all enhancement strategies in the HSIDOA are implemented using vectorized operations, thereby avoiding the efficiency loss caused by nested loops. As a result, the overall time complexity remains O(T × N × D), which is identical to that of the original DOA. The minor differences in runtime mainly stem from additional vector computations and logical checks during strategy execution rather than from an increase in theoretical complexity. These results demonstrate that the HSIDOA achieves comprehensive performance optimization through well-designed strategies without increasing algorithmic complexity, effectively balancing computational efficiency and optimization performance and exhibiting strong practical applicability in engineering contexts.

3.5. Statistical Evaluation via Friedman Test

To comprehensively evaluate the relative standing of the HSIDOA, a statistical test suitable for simultaneously comparing several related algorithms is required. For this purpose, the Friedman test, a nonparametric technique, ranks the optimizers according to their performance over multiple problem sets. Because it makes no assumptions about the probability distribution of the results, it is especially well suited to assessing multiple techniques on the same suite of benchmark tasks. The test statistic is computed as [8,39]:
$Q = \dfrac{12}{nk(k+1)} \sum_{j=1}^{k} R_j^2 - 3n(k+1)$
In this equation, $n$ is the number of blocks (benchmark functions), $k$ is the number of groups (algorithms), and $R_j$ is the sum of ranks for the $j$-th group. Provided $n$ and $k$ are sufficiently large, the resulting $Q$ value follows a chi-square distribution with $k - 1$ degrees of freedom [39].
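As an illustration, the statistic can be computed from a results matrix as in the following sketch; the names are ours, and ties are ignored for brevity (standard library routines such as scipy.stats.friedmanchisquare use average ranks for ties).

import numpy as np

def friedman_statistic(results):
    # Friedman Q of Equation (10). `results` has shape (n, k): n problems (blocks)
    # by k algorithms (groups); lower values are better.
    n, k = results.shape
    ranks = np.argsort(np.argsort(results, axis=1), axis=1) + 1  # rank 1 = best per problem
    R = ranks.sum(axis=0)                                        # rank sum R_j per algorithm
    Q = 12.0 / (n * k * (k + 1)) * np.sum(R**2) - 3.0 * n * (k + 1)
    return Q, R / n                                              # statistic and average ranks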
To statistically validate the superior performance of the HSIDOA, the Friedman test was conducted to analyze the significance of ranking results across all compared algorithms on the CEC2017 (30D) and CEC2022 (10D and 20D) benchmark function sets. The detailed ranking data are presented in Table 6, and the distribution of rankings is visualized in Figure 6.
From the summary of Friedman rankings (Table 6), it is evident that the proposed HSIDOA achieves the best average ranks across all three experimental setups (CEC2017 30D, CEC2022 10D, and 20D), with values of 1.33, 1.75, and 1.58, respectively, consistently securing the first position among all compared algorithms. In contrast, the original DOA ranks at the bottom in all three experiments, with average ranks of 6.47, 6.25, and 6.58, while other advanced algorithms such as MHWOA, MLPSO, and MELGWO occupy intermediate or lower positions. These results indicate that, based on the Friedman ranking, the HSIDOA demonstrates significant overall superiority on these benchmark sets.
Further observation of Table 6 shows that ALA consistently maintains a leading second place (2.80, 2.17, 2.83), suggesting stable competitiveness across these datasets. RIME generally ranks in the upper-middle range (3.20, 3.92, 2.83), indicating advantages under certain experimental configurations. In contrast, MHWOA ranks eighth in all three experiments, reflecting insufficient stability or convergence performance under the benchmark and parameter settings employed in this study.
Figure 6 presents a visualization of the relative ranking distribution of all algorithms, providing an intuitive verification of the tabulated results. The ranking distribution of the HSIDOA is clearly concentrated at the top of the graph (mostly rank 1), indicating not only excellent average performance but also consistent high-ranking outcomes across multiple independent runs. Conversely, algorithms such as the DOA and MHWOA exhibit broader distributions across the middle and lower regions, reflecting both lower average ranks and greater variability.
In summary, the Friedman ranking data in Table 6 and the ranking distribution visualization in Figure 6 jointly demonstrate that the proposed HSIDOA exhibits significant and stable performance improvements across the selected benchmarks (CEC2017 and CEC2022 under different dimensional settings). Reporting both the average ranks, which quantify superiority, and the ranking distribution, which highlights consistency, provides complementary quantitative and visual evidence of the HSIDOA's advantages.

4. Multilevel Thresholding Image Segmentation

To assess the real-world efficacy of the Hybrid Strategy Improved Dingo Optimization Algorithm (HSIDOA) for multilevel image thresholding, comparative experiments are conducted in this section. Using Otsu's maximum between-class variance criterion as the fitness function, segmentation experiments are performed on several standard test images at different threshold counts, and the outcomes are measured and analyzed with several quantitative performance indices.
Otsu’s method is selected as the objective function in this study because it directly maximizes the between-class variance, which has a clear physical interpretation and strong robustness against noise. Compared with entropy-based methods such as Kapur entropy, Masi entropy, and cross entropy, Otsu’s criterion exhibits lower computational complexity and more stable performance in multi-threshold segmentation tasks [40].
Moreover, Otsu’s method does not require logarithmic operations or probability density estimation, making it particularly suitable for integration with population-based metaheuristic optimization algorithms.

4.1. Evaluation Metrics

To effectively evaluate the image segmentation performance of different algorithms, we employ image quality assessment metrics including Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Feature Similarity Index (FSIM). The descriptions of these metrics are presented in Table 7.
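Of these metrics, PSNR has a simple closed form, 10·log10(peak²/MSE); a minimal sketch is given below for reference (names are illustrative), while SSIM and FSIM are typically computed with dedicated library routines.

import numpy as np

def psnr(reference, segmented, peak=255.0):
    # Peak Signal-to-Noise Ratio between the reference image and its segmented version.
    ref = reference.astype(np.float64)
    seg = segmented.astype(np.float64)
    mse = np.mean((ref - seg) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak**2 / mse)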

4.2. Experimental Design

To validate the effectiveness of the HSIDOA in practical applications, multi-level thresholding experiments were conducted on six benchmark images with different styles (barbara, camera, couple, etc.), as shown in Figure 7, using Otsu's maximum between-class variance as the objective function. Four-, six-, eight-, and ten-level thresholding experiments were performed. To ensure a fair comparison, the seven mainstream swarm intelligence algorithms from the numerical experiments, including MLPSO, MELGWO, and MHWOA, were selected as baseline methods. For all algorithms, the population size was set to Pop = 30 and the maximum number of iterations to T = 100, with the remaining parameters consistent with the previous experimental settings.
In these experiments, Otsu's method serves as the fitness criterion for threshold selection; the fundamental idea is to identify the optimal segmentation thresholds by maximizing the between-class variance.
For a digital image containing $L$ distinct gray levels, the probability of a pixel having gray level $i$ is:
$P_i = \dfrac{n_i}{N}$
where $n_i$ is the number of pixels with gray level $i$, and $N = \sum_{i=0}^{L-1} n_i$ is the total number of pixels. It satisfies $P_i \ge 0$ and $P_0 + P_1 + \cdots + P_{L-1} = 1$.
If $k$ thresholds $T_1 < T_2 < \cdots < T_k$ are set, the image is divided into $k + 1$ regions. The pixel proportion and mean gray level of the $i$-th region ($i = 1, 2, \ldots, k+1$), together with the overall image mean and the between-class variance, are defined as [5,43]:
$\omega_i = \sum_{j=T_{i-1}+1}^{T_i} P_j, \quad (T_0 = -1, \; T_{k+1} = L - 1)$
$\mu_i = \dfrac{1}{\omega_i} \sum_{j=T_{i-1}+1}^{T_i} j \cdot P_j$
$\mu = \sum_{i=1}^{k+1} \omega_i \mu_i$
$\sigma^2 = \sum_{i=1}^{k+1} \omega_i (\mu_i - \mu)^2$
The optimal threshold combination $T_{best} = (T_1^*, T_2^*, \ldots, T_k^*)$ is the set of thresholds that maximizes the between-class variance:
$T_{best} = \underset{T_1, T_2, \ldots, T_k}{\arg\max} \; \sigma^2$
In this study, the inter-class variance refers specifically to the between-class variance defined in Otsu’s method.
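The objective defined by Equations (11)-(16) can be evaluated directly from the image histogram. The following sketch is a minimal illustrative implementation (the function name and the 256-bin assumption are ours); since HSIDOA maximizes this value, a minimization framework would simply negate it.

import numpy as np

def otsu_between_class_variance(hist, thresholds):
    # Between-class variance of Equations (12)-(15) for thresholds T1 < ... < Tk.
    # `hist` is the 256-bin gray-level histogram; Equation (11) normalizes it to P_i.
    p = hist / hist.sum()
    levels = np.arange(len(p))
    mu_total = np.sum(levels * p)                            # overall mean, Eq. (14)
    edges = [-1] + sorted(int(t) for t in thresholds) + [len(p) - 1]
    sigma2 = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):                # region (T_{i-1}, T_i]
        region = slice(lo + 1, hi + 1)
        omega = p[region].sum()                              # class probability, Eq. (12)
        if omega > 0:
            mu = np.sum(levels[region] * p[region]) / omega  # class mean, Eq. (13)
            sigma2 += omega * (mu - mu_total) ** 2           # accumulate Eq. (15)
    return sigma2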

4.3. Experimental Results and Analysis

In this section, a comprehensive comparison is conducted using four key metrics: the best fitness value, PSNR, SSIM, and FSIM. The HSIDOA is evaluated against seven other algorithms, including MLPSO and MELGWO. The experimental results are summarized in Table 8, Table 9, Table 10, Table 11 and Table 12, while the fitness curves and metric comparisons are shown in Figure 8 and Figure 9.
Table 8. Multilevel thresholding results using Otsu's criterion as the fitness function.
[Table 8 shows, for each test image (barbara, camera, couple, house, peppers, and terrace), the segmented result and the corresponding threshold visualization at TH = 4, 6, 8, and 10; the image panels are omitted from this text version.]
Table 9. Mean and standard deviation of the best Otsu-criterion fitness scores.
Images   TH  Metric  MLPSO       MELGWO      MHWOA       ALA         HO          RIME        DOA         HSIDOA
barbara  4   Mean    2.6446E+03  2.6446E+03  2.6445E+03  2.6446E+03  2.6446E+03  2.6446E+03  2.6446E+03  2.6446E+03
barbara  4   Std     1.8501E-12  1.8501E-12  1.1825E-01  1.8501E-12  1.0179E-01  1.8576E-02  1.1312E-02  1.8501E-12
barbara  6   Mean    2.7013E+03  2.7015E+03  2.7013E+03  2.7015E+03  2.7012E+03  2.7015E+03  2.7003E+03  2.7015E+03
barbara  6   Std     2.2827E-01  4.3652E-02  1.9625E-01  1.1196E-01  5.8174E-01  3.1921E-02  1.7998E+00  6.6734E-03
barbara  8   Mean    2.7246E+03  2.7273E+03  2.7252E+03  2.7262E+03  2.7257E+03  2.7270E+03  2.7231E+03  2.7273E+03
barbara  8   Std     1.7000E+00  8.2424E-02  2.6157E+00  9.9594E-01  1.8616E+00  2.2665E-01  3.0328E+00  2.2218E-02
barbara  10  Mean    2.7354E+03  2.7397E+03  2.7375E+03  2.7378E+03  2.7373E+03  2.7390E+03  2.7350E+03  2.7398E+03
barbara  10  Std     1.7997E+00  2.6102E-01  1.5373E+00  1.2587E+00  2.4349E+00  5.3751E-01  2.0969E+00  6.0808E-02
camera   4   Mean    4.5999E+03  4.5997E+03  4.5996E+03  4.6000E+03  4.5991E+03  4.6003E+03  4.5997E+03  4.6011E+03
camera   4   Std     1.1002E+00  9.3601E-01  9.9506E-01  1.2217E+00  1.5154E+00  1.1147E+00  1.1793E+00  6.7739E-03
camera   6   Mean    4.6512E+03  4.6517E+03  4.6499E+03  4.6512E+03  4.6508E+03  4.6515E+03  4.6489E+03  4.6517E+03
camera   6   Std     3.2187E-01  1.2312E-02  1.3486E+00  5.3850E-01  2.6366E+00  1.1119E-01  4.2572E+00  9.2504E-13
camera   8   Mean    4.6681E+03  4.6698E+03  4.6681E+03  4.6686E+03  4.6691E+03  4.6695E+03  4.6663E+03  4.6705E+03
camera   8   Std     1.0513E+00  8.7608E-01  1.7729E+00  1.3276E+00  1.0017E+00  8.4293E-01  2.5297E+00  4.0915E-01
camera   10  Mean    4.6782E+03  4.6807E+03  4.6778E+03  4.6791E+03  4.6779E+03  4.6803E+03  4.6767E+03  4.6809E+03
camera   10  Std     1.2825E+00  1.9677E-01  1.8599E+00  1.2302E+00  3.1080E+00  4.2975E-01  2.2110E+00  4.0804E-02
couple   4   Mean    1.7332E+03  1.7333E+03  1.7329E+03  1.7333E+03  1.7331E+03  1.7333E+03  1.7332E+03  1.7333E+03
couple   4   Std     5.5630E-02  6.7390E-03  9.8805E-01  7.3788E-02  3.7494E-01  3.6828E-02  1.7161E-01  1.4427E-02
couple   6   Mean    1.7981E+03  1.7991E+03  1.7970E+03  1.7989E+03  1.7981E+03  1.7990E+03  1.7966E+03  1.7992E+03
couple   6   Std     7.5436E-01  9.3963E-03  4.9228E+00  3.5548E-01  1.4095E+00  1.0839E-01  3.0856E+00  1.7133E-03
couple   8   Mean    1.8238E+03  1.8262E+03  1.8234E+03  1.8255E+03  1.8255E+03  1.8260E+03  1.8232E+03  1.8263E+03
couple   8   Std     1.3559E+00  7.7599E-01  4.0845E+00  6.2877E-01  1.0417E+00  3.8584E-01  2.5732E+00  2.8180E-02
couple   10  Mean    1.8362E+03  1.8396E+03  1.8369E+03  1.8374E+03  1.8364E+03  1.8389E+03  1.8336E+03  1.8397E+03
couple   10  Std     1.3290E+00  2.9857E-01  2.0820E+00  1.5461E+00  2.8100E+00  6.7103E-01  3.1915E+00  9.3897E-02
house    4   Mean    2.3719E+03  2.3720E+03  2.3706E+03  2.3720E+03  2.3719E+03  2.3719E+03  2.3720E+03  2.3720E+03
house    4   Std     4.0575E-02  1.3876E-12  3.8122E+00  3.4401E-02  1.3731E-01  5.3742E-02  5.4077E-02  5.1769E-03
house    6   Mean    2.4248E+03  2.4254E+03  2.4222E+03  2.4251E+03  2.4248E+03  2.4254E+03  2.4239E+03  2.4255E+03
house    6   Std     5.2936E-01  4.8408E-02  6.2187E+00  4.6029E-01  6.9934E-01  1.6083E-01  1.8259E+00  1.2246E-02
house    8   Mean    2.4485E+03  2.4511E+03  2.4479E+03  2.4500E+03  2.4496E+03  2.4507E+03  2.4482E+03  2.4511E+03
house    8   Std     1.3233E+00  2.1360E-01  3.1648E+00  1.2558E+00  2.1231E+00  3.6582E-01  3.2172E+00  3.3382E-02
house    10  Mean    2.4600E+03  2.4642E+03  2.4615E+03  2.4621E+03  2.4618E+03  2.4635E+03  2.4586E+03  2.4644E+03
house    10  Std     1.3821E+00  2.5395E-01  2.5339E+00  1.8555E+00  2.2085E+00  7.8285E-01  2.3244E+00  9.0103E-02
peppers  4   Mean    2.7011E+03  2.7011E+03  2.7005E+03  2.7011E+03  2.7010E+03  2.7011E+03  2.7007E+03  2.7011E+03
peppers  4   Std     1.8380E-01  4.8410E-04  1.2529E+00  5.6126E-02  3.2732E-01  3.8355E-02  1.1449E+00  5.4940E-04
peppers  6   Mean    2.7681E+03  2.7690E+03  2.7667E+03  2.7683E+03  2.7680E+03  2.7688E+03  2.7671E+03  2.7690E+03
peppers  6   Std     6.5351E-01  1.7566E-02  4.1420E+00  1.3837E+00  1.2397E+00  1.7230E-01  2.4554E+00  9.0789E-04
peppers  8   Mean    2.7927E+03  2.7954E+03  2.7927E+03  2.7939E+03  2.7935E+03  2.7951E+03  2.7916E+03  2.7956E+03
peppers  8   Std     1.8137E+00  6.0072E-01  4.9115E+00  1.8096E+00  2.8571E+00  9.1516E-01  3.3840E+00  3.2251E-02
peppers  10  Mean    2.8038E+03  2.8083E+03  2.8055E+03  2.8071E+03  2.8064E+03  2.8079E+03  2.8029E+03  2.8088E+03
peppers  10  Std     1.1457E+00  7.1360E-01  2.2060E+00  1.2128E+00  1.5052E+00  6.9509E-01  3.1952E+00  6.0121E-02
terrace  4   Mean    2.6402E+03  2.6402E+03  2.6400E+03  2.6402E+03  2.6402E+03  2.6402E+03  2.6402E+03  2.6402E+03
terrace  4   Std     9.0131E-02  1.4636E-03  3.3886E-01  7.5906E-03  1.0130E-01  3.7978E-02  3.5116E-02  0.0000E+00
terrace  6   Mean    2.7014E+03  2.7024E+03  2.7001E+03  2.7022E+03  2.7021E+03  2.7023E+03  2.7013E+03  2.7024E+03
terrace  6   Std     7.6215E-01  1.9623E-02  5.2989E+00  4.0150E-01  4.0884E-01  1.2320E-01  1.4368E+00  1.6013E-02
terrace  8   Mean    2.7272E+03  2.7297E+03  2.7274E+03  2.7283E+03  2.7278E+03  2.7294E+03  2.7259E+03  2.7297E+03
terrace  8   Std     1.4240E+00  5.7005E-02  2.8743E+00  1.2549E+00  2.3420E+00  3.2627E-01  2.5382E+00  2.1161E-02
terrace  10  Mean    2.7394E+03  2.7434E+03  2.7406E+03  2.7420E+03  2.7408E+03  2.7432E+03  2.7390E+03  2.7437E+03
terrace  10  Std     2.2468E+00  5.6941E-01  3.2009E+00  1.5874E+00  2.9178E+00  4.0708E-01  1.8502E+00  6.0548E-02
Friedman-Rank        6.40        2.31        5.81        4.58        5.06        4.01        6.08        1.77
Final-Rank           8           2           6           4           5           3           7           1
Table 10. SSIM results via Otsu's method.
Images   TH  Metric  MLPSO   MELGWO  MHWOA   ALA     HO      RIME    DOA     HSIDOA
barbara  4   Mean    0.6579  0.6579  0.6580  0.6579  0.6580  0.6580  0.6580  0.6579
barbara  4   Std     0.0000  0.0000  0.0009  0.0000  0.0008  0.0002  0.0002  0.0000
barbara  6   Mean    0.7402  0.7395  0.7398  0.7398  0.7400  0.7396  0.7383  0.7398
barbara  6   Std     0.0026  0.0006  0.0033  0.0016  0.0044  0.0009  0.0040  0.0012
barbara  8   Mean    0.7975  0.8003  0.8003  0.8005  0.7970  0.7994  0.7871  0.8021
barbara  8   Std     0.0077  0.0027  0.0072  0.0038  0.0117  0.0031  0.0142  0.0020
barbara  10  Mean    0.8276  0.8382  0.8369  0.8351  0.8271  0.8364  0.8241  0.8392
barbara  10  Std     0.0138  0.0054  0.0199  0.0115  0.0163  0.0070  0.0130  0.0028
camera   4   Mean    0.7219  0.7094  0.7139  0.7235  0.7004  0.7306  0.7118  0.7591
camera   4   Std     0.0368  0.0330  0.0338  0.0390  0.0422  0.0355  0.0375  0.0005
camera   6   Mean    0.8025  0.8035  0.8006  0.8030  0.8009  0.8029  0.7925  0.8054
camera   6   Std     0.0081  0.0018  0.0193  0.0085  0.0165  0.0039  0.0243  0.0000
camera   8   Mean    0.8324  0.8349  0.8294  0.8365  0.8361  0.8331  0.8318  0.8323
camera   8   Std     0.0131  0.0053  0.0105  0.0108  0.0111  0.0067  0.0090  0.0020
camera   10  Mean    0.8500  0.8590  0.8499  0.8552  0.8524  0.8612  0.8482  0.8639
camera   10  Std     0.0138  0.0068  0.0122  0.0117  0.0208  0.0066  0.0158  0.0023
couple   4   Mean    0.7292  0.7298  0.7307  0.7299  0.7293  0.7291  0.7293  0.7297
couple   4   Std     0.0017  0.0006  0.0047  0.0020  0.0049  0.0014  0.0022  0.0008
couple   6   Mean    0.8291  0.8331  0.8289  0.8318  0.8278  0.8328  0.8268  0.8331
couple   6   Std     0.0057  0.0006  0.0089  0.0026  0.0079  0.0014  0.0081  0.0002
couple   8   Mean    0.8707  0.8772  0.8718  0.8754  0.8748  0.8764  0.8715  0.8774
couple   8   Std     0.0048  0.0010  0.0079  0.0023  0.0042  0.0017  0.0055  0.0006
couple   10  Mean    0.8966  0.9039  0.8998  0.9015  0.9011  0.9036  0.8916  0.9037
couple   10  Std     0.0051  0.0006  0.0047  0.0031  0.0067  0.0022  0.0075  0.0005
house    4   Mean    0.7522  0.7533  0.7515  0.7530  0.7528  0.7524  0.7525  0.7532
house    4   Std     0.0018  0.0000  0.0079  0.0011  0.0022  0.0019  0.0018  0.0005
house    6   Mean    0.8053  0.8067  0.8046  0.8060  0.8030  0.8083  0.8027  0.8090
house    6   Std     0.0076  0.0037  0.0113  0.0085  0.0107  0.0046  0.0088  0.0021
house    8   Mean    0.8541  0.8612  0.8530  0.8599  0.8585  0.8598  0.8545  0.8612
house    8   Std     0.0078  0.0016  0.0072  0.0061  0.0092  0.0032  0.0105  0.0009
house    10  Mean    0.8799  0.8845  0.8799  0.8841  0.8831  0.8854  0.8751  0.8856
house    10  Std     0.0070  0.0023  0.0053  0.0038  0.0081  0.0036  0.0109  0.0014
peppers  4   Mean    0.7146  0.7138  0.7132  0.7145  0.7141  0.7142  0.7129  0.7139
peppers  4   Std     0.0017  0.0006  0.0034  0.0012  0.0028  0.0010  0.0042  0.0006
peppers  6   Mean    0.7855  0.7870  0.7834  0.7847  0.7833  0.7871  0.7839  0.7869
peppers  6   Std     0.0032  0.0005  0.0047  0.0044  0.0055  0.0011  0.0045  0.0001
peppers  8   Mean    0.8163  0.8194  0.8183  0.8170  0.8169  0.8188  0.8152  0.8190
peppers  8   Std     0.0047  0.0012  0.0048  0.0039  0.0073  0.0025  0.0065  0.0007
peppers  10  Mean    0.8437  0.8545  0.8513  0.8521  0.8519  0.8541  0.8424  0.8574
peppers  10  Std     0.0091  0.0052  0.0090  0.0070  0.0086  0.0051  0.0102  0.0021
terrace  4   Mean    0.7191  0.7197  0.7183  0.7189  0.7194  0.7194  0.7195  0.7203
terrace  4   Std     0.0017  0.0012  0.0029  0.0016  0.0022  0.0015  0.0014  0.0000
terrace  6   Mean    0.8043  0.8047  0.8037  0.8045  0.8053  0.8046  0.8034  0.8049
terrace  6   Std     0.0064  0.0009  0.0074  0.0033  0.0056  0.0016  0.0059  0.0005
terrace  8   Mean    0.8509  0.8583  0.8585  0.8563  0.8596  0.8583  0.8503  0.8595
terrace  8   Std     0.0117  0.0037  0.0114  0.0118  0.0135  0.0044  0.0126  0.0023
terrace  10  Mean    0.8808  0.8943  0.8927  0.8910  0.8892  0.8945  0.8830  0.8973
terrace  10  Std     0.0119  0.0081  0.0117  0.0110  0.0133  0.0044  0.0123  0.0031
Friedman-Rank        5.32    3.97    4.48    4.53    4.30    4.23    5.42    3.78
Final-Rank           7       2       5       6       4       3       8       1
Table 11. PSNR Results via Otsu’s Method.
Images  TH  Metrics  MLPSO  MELGWO  MHWOA  ALA  HO  RIME  DOA  HSIDOA
barbara  4  Mean  18.7687  18.7687  18.7754  18.7687  18.7742  18.7715  18.7706  18.7687
 Std  0.0000  0.0000  0.0343  0.0000  0.0292  0.0108  0.0104  0.0000
 6  Mean  21.1056  21.0581  21.0720  21.0744  21.0985  21.0657  21.0383  21.0724
 Std  0.0749  0.0162  0.1027  0.0513  0.1392  0.0257  0.1195  0.0315
 8  Mean  22.8984  23.1373  23.1238  23.0987  22.9307  23.0807  22.7035  23.1484
 Std  0.5153  0.1430  0.6066  0.4438  0.4693  0.2101  0.6279  0.0769
 10  Mean  24.2231  24.6002  24.5457  24.4955  24.1397  24.5400  24.0456  24.6670
 Std  0.5547  0.2412  0.7793  0.4921  0.7115  0.2821  0.5463  0.1035
camera  4  Mean  18.9294  18.5994  18.7320  18.9734  18.4484  19.1542  18.6721  19.8709
 Std  0.9407  0.8469  0.8740  0.9739  1.0156  0.9036  0.9377  0.0017
 6  Mean  21.8559  21.8864  21.7933  21.8697  21.7821  21.8808  21.5220  21.9353
 Std  0.2054  0.0453  0.4809  0.2137  0.5846  0.0957  0.8436  0.0000
 8  Mean  23.1978  23.1965  22.9992  23.3041  23.2948  23.1727  23.1015  23.0499
 Std  0.3854  0.2608  0.3509  0.4165  0.3720  0.3103  0.3123  0.0927
 10  Mean  24.0360  24.2538  23.8939  24.1306  24.1297  24.3875  23.7676  24.5336
 Std  0.5270  0.3585  0.4924  0.4725  0.8246  0.3140  0.6367  0.1153
couple  4  Mean  20.2511  20.2671  20.2696  20.2655  20.2446  20.2493  20.2562  20.2642
 Std  0.0322  0.0112  0.0801  0.0367  0.0991  0.0290  0.0386  0.0160
 6  Mean  23.3637  23.4603  23.2775  23.4357  23.3265  23.4466  23.2161  23.4593
 Std  0.1078  0.0080  0.3216  0.0407  0.2220  0.0270  0.2799  0.0020
 8  Mean  25.1037  25.3984  25.1340  25.3242  25.2906  25.3662  25.1022  25.4057
 Std  0.1997  0.0498  0.3688  0.0682  0.1946  0.0460  0.2444  0.0176
 10  Mean  26.4901  26.8328  26.5355  26.6780  26.6230  26.8192  26.1788  26.8305
 Std  0.2264  0.0424  0.2508  0.1792  0.3779  0.1095  0.3405  0.0345
house  4  Mean  20.1139  20.1089  20.0593  20.1197  20.1242  20.1128  20.1117  20.1103
 Std  0.0409  0.0000  0.2165  0.0226  0.0348  0.0359  0.0215  0.0056
 6  Mean  22.7620  22.8198  22.6517  22.7692  22.6312  22.8907  22.5937  22.9530
 Std  0.3376  0.1901  0.5245  0.3927  0.4345  0.2252  0.4027  0.1067
 8  Mean  24.9525  25.2679  24.8679  25.1631  25.0955  25.2093  24.9175  25.2723
 Std  0.2515  0.0439  0.3802  0.2133  0.2991  0.0971  0.5206  0.0240
 10  Mean  26.2343  26.6082  26.2897  26.5158  26.4671  26.6507  25.9875  26.6816
 Std  0.2996  0.1345  0.3317  0.2261  0.3630  0.1961  0.5441  0.0529
peppers  4  Mean  20.4566  20.4548  20.4269  20.4444  20.4482  20.4484  20.4225  20.4527
 Std  0.0251  0.0116  0.0620  0.0162  0.0272  0.0155  0.1224  0.0132
 6  Mean  23.1683  23.2267  23.1109  23.2052  23.1832  23.2084  23.1097  23.2275
 Std  0.0953  0.0077  0.1688  0.0599  0.1097  0.0315  0.1818  0.0011
 8  Mean  24.7435  24.9532  24.7874  24.8494  24.8742  24.9318  24.6542  24.9561
 Std  0.2162  0.0149  0.3259  0.1853  0.2413  0.0528  0.2548  0.0161
 10  Mean  26.0604  26.6374  26.3212  26.4576  26.3887  26.5862  25.8948  26.7557
 Std  0.2716  0.1609  0.3020  0.2508  0.2936  0.1521  0.4084  0.0318
terrace  4  Mean  21.4769  21.4775  21.4776  21.4798  21.4779  21.4780  21.4781  21.4759
 Std  0.0083  0.0037  0.0213  0.0055  0.0087  0.0057  0.0045  0.0000
 6  Mean  23.9677  24.0164  23.9086  24.0080  24.0076  24.0151  23.9629  24.0178
 Std  0.0845  0.0048  0.2672  0.0270  0.0419  0.0321  0.0806  0.0108
 8  Mean  25.7496  25.9755  25.7750  25.8424  25.8017  25.9586  25.6529  25.9688
 Std  0.1338  0.0089  0.2325  0.1193  0.2179  0.0341  0.1929  0.0078
 10  Mean  27.0519  27.5590  27.1869  27.3678  27.2138  27.5236  27.0133  27.5908
 Std  0.2984  0.0875  0.3908  0.1987  0.3623  0.0701  0.2114  0.0172
Friedman-Rank  5.61  3.68  4.87  4.40  4.77  3.73  6.20  2.75
Final-Rank  7  2  6  4  5  3  8  1
Table 12. FSIM Results via Otsu’s Method.
Images  TH  Metrics  MLPSO  MELGWO  MHWOA  ALA  HO  RIME  DOA  HSIDOA
barbara  4  Mean  0.8131  0.8131  0.8131  0.8131  0.8131  0.8131  0.8131  0.8132
 Std  0.0000  0.0000  0.0006  0.0000  0.0005  0.0002  0.0002  0.0000
 6  Mean  0.8641  0.8636  0.8636  0.8637  0.8639  0.8638  0.8627  0.8637
 Std  0.0009  0.0001  0.0008  0.0005  0.0008  0.0002  0.0021  0.0004
 8  Mean  0.8889  0.8915  0.8902  0.8914  0.8903  0.8913  0.8860  0.8915
 Std  0.0029  0.0005  0.0020  0.0016  0.0027  0.0008  0.0045  0.0004
 10  Mean  0.9036  0.9120  0.9085  0.9085  0.9082  0.9107  0.9040  0.9123
 Std  0.0041  0.0015  0.0045  0.0030  0.0039  0.0017  0.0043  0.0007
camera  4  Mean  0.8355  0.8378  0.8362  0.8346  0.8309  0.8341  0.8358  0.8328
 Std  0.0049  0.0034  0.0050  0.0056  0.0085  0.0048  0.0058  0.0001
 6  Mean  0.8774  0.8782  0.8753  0.8775  0.8777  0.8776  0.8740  0.8791
 Std  0.0038  0.0008  0.0077  0.0036  0.0035  0.0018  0.0058  0.0000
 8  Mean  0.9007  0.9028  0.8986  0.9017  0.9025  0.9016  0.8983  0.9024
 Std  0.0043  0.0013  0.0053  0.0036  0.0026  0.0021  0.0040  0.0006
 10  Mean  0.9129  0.9188  0.9128  0.9152  0.9135  0.9188  0.9113  0.9203
 Std  0.0050  0.0015  0.0056  0.0048  0.0094  0.0017  0.0073  0.0006
couple  4  Mean  0.8007  0.8009  0.8011  0.8010  0.8005  0.8006  0.8007  0.8008
 Std  0.0006  0.0001  0.0018  0.0009  0.0022  0.0005  0.0009  0.0002
 6  Mean  0.8710  0.8730  0.8703  0.8722  0.8708  0.8732  0.8690  0.8730
 Std  0.0023  0.0005  0.0067  0.0018  0.0035  0.0008  0.0061  0.0001
 8  Mean  0.9069  0.9122  0.9084  0.9108  0.9105  0.9115  0.9060  0.9118
 Std  0.0040  0.0007  0.0062  0.0025  0.0024  0.0011  0.0056  0.0006
 10  Mean  0.9270  0.9336  0.9287  0.9308  0.9291  0.9330  0.9212  0.9338
 Std  0.0041  0.0013  0.0052  0.0041  0.0061  0.0019  0.0056  0.0006
house  4  Mean  0.8201  0.8205  0.8202  0.8204  0.8204  0.8202  0.8202  0.8204
 Std  0.0006  0.0000  0.0033  0.0003  0.0009  0.0007  0.0007  0.0001
 6  Mean  0.8634  0.8645  0.8627  0.8643  0.8620  0.8658  0.8615  0.8665
 Std  0.0057  0.0029  0.0098  0.0062  0.0073  0.0036  0.0067  0.0016
 8  Mean  0.9011  0.9066  0.8998  0.9051  0.9038  0.9055  0.9010  0.9069
 Std  0.0063  0.0012  0.0066  0.0041  0.0059  0.0021  0.0081  0.0007
 10  Mean  0.9188  0.9244  0.9191  0.9235  0.9227  0.9252  0.9154  0.9255
 Std  0.0053  0.0025  0.0053  0.0035  0.0060  0.0032  0.0086  0.0009
peppers  4  Mean  0.7867  0.7868  0.7861  0.7869  0.7866  0.7866  0.7864  0.7868
 Std  0.0005  0.0000  0.0012  0.0003  0.0007  0.0004  0.0007  0.0000
 6  Mean  0.8486  0.8493  0.8465  0.8486  0.8481  0.8495  0.8475  0.8492
 Std  0.0016  0.0002  0.0046  0.0016  0.0020  0.0006  0.0028  0.0001
 8  Mean  0.8825  0.8864  0.8833  0.8845  0.8833  0.8859  0.8812  0.8865
 Std  0.0030  0.0005  0.0070  0.0021  0.0045  0.0015  0.0050  0.0003
 10  Mean  0.9029  0.9138  0.9068  0.9109  0.9092  0.9124  0.9017  0.9145
 Std  0.0036  0.0016  0.0056  0.0032  0.0040  0.0021  0.0069  0.0006
terrace  4  Mean  0.8447  0.8450  0.8441  0.8447  0.8449  0.8448  0.8448  0.8451
 Std  0.0008  0.0004  0.0017  0.0005  0.0009  0.0006  0.0005  0.0000
 6  Mean  0.9033  0.9048  0.9024  0.9045  0.9047  0.9047  0.9036  0.9048
 Std  0.0027  0.0003  0.0050  0.0012  0.0016  0.0005  0.0036  0.0002
 8  Mean  0.9321  0.9372  0.9343  0.9354  0.9366  0.9367  0.9313  0.9379
 Std  0.0055  0.0013  0.0065  0.0041  0.0037  0.0016  0.0067  0.0008
 10  Mean  0.9484  0.9560  0.9517  0.9534  0.9516  0.9564  0.9470  0.9571
 Std  0.0047  0.0023  0.0059  0.0047  0.0065  0.0014  0.0059  0.0010
Friedman-Rank  5.58  3.54  5.23  4.48  4.23  4.02  5.78  3.15
Final-Rank  7  2  6  5  4  3  8  1
As shown in Table 9, the HSIDOA achieves the best or near-best average fitness values across all images and threshold levels, with standard deviations substantially lower than those of the comparison algorithms. For instance, in the 10-level threshold segmentation of the camera image, the HSIDOA attains an average fitness of 4.6809E+03, comparable to MELGWO (4.6807E+03), but with a standard deviation of only 4.0804E−02, far below MELGWO’s 1.9677E−01 and those of the other algorithms. In the 8-level threshold segmentation of the peppers image, the HSIDOA achieves the highest average fitness of 2.7956E+03, with a standard deviation (3.2251E−02) roughly 1% of that of the DOA (3.3840E+00), demonstrating its ability to accurately identify optimal threshold combinations in complex grayscale images. The fitness curves in Figure 8 further illustrate that the HSIDOA converges rapidly to the optimal value in the early iterations and remains stable thereafter, whereas the DOA, MLPSO, and other algorithms exhibit noticeable oscillations and worse final fitness values, confirming the HSIDOA’s efficiency and stability in threshold optimization.
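For readers reproducing this setup, the fitness being maximized is the standard multi-threshold extension of Otsu’s between-class variance. The following is a minimal Python sketch of that textbook objective, assuming a 256-bin grayscale histogram; the function and variable names are ours, not the authors’ code.

```python
import numpy as np

def otsu_between_class_variance(hist, thresholds):
    """Between-class variance of a grayscale histogram `hist` (a NumPy
    array of 256 bin counts) partitioned by sorted integer `thresholds`;
    multi-threshold Otsu segmentation maximizes this quantity."""
    p = hist / hist.sum()                       # normalized histogram
    levels = np.arange(len(hist))
    mu_total = (p * levels).sum()               # global mean gray level
    edges = [0, *sorted(int(t) for t in thresholds), len(hist)]
    variance = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):   # one term per class
        w = p[lo:hi].sum()                      # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w   # class mean
            variance += w * (mu - mu_total) ** 2
    return variance
```

A candidate solution at threshold level TH = k is then a vector of k gray levels in [0, 255], and each algorithm searches for the vector that maximizes this variance.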
Table 10 and Table 11 evaluate segmentation quality from the structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) perspectives, further highlighting the HSIDOA’s overall advantage. In terms of SSIM, the HSIDOA achieves 0.7591 on the camera image at TH = 4, significantly higher than the other algorithms, which generally fall in the 0.70–0.73 range, with a minimal standard deviation of 0.0005, indicating robust and superior preservation of image structure. For the barbara image at TH = 10, the HSIDOA attains an SSIM of 0.8392, the highest value for that setting. Regarding PSNR, Table 11 shows that the HSIDOA consistently achieves leading or near-leading performance across multiple image scenarios. For example, on the barbara image at TH = 4, the HSIDOA reaches 18.7687 dB, matching the best-performing algorithms with a zero standard deviation, indicating excellent noise suppression and visual fidelity. As the number of thresholds increases, the HSIDOA maintains stable PSNR performance without significant declines or oscillations, further demonstrating the robustness of its search strategies in preserving image quality under complex conditions.
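The paper does not specify its metric implementations, but the SSIM and PSNR scores above follow the standard definitions (PSNR is 10*log10(255^2/MSE) for 8-bit images). A minimal sketch using scikit-image’s stock routines, with a helper name of our own, would be:

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def segmentation_quality(original, segmented):
    """PSNR (dB) and SSIM between an 8-bit grayscale image and the image
    reconstructed from its multi-threshold segmentation; both inputs are
    uint8 arrays of identical shape."""
    psnr = peak_signal_noise_ratio(original, segmented, data_range=255)
    ssim = structural_similarity(original, segmented, data_range=255)
    return psnr, ssim
```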
Table 12 reports the FSIM metric, which validates the HSIDOA’s superiority from the perspective of image feature preservation. In the barbara image at TH = 4, the HSIDOA achieves an FSIM of 0.8132, slightly higher than other algorithms clustered around 0.8131, indicating better retention of key image features. Even when the threshold level increases to TH = 8, the HSIDOA remains among the top performers in FSIM, showing that its advantages do not diminish with more thresholds. Additionally, FSIM standard deviations across all algorithms are generally on the order of 10^−3 to 10^−4, while the HSIDOA exhibits even smaller fluctuations, reflecting its superior stability in feature preservation. This ensures that the HSIDOA consistently produces high-quality and reliable solutions in multi-threshold image segmentation across different experimental settings.
Figure 8 illustrates the dynamic fitness curves of different algorithms during 4-level and 10-level threshold segmentation tasks on representative images (barbara, camera, couple, etc.), intuitively reflecting each algorithm’s convergence efficiency and stability. Overall, the HSIDOA exhibits a “rapid decline—early stabilization” trend across all test scenarios, markedly outperforming the comparison algorithms.
In the 4-level threshold segmentation tasks, the HSIDOA quickly approaches the optimal fitness within the first 20 iterations, after which the curve remains essentially stable with minimal oscillations. For example, on the camera image, the HSIDOA reaches an average fitness of 4.601E+03 at iteration 30, whereas DOA, MLPSO, and other algorithms require more than 80 iterations to approach the same level, and their curves show minor fluctuations in later iterations.
In the 10-level threshold segmentation tasks, where the optimization difficulty increases significantly due to higher-dimensional threshold combinations, the HSIDOA’s convergence advantage becomes even more pronounced. On the peppers image, its fitness curve drops to approximately 2.808E+03 by iteration 50, while DOA, MHWOA, and other algorithms remain above 2.803E+03 even at iteration 100, with frequent oscillations during convergence, indicating their susceptibility to local optima in complex threshold spaces.
Compared to other algorithms, the HSIDOA consistently maintains the lowest fitness curves with the fastest convergence. This advantage arises from the quadratic interpolation search strategy, which precisely exploits locally optimal thresholds, and the horizontal crossover strategy, which effectively preserves population diversity. These mechanisms allow the HSIDOA to rapidly escape local optima and accurately locate global optimal threshold combinations in the early iterations. In contrast, algorithms such as DOA and MLPSO lack efficient coordination between local exploitation and global exploration, leading to slower convergence and potential stagnation or oscillation in later iterations, resulting in higher final fitness values.
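To make these two operators concrete, the sketch below shows widely used textbook forms of quadratic interpolation [32] and horizontal crossover [33]. The HSIDOA’s exact coefficients and selection rules are defined earlier in the paper, so this should be read as an illustrative approximation of the generic operators, not as the authors’ implementation.

```python
import numpy as np

def quadratic_interpolation(x1, x2, x3, f1, f2, f3, eps=1e-12):
    """Element-wise vertex of the parabola fitted through (x1, f1),
    (x2, f2), (x3, f3); exploits the neighborhood of good solutions.
    `x1..x3` are candidate vectors, `f1..f3` their scalar fitness values."""
    num = (x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2 + (x1**2 - x2**2) * f3
    den = (x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3
    return num / (2.0 * den + eps)          # eps guards a zero denominator

def horizontal_crossover(xi, xj):
    """Arithmetic crossover between two randomly paired individuals;
    mixes the parents plus a scaled difference term to keep diversity."""
    d = xi.shape[0]
    r1, r2 = np.random.rand(d), np.random.rand(d)
    c1, c2 = np.random.uniform(-1, 1, d), np.random.uniform(-1, 1, d)
    child_i = r1 * xi + (1 - r1) * xj + c1 * (xi - xj)
    child_j = r2 * xj + (1 - r2) * xi + c2 * (xj - xi)
    return child_i, child_j
```

In both cases the new point would replace its parent only when it improves the fitness, which is the usual greedy acceptance rule in such hybrids.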
Furthermore, the HSIDOA’s convergence curves exhibit strong consistency across different threshold levels and image styles, further confirming its stability and generalizability in multi-threshold image segmentation tasks.
Figure 9 presents the average ranking results of all algorithms, intuitively reflecting their overall performance. The HSIDOA ranks first across all four evaluation metrics, with average rankings significantly ahead of the other algorithms. Specifically, the HSIDOA achieves an average ranking of 2.75 on the PSNR metric, substantially better than the DOA (6.20) and MLPSO (5.61); for SSIM and FSIM, its average rankings are 3.78 and 3.15, respectively, consistently maintaining an absolute leading position. These results indicate that the HSIDOA effectively combines the precise local threshold optimization of the quadratic interpolation search strategy, the population diversity preservation of the horizontal crossover strategy, and the threshold validity enforcement of the centroid-based opposition learning boundary handling strategy, thereby achieving synergistic improvement in threshold optimization accuracy and segmentation quality. Regardless of image style differences (complex/simple textures, uniform/non-uniform gray-level distributions) or threshold levels, the HSIDOA consistently delivers optimal segmentation results, providing an efficient and reliable solution for multi-threshold image segmentation tasks.
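As a reference for the boundary-handling component mentioned above, the sketch below gives one plausible minimal form of centroid-based opposition learning repair: components that leave the feasible range are reflected about the population centroid and then clipped. The paper’s exact operator may weight or perturb the centroid differently, so this is an assumption-labeled illustration rather than the authors’ rule.

```python
import numpy as np

def centroid_opposition_repair(x, population, lb, ub):
    """Repair out-of-bounds components of candidate `x` by opposition
    about the population centroid, then clip into [lb, ub] so that
    every threshold stays valid (assumed form, not the paper's exact rule)."""
    centroid = population.mean(axis=0)    # center of the current swarm
    opposite = 2.0 * centroid - x         # opposition point w.r.t. centroid
    out = (x < lb) | (x > ub)             # only violated components change
    x = np.where(out, opposite, x)
    return np.clip(x, lb, ub)             # guarantee feasibility
```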
In summary, a comprehensive analysis of the quantitative data in Table 9, Table 10, Table 11 and Table 12 and the visual results in Figure 8 and Figure 9 leads to clear conclusions: The HSIDOA exhibits higher average values, lower standard deviations, and faster, more stable convergence trends in key metrics such as Otsu’s between-class variance, SSIM, and FSIM. In terms of optimization performance, structural preservation, feature retention, and overall stability, the HSIDOA outperforms existing comparison algorithms, demonstrating its significant performance advantage and reliability in multi-threshold image segmentation tasks.

5. Conclusions and Future Work

This paper proposed a Hybrid Strategy Improved Dingo Optimization Algorithm (HSIDOA) to overcome the inherent limitations of the standard DOA in complex optimization scenarios. By incorporating a quadratic interpolation search strategy, a horizontal crossover search strategy, and a centroid-based opposition learning boundary-handling mechanism, the HSIDOA effectively enhances local exploitation accuracy, preserves population diversity, and improves boundary search efficiency. The synergistic integration of these strategies enables the HSIDOA to achieve faster convergence and more stable optimization performance without increasing the computational complexity of the original algorithm.
Extensive numerical experiments conducted on the CEC2017 and CEC2022 benchmark suites validate the overall superiority of the HSIDOA from a statistical perspective. In particular, based on the Friedman test results, the HSIDOA achieves the best mean ranking (M.R.) on the CEC2017 benchmark (30-dimensional), consistently ranking first among all compared algorithms. Similar advantages are observed on the CEC2022 benchmarks with different dimensional settings, demonstrating that the HSIDOA maintains stable and competitive performance as problem dimensionality varies. These ranking-based results indicate that the HSIDOA provides a more reliable and robust optimization framework than both the original DOA and other state-of-the-art metaheuristic algorithms.
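For completeness, the mean rankings (M.R.) in Table 6 follow the usual Friedman procedure: on each benchmark function the algorithms are ranked by mean fitness (rank 1 is best), and the ranks are averaged over all functions. A minimal sketch, assuming a results matrix with one row per function and one column per algorithm:

```python
import numpy as np
from scipy.stats import rankdata

def friedman_mean_ranks(results):
    """Mean Friedman rank per algorithm from a (n_functions, n_algorithms)
    matrix of mean fitness values; lower fitness receives a lower (better)
    rank, and ties get average ranks."""
    ranks = np.apply_along_axis(rankdata, 1, results)  # rank within each row
    return ranks.mean(axis=0)                          # average across rows
```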
In multi-level threshold image segmentation tasks, the HSIDOA also exhibits clear advantages when evaluated using ranking-based performance indicators. As illustrated in Figure 9, the HSIDOA ranks first across all four evaluation metrics, reflecting its overall segmentation effectiveness. Specifically, the HSIDOA achieves an average ranking of 2.75 on the PSNR metric, which is significantly better than the DOA (6.20) and MLPSO (5.61). For structural similarity and feature preservation, the HSIDOA attains average rankings of 3.78 and 3.15 on SSIM and FSIM, respectively, consistently outperforming the comparison algorithms. These results demonstrate that the HSIDOA not only improves threshold optimization performance but also achieves superior visual quality in terms of structural integrity and feature consistency.
Overall, the ranking-based experimental evidence from both numerical optimization benchmarks and image segmentation tasks confirms that the HSIDOA offers a well-balanced optimization framework with strong convergence stability, reliable search behavior, and robust generalization capability. The algorithm shows consistent top-tier performance across different evaluation criteria, highlighting its suitability for solving complex, high-dimensional, and multimodal optimization problems.
Future work will focus on further enhancing the adaptability of the HSIDOA by introducing dynamic or self-adaptive mechanisms to adjust strategy contributions during different optimization stages. Additionally, extending the HSIDOA to multi-objective and dynamic optimization problems represents a promising research direction. Integrating the HSIDOA with learning-based or feature-driven models may also broaden its applicability in areas such as medical image analysis, intelligent control, path planning, and large-scale engineering optimization.

Author Contributions

Conceptualization, Q.Z. and M.G.; methodology, Q.Z. and M.G.; software, Q.Z. and M.G.; validation, Q.Z. and M.G.; formal analysis, Q.Z. and M.G.; investigation, Q.Z. and M.G.; resources, Q.Z. and M.G.; data curation, Y.W. and Z.Y.; writing (original draft preparation), Y.W. and Z.Y.; writing (review and editing), Y.W. and Z.Y.; visualization, Y.W. and Z.Y.; supervision, Y.W. and Z.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Acknowledgments

The authors would like to express their sincere gratitude to all those who contributed to the completion of this work. The authors acknowledge the use of artificial intelligence (AI) tools for English language polishing during the preparation of this manuscript. All content, including methodology, experiments, and conclusions, is the original work of the authors, and the AI tool was only used to improve language expression.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Fu, Y.; Liu, D.; Chen, J.; He, L. Secretary bird optimization algorithm: A new metaheuristic for solving global optimization problems. Artif. Intell. Rev. 2024, 57, 123.
2. Liu, J.; Yang, J.; Cui, L. mESC: An Enhanced Escape Algorithm Fusing Multiple Strategies for Engineering Optimization. Biomimetics 2025, 10, 232.
3. Liu, X.; Li, S.; Wu, Y.; Fu, Z. Graduate Student Evolutionary Algorithm: A Novel Metaheuristic Algorithm for 3D UAV and Robot Path Planning. Biomimetics 2025, 10, 616.
4. Luo, X.; Yuan, Y.; Fu, Y.; Huang, H.; Wei, J. A multi-strategy improved Coati optimization algorithm for solving global optimization problems. Clust. Comput. 2025, 28, 264.
5. Zhang, Y.; Wang, J.; Zhang, X.; Wang, B. ACPOA: An Adaptive Cooperative Pelican Optimization Algorithm for Global Optimization and Multilevel Thresholding Image Segmentation. Biomimetics 2025, 10, 596.
6. Yang, X.; Zhao, S.; Gao, W.; Li, P.; Feng, Z.; Li, L.; Jia, T.; Wang, X. Three-Dimensional Path Planning for UAV Based on Multi-Strategy Dream Optimization Algorithm. Biomimetics 2025, 10, 551.
7. Zhu, Y.; Zhang, M.; Huang, Q.; Wu, X.; Wan, L.; Huang, J. Secretary bird optimization algorithm based on quantum computing and multiple strategies improvement for KELM diabetes classification. Sci. Rep. 2025, 15, 3774.
8. Cao, L.; Wei, Q.J.B. SZOA: An Improved Synergistic Zebra Optimization Algorithm for Microgrid Scheduling and Management. Biomimetics 2025, 10, 664.
9. Shi, J.; Chen, Y.; Wang, C.; Heidari, A.A.; Liu, L.; Chen, H.; Chen, X.; Sun, L. Multi-threshold image segmentation using new strategies enhanced whale optimization for lupus nephritis pathological images. Displays 2024, 84, 102799.
10. Zhang, Y.; Liu, X.; Sun, W.; You, T.; Qi, X. Multi-Threshold Remote Sensing Image Segmentation Based on Improved Black-Winged Kite Algorithm. Biomimetics 2025, 10, 331.
11. Guo, H.; Wang, J.G.; Liu, Y. Multi-threshold image segmentation algorithm based on Aquila optimization. Vis. Comput. 2023, 40, 2905–2932.
12. Huang, T.; Yin, H.; Huang, X. Improved genetic algorithm for multi-threshold optimization in digital pathology image segmentation. Sci. Rep. 2024, 14, 22454.
13. Tan, W.-H.; Mohamad-Saleh, J. A hybrid whale optimization algorithm based on equilibrium concept. Alex. Eng. J. 2022, 68, 763–786.
14. Bhargavi, K.V.N.A.; Varma, G.P.S.; Hemalatha, I.; Dilli, R. An Enhanced Particle Swarm Optimization-Based Node Deployment and Coverage in Sensor Networks. Sensors 2024, 24, 6238.
15. Zheng, X.; Liu, R.; Li, S. A Novel Improved Dung Beetle Optimization Algorithm for Collaborative 3D Path Planning of UAVs. Biomimetics 2025, 10, 420.
16. Fu, W.-Y. Adaptive-acceleration-empowered collaborative particle swarm optimization. Inf. Sci. 2025, 721, 122621.
17. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
18. Chen, X.; Ye, C.; Zhang, Y. Strengthened grey wolf optimization algorithms for numerical optimization tasks and AutoML. Swarm Evol. Comput. 2025, 94, 101891.
19. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
20. Nadimi-Shahraki, M.H.; Farhanginasab, H.; Taghian, S.; Sadiq, A.S.; Mirjalili, S. Multi-trial Vector-based Whale Optimization Algorithm. J. Bionic Eng. 2024, 21, 1465–1495.
21. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
22. Pan, H.; Yuan, H.; Yue, Q.; Ouyang, H.; Gu, F.; Li, F. Multi-level particle swarm optimizer for multimodal optimization problems. Inf. Sci. 2025, 702, 121909.
23. Ahmed, R.; Rangaiah, G.P.; Mahadzir, S.; Mirjalili, S.; Hassan, M.H.; Kamel, S. Memory, evolutionary operator, and local search based improved Grey Wolf Optimizer with linear population size reduction technique. Knowl.-Based Syst. 2023, 264, 110297.
24. Xie, X.; Yang, Y.; Zhou, H. Multi-Strategy Hybrid Whale Optimization Algorithm Improvement. Appl. Sci. 2025, 15, 2224.
25. Li, Y.; Gao, A.; Li, H.; Li, L. An improved whale optimization algorithm for UAV swarm trajectory planning. Adv. Differ. Equ. 2024, 2024, 40.
26. Li, Y.; Tian, G.; Yi, Y.; Yuan, Y. Improved Artificial Rabbit Optimization and Its Application in Multichannel Signal Denoising. IEEE Sens. J. 2024, 24, 32950–32965.
27. Nadimi-Shahraki, M.H.; Taghian, S.; Javaheri, D.; Sadiq, A.S.; Khodadadi, N.; Mirjalili, S. MTV-SCA: Multi-trial vector-based sine cosine algorithm. Clust. Comput. 2024, 27, 13471–13515.
28. Peraza-Vázquez, H.; Peña-Delgado, A.F.; Echavarría-Castillo, G.; Morales-Cepeda, A.B.; Velasco-Álvarez, J.; Ruiz-Perez, F. A Bio-Inspired Method for Engineering Design Optimization Inspired by Dingoes Hunting Strategies. Math. Probl. Eng. 2021, 2021, 1–19.
29. Almazán-Covarrubias, J.H.; Peraza-Vázquez, H.; Peña-Delgado, A.F.; García-Vite, P.M. An Improved Dingo Optimization Algorithm Applied to SHE-PWM Modulation Strategy. Appl. Sci. 2022, 12, 992.
30. Ramya, K.; Ayothi, S. Hybrid dingo and whale optimization algorithm-based optimal load balancing for cloud computing environment. Trans. Emerg. Telecommun. Technol. 2023, 34, e4760.
31. Wang, S.; Lv, X.; Li, Y.; Jing, L.; Fang, X.; Peng, G.; Zhou, Y.; Sun, W. A novel hybrid improved dingo algorithm for unmanned aerial vehicle path planning. J. Braz. Soc. Mech. Sci. Eng. 2024, 47, 10.
32. Chen, X.; Mei, C.; Xu, B.; Yu, K.; Huang, X. Quadratic interpolation based teaching-learning-based optimization for chemical dynamic system optimization. Knowl.-Based Syst. 2018, 145, 250–263.
33. Mai, X.; Zhong, Y.; Li, L. The Crossover strategy integrated Secretary Bird Optimization Algorithm and its application in engineering design problems. Electron. Res. Arch. 2025, 33, 471–512.
34. Wu, G.; Mallipeddi, R.; Suganthan, P. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition and Special Session on Constrained Single Objective Real-Parameter Optimization; National University of Defense Technology: Changsha, China, 2016.
35. Luo, W.; Lin, X.; Li, C.; Yang, S.; Shi, Y. Benchmark functions for CEC 2022 competition on seeking multiple optima in dynamic environments. arXiv 2022, arXiv:2201.00523.
36. Xiao, Y.; Cui, H.; Khurma, R.A.; Castillo, P.A. Artificial lemming algorithm: A novel bionic meta-heuristic technique for solving real-world engineering optimization problems. Artif. Intell. Rev. 2025, 58, 84.
37. Amiri, M.H.; Hashjin, N.M.; Montazeri, M.; Mirjalili, S.; Khodadadi, N. Hippopotamus optimization algorithm: A novel nature-inspired optimization algorithm. Sci. Rep. 2024, 14, 5032.
38. Su, H.; Zhao, D.; Heidari, A.A.; Liu, L.; Zhang, X.; Mafarja, M.; Chen, H. RIME: A physics-based optimization. Neurocomputing 2023, 532, 183–214.
39. Chen, Z.; Fu, B.; Yang, Y. A Novel Elite-Guided Hybrid Metaheuristic Algorithm for Efficient Feature Selection. Biomimetics 2025, 10, 747.
40. Zhang, X.; Bao, Z.; Li, X.; Wang, J. Multi-Threshold Art Symmetry Image Segmentation and Numerical Optimization Based on the Modified Golden Jackal Optimization. Symmetry 2025, 17, 2130.
41. Zheng, J.; Gao, Y.; Zhang, H.; Lei, Y.; Zhang, J. OTSU Multi-Threshold Image Segmentation Based on Improved Particle Swarm Algorithm. Appl. Sci. 2022, 12, 11514.
42. Yuan, X.; Zhu, Z.; Yang, Z.; Zhang, Y. MSCSO: A Modified Sand Cat Swarm Optimization for Global Optimization and Multilevel Thresholding Image Segmentation. Symmetry 2025, 17, 2012.
43. Tamilarasan, A.; Rajamani, D. Towards efficient image segmentation: A fuzzy entropy-based approach using the snake optimizer algorithm. Results Eng. 2025, 26, 105335.
Figure 1. A representative image of a dingo.
Figure 2. Convergence of the enhanced DOA versions across different modification approaches.
Figure 3. Average rankings of the upgraded DOA models across distinct enhancement techniques.
Figure 4. Convergence speed comparison among the tested methods on the benchmark suite.
Figure 5. Comparison of average runtime between the DOA and HSIDOA on CEC2017 (dim = 30).
Figure 6. Graphical representation of the relative performance distribution across the evaluated algorithms.
Figure 7. Reference pictures across diverse styles.
Figure 8. Fitness progression plots for various algorithms on multiple images (TH = 4/10).
Figure 9. Mean ordinal positions per algorithm under varied evaluation measures.
Table 1. Configuration for evaluated method parameters.
Algorithms  Name of the Parameter  Value of the Parameter
MLPSO  c1, c2, w, gamma_thresh, cluster_ratio  2, 2, [0.3, 0.8], 0.3, 0.3
MELGWO  a, Crossover, Stochastic Local Search  2 to 0, 0.6, 0.5
MHWOA  a  Linear reduction from 1 to 0
ALA  Prob  0.3
HO  ρ1, ρ2, ϑ, ζ, g  0, 1, 0, 1, 1.5, 2, 4, [1, 1]
RIME  W  5
DOA  β1, β2, P, Q  2, 2, 1, 1, 0.5, 0.7
HSIDOA  β1, β2, P, Q, λ1, λ2, μ1, μ2  2, 2, 1, 1, 0.5, 0.7, 0, 1, 0, 1, 1, 1, 1, 1
Table 2. CEC2017 Benchmark Results (30 dim Case).
Function  Metric  DOA  DOA-QI  DOA-HC  DOA-CR  HSIDOA
F1  Ave  4.9148E+10  1.8968E+05  9.0494E+03  4.9108E+07  3.4022E+03
 Std  9.1464E+09  6.3182E+05  2.4229E+04  2.6964E+07  4.3862E+03
F2  Ave  1.2358E+44  1.7271E+30  1.5259E+24  2.8560E+24  1.4578E+20
 Std  7.2300E+44  1.1477E+31  7.3040E+24  2.0024E+25  1.0112E+21
F3  Ave  6.9500E+04  2.4698E+04  1.2243E+04  6.8668E+04  3.5588E+03
 Std  9.2972E+03  6.9045E+03  5.9651E+03  9.9891E+03  2.7549E+03
F4  Ave  1.1835E+04  5.1764E+02  5.1418E+02  5.5825E+02  4.8839E+02
 Std  4.2237E+03  2.9597E+01  3.4045E+01  2.8356E+01  2.9712E+01
F5  Ave  8.5825E+02  6.5071E+02  6.1851E+02  7.1964E+02  5.8798E+02
 Std  3.9412E+01  3.3498E+01  2.8948E+01  8.0426E+01  2.5788E+01
F6  Ave  6.7855E+02  6.3796E+02  6.1697E+02  6.5286E+02  6.0476E+02
 Std  8.8623E+00  1.4827E+01  1.0553E+01  2.3724E+01  4.9405E+00
F7  Ave  1.3519E+03  1.0081E+03  9.1780E+02  1.1357E+03  8.4756E+02
 Std  6.8228E+01  8.2783E+01  5.6779E+01  1.2883E+02  3.5352E+01
F8  Ave  1.0946E+03  9.2337E+02  9.0552E+02  9.6848E+02  8.8480E+02
 Std  3.1869E+01  3.6712E+01  2.3365E+01  5.7329E+01  2.2658E+01
F9  Ave  8.6194E+03  3.4401E+03  2.5419E+03  7.1462E+03  1.4574E+03
 Std  1.4984E+03  1.9919E+03  8.3727E+02  3.2575E+03  4.7391E+02
F10  Ave  8.2351E+03  5.8910E+03  5.1909E+03  7.3589E+03  4.9248E+03
 Std  7.0182E+02  1.4042E+03  7.6901E+02  2.4665E+03  5.7244E+02
F11  Ave  5.5065E+03  1.2690E+03  1.2802E+03  3.6919E+03  1.2014E+03
 Std  2.0621E+03  5.6691E+01  6.9502E+01  2.2927E+03  5.1109E+01
F12  Ave  6.8128E+09  1.7042E+06  1.0186E+06  9.1374E+06  3.3247E+05
 Std  3.3172E+09  1.3288E+06  1.0070E+06  6.3453E+06  2.7565E+05
F13  Ave  1.9991E+09  1.4482E+04  2.0553E+04  5.3206E+06  1.4234E+04
 Std  2.8895E+09  1.4579E+04  2.0420E+04  4.1719E+06  1.4510E+04
F14  Ave  6.9634E+04  1.9665E+04  1.1842E+04  1.8172E+06  6.3768E+03
 Std  7.5764E+04  2.0531E+04  2.3683E+04  1.9673E+06  8.6341E+03
F15  Ave  7.2406E+06  5.4170E+03  6.2847E+03  2.3267E+05  5.3660E+03
 Std  3.6225E+07  4.7894E+03  7.3670E+03  3.7651E+05  5.7589E+03
F16  Ave  4.2906E+03  2.7046E+03  2.6433E+03  2.9686E+03  2.5074E+03
 Std  7.1981E+02  2.9376E+02  2.8259E+02  3.7963E+02  2.7154E+02
F17  Ave  2.7482E+03  2.2935E+03  2.2486E+03  2.3933E+03  2.1583E+03
 Std  3.4513E+02  2.6376E+02  1.9200E+02  2.9920E+02  2.3018E+02
F18  Ave  1.2332E+06  1.6436E+05  1.8244E+05  2.2997E+06  8.2259E+04
 Std  1.8740E+06  1.4053E+05  2.2298E+05  2.7964E+06  9.7275E+04
F19  Ave  2.0836E+07  9.5360E+03  1.1017E+04  2.7237E+05  8.7112E+03
 Std  3.5113E+07  7.3281E+03  1.1719E+04  5.9498E+05  8.4254E+03
F20  Ave  2.8429E+03  2.5948E+03  2.5918E+03  2.5984E+03  2.4721E+03
 Std  2.3621E+02  3.0631E+02  2.2169E+02  2.2950E+02  1.6147E+02
F21  Ave  2.6456E+03  2.4266E+03  2.4048E+03  2.4830E+03  2.3790E+03
 Std  4.9200E+01  2.8566E+01  2.9849E+01  7.8238E+01  2.1273E+01
F22  Ave  8.1985E+03  3.6163E+03  2.4998E+03  3.5292E+03  3.0590E+03
 Std  1.1657E+03  2.2170E+03  9.7416E+02  2.5372E+03  1.6650E+03
F23  Ave  3.3429E+03  2.8811E+03  2.8190E+03  2.8311E+03  2.7687E+03
 Std  1.6483E+02  7.1602E+01  5.1557E+01  8.4405E+01  3.6948E+01
F24  Ave  3.5057E+03  3.1401E+03  3.0258E+03  3.0655E+03  2.9572E+03
 Std  1.8987E+02  1.4797E+02  8.8664E+01  7.3457E+01  5.1702E+01
F25  Ave  4.9820E+03  2.9128E+03  2.9079E+03  2.9841E+03  2.8966E+03
 Std  7.7587E+02  2.3433E+01  2.0616E+01  3.5814E+01  1.4973E+01
F26  Ave  1.0036E+04  5.3646E+03  5.0937E+03  5.1307E+03  4.8625E+03
 Std  9.7665E+02  1.3393E+03  1.2369E+03  1.8580E+03  6.8184E+02
F27  Ave  3.7218E+03  3.2679E+03  3.2728E+03  3.2633E+03  3.2542E+03
 Std  2.7573E+02  3.5905E+01  4.2051E+01  2.4235E+01  1.8172E+01
F28  Ave  6.2983E+03  3.2656E+03  3.2596E+03  3.3336E+03  3.2272E+03
 Std  8.3011E+02  3.3928E+01  2.5895E+01  3.5750E+01  2.1340E+01
F29  Ave  5.7299E+03  4.0691E+03  4.0744E+03  3.9579E+03  3.8141E+03
 Std  8.7091E+02  3.1038E+02  2.9991E+02  2.7573E+02  2.3310E+02
F30  Ave  1.3406E+08  1.1070E+04  1.1914E+04  7.3359E+05  9.4913E+03
 Std  1.2753E+08  4.2329E+03  4.7431E+03  9.7966E+05  3.7523E+03
Table 3. CEC2017 Benchmark Test Results (30 dim Scenario).
Function  Metric  MLPSO  MELGWO  MHWOA  ALA  HO  RIME  DOA  HSIDOA
F1  Mean  3.1270E+04  1.8526E+09  5.4819E+10  2.8014E+06  1.1204E+09  3.6284E+06  4.5105E+10  5.2966E+03
 Std  1.5956E+05  1.6479E+09  7.9750E+09  3.9049E+06  5.9524E+08  1.1205E+06  1.2608E+10  4.9114E+03
F2  Mean  4.9574E+55  4.6933E+32  1.6343E+44  5.6932E+25  1.7902E+34  2.4208E+17  3.5028E+45  2.8761E+17
 Std  1.3455E+56  2.4757E+33  5.6216E+44  3.0187E+26  8.4421E+34  5.5553E+17  1.7765E+46  1.5494E+18
F3  Mean  8.2734E+04  4.4231E+04  9.9823E+04  2.9996E+04  6.0863E+04  4.8007E+04  7.3302E+04  3.5583E+03
 Std  2.3139E+04  1.0504E+04  3.0670E+04  7.1183E+03  7.0739E+03  1.5455E+04  9.0115E+03  2.5536E+03
F4  Mean  4.7451E+02  6.1374E+02  1.1808E+04  5.2803E+02  7.3928E+02  5.3005E+02  1.2441E+04  4.9611E+02
 Std  2.5797E+01  1.0431E+02  2.5807E+03  4.1983E+01  1.3155E+02  3.5124E+01  4.8075E+03  3.2504E+01
F5  Mean  9.6144E+02  6.7012E+02  9.4359E+02  6.2116E+02  7.3095E+02  6.1149E+02  8.4650E+02  5.8610E+02
 Std  6.0084E+01  3.7747E+01  2.5493E+01  3.8448E+01  4.4060E+01  2.6732E+01  3.0003E+01  2.7226E+01
F6  Mean  6.7938E+02  6.4377E+02  6.9468E+02  6.1323E+02  6.6346E+02  6.1551E+02  6.7486E+02  6.0450E+02
 Std  6.5733E+00  8.5867E+00  6.8987E+00  4.8356E+00  7.9129E+00  6.6396E+00  1.0126E+01  4.3639E+00
F7  Mean  3.3723E+03  1.0206E+03  1.4678E+03  8.9394E+02  1.1872E+03  8.7450E+02  1.3547E+03  8.4614E+02
 Std  3.2123E+02  7.9978E+01  4.5614E+01  4.0177E+01  7.4175E+01  4.4001E+01  7.5680E+01  4.2431E+01
F8  Mean  1.1800E+03  9.4377E+02  1.1464E+03  9.1601E+02  9.6473E+02  9.1219E+02  1.0997E+03  8.8324E+02
 Std  4.7124E+01  2.8428E+01  2.1709E+01  3.7317E+01  2.9767E+01  2.7117E+01  2.9477E+01  2.3484E+01
F9  Mean  9.6576E+03  3.8466E+03  1.2879E+04  2.1724E+03  5.7819E+03  3.0110E+03  9.0304E+03  1.6487E+03
 Std  1.1458E+03  9.3298E+02  1.2380E+03  7.4705E+02  6.9465E+02  1.2508E+03  1.6327E+03  3.1329E+02
F10  Mean  5.2033E+03  5.1962E+03  8.9030E+03  6.0516E+03  5.4549E+03  4.7077E+03  8.4692E+03  4.8794E+03
 Std  4.1619E+02  4.8868E+02  4.4412E+02  9.2586E+02  6.4066E+02  5.3037E+02  4.1885E+02  5.4150E+02
F11  Mean  1.2777E+03  1.5075E+03  1.1395E+04  1.2746E+03  1.8634E+03  1.3540E+03  6.0002E+03  1.2044E+03
 Std  4.7362E+01  3.7977E+02  1.6023E+03  4.8575E+01  2.0287E+02  6.9090E+01  2.2234E+03  4.3344E+01
F12  Mean  1.4716E+06  4.9007E+07  1.2200E+10  2.6879E+06  2.0947E+08  1.3538E+07  7.5641E+09  3.3256E+05
 Std  1.3737E+06  5.0693E+07  3.8335E+09  1.9989E+06  1.9628E+08  1.4121E+07  4.2057E+09  3.2683E+05
F13  Mean  9.0664E+03  1.0796E+05  4.3341E+09  4.3011E+04  1.6812E+05  1.8663E+05  1.7727E+09  1.3499E+04
 Std  5.5296E+03  5.1431E+04  4.1122E+09  3.7551E+04  1.9646E+05  3.0973E+05  2.0480E+09  1.3470E+04
F14  Mean  4.9009E+04  1.5780E+05  8.9066E+06  1.9443E+03  6.9334E+05  9.9470E+04  9.6960E+04  4.3366E+03
 Std  4.1822E+04  2.1700E+05  7.1062E+06  3.2767E+02  7.5385E+05  8.1963E+04  2.2891E+05  3.6268E+03
F15  Mean  2.5087E+03  2.0223E+04  7.5911E+08  2.2556E+04  5.6019E+04  1.8567E+04  2.1011E+05  4.7110E+03
 Std  1.4630E+03  1.2378E+04  6.2280E+08  1.2894E+04  5.1269E+04  1.1945E+04  3.9075E+05  4.1723E+03
F16  Mean  3.0787E+03  3.0026E+03  5.6106E+03  2.7688E+03  3.5397E+03  2.8426E+03  4.5212E+03  2.6203E+03
 Std  3.0913E+02  3.4923E+02  7.0598E+02  3.3622E+02  4.4536E+02  2.8435E+02  7.8554E+02  3.1286E+02
F17  Mean  2.5307E+03  2.3903E+03  5.0275E+03  2.2125E+03  2.5399E+03  2.2729E+03  2.7427E+03  2.0865E+03
 Std  2.2833E+02  2.2853E+02  3.8829E+03  1.8639E+02  1.9554E+02  2.3122E+02  3.4737E+02  1.8566E+02
F18  Mean  8.6140E+05  1.3146E+06  9.2995E+07  8.5009E+04  1.0403E+06  1.6125E+06  1.1167E+06  7.1161E+04
 Std  6.2909E+05  1.2887E+06  5.3378E+07  4.6438E+04  1.2107E+06  1.3251E+06  1.6142E+06  3.5703E+04
F19  Mean  7.6620E+03  5.9325E+04  6.3442E+08  2.4156E+04  3.0877E+06  1.5625E+04  1.5553E+07  8.4925E+03
 Std  1.2516E+04  6.7094E+04  3.4187E+08  1.9803E+04  1.8516E+06  1.2844E+04  2.1369E+07  8.9472E+03
F20  Mean  2.8484E+03  2.5571E+03  3.1474E+03  2.5434E+03  2.6038E+03  2.4818E+03  2.8498E+03  2.5330E+03
 Std  1.6952E+02  2.0237E+02  1.4035E+02  1.9385E+02  1.1996E+02  1.8756E+02  1.9049E+02  2.2893E+02
F21  Mean  2.6705E+03  2.4517E+03  2.7533E+03  2.4037E+03  2.5188E+03  2.4164E+03  2.6510E+03  2.3807E+03
 Std  1.1660E+02  3.2686E+01  5.1329E+01  2.2220E+01  6.4893E+01  2.8626E+01  6.3033E+01  2.0745E+01
F22  Mean  6.7401E+03  5.4938E+03  9.9700E+03  6.7214E+03  5.0012E+03  4.8528E+03  8.1657E+03  3.1248E+03
 Std  9.5877E+02  2.0683E+03  9.9696E+02  2.1623E+03  2.1245E+03  1.8928E+03  1.2357E+03  1.6962E+03
F23  Mean  3.6692E+03  2.8453E+03  3.3452E+03  2.7745E+03  3.0489E+03  2.7900E+03  3.3481E+03  2.7646E+03
 Std  2.4062E+02  5.2068E+01  1.0949E+02  3.0180E+01  9.7080E+01  2.5888E+01  1.4608E+02  3.2687E+01
F24  Mean  3.5460E+03  2.9800E+03  3.4252E+03  2.9648E+03  3.2188E+03  2.9559E+03  3.5412E+03  2.9448E+03
 Std  2.6811E+02  3.5822E+01  1.2262E+02  4.8418E+01  9.7879E+01  3.6551E+01  1.6335E+02  4.9637E+01
F25  Mean  2.8987E+03  2.9841E+03  4.6046E+03  2.9152E+03  3.0524E+03  2.9278E+03  4.8391E+03  2.8976E+03
 Std  1.2194E+01  4.6532E+01  3.1319E+02  2.2085E+01  4.7797E+01  2.8021E+01  6.7954E+02  1.8787E+01
F26  Mean  8.0986E+03  6.1096E+03  1.1157E+04  5.0794E+03  7.2668E+03  5.2607E+03  1.0248E+04  4.7833E+03
 Std  3.6677E+03  7.2652E+02  1.0315E+03  4.6397E+02  1.3111E+03  4.6864E+02  1.0723E+03  9.9001E+02
F27  Mean  4.0565E+03  3.3140E+03  3.9735E+03  3.2331E+03  3.4840E+03  3.2462E+03  3.7122E+03  3.2512E+03
 Std  3.3248E+02  4.1330E+01  3.6249E+02  1.7326E+01  1.6929E+02  1.4748E+01  2.9355E+02  2.6256E+01
F28  Mean  3.2225E+03  3.4465E+03  6.7524E+03  3.6006E+03  3.5145E+03  3.2946E+03  6.1398E+03  3.2179E+03
 Std  2.7081E+01  1.2388E+02  5.7592E+02  9.2618E+02  8.7996E+01  3.8137E+01  8.2054E+02  1.8473E+01
F29  Mean  4.6800E+03  4.3955E+03  7.1400E+03  3.9792E+03  4.9677E+03  4.0063E+03  5.9416E+03  3.7679E+03
 Std  2.5162E+02  3.4615E+02  1.0256E+03  2.5054E+02  4.6356E+02  2.1300E+02  1.1188E+03  2.3575E+02
F30  Mean  5.0520E+05  2.2909E+06  1.2090E+09  8.7832E+04  2.4027E+07  7.7324E+05  1.4867E+08  8.4223E+03
 Std  2.1994E+05  1.8876E+06  6.8201E+08  9.8790E+04  2.6886E+07  8.3335E+05  1.2621E+08  2.2496E+03
Table 4. CEC2022 Benchmark Test Results (10 dim Scenario).
Function  Metric  MLPSO  MELGWO  MHWOA  ALA  HO  RIME  DOA  HSIDOA
F1  Mean  4.9601E+02  3.3442E+02  9.6824E+03  3.0049E+02  1.0521E+03  3.0072E+02  4.5259E+03  3.0000E+02
 Std  2.0747E+02  1.7095E+02  1.8972E+03  8.4628E−01  5.4176E+02  4.7839E−01  2.7382E+03  5.9585E−07
F2  Mean  4.0056E+02  4.1170E+02  9.1459E+02  4.0670E+02  4.2583E+02  4.1616E+02  5.8871E+02  4.0534E+02
 Std  1.7604E+00  1.9982E+01  3.6551E+02  2.5609E+00  4.3548E+01  2.5030E+01  3.7716E+02  3.6768E+00
F3  Mean  6.5359E+02  6.0625E+02  6.5718E+02  6.0013E+02  6.2672E+02  6.0039E+02  6.2960E+02  6.0015E+02
 Std  1.0141E+01  3.6659E+00  1.1940E+01  3.1155E−01  1.1537E+01  5.1708E−01  1.3499E+01  3.8707E−01
F4  Mean  8.4644E+02  8.1575E+02  8.5175E+02  8.2066E+02  8.2004E+02  8.2449E+02  8.3447E+02  8.1499E+02
 Std  1.0388E+01  7.0600E+00  1.0505E+01  8.9753E+00  6.5212E+00  1.3948E+01  9.6035E+00  8.6499E+00
F5  Mean  1.9012E+03  9.5204E+02  1.6712E+03  9.0102E+02  1.1639E+03  9.0071E+02  1.1658E+03  9.0305E+02
 Std  2.9397E+02  5.6603E+01  1.9954E+02  1.4080E+00  1.1485E+02  9.7073E−01  1.4642E+02  5.2438E+00
F6  Mean  2.0202E+03  3.2798E+03  2.2239E+06  1.9948E+03  1.9978E+03  4.0504E+03  4.4893E+07  2.1520E+03
 Std  2.7279E+02  1.3840E+03  2.1410E+06  3.8468E+02  1.5577E+02  2.1542E+03  2.4587E+08  6.7223E+02
F7  Mean  2.1212E+03  2.0389E+03  2.1244E+03  2.0238E+03  2.0509E+03  2.0270E+03  2.0579E+03  2.0237E+03
 Std  3.8866E+01  2.3349E+01  2.1714E+01  1.0383E+01  1.6466E+01  3.1254E+01  2.5765E+01  8.2092E+00
F8  Mean  2.2581E+03  2.2245E+03  2.2520E+03  2.2213E+03  2.2282E+03  2.2192E+03  2.2483E+03  2.2192E+03
 Std  2.7842E+01  2.7230E+00  1.7446E+01  5.3559E+00  4.7924E+00  5.9728E+00  4.4432E+01  6.0853E+00
F9  Mean  2.5063E+03  2.5298E+03  2.7283E+03  2.5293E+03  2.5580E+03  2.5293E+03  2.6262E+03  2.5293E+03
 Std  5.5560E+01  2.3635E+00  2.6698E+01  1.0151E−01  4.2625E+01  2.6266E−03  4.2445E+01  0.0000E+00
F10  Mean  2.8407E+03  2.5670E+03  2.8349E+03  2.5366E+03  2.5372E+03  2.5574E+03  2.6564E+03  2.5005E+03
 Std  3.5688E+02  6.9137E+01  5.0285E+02  1.1481E+02  5.6109E+01  6.0438E+01  2.4734E+02  1.7357E−01
F11  Mean  2.6420E+03  2.7063E+03  3.5633E+03  2.6467E+03  2.7691E+03  2.7143E+03  2.9675E+03  2.6452E+03
 Std  8.0235E+01  1.5355E+02  5.2237E+02  1.0907E+02  1.8825E+02  1.3965E+02  2.6377E+02  7.0286E+01
F12  Mean  2.9461E+03  2.8672E+03  2.9043E+03  2.8627E+03  2.8817E+03  2.8661E+03  2.8887E+03  2.8647E+03
 Std  3.4377E+01  1.0623E+01  5.8625E+01  1.8938E+00  2.3030E+01  2.0118E+00  2.7575E+01  1.3031E+00
Table 5. CEC2022 Benchmark Test Results (20 dim Scenario).
Function  Metric  MLPSO  MELGWO  MHWOA  ALA  HO  RIME  DOA  HSIDOA
F1  Mean  1.0843E+04  6.1408E+03  7.0949E+04  3.8715E+03  2.5448E+04  1.6718E+03  3.3103E+04  3.0759E+02
 Std  4.1975E+03  2.2575E+03  2.4125E+04  2.5716E+03  8.3489E+03  7.4383E+02  1.0851E+04  2.1123E+01
F2  Mean  4.2748E+02  4.9509E+02  2.4239E+03  4.5717E+02  5.3995E+02  4.6429E+02  1.5872E+03  4.5171E+02
 Std  2.0396E+01  3.8614E+01  6.9590E+02  1.6777E+01  4.9053E+01  2.9824E+01  4.8594E+02  1.6860E+01
F3  Mean  6.6651E+02  6.3153E+02  6.8477E+02  6.0414E+02  6.5184E+02  6.0648E+02  6.6301E+02  6.0072E+02
 Std  7.0833E+00  1.1118E+01  1.2002E+01  2.7317E+00  1.0533E+01  5.3141E+00  1.2348E+01  1.0079E+00
F4  Mean  9.5124E+02  8.6721E+02  9.7728E+02  8.6139E+02  8.8065E+02  8.6317E+02  9.4197E+02  8.4649E+02
 Std  2.0323E+01  1.9903E+01  1.5175E+01  1.4093E+01  1.5579E+01  2.4242E+01  1.8185E+01  2.2566E+01
F5  Mean  3.9613E+03  1.6082E+03  3.9340E+03  1.1491E+03  2.3824E+03  1.1106E+03  2.7412E+03  9.8782E+02
 Std  6.2069E+02  2.9975E+02  4.7054E+02  3.2076E+02  2.7837E+02  3.2018E+02  6.1830E+02  7.2051E+01
F6  Mean  2.2708E+03  4.8236E+03  2.0072E+09  1.6381E+04  1.2036E+04  1.3246E+04  1.6047E+08  6.1906E+03
 Std  8.8172E+02  3.6237E+03  1.4667E+09  8.1059E+03  3.2956E+04  6.4208E+03  2.1748E+08  3.8166E+03
F7  Mean  2.2234E+03  2.1326E+03  2.2505E+03  2.0896E+03  2.1446E+03  2.0937E+03  2.1925E+03  2.0778E+03
 Std  7.6263E+01  5.2554E+01  4.7371E+01  4.2928E+01  2.6826E+01  4.8956E+01  7.0002E+01  2.9512E+01
F8  Mean  2.3496E+03  2.2908E+03  2.3440E+03  2.2394E+03  2.2470E+03  2.2537E+03  2.3536E+03  2.2333E+03
 Std  7.5287E+01  7.3512E+01  1.1916E+02  2.3540E+01  1.5166E+01  4.5677E+01  8.6181E+01  3.0313E+01
F9  Mean  2.4726E+03  2.5018E+03  3.0079E+03  2.4808E+03  2.5516E+03  2.4817E+03  2.7387E+03  2.4808E+03
 Std  2.7711E+01  1.7481E+01  1.6002E+02  3.9357E−02  3.9347E+01  6.9541E−01  9.0225E+01  1.1157E−09
F10  Mean  4.5285E+03  3.5971E+03  6.1164E+03  3.8215E+03  4.5215E+03  2.6965E+03  4.9643E+03  2.9219E+03
 Std  5.9652E+02  7.9996E+02  1.5845E+03  8.7856E+02  8.9723E+02  2.5978E+02  1.7090E+03  5.5181E+02
F11  Mean  2.7400E+03  3.1569E+03  8.5744E+03  2.9890E+03  3.1892E+03  2.9234E+03  6.9420E+03  2.9074E+03
 Std  1.5222E+02  3.8331E+02  6.5678E+02  1.5913E+02  3.5204E+02  6.1345E+01  1.2553E+03  6.9189E+01
F12  Mean  3.5861E+03  2.9940E+03  3.2414E+03  2.9605E+03  3.1368E+03  2.9764E+03  3.1859E+03  2.9729E+03
 Std  1.8476E+02  3.9662E+01  1.7137E+02  2.1140E+01  1.4377E+02  4.4805E+01  1.3799E+02  2.4334E+01
Table 6. Friedman test ranking summary.
Suites  CEC2017  CEC2022
Dimensions  30  10  20
Algorithms  M.R  T.R  M.R  T.R  M.R  T.R
MLPSO  4.87  5  5.50  6  5.17  5
MELGWO  4.30  4  3.92  3  4.08  4
MHWOA  7.70  8  7.75  8  7.67  8
ALA  2.80  2  2.17  2  2.83  2
HO  5.33  6  4.75  5  5.25  6
RIME  3.20  3  3.92  3  2.83  2
DOA  6.47  7  6.25  7  6.58  7
HSIDOA  1.33  1  1.75  1  1.58  1
Table 7. Evaluation metrics used for image segmentation.
Metric  Description  Reference
Peak Signal-to-Noise Ratio (PSNR)  Measures reconstruction quality based on pixel-level error  [41]
Structural Similarity (SSIM)  Evaluates structural similarity in terms of luminance, contrast, and structure  [10,42]
Feature Similarity (FSIM)  Assesses feature similarity using phase congruency and gradient information  [10,42]