Article

An Enhanced Human Evolutionary Optimization Algorithm for Global Optimization and Multi-Threshold Image Segmentation

1 Department of Space and Culture Design, Graduate School of Techno Design (TED), Kookmin University, Seoul 02707, Republic of Korea
2 College of Design, Hanyang University, Ansan 15588, Republic of Korea
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(5), 282; https://doi.org/10.3390/biomimetics10050282
Submission received: 24 March 2025 / Revised: 23 April 2025 / Accepted: 27 April 2025 / Published: 1 May 2025
(This article belongs to the Special Issue Nature-Inspired Metaheuristic Optimization Algorithms 2025)

Abstract:
Threshold-based image segmentation aims to divide an image into a number of regions with different feature attributes in order to facilitate the extraction of image features for image detection and pattern recognition. However, existing threshold segmentation methods tend to fall into locally optimal thresholds, resulting in poor segmentation quality. To improve segmentation performance, this study proposes an enhanced Human Evolutionary Optimization Algorithm (HEOA), termed the CLNBHEOA, which adopts Otsu's method as the objective function. In the CLNBHEOA, firstly, population diversity is enhanced using a Chebyshev–Tent chaotic mapping refraction opposites-based learning strategy. Secondly, an adaptive learning strategy is proposed which combines differential learning with adaptive factors to improve the algorithm's ability to escape locally optimal thresholds. In addition, a nonlinear control factor is proposed to better balance the global exploration phase and the local exploitation phase of the algorithm. Finally, a three-point guidance strategy based on Bernstein polynomials is proposed, which strengthens the local exploitation ability of the algorithm and improves the efficiency of the optimal threshold search. The optimization performance of the CLNBHEOA was then evaluated on the CEC2017 benchmark functions; the experiments demonstrated that the CLNBHEOA outperformed the comparison algorithms in over 90% of the test cases, exhibiting higher optimization performance and search efficiency. Finally, the CLNBHEOA was applied to solve six multi-threshold image-segmentation problems.
The experimental results indicated that the CLNBHEOA achieved a winning rate of over 95% in terms of fitness function value, peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and feature similarity (FSIM), suggesting that it can be considered a promising approach for multi-threshold image segmentation.

1. Introduction

Image segmentation aims to divide an image into several regions with different feature attributes in order to facilitate the extraction of important feature information, which is a step preliminary to image detection and pattern recognition. Currently, image-segmentation techniques are widely used in remote sensing satellites [1], medical images [2], intelligent transportation [3] and space technology [4]. Common image-segmentation methods mainly include threshold segmentation methods [5], clustering segmentation methods [6] and region extraction methods [7]. The high processing efficiency and high reliability performance of the threshold segmentation method make it one of the most widely used image-segmentation techniques [8]. The most critical factor of the threshold segmentation method lies in the identification of the optimal segmentation threshold, and the current commonly employed method is to model it as an optimization problem, maximize the inter-class variance of different regions as the objective function using Otsu’s method [9], and search for the optimal segmentation threshold using meta-heuristic optimization algorithms, which makes it possible to minimize the cost of the search while determining the optimal threshold [10].
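As a concrete illustration of this formulation, the following Python sketch computes Otsu's between-class variance for a grayscale histogram and an arbitrary set of thresholds; a metaheuristic optimizer would then search for the threshold vector that maximizes this value. The function name and interface are illustrative, not taken from any specific paper.

```python
import numpy as np

def otsu_between_class_variance(hist, thresholds):
    """Between-class variance of a grayscale histogram split at the given
    thresholds (illustrative sketch; this is what the optimizer maximizes)."""
    hist = hist / hist.sum()                      # normalize to probabilities
    levels = np.arange(len(hist))
    total_mean = (hist * levels).sum()
    # Class boundaries: [0, t1), [t1, t2), ..., [tk, L)
    edges = [0] + sorted(thresholds) + [len(hist)]
    variance = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = hist[lo:hi].sum()                     # class probability
        if w > 0:
            mu = (hist[lo:hi] * levels[lo:hi]).sum() / w  # class mean
            variance += w * (mu - total_mean) ** 2
    return variance
```

A well-placed threshold between two histogram peaks yields a much larger variance than one that lumps both peaks into a single class, which is exactly the signal the threshold search exploits.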
Currently, there are four main types of common metaheuristic optimization algorithms, namely, evolutionary algorithms, swarm intelligence algorithms, physical and chemical algorithms and human-based algorithms [11]. Among the main representatives of evolutionary algorithms are Genetic Algorithm (GA) [12], Differential Evolution (DE) [13] and Evolutionary Strategies (ES) [14]. Some typical representatives of swarm intelligence algorithms are Particle Swarm Optimization (PSO) [15], Ant Colony Optimization (ACO) [16] and Gray Wolf Optimizer (GWO) [17]. Some typical representatives of physical and chemical algorithms are Multi-Verse Optimizer (MVO) [18], Magnetic Optimization Algorithm (MOA) [19] and Atom Search Optimization (ASO) [20]. The main representatives of human-based algorithms are Teaching–Learning Based Optimization (TLBO) [21], Search and Rescue Optimization (SAR) [22] and Student Psychology-Based Optimization (SPBO) [23]. Thanks to the lightweight computational characteristics of the meta-heuristic optimization algorithm, researchers have widely combined it with the Otsu method to propose image-segmentation methods based on the meta-heuristic optimization algorithm.
For example, to address the low real-time efficiency of existing Otsu methods, Huang et al. [24] proposed an image-segmentation method (FOA-OTSU) that combines the Fruit Fly Optimization Algorithm (FOA) and the Otsu algorithm with the aim of improving the real-time efficiency of Otsu segmentation. The experimental results show that this method significantly reduces the segmentation time while keeping the segmentation effect unchanged, exhibiting faster convergence and higher real-time performance. Ma et al. [25] proposed an improved Whale Optimization Algorithm (RAV-WOA) for multi-threshold image segmentation using the Otsu method as the objective function. The global search and local exploitation capabilities of the algorithm are improved by introducing a reverse learning strategy and an adaptive weighting strategy. The experimental results confirm that the proposed RAV-WOA outperforms other algorithms in terms of convergence speed and accuracy, as well as segmentation quality and stability, and is suitable for the segmentation tasks associated with grayscale and color images. Chen et al. [26] proposed a new adaptive fractional order genetic particle swarm optimization (FOGPSO) algorithm for improving the search performance of the Otsu image-segmentation algorithm. The algorithm combines the advantages of genetic algorithm and particle swarm optimization by adaptively adjusting the fractional-order calculus operators to optimize the velocity and position updates of the particles. The experimental results show that FOGPSO outperforms other existing methods in several metrics, such as region contrast and peak signal-to-noise ratio, proving its effectiveness and superiority in image segmentation. Qin et al. [27] proposed an Otsu multi-threshold segmentation algorithm based on an improved ant colony optimization algorithm. 
By combining the Levy flight pattern and global transfer probability in the search process of the algorithm, a faster convergence speed and a more effective threshold search are realized. The experimental results show that the algorithm outperforms the traditional Otsu algorithm, as well as the Otsu algorithm based on the traditional ant colony optimization algorithm, in finding the optimal thresholds. Fan et al. [28] proposed an Otsu image-segmentation algorithm based on fractional order Moth–Flame Optimization (MFO), aiming to solve the problems of low segmentation accuracy, slow convergence and tendency to fall into local optimality, which the traditional MFO algorithm is associated with in image-segmentation processing. The position of the Moth–Flame population is adaptively adjusted by utilizing the memory and heritability of fractional order differentiation. The experimental results show that the algorithm outperforms the traditional MFO algorithm in terms of convergence speed, segmentation accuracy and fitness function value. Abdul et al. [29] proposed a multilevel threshold segmentation method based on Gray Wolf Optimizer (GWO) to solve the problem of the excessive computational complexity of traditional methods when the number of thresholds increases. Its performance has been verified by using standard test images and comparisons with PSO and BFO methods, and the results show that the proposed method has significant advantages in terms of stability, segmentation quality and computational speed. Wu et al. [30] proposed an improved Teaching–Learning-Based Optimization algorithm (DI-TLBO) and applied it to the multi-threshold image-segmentation problem. By introducing random numbers, self-feedback learning and variance crossover strategies, DI-TLBO is made to perform well in global optimization and exploration ability. 
The experimental results show that the DI-TLBO-based method has high accuracy and stability in segmentation of standard test images and cast X-ray images and can effectively recognize and segment defects in images. Mohamed et al. [31] proposed an improved WHALE optimization algorithm (MLWOA) for multilevel threshold image segmentation. By integrating memory mechanism, multi-leader approach, self-learning strategy and the Levy flight method, MLWOA avoids the limitations of traditional WOA. The experimental results show that MLWOA performs well in terms of image-segmentation performance metrics when using Otsu method as the fitness function.
The studies above combine optimization algorithms with the Otsu method to propose optimization-based image-segmentation methods, which greatly enhance the segmentation performance of the Otsu method and help to better extract image feature information. Although current optimization algorithms have achieved good results in the field of image segmentation, as the segmentation dimension increases, existing methods still tend to fall into locally optimal segmentation thresholds, which results in poor segmentation. Therefore, a novel, efficient and robust optimization tool is needed to better cope with this problem. Fortunately, the Human Evolutionary Optimization Algorithm (HEOA) [32] has been shown to be a robust method with efficient optimization performance, and studies have also demonstrated that it is highly scalable; moreover, it has not yet been applied to image segmentation. Therefore, in order to alleviate the limitations of existing threshold segmentation methods while expanding the application areas of HEOA, this study applies HEOA to the multi-threshold image-segmentation problem. In addition, considering the challenges associated with an increasing number of segmentation thresholds, and in order to further exploit the efficient performance of HEOA, this study proposes an enhanced HEOA that combines four learning strategies, referred to as the CLNBHEOA. In this algorithm, firstly, the Chebyshev–Tent chaotic mapping refraction opposites-based learning strategy is used to ensure that the initial population has better solution-space traversal ability.
Secondly, an adaptive learning strategy is proposed, an approach which improves the ability of the algorithm to jump out of the locally optimal threshold by learning from individual information gaps possessing different properties while incorporating adaptive factors. In addition, a nonlinear control factor is proposed to better balance the global exploration phase and the local exploitation phase of the algorithm. Finally, a three-point guidance strategy based on Bernstein polynomials is proposed to enhance the local exploitation of the algorithm. The main contributions of this paper are as follows:
  • The initialization population scheme based on Chebyshev–Tent chaotic mapping refraction opposites-based learning strategy is proposed to enhance the global search capability of the algorithm.
  • An adaptive learning strategy is proposed which improves the ability of the algorithm to jump out of the locally optimum threshold by learning from the information gaps of individuals possessing different properties, while incorporating adaptive factors.
  • A nonlinear control factor is proposed to better balance the global exploration phase and the local exploitation phase of the algorithm and to improve image threshold-segmentation performance.
  • A three-point guidance strategy based on Bernstein polynomials is proposed to enhance the local exploitation of the algorithm.
  • Combining the above four learning strategies, the CLNBHEOA is proposed and applied to solve the multi-threshold image-segmentation problem to achieve better image-segmentation performance.
The subsequent portions of the work are structured as follows: Section 2 outlines the mathematical model and execution logic of HEOA. Section 3 provides a detailed overview of the four learning strategies proposed in this paper and proposes the CLNBHEOA in conjunction with the learning strategies. Section 4 evaluates the optimization performance of the CLNBHEOA using the CEC2017 function test set [33]; the results confirm that the CLNBHEOA is a method that possesses efficient optimization capabilities. In Section 5, image-segmentation experiments are conducted on six images using the CLNBHEOA, and the experimental results show that it has efficient image-segmentation performance. Section 6 gives the conclusion of this paper and future research directions.

2. Related Works

Inspired by human evolutionary behaviors in complex environments, including human adaptation to the environment as well as the search for better individuals, the Human Evolutionary Optimization Algorithm (HEOA) was proposed as a novel meta-heuristic algorithm in 2024; it consists of three main phases: the population initialization phase, the human exploration phase and the human development phase. These will be mathematically modeled in detail in the following.

2.1. Population Initialization Phase

At the initial stage of human evolution, population initialization using the logistic chaotic mapping method generates a set of initialized individuals that serve as the initial solution set for the optimization problem, in which each individual represents a candidate solution. The individuals are generated by Equation (1), and the $N$ individuals are subsequently pooled into an initialized population $pop = [X_1, X_2, \ldots, X_i, \ldots, X_N]$.
$X_i^1 = LB + (UB - LB) \cdot L_i, \quad 1 \le i \le N$
where $X_i^1$ denotes the $i$-th individual in the initial population; $UB$ and $LB$ denote the upper and lower boundary constraints of the problem to be optimized, respectively, both vectors of size $1 \times Dim$, where $Dim$ denotes the dimensionality of the problem variables; $N$ denotes the population size; and $L_i$ denotes the $i$-th component of the logistic chaotic mapping sequence, computed using Equation (2):
$L_i = \alpha \cdot L_{i-1} \cdot (1 - L_{i-1}), \quad 0 \le L_0 \le 1, \quad i = 1, 2, \ldots, N, \quad \alpha = 4$
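As a minimal sketch, the initialization of Equations (1) and (2) can be written as follows; seeding one chaotic sequence per dimension is one common reading of the scheme and is an assumption here, as is the seed value.

```python
import numpy as np

def logistic_init(N, dim, lb, ub, alpha=4.0, seed=0.7):
    """Population initialization via the logistic chaotic map (cf. Eqs. (1)-(2)).
    Chaotic values in (0, 1) are mapped affinely into [lb, ub]."""
    L = np.empty((N, dim))
    x = np.full(dim, seed)            # L_0 in (0, 1), avoiding fixed points
    for i in range(N):
        x = alpha * x * (1.0 - x)     # logistic map update
        L[i] = x
    return lb + (ub - lb) * L
```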

2.2. Human Exploration Phase

After generating the initial population, individuals explore unknown regions so that the algorithm can reach regions in which a globally optimal solution is more likely to exist, thereby improving its optimization accuracy. Specifically, in the human exploration phase, individual information is computed through Equation (3):
$X_i^{t+1} = \beta \cdot \left(1 - \frac{t}{Maxiter}\right) \cdot (X_i^t - X_{best}) \cdot Levy(dim) + X_{best} \cdot \left(1 - \frac{t}{Maxiter}\right) + (X_{mean}^t - X_{best}) \cdot floor\!\left(\frac{rand}{f_{jump}}\right) \cdot f_{jump}$
where $t$ denotes the current iteration number, $Maxiter$ denotes the maximum number of iterations of the algorithm, $X_i^{t+1}$ denotes the information of the $i$-th individual in the $(t+1)$-th iteration, $X_i^t$ denotes the information of the $i$-th individual in the $t$-th iteration, $X_{best}$ denotes the optimal individual in the current population, $floor(\cdot)$ denotes the downward rounding operation, $rand$ denotes a pseudo-random number generated in the interval [0, 1], and $X_{mean}^t$ denotes the average information of all individuals in the current population, computed using Equation (4):
$X_{mean}^t = \frac{1}{N} \sum_{i=1}^{N} X_i^t$
$\beta$ denotes the adaptive function, computed using Equation (5):
$\beta = 0.2 \cdot \left(1 - \frac{t}{Maxiter}\right) \cdot (X_i^t - X_{mean}^t)$
$Levy(dim)$ denotes the Levy-distributed random number with parameter $dim$, computed using Equation (6):
$Levy(dim) = \frac{\mu \cdot \sigma}{|\nu|^{1/\gamma}}, \quad \mu \sim N(0, dim), \quad \nu \sim N(0, dim), \quad \sigma = \left(\frac{\Gamma(1+\gamma) \cdot \sin\left(\frac{\pi\gamma}{2}\right)}{\Gamma\left(\frac{1+\gamma}{2}\right) \cdot \gamma \cdot 2^{\frac{\gamma-1}{2}}}\right)^{1/\gamma}$
where $\gamma$ takes the value 1.5 and $\sin(\cdot)$ denotes the sine function. Finally, $f_{jump}$ denotes the jump coefficient, computed using Equation (7):
$f_{jump} = \frac{LB(1) - UB(1)}{\delta}, \quad \delta \in [100, 2000]$
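The Levy step of Equation (6) is usually generated with Mantegna's algorithm; the sketch below follows that standard formulation with $\gamma = 1.5$. Using unit-variance normals is an assumption on my part where the equation writes $N(0, dim)$.

```python
import math
import numpy as np

def levy(dim, gamma=1.5, rng=None):
    """Levy-distributed step via Mantegna's algorithm (cf. Eq. (6))."""
    rng = np.random.default_rng() if rng is None else rng
    # Standard Mantegna scale factor sigma for the numerator normal
    sigma = (math.gamma(1 + gamma) * math.sin(math.pi * gamma / 2)
             / (math.gamma((1 + gamma) / 2) * gamma * 2 ** ((gamma - 1) / 2))
             ) ** (1 / gamma)
    mu = rng.normal(0.0, 1.0, dim) * sigma
    nu = rng.normal(0.0, 1.0, dim)
    return mu / np.abs(nu) ** (1 / gamma)
```

The heavy tails of this distribution produce occasional long jumps, which is what lets the exploration phase escape the neighborhood of the current best individual.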

2.3. Human Development Phase

As mentioned earlier, the human exploration phase locates the potential globally optimal solution region, and then the human development phase requires further exploitation of the potentially optimal region to improve the algorithm’s convergence speed and optimization accuracy. Specifically, the human development phase consists of four roles, i.e., leaders, explorers, followers and losers, each of which employs a different search strategy to make the global optimal solution easier to locate by utilizing diversified search strategies. The following section describes in detail the individual update strategies corresponding to the four roles.

2.3.1. Leaders Update Strategy

The top 40% of individuals in the population, i.e., those with the smallest fitness function values, are defined as leaders. Leaders need to further exploit their surrounding area to improve the optimization accuracy of the algorithm; since they already possess a large amount of knowledge, the leaders' update strategy is expressed using Equation (8):
$X_i^{t+1} = \begin{cases} \omega \cdot X_i^t \cdot \exp\left(\frac{-t}{rand \cdot Maxiter}\right), & R < A \\ \omega \cdot X_i^t + R_n \cdot ones(1, dim), & R \ge A \end{cases}$
where $\exp(\cdot)$ denotes the exponential operation with base $e$, $R$ denotes a pseudo-random number generated in the interval [0, 1], $A$ is set to the constant 0.6, $R_n$ denotes a random number obeying the standard normal distribution, $ones(1, dim)$ denotes an all-ones vector of size $1 \times dim$ and $\omega$ denotes the knowledge-acquisition ease coefficient, computed using Equation (9):
$\omega = 0.2 \cdot \cos\left(\frac{\pi}{2} \cdot \left(1 - \frac{t}{Maxiter}\right)\right)$
where $\cos(\cdot)$ denotes the cosine function.

2.3.2. Explorers Update Strategy

Individuals in the population that rank between 40% and 80% in terms of their fitness function values are defined as explorers; these are designed to explore unknown areas, giving the algorithm an advantage in discovering the globally optimal solution. The updating strategy for the explorers is expressed using Equation (10).
$X_i^{t+1} = R_n \cdot \exp\left(\frac{(X_{worst}^t)^2 - (X_i^t)^2}{i^2}\right)$
where $X_{worst}^t$ denotes the individual with the worst fitness function value in the $t$-th iteration.

2.3.3. Followers Update Strategy

Individuals that rank between 80% and 90% in terms of their fitness function values within the population are defined as followers; the follower information is guided by the current optimal individuals in order to rapidly improve the quality of the individuals; specifically, the updating strategy for the followers is expressed using Equation (11).
$X_i^{t+1} = X_i^t + \omega \cdot R_d \cdot (X_{best}^t - X_i^t)$
where $X_{best}^t$ denotes the optimal individual in the $t$-th iteration and $R_d$ denotes a random number in the interval [1, $dim$].

2.3.4. Losers Update Strategy

The last 10% of individuals in the population in terms of fitness function value are defined as losers. Since the quality of these individuals is very low, the globally optimal individual is used for substitution and guidance in order to significantly improve their quality; the losers' update strategy is expressed using Equation (12):
$X_i^{t+1} = X_{best} + (X_{best} - X_i^t) \cdot R_n$

2.4. Implementation of HEOA

In this section, the execution logic of HEOA is briefly introduced. When using HEOA to solve an optimization problem, the population initialization operation is carried out first; the algorithm then enters its main loop, in which individual information is updated through the combination of the human exploration phase and the human development phase, effectively improving the quality of the candidate solutions. Finally, when the loop count reaches the maximum number of iterations, the best individual information is output as the best solution to the optimization problem. Algorithm 1 gives the pseudo-code of HEOA.
Algorithm 1: The pseudo-code of the HEOA
  Input: N, Dim, UB, LB, Maxiter, A = 0.6
  Output: global best solution X_best
  •  %Population initialization phase
  •  Generate the initialized population pop = [X_1, X_2, ..., X_i, ..., X_N] according to Equation (1)
  •  Calculate the fitness function values of the population fit = [fit_1, fit_2, ..., fit_i, ..., fit_N]
  • for t = 1 : Maxiter
  •    [~, ind] = sort(fit)
  •    X_best = pop(ind(1))
  •    for i = 1 : N
  •       if t ≤ (1/4) · Maxiter
  •          %Human exploration phase
  •          Update individual information according to Equation (3)
  •       else
  •          %Human development phase
  •          Categorize individuals into four roles based on the fitness function values fit: Leaders, Explorers, Followers, Losers
  •          if the individual belongs to the leaders
  •             %Leaders update strategy
  •             Update individual information according to Equation (8)
  •          end
  •          if the individual belongs to the explorers
  •             %Explorers update strategy
  •             Update individual information according to Equation (10)
  •          end
  •          if the individual belongs to the followers
  •             %Followers update strategy
  •             Update individual information according to Equation (11)
  •          end
  •          if the individual belongs to the losers
  •             %Losers update strategy
  •             Update individual information according to Equation (12)
  •          end
  •       end
  •    end
  • end
  •  Output the global best solution X_best
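The fitness-based role partition used in the development phase (top 40% leaders, next 40% explorers, next 10% followers, last 10% losers) can be sketched as follows; the function name and the minimization convention are illustrative.

```python
import numpy as np

def assign_roles(fitness):
    """Partition a population into the four HEOA roles by fitness rank
    (minimization: smaller fitness = better individual)."""
    N = len(fitness)
    order = np.argsort(fitness)            # ascending: best individuals first
    roles = np.empty(N, dtype=object)
    roles[order[: int(0.4 * N)]] = "leader"           # top 40%
    roles[order[int(0.4 * N): int(0.8 * N)]] = "explorer"  # next 40%
    roles[order[int(0.8 * N): int(0.9 * N)]] = "follower"  # next 10%
    roles[order[int(0.9 * N):]] = "loser"             # last 10%
    return roles
```

Each role's indices would then be dispatched to the corresponding update rule, Equations (8), (10), (11) and (12).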

3. Proposed Model

The original HEOA tends to fall into locally optimal thresholds when dealing with the multi-threshold image-segmentation problem, resulting in poor segmentation. The essential reason is that HEOA lacks population diversity during its solution process, which weakens the algorithm's global search ability and its ability to escape locally optimal thresholds; at the same time, after locating a potentially optimal threshold region, HEOA's local exploitation ability is also somewhat insufficient. In addition, HEOA does not fully consider the balance between the global exploration phase and the local exploitation phase; these defects cause losses in the convergence speed and optimization accuracy of the algorithm. To alleviate these problems, this section proposes an enhanced HEOA, called the CLNBHEOA, which combines four learning strategies. Firstly, a refraction opposites-based learning strategy built on Chebyshev–Tent chaotic mapping is proposed as the population initialization scheme, which gives the initial population better solution-space traversal and improves the global search capability of the algorithm. Secondly, an adaptive learning strategy is proposed to improve the algorithm's ability to escape locally optimal thresholds by learning from the information gaps of individuals possessing different properties, while incorporating adaptive factors. In addition, a nonlinear control factor is proposed to better balance the global exploration phase and the local exploitation phase. Finally, a three-point guidance strategy based on Bernstein polynomials is proposed to enhance the local exploitation of the algorithm.
Through the above improvements, the optimization accuracy of the algorithm in solving the multi-threshold image-segmentation problem is effectively improved, and the performance of the algorithm is enhanced. These improvement points are described in detail in the following subsections, in which the CLNBHEOA of this study is proposed.

3.1. Chebyshev–Tent Chaotic Mapping Refraction Opposites-Based Learning Strategy

Li et al. [34] pointed out that by introducing a chaotic mapping strategy for population initialization, the algorithm can effectively enhance the traversal and non-repeatability of the algorithm throughout the iterative process and improve the algorithm population diversity. Meanwhile, Yang et al. [35] pointed out that opposition learning can effectively enhance the algorithm search diversity and efficiency by generating opposing candidate solutions. Based on this inspiration, this section proposes a population initialization scheme for a Chebyshev–Tent chaotic mapping refraction opposites-based learning strategy to enhance the quality of initial candidate solutions; this in turn improves the algorithm’s global search capability. The specific implementation of the initialization strategy is introduced in detail below. Firstly, the Chebyshev chaotic mapping sequence is computed using Equation (13), and the Tent chaotic mapping sequence is computed using Equation (14).
$C_{i+1} = \cos(\phi \cdot \cos^{-1}(C_i))$
where $i$ denotes the number of mappings, $C_i$ denotes the mapped value of the $i$-th Chebyshev mapping, $\phi$ denotes the control parameter, which takes the constant value 5, and $C_0$ is set to the constant 0.152.
$t_{i+1} = \begin{cases} \dfrac{t_i}{u}, & 0 \le t_i \le u \\ \dfrac{1 - t_i}{1 - u}, & u < t_i \le 1 \end{cases}$
where $i$ denotes the number of mappings, $t_i$ denotes the mapped value of the $i$-th Tent mapping, $u$ denotes the control parameter, which takes the constant value 0.4, and $t_0$ is set to the constant 0.152.
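A minimal sketch of the two chaotic generators of Equations (13) and (14), using the constants stated above ($\phi = 5$, $u = 0.4$, initial value 0.152):

```python
import math

def chebyshev_sequence(n, phi=5.0, c0=0.152):
    """Chebyshev chaotic map: C_{i+1} = cos(phi * arccos(C_i)), values in [-1, 1]."""
    seq, c = [], c0
    for _ in range(n):
        c = math.cos(phi * math.acos(c))
        seq.append(c)
    return seq

def tent_sequence(n, u=0.4, t0=0.152):
    """Tent chaotic map with control parameter u, values in [0, 1]."""
    seq, t = [], t0
    for _ in range(n):
        t = t / u if t <= u else (1 - t) / (1 - u)
        seq.append(t)
    return seq
```

Note the Chebyshev map produces values in $[-1, 1]$ while the Tent map stays in $[0, 1]$; Equations (15) and (16) rescale both into the search bounds.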
Subsequently, the Chebyshev population $pop^c = [X_1^c, X_2^c, \ldots, X_i^c, \ldots, X_N^c]$ is generated from the Chebyshev chaotic mapping sequence above; similarly, the Tent population $pop^T = [X_1^T, X_2^T, \ldots, X_i^T, \ldots, X_N^T]$ is generated from the Tent chaotic mapping sequence, with individual information calculated using Equations (15) and (16), respectively.
$X_i^c = LB + (UB - LB) \cdot C_i, \quad 1 \le i \le N$
$X_i^T = LB + (UB - LB) \cdot t_i, \quad 1 \le i \le N$
Subsequently, the refraction opposition learning strategy is utilized to generate the corresponding Chebyshev refraction opposition population $pop^{c,o}$ and Tent refraction opposition population $pop^{T,o}$ from the Chebyshev and Tent populations. Single individuals are calculated using Equations (17) and (18), respectively.
$X_{i,j}^{c,o} = \frac{UB_j + LB_j}{2} + \frac{UB_j + LB_j}{2\delta} - \frac{X_{i,j}^c}{\delta}$
$X_{i,j}^{T,o} = \frac{UB_j + LB_j}{2} + \frac{UB_j + LB_j}{2\delta} - \frac{X_{i,j}^T}{\delta}$
where $X_{i,j}^{c,o}$ denotes the $j$-th dimension of the refraction opposite of the $i$-th Chebyshev individual and $X_{i,j}^{T,o}$ denotes the $j$-th dimension of the refraction opposite of the $i$-th Tent individual. The term $\delta$ is defined as the scaling factor $H_1 / H_2 = 1.5$, where the meanings of $H_1$ and $H_2$ are shown in the refraction opposition schematic in Figure 1.
Subsequently, the generated Chebyshev population $pop^c$, Tent population $pop^T$, Chebyshev refraction opposition population $pop^{c,o}$ and Tent refraction opposition population $pop^{T,o}$ are merged into a population of size $4N$, denoted $pop^{total}$. The fitness function values of the individuals in $pop^{total}$ are then calculated from the objective function of the problem to be optimized; assuming a minimization problem, the individuals in $pop^{total}$ are sorted in ascending order of fitness, and the top $N$ individuals form the initialized population $pop = [X_1, X_2, \ldots, X_i, \ldots, X_N]$ for subsequent iterations. By combining chaotic mapping and refraction opposition learning in the initialization, the algorithm attains higher solution-space coverage in the initialization stage, which helps improve its global search performance in later iterations and thus its multi-threshold image-segmentation accuracy.
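The refraction opposition step and the final top-N selection can be sketched as follows; scalar bounds and the helper names are illustrative assumptions.

```python
import numpy as np

def refraction_opposite(pop, lb, ub, delta=1.5):
    """Refraction opposition-based learning (cf. Eqs. (17)-(18)): reflect each
    candidate about the domain midpoint, scaled by delta = H1/H2 = 1.5."""
    mid = (ub + lb) / 2.0
    return mid + mid / delta - pop / delta

def select_initial_population(candidates, objective, N):
    """Keep the N best candidates (minimization) as the initial population."""
    fit = np.array([objective(x) for x in candidates])
    return candidates[np.argsort(fit)[:N]]
```

In the full scheme, `candidates` would be the concatenation of the Chebyshev, Tent, and their two refraction opposition populations (size 4N), from which the best N survive.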

3.2. Adaptive Learning Strategy

Zhang et al. [36] pointed out that learning from the gaps between individuals possessing different properties helps improve an algorithm's global search ability, which in turn alleviates local stagnation. Based on this inspiration, in order to alleviate the insufficient optimization accuracy caused by the limited global exploration capability of HEOA in solving multi-threshold image-segmentation problems, this section proposes an adaptive learning strategy which takes into account the disparities between individuals of different natures while incorporating an adaptive factor to enhance the global exploration capability of the algorithm. The adaptive learning strategy is described in detail in the following. Firstly, the individual gaps considered in this study are the gap between the best individual and a better individual, the gap between the best individual and a worse individual, the gap between the better individual and the worse individual, and the gap between two random individuals in the population; the four sets of gaps are expressed using Equation (19).
$Gap_1 = X_{best} - X_{better}, \quad Gap_2 = X_{best} - X_{worse}, \quad Gap_3 = X_{better} - X_{worse}, \quad Gap_4 = X_{rand1} - X_{rand2}$
where $Gap_1$ denotes the gap between the best individual and the better individual, $Gap_2$ denotes the gap between the best individual and the worse individual, $Gap_3$ denotes the gap between the better individual and the worse individual and $Gap_4$ denotes the gap between the two random individuals. $X_{best}$ denotes the best individual in the population; $X_{better}$ denotes the better individual, defined as a random individual among the 5 individuals with the smallest fitness function values; $X_{worse}$ denotes the worse individual, defined as a random individual among the 5 individuals with the largest fitness function values; and $X_{rand1}$ and $X_{rand2}$ denote two distinct random individuals in the population. Subsequently, in order to reflect the degree of learnability associated with each set of gaps, the learnability of each set was determined using Equation (20).
L F k = G a p k k = 1 4 G a p k , ( k = 1 , 2 , 3 , 4 )
where L F k denotes the degree of learnability of the k t h set of gaps and “ · ” denotes the operation of modeling the vector. Subsequently, the learning ability of an individual needs to be taken into account; the smaller the value of the fitness function, the higher the quality of the representation of the individual, and in order to ensure its quality, the learning ability should be reduced, while, on the contrary, the larger the value of the fitness function, the greater the learning ability of the individual. Therefore, Equation (21) is used to represent the individual learning ability.
$$SF_i = \frac{fit_i}{fit_{max}}, \quad 1 \le i \le N \tag{21}$$
where $SF_i$ denotes the learning ability of the $i$-th individual and $fit_{max}$ denotes the maximum fitness value in the population. Based on the above, the learning of the $i$-th individual from the $k$-th gap is expressed by Equation (22).
$$KA_k = e^{v \cdot l} \cdot \cos\left(2\pi\left(1 - \frac{t}{Maxiter}\right)\right) \cdot SF_i \cdot LF_k \cdot Gap_k, \quad k = 1, 2, 3, 4 \tag{22}$$
where $KA_k$ denotes the amount of knowledge that the $i$-th individual learns from the $k$-th gap, $v$ is a constant that controls the shape of the spiral (default value 1) and $l$ is a random number in the interval [−1, 1]. In addition to the learnability of the gaps and the learning ability of the individual, a cosine adaptive factor is included, which makes the learning process adapt to the iteration count of the algorithm. The state of the $i$-th individual after learning from the four gaps is then expressed by Equation (23).
$$X_i^{new} = X_i^t + KA_1 + KA_2 + KA_3 + KA_4 \tag{23}$$
Subsequently, the information for the $i$-th individual is retained greedily using Equation (24).
$$X_i^{t+1} = \begin{cases} X_i^{new} & \text{if } fit_i^{new} < fit_i \\ X_i^t & \text{otherwise} \end{cases} \tag{24}$$
where $fit_i^{new}$ denotes the fitness value of $X_i^{new}$. By learning from different gaps while weighing the learnability of each gap against its own learning ability, an individual helps preserve the global exploration performance of the algorithm and improves its multi-threshold segmentation accuracy.
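Taken together, Equations (19)–(24) define one adaptive-learning pass over the population. The following NumPy sketch illustrates the strategy under stated assumptions: the function name, the use of a single random draw of $l$ per individual and the small constants guarding against division by zero are ours, not fixed by the original description.

```python
import numpy as np

def adaptive_learning_step(pop, fit, t, max_iter, v=1.0, rng=None):
    """One adaptive-learning update (Eqs. (19)-(24)) over the population.

    pop : (N, D) array of candidate solutions; fit : (N,) fitness values
    (smaller is better). Sampling details are illustrative assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    N, D = pop.shape
    order = np.argsort(fit)
    best = pop[order[0]]
    better = pop[rng.choice(order[:5])]      # random one of the 5 fittest
    worse = pop[rng.choice(order[-5:])]      # random one of the 5 least fit
    r1, r2 = rng.choice(N, size=2, replace=False)

    gaps = [best - better, best - worse, better - worse, pop[r1] - pop[r2]]  # Eq. (19)
    norms = np.array([np.linalg.norm(g) for g in gaps])
    lf = norms / (norms.sum() + 1e-12)                                       # Eq. (20)

    new_pop = pop.copy()
    for i in range(N):
        sf = fit[i] / (fit.max() + 1e-12)                                    # Eq. (21)
        l = rng.uniform(-1.0, 1.0)
        coeff = np.exp(v * l) * np.cos(2 * np.pi * (1 - t / max_iter))       # Eq. (22)
        ka = sum(coeff * sf * lf[k] * gaps[k] for k in range(4))
        new_pop[i] = pop[i] + ka                                             # Eq. (23)
    return new_pop
```

The greedy retention of Equation (24) is left to the caller, which compares the fitness of each returned row against the current one and keeps the better of the two.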

3.3. Nonlinear Control Factor

The original HEOA switches between the global exploration phase and the local exploitation phase at a fixed value, which hinders the balance between the two over the course of the iterations and makes the algorithm prone to falling into locally suboptimal thresholds when solving the multi-threshold image-segmentation problem. Mohamed et al. [37] pointed out that balancing the two phases with a nonlinear factor improves both the balance and the convergence accuracy of the algorithm. Based on this inspiration, a novel nonlinear control factor is proposed in this section to better schedule the global exploration and local exploitation phases. Specifically, the proposed nonlinear control factor consists of a basis attenuation term built on the arc-tangent function and a fluctuation adjustment term built on sine and cosine functions; the basis attenuation term is expressed as Equation (25).
$$Base = \frac{\arctan\left(s \cdot \left(1 - \frac{t}{Maxiter}\right)\right)}{\arctan(s)} \tag{25}$$
where $\arctan$ denotes the arc-tangent function and the parameter $s$ denotes the decay-rate factor, which in this study is set to the constant 3. Its trend with an increasing number of iterations is shown in Figure 2. The fluctuation adjustment term is calculated using Equation (26).
$$Func = 1 + a \cdot \sin\left(\frac{b \pi t}{Maxiter}\right) \cdot \cos\left(\frac{c \pi t}{Maxiter}\right) \tag{26}$$
where $a$ denotes the amplitude parameter, set to 0.05, and the frequency parameters $b$ and $c$, which adjust the fluctuation frequency, are set to 3 and 4, respectively, to generate asymmetric fluctuations; the trend with an increasing number of iterations is shown in Figure 3. Periodic fluctuations are introduced through the product of sine and cosine functions to preserve global exploration capability. Finally, the basis attenuation term and the fluctuation adjustment term are combined into a nonlinear decay factor exhibiting both attenuation and waviness, improving the balance between global exploration and local exploitation, as defined in Equation (27).
$$NCF = Base \cdot Func = \frac{\arctan\left(s \cdot \left(1 - \frac{t}{Maxiter}\right)\right)}{\arctan(s)} \cdot \left(1 + a \cdot \sin\left(\frac{b \pi t}{Maxiter}\right) \cdot \cos\left(\frac{c \pi t}{Maxiter}\right)\right) \tag{27}$$
Figure 4 shows the trend of the nonlinear control factor as the number of iterations increases; it decreases nonlinearly from 1 to 0. In the early iterations the factor exhibits a fluctuating character, which helps preserve the algorithm's global search ability, while in the late iterations it decreases rapidly, which supports the algorithm's local exploitation. Balancing the two phases with this nonlinear control factor can significantly improve the convergence performance of the algorithm.
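Equations (25)–(27) translate directly into code. A minimal sketch (the function name `ncf` is ours):

```python
import math

def ncf(t, max_iter, s=3.0, a=0.05, b=3, c=4):
    """Nonlinear control factor of Eq. (27): decaying base term (Eq. (25))
    modulated by an asymmetric sine-cosine fluctuation (Eq. (26))."""
    base = math.atan(s * (1 - t / max_iter)) / math.atan(s)
    func = 1 + a * math.sin(b * math.pi * t / max_iter) * math.cos(c * math.pi * t / max_iter)
    return base * func
```

At $t = 0$ the factor equals 1 and at $t = Maxiter$ it equals 0, with fluctuations bounded by the amplitude $a$ in between.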

3.4. Three-Point Guidance Strategy Based on Bernstein Polynomials

When HEOA solves the multi-threshold image-segmentation problem, its local exploitation ability is insufficient: after locating a potentially optimal threshold region during the global search phase, it cannot exploit that region effectively, which reduces the algorithm's convergence speed and accuracy. Zhang et al. [38] pointed out that guidance by the excellent individuals in the population can effectively improve an algorithm's local exploitation ability and thereby its optimization accuracy. Based on this inspiration, a three-point guidance strategy based on Bernstein polynomials is proposed in this section. It exploits the weighting property of Bernstein polynomials to combine three individuals with different attributes; the resulting weighted individual is then used to guide each individual, improving the algorithm's local exploitation while retaining a degree of global exploration capability. The strategy is described in detail below. Firstly, the $n$-th-order Bernstein polynomials are defined by Equation (28).
$$B_{w,n}(p) = C_n^w \cdot p^w \cdot (1-p)^{n-w} \tag{28}$$
where $0 \le p \le 1$ denotes the probability of success in a sampling experiment, $w$ denotes the number of successes, $n$ denotes the total number of trials and $C_n^w$ denotes the binomial coefficient, expressed as Equation (29).
$$C_n^w = \frac{n!}{w!(n-w)!} \tag{29}$$
For the second-order Bernstein polynomials, $n = 2$ and $w = 0, 1, 2$, giving three polynomials; combining Equations (28) and (29), the second-order Bernstein polynomials are expressed in Equation (30).
$$\begin{cases} B_{0,2}(p) = (1-p)^2 \\ B_{1,2}(p) = 2 \cdot p \cdot (1-p) \\ B_{2,2}(p) = p^2 \end{cases} \tag{30}$$
As $p$ increases from 0 to 1, the curves of the second-order Bernstein polynomials are as shown in Figure 5, from which it can be seen that the sum of $B_{0,2}(p)$, $B_{1,2}(p)$ and $B_{2,2}(p)$ is 1 for any $p$ in [0, 1]; this partition-of-unity property is exploited in the subsequent weighting of individuals with different attributes.
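The partition-of-unity property can be checked numerically. A minimal sketch of Equations (28)–(30); the helper name `bernstein` is ours:

```python
import math

def bernstein(w, n, p):
    """B_{w,n}(p) = C(n, w) * p**w * (1 - p)**(n - w), Eqs. (28)-(29)."""
    return math.comb(n, w) * p ** w * (1 - p) ** (n - w)

# The three second-order polynomials of Eq. (30) form a partition of unity:
# (1-p)^2 + 2p(1-p) + p^2 = 1 for every p in [0, 1].
for p in (0.0, 0.3, 0.7, 1.0):
    assert abs(sum(bernstein(w, 2, p) for w in range(3)) - 1.0) < 1e-12
```

Because the three weights are non-negative and sum to 1, the weighted point of Equation (31) is always a convex combination of the three guiding individuals.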
In the three-point guidance strategy based on Bernstein polynomials, the best, median and random individuals are weighted using Equation (31).
$$X_{wei} = B_{0,2}(p) \cdot X_{best} + B_{1,2}(p) \cdot X_{med} + B_{2,2}(p) \cdot X_{rand} \tag{31}$$
where $X_{wei}$ denotes the generated weighted individual, $X_{best}$ the globally best individual, $X_{rand}$ a random individual in the population and $X_{med}$ the median individual, defined as a random member of the set of individuals whose fitness values are smaller than that of the current individual. After the weighted individual has been generated, the individual's position is updated using Equation (32); the process is visualized in Figure 6.
$$X_i^{new} = X_i^t + e^{-\left(\frac{t}{Maxiter}\right)} \cdot randn \cdot (X_{wei} - X_i^t) \tag{32}$$
where $randn$ denotes a random number drawn from the standard normal distribution. The individual's information is then retained greedily using Equation (24). Applying the three-point guidance strategy based on Bernstein polynomials strengthens the local exploitation of the algorithm while preserving a certain population diversity and reducing the risk of falling into a locally suboptimal threshold.
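The guidance step of Equations (31)–(32) can be sketched as below. Two details are our assumptions where the text is ambiguous: $p$ is drawn uniformly on [0, 1], and the exponential step factor is taken as decaying (negative exponent).

```python
import numpy as np

def three_point_guidance(x_i, x_best, x_med, x_rand, t, max_iter, rng=None):
    """Candidate position from Eqs. (31)-(32): a Bernstein-weighted blend of
    the best, median and a random individual guides x_i toward itself.
    The uniform draw of p and the decaying exponent are assumptions."""
    rng = np.random.default_rng() if rng is None else rng
    p = rng.uniform()
    # Eq. (31): convex combination via the second-order Bernstein weights
    x_wei = (1 - p) ** 2 * x_best + 2 * p * (1 - p) * x_med + p ** 2 * x_rand
    # Eq. (32): normally scaled step that shrinks as iterations advance
    step = np.exp(-t / max_iter) * rng.standard_normal()
    return x_i + step * (x_wei - x_i)
```

As with the adaptive learning strategy, the returned candidate is accepted only if it improves the fitness (the greedy selection of Equation (24)).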

3.5. Implementation of the CLNBHEOA

This section focuses on the execution logic of the proposed CLNBHEOA. The CLNBHEOA is built on the HEOA and combines the four improvement strategies. First, the Chebyshev–Tent chaotic mapping refraction opposites-based learning strategy is introduced in the population initialization stage to enhance the quality of the initial population and improve the algorithm's global search ability during the iterative process. Second, the adaptive learning strategy introduced in the global exploration phase effectively enhances population diversity and reduces the risk of falling into locally suboptimal thresholds. Furthermore, the nonlinear control factor better coordinates the algorithm's two main phases by balancing global exploration against local exploitation. Finally, the three-point guidance strategy based on Bernstein polynomials improves the algorithm's exploitation performance and optimization accuracy. Compared with HEOA, the CLNBHEOA has better global search performance, improved local exploitation performance and greatly improved convergence accuracy. To aid understanding of the execution process, Algorithm 2 gives the pseudo-code of the CLNBHEOA and Figure 7 gives its execution flowchart.
Algorithm 2: The pseudo-code associated with the CLNBHEOA
  Input: $N$, $Dim$, $UB$, $LB$, $Maxiter$, $A = 0.6$
  Output: global best solution $X_{best}$
1.  %Population initialization phase
2.  Generate the initial population $pop = [X_1, X_2, \ldots, X_i, \ldots, X_N]$ according to Equations (13)–(18) using the Chebyshev–Tent chaotic mapping refraction opposites-based learning strategy, which integrates the Chebyshev map, the Tent map and refraction opposites-based learning
3.  Calculate the fitness values of the population $fit = [fit_1, fit_2, \ldots, fit_i, \ldots, fit_N]$
4.  for $t = 1 : Maxiter$
5.    $[\sim, ind] = sort(fit)$
6.    $X_{best} = pop(ind(1))$
7.    for $i = 1 : N$
8.      if $rand < NCF$
9.        %Human exploration phase
10.       if $rand < 0.5$
11.         Update individual information according to Equation (3)
12.       else
13.         Update individual information according to Equations (23) and (24) via the adaptive learning strategy
14.       end
15.     else
16.       %Human development phase
17.       if $rand < 0.5$
18.         Categorize individuals into four roles based on the fitness values $fit$: Leaders, Explorers, Followers, Losers
19.         if the individual is a Leader
20.           %Leader update strategy
21.           Update individual information according to Equation (8)
22.         end
23.         if the individual is an Explorer
24.           %Explorer update strategy
25.           Update individual information according to Equation (10)
26.         end
27.         if the individual is a Follower
28.           %Follower update strategy
29.           Update individual information according to Equation (11)
30.         end
31.         if the individual is a Loser
32.           %Loser update strategy
33.           Update individual information according to Equation (12)
34.         end
35.       else
36.         Update individual information according to Equations (32) and (24) via the three-point guidance strategy based on Bernstein polynomials
37.       end
38.     end
39.   end
40. end
41. Output global best solution $X_{best}$

4. Experimental Procedure

This section explains the experimental procedure. We evaluate the optimization performance of the proposed CLNBHEOA on the CEC2017 test-function set in 100 dimensions; the specific information on the CEC2017 test set is shown in Table 1. The CLNBHEOA is also compared with nine novel, high-performing algorithms to evaluate its optimization performance objectively and comprehensively; the parameter configurations of these algorithms are shown in Table 2.
To ensure the fairness and repeatability of the experiments, the maximum number of function evaluations is set to 30,000 and the population size to 30, and each group of experiments is executed 30 independent times to strengthen the statistical validity of the results. All experimental code was written and executed in MATLAB R2021b on a Windows 11 system, with a 1 TB solid-state drive and 32 GB of RAM.

5. Results

In this section, the optimization performance of the proposed CLNBHEOA on the CEC2017 test functions and its multi-threshold segmentation performance on six images are comprehensively evaluated.

5.1. Experimental Results on CEC2017 Test Function

In this section, the primary focus is on evaluating the optimization performance of the CLNBHEOA on the CEC2017 test functions. Specifically, the algorithm's optimization experiments are analyzed from several perspectives, including population diversity, exploration/exploitation balance, fitness values, stability, the Wilcoxon rank-sum test, the Friedman rank-sum test and convergence. These analyses confirm from multiple perspectives that the CLNBHEOA proposed in this study exhibits highly efficient optimization performance.

5.1.1. Validation of Initialization Strategy Effectiveness

The Chebyshev–Tent chaotic mapping refraction opposites-based learning strategy proposed in this paper consists of two steps. First, a Chebyshev population of size $N$ is initialized using the Chebyshev chaotic strategy and a Tent population of size $N$ is initialized using the Tent chaotic strategy. Second, refraction opposites-based learning is applied to the Chebyshev population to produce a refracted opposite Chebyshev population of size $N$, and likewise to the Tent population to produce a refracted opposite Tent population of size $N$. These steps yield four populations of size $N$; the $N$ individuals with the smallest fitness values are then selected from the combined set of $4N$ individuals to form the initial population for the iterative process. Together, these steps constitute the proposed strategy, which integrates the Chebyshev initialization strategy, the Tent initialization strategy and refraction opposites-based learning. In this section, the contribution of each individual component to the algorithm's performance is validated. Specifically, introducing the Chebyshev initialization strategy into HEOA yields CHEOA, introducing the Tent initialization strategy yields THEOA and introducing the refraction opposites-based learning strategy yields RHEOA.
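The four-population construction and greedy selection described above can be sketched as follows. The specific Chebyshev and Tent map formulas and the refraction coefficient $k$ used here are standard textbook choices, not reproduced from Equations (13)–(18), so treat them as placeholders.

```python
import numpy as np

def chaotic_opposition_init(fitness, N, dim, lb, ub, seed=0):
    """Sketch of the combined initialization: Chebyshev and Tent chaotic
    populations, their refraction-opposite counterparts, and greedy
    selection of the N fittest out of the 4N candidates."""
    rng = np.random.default_rng(seed)

    def chaos(update):
        # Iterate a chaotic map to fill one N-by-dim population in [0, 1]
        x = rng.uniform(0.1, 0.9, dim)
        rows = []
        for _ in range(N):
            x = update(x)
            rows.append(x.copy())
        return np.array(rows)

    cheb = chaos(lambda x: np.abs(np.cos(4 * np.arccos(np.clip(x, -1, 1)))))  # Chebyshev map
    tent = chaos(lambda x: np.where(x < 0.5, 2 * x, 2 * (1 - x)))             # Tent map

    cand = [lb + p * (ub - lb) for p in (cheb, tent)]        # scale into [lb, ub]
    k = 1000.0  # refraction coefficient (assumed value)
    cand += [(lb + ub) / 2 + (lb + ub) / (2 * k) - p / k for p in cand]       # refraction OBL

    pool = np.vstack(cand)                                   # 4N candidates
    fits = np.apply_along_axis(fitness, 1, pool)
    return pool[np.argsort(fits)[:N]]                        # keep the N fittest
```

The greedy selection at the end is what turns the four raw populations into a single high-quality initial population.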
The enhanced version of HEOA formed by combining the three aforementioned strategies (i.e., the Chebyshev–Tent chaotic mapping refraction opposites-based learning strategy) is referred to as CTRHEOA. The four schemes are applied to the CEC2017 test functions to evaluate the advantages of the proposed strategy, and the results of the Friedman rank-sum test are depicted in Figure 8. As can be observed from the figure, the average Friedman ranks of CHEOA, THEOA and RHEOA are all better than that of the HEOA algorithm, confirming the effectiveness of the three initialization strategies. Notably, CTRHEOA achieves superior solving performance compared to CHEOA, THEOA and RHEOA, indicating that while the Chebyshev chaotic strategy, the Tent chaotic strategy and the refraction opposites-based learning strategy each contribute to performance, their combination into the Chebyshev–Tent chaotic mapping refraction opposites-based learning strategy promotes the algorithm's performance further still.

5.1.2. Population Diversity Analysis

This section analyzes the population diversity of the algorithm during the optimization process; higher population diversity means the algorithm is more capable of escaping locally suboptimal thresholds, which helps improve optimization accuracy. The experimental results are shown in Figure 9, where the X-axis represents the number of iterations, the Y-axis represents the population diversity during the run, the red line corresponds to HEOA and the blue line to the CLNBHEOA.
As can be seen from the figure, on the simple modal test functions CF17_F1 and CF17_F6, the CLNBHEOA maintains higher population diversity than the original HEOA throughout the iterative process, giving it a greater ability to escape locally suboptimal thresholds in the later iterations. This is mainly attributable to the Chebyshev–Tent chaotic mapping refraction opposites-based learning strategy and the adaptive learning strategy proposed in this paper, which effectively enhance population diversity and facilitate the global search process. On the complex multimodal functions CF17_F13, CF17_F18, CF17_F23 and CF17_F29, the CLNBHEOA likewise maintains higher population diversity than HEOA throughout the iterations, demonstrating that the proposed strategies remain effective at helping the algorithm escape locally suboptimal thresholds when solving complex multidimensional optimization problems. In summary, the proposed strategies significantly improve population diversity during the execution of the CLNBHEOA, enhancing global search performance and thus optimization accuracy.

5.1.3. Exploration/Exploitation Balance Analysis

This section analyzes the balance between the global exploration and local exploitation phases during optimization. In the field of optimization this balance is crucial: the typical process first locates promising regions through global exploration and then exploits them locally to improve convergence accuracy and speed, so a reasonable balance between the two helps improve optimization accuracy. We used several representative CEC2017 test functions to measure the exploration/exploitation ratios of the algorithm during execution; the experimental results are shown in Figure 10, where the X-axis denotes the number of iterations, the Y-axis denotes the ratio, the blue line denotes the global exploration rate and the red line denotes the local exploitation rate.
As can be seen from the figure, across the CEC2017 test functions the algorithm maintains a high global exploration rate in the early iterations, mainly because the Chebyshev–Tent chaotic mapping refraction opposites-based learning strategy and the adaptive learning strategy improve the algorithm's global search capability, inclining it toward exploratory behavior early on so that it can locate more potentially optimal regions. In the middle iterations, the global exploration rate and the local exploitation rate tend toward balance, mainly due to the nonlinear control factor introduced in this paper, which balances the two behaviors, improves optimization accuracy and avoids local stagnation. In the later iterations, the algorithm favors local exploitation, driven by the proposed three-point guidance strategy based on Bernstein polynomials, which enhances local exploitation capability and thus greatly improves convergence speed and accuracy; at the same time, the algorithm maintains a global exploration rate of roughly 35%, which helps preserve its ability to escape locally suboptimal thresholds. In summary, the CLNBHEOA achieves a good balance between its global exploration and local exploitation phases, which helps improve its optimization accuracy.

5.1.4. Fitness Function Value Analysis

In this section, the optimization performance of the CLNBHEOA and nine comparison algorithms on the 100-dimensional CEC2017 test-function set is comprehensively evaluated through the mean and standard deviation of the numerical results of 30 independent runs; the experimental results are shown in Table 3, where “Mean Rank” denotes the algorithm's average numerical rank over the test-function set and “Final Rank” denotes the final ranking based on the Mean Rank.
As can be seen from the table, when solving the simple multimodal optimization problems CF17_F1 to CF17_F10, the CLNBHEOA ranked first on nine test functions, a winning rate of 100%, mainly because the proposed three-point guidance strategy based on Bernstein polynomials effectively improves the algorithm's local exploitation ability and optimization accuracy. When solving the complex multimodal optimization problems CF17_F11 to CF17_F30, the CLNBHEOA ranked first on 17 test functions, a winning rate of 85%; although it ranked second on CF17_F20, CF17_F21 and CF17_F28, from a comprehensive perspective it possesses a stronger ability to solve multimodal optimization problems than the comparison algorithms. This is mainly due to the Chebyshev–Tent chaotic mapping refraction opposites-based learning strategy and the adaptive learning strategy, which enhance the algorithm's global search performance and its ability to escape locally optimal solutions, combined with the nonlinear control factor, which provides a good balance between the global exploration and local exploitation phases, enabling the algorithm to solve multimodal optimization problems effectively. As can be seen from the last row of the table, the CLNBHEOA outperforms the original HEOA and the comparison algorithms, with a mean ranking of 1.10 over the 29 tested functions. To better show the performance gaps, Figure 11 presents the average-ranking histograms, in which the CLNBHEOA has the lowest columns.
In addition to optimization accuracy, solution stability is also very important; higher stability ensures that the algorithm is better suited to real environments. Figure 12 shows box plots for the algorithms on selected CEC2017 test functions; in most cases the CLNBHEOA's solutions are more tightly clustered and contain fewer outliers, indicating higher solution stability.
In summary, the CLNBHEOA proposed in this paper, by improving HEOA from the perspectives of global exploration, local exploitation and the balance between the two phases, improves optimization accuracy on simple modal and complex multimodal optimization problems, and additionally offers higher solution stability and practical applicability.

5.1.5. Expansion Analysis of Fitness Function Values

In this section, the fitness values of the proposed CLNBHEOA are further analyzed under a different test dimension and a larger maximum number of function evaluations. To this end, the test dimension is set to 30 and the maximum number of function evaluations to 300,000. Each experiment is independently conducted 30 times, and the mean and standard deviation are reported, with the best values emphasized in bold. The experimental results on the CEC2017 test functions are presented in Table 4.
As can be seen from the table, the CLNBHEOA achieved first place on both unimodal functions F1 and F3, confirming that the three-point guidance strategy based on Bernstein polynomials enhances the algorithm's exploitative capability and thereby its local optimization precision. On the simple multimodal functions F4 through F10, the CLNBHEOA attained a win rate of over 90%, with MS-TSA ranking first on function F6; overall, its superior global exploration ability gives it relatively higher performance on simple multimodal problems. On the complex multimodal functions F11 through F30, the CLNBHEOA achieved optimal optimization accuracy on 17 test functions, a win rate of 85%, indicating a higher level of balance between global exploration and local exploitation that enables it to exploit local regions effectively and enhance precision. From a comprehensive perspective, across the 29 test functions the CLNBHEOA achieved an average ranking of 1.1, ranking first overall and surpassing the second-ranked HWEAVOA algorithm by 61.1%. In summary, the CLNBHEOA proposed in this paper can be considered an effective method for solving high-dimensional complex optimization problems.

5.1.6. Nonparametric Rank-Sum Test Analysis

The previous subsection analyzed the numerical results of the algorithm on the CEC2017 test-function set, but numerical analysis may be affected by outliers. To assess the optimization performance of the CLNBHEOA more comprehensively and reduce the influence of chance on the results, nonparametric rank-sum tests are performed on the results of the 30 independent runs in this section, including the Wilcoxon rank-sum test at a significance level of 0.05 and the Friedman rank-sum test. In the Wilcoxon rank-sum test, the symbol “+” indicates that a comparison algorithm performs significantly better than the CLNBHEOA, “=” indicates no significant difference and “−” indicates significantly weaker performance than the CLNBHEOA. The results of the Wilcoxon rank-sum test are shown in Table 5 and the results of the Friedman rank-sum test in Figure 13.
As can be seen from the table, in the Wilcoxon rank-sum test at a significance level of 0.05 the CLNBHEOA significantly outperformed DMQPSO, FDBARO, QHDBO, WAA and DHEOA on 29 test functions, HWEAVOA and HEOA on 28 test functions and BEGJO on 27 test functions, with a win rate of over 93.1%, ranking first overall. This is mainly because the learning strategies proposed in this paper improve the algorithm's performance from the perspectives of both global exploration and local exploitation. In addition, as can be seen in Figure 13, the CLNBHEOA achieved a mean ranking of 1.47 in the Friedman rank-sum test at a significance level of 0.05; compared to the second-ranked HEOA, its ranking improved by 62.9%, confirming that the CLNBHEOA possesses higher multimodal optimization problem-solving performance and can be considered an effective optimization tool. In summary, the nonparametric tests confirm that the CLNBHEOA performs efficiently and that the four proposed improvement schemes make positive contributions to the algorithm's performance.

5.1.7. Convergence Analysis

The preceding subsections confirmed that the CLNBHEOA possesses efficient optimization performance from the perspectives of numerical results and nonparametric tests; beyond this, the convergence behavior of the algorithm is also very important, as it directly affects the algorithm's applicability in real-world scenarios. Figure 14 illustrates the convergence curves of the solution process, where the X-axis represents the number of function evaluations and the Y-axis represents the logarithm of the fitness value.
As can be seen from the figure, when solving simple multimodal optimization problems the algorithm holds an early lead thanks to the Chebyshev–Tent chaotic mapping refraction opposites-based learning strategy used in the population initialization phase. After about 9000 function evaluations the CLNBHEOA converges, with convergence accuracy ahead of the comparison algorithms and a faster convergence speed. This is mainly due to the adaptive learning strategy, the nonlinear control factor and the three-point guidance strategy based on Bernstein polynomials, which greatly improve the algorithm's global exploration and local exploitation abilities and strengthen its ability to escape local regions. When solving complex multimodal optimization problems, the initialization scheme again provides a good convergence advantage at the beginning of the iterations; the algorithm converges after about 15,000 function evaluations, with convergence accuracy ahead of the comparison algorithms and a higher convergence speed. Since the CLNBHEOA has stronger global exploration capability, it is better suited to multimodal problems. In summary, the CLNBHEOA possesses higher convergence speed and accuracy on complex multimodal optimization problems and can be considered a highly practical optimization tool.

5.2. Experimental Results on Multi-Threshold Image Segmentation

The multi-threshold image-segmentation problem seeks the best segmentation thresholds for dividing an image into multiple regions, so that researchers can better analyze regions with different attributes, extract image information and perform quantitative analysis. In this section, the best segmentation thresholds are identified using Otsu's method, whose main idea is to take the pixels of the original image as input and determine the thresholds that maximize the differences between the resulting regions. To overcome the tendency of traditional search methods to fall into locally optimal thresholds as the number of thresholds increases, multi-threshold segmentation is modeled as an optimization problem with the between-class variance of Otsu's method as the objective function; the proposed CLNBHEOA is then used to search efficiently for the best threshold combination, improving the quality and efficiency of multi-threshold image segmentation. The concept of Otsu's method and the evaluation criteria used for multi-threshold image segmentation with the CLNBHEOA are described in the following subsections.

5.2.1. The Concept of the Otsu Method

In this section, the concept of the Otsu method is introduced. The Otsu method identifies the best segmentation threshold based on the differences between the segmented regions. Suppose an image I has L gray levels in total and the number of pixels at gray level i is n_i; the total number of pixels N in image I is then given by Equation (33).

N = \sum_{i=0}^{L-1} n_i \quad (33)
Then, the distribution probability P_i of gray level i is given by Equation (34).

P_i = \frac{n_i}{N}, \quad i = 0, 1, \ldots, L-1 \quad (34)
where P_i \ge 0 and P_0 + P_1 + \cdots + P_{L-1} = 1. Suppose the number of segmentation thresholds is set to k. If the gray value t is chosen as the segmentation threshold, t divides the image into two regions: the pixels with gray levels in [0, t] form the target region and the pixels with gray levels in [t+1, L-1] form the background region. Let \omega_0 denote the ratio of the number of pixels in the target region to the total number of pixels and \mu_0 its average gray level; let \omega_1 and \mu_1 denote the corresponding quantities for the background region; let \mu be the average gray level of the whole image; and let \nu be the variance between the regions. These quantities are obtained by Equations (35)–(40).

\omega_0 = \sum_{i=0}^{t} P_i \quad (35)

\mu_0 = \frac{\sum_{i=0}^{t} i P_i}{\omega_0} \quad (36)

\omega_1 = \sum_{i=t+1}^{L-1} P_i \quad (37)

\mu_1 = \frac{\sum_{i=t+1}^{L-1} i P_i}{\omega_1} \quad (38)

\mu = \sum_{i=0}^{k} \omega_i \mu_i = \sum_{i=0}^{L-1} i P_i \quad (39)

\nu(t) = \omega_0 (\mu_0 - \mu)^2 + \omega_1 (\mu_1 - \mu)^2 = \omega_0 \omega_1 (\mu_0 - \mu_1)^2 \quad (40)
The best segmentation threshold, t_{best}, is calculated using Equation (41).

t_{best} = \arg\max_{0 \le t \le L-1} \nu(t) \quad (41)
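As a concrete illustration, the single-threshold case in Equations (33)–(41) reduces to an exhaustive search over the gray-level histogram. The sketch below is our own illustrative code, not taken from the paper, and the toy histogram is hypothetical:

```python
# Illustrative implementation of Equations (33)-(41): exhaustive single-threshold
# Otsu on a gray-level histogram. The toy histogram below is hypothetical, not one
# of the paper's benchmark images.

def otsu_single_threshold(hist):
    """Return (t_best, v_best), maximizing the between-class variance v(t)."""
    L = len(hist)
    N = sum(hist)                                       # Equation (33)
    P = [n / N for n in hist]                           # Equation (34)
    t_best, v_best = 0, -1.0
    for t in range(L - 1):                              # candidate thresholds
        w0 = sum(P[: t + 1])                            # Equation (35)
        w1 = 1.0 - w0                                   # Equation (37)
        if w0 == 0.0 or w1 == 0.0:                      # degenerate split, skip
            continue
        mu0 = sum(i * P[i] for i in range(t + 1)) / w0        # Equation (36)
        mu1 = sum(i * P[i] for i in range(t + 1, L)) / w1     # Equation (38)
        v = w0 * w1 * (mu0 - mu1) ** 2                  # Equation (40)
        if v > v_best:                                  # Equation (41): argmax over t
            t_best, v_best = t, v
    return t_best, v_best

# Bimodal toy histogram over 8 gray levels: dark pixels near level 1, bright near level 6.
toy_hist = [10, 40, 10, 0, 0, 10, 40, 10]
t, v = otsu_single_threshold(toy_hist)  # splits the two modes
```

For a 256-level image this scan is cheap; the cost only becomes prohibitive once several thresholds must be chosen jointly, which is what motivates the metaheuristic search below.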
Similarly, the between-class variance for k thresholds is calculated using Equation (42), which sums the pairwise terms over all k+1 classes.

\nu(t_1, t_2, \ldots, t_k) = \sum_{i=0}^{k-1} \sum_{j=i+1}^{k} \omega_i \omega_j (\mu_i - \mu_j)^2 = \omega_0 \omega_1 (\mu_0 - \mu_1)^2 + \omega_0 \omega_2 (\mu_0 - \mu_2)^2 + \cdots + \omega_{k-1} \omega_k (\mu_{k-1} - \mu_k)^2 \quad (42)
where \omega_{i-1} and \mu_{i-1} are calculated as in Equations (43) and (44), respectively, with the sentinels t_0 = -1 and t_{k+1} = L-1 closing the two ends; the summation index is written as j to distinguish it from the class index i.

\omega_{i-1} = \sum_{j=t_{i-1}+1}^{t_i} P_j, \quad 1 \le i \le k+1 \quad (43)

\mu_{i-1} = \frac{\sum_{j=t_{i-1}+1}^{t_i} j P_j}{\omega_{i-1}}, \quad 1 \le i \le k+1 \quad (44)
Assuming that the best threshold combination for Equation (42) is T_{best} = (t_1^*, t_2^*, \ldots, t_k^*), T_{best} is calculated by Equation (45).

T_{best} = \arg\max_{0 \le t_1 < t_2 < \cdots < t_k \le L-1} \nu(t_1, t_2, \ldots, t_k) \quad (45)
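The k-threshold objective of Equations (42)–(45) can be sketched as follows. This is our own hedged illustration: the brute-force search is shown only to make Equation (45) concrete, and in the paper it is replaced by the CLNBHEOA.

```python
# Hedged sketch of the k-threshold objective in Equations (42)-(45): the pairwise
# between-class variance maximized during threshold search. The brute-force search
# is illustrative only; the paper uses the CLNBHEOA instead.
from itertools import combinations

def multi_threshold_variance(hist, thresholds):
    """Equation (42): sum of w_i * w_j * (mu_i - mu_j)^2 over all class pairs."""
    L = len(hist)
    N = sum(hist)
    P = [n / N for n in hist]
    # Class i-1 covers gray levels t_{i-1}+1 .. t_i (Equations (43) and (44)),
    # with sentinels t_0 = -1 and t_{k+1} = L-1 closing the two ends.
    bounds = [-1] + sorted(thresholds) + [L - 1]
    w, mu = [], []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        wi = sum(P[lo + 1 : hi + 1])
        w.append(wi)
        mu.append(sum(j * P[j] for j in range(lo + 1, hi + 1)) / wi if wi else 0.0)
    return sum(w[i] * w[j] * (mu[i] - mu[j]) ** 2
               for i, j in combinations(range(len(w)), 2))

def brute_force_best(hist, k):
    """Equation (45) by exhaustive search -- feasible only for tiny L and k."""
    return max(combinations(range(len(hist) - 1), k),
               key=lambda ts: multi_threshold_variance(hist, list(ts)))
```

For k = 1 this reduces to the two-class variance of Equation (40). For realistic 256-level images and k up to 8, the combinatorial size of the search space is precisely why a metaheuristic such as the CLNBHEOA is used in place of exhaustive search.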

5.2.2. Experimental Results Analysis

In this section, the performance of the CLNBHEOA in solving image-segmentation problems is analyzed. Specifically, the CLNBHEOA is compared experimentally with five comparison algorithms. The algorithm parameters are set as shown in Table 6, the maximum number of iterations is set to 100, the population size is set to 40 and each experiment is executed 20 times independently. The experiments are conducted on six images, whose details are shown in Figure 15. The Peak Signal-to-Noise Ratio (PSNR) [49], Structural Similarity (SSIM) [50] and Feature Similarity (FSIM) [51] are analyzed to comprehensively evaluate the performance of the algorithms on the image-segmentation problem. The higher the PSNR and SSIM values, the lower the distortion in the segmented image and the higher its quality; the higher the FSIM value, the lower the classification error rate.
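Of the three metrics, PSNR has the simplest closed form and can be sketched directly; this is our own minimal illustration of the metric in [49], with flat lists of pixel values standing in for image arrays (SSIM [50] and FSIM [51] need windowed local statistics and are omitted):

```python
# Minimal PSNR sketch (metric [49]) for 8-bit grayscale images; flat lists of
# pixel values stand in for image arrays. Higher PSNR means less distortion
# in the segmented image relative to the original.
import math

def psnr(original, segmented, max_level=255):
    """PSNR = 10 * log10(MAX^2 / MSE); returns inf for identical images."""
    mse = sum((a - b) ** 2 for a, b in zip(original, segmented)) / len(original)
    return float("inf") if mse == 0 else 10.0 * math.log10(max_level ** 2 / mse)
```
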
The number of segmentation thresholds was set to the four values of 2, 4, 6 and 8. The results for the fitness function on the six images are shown in Table 7, where “Friedman Rank” denotes the Friedman mean test ranking of the fitness function values over 20 independent runs and “Rank” denotes the final ranking based on “Friedman Rank”; Figure 16 visualizes the Friedman mean test rankings of the fitness function values. As can be seen from the table, the CLNBHEOA obtained the top average fitness value in all 24 experiments across the different numbers of segmentation thresholds, a winning rate of 100%, demonstrating excellent performance. This is mainly because the four learning strategies proposed in this paper improve the algorithm from different perspectives, giving it a stronger ability to find optimal thresholds. In the Friedman mean test, the CLNBHEOA achieved an average ranking of 1.62, which is 28% ahead of the second-ranked QAGO, whose average ranking is 2.25. As Figure 16 shows, the CLNBHEOA took first place in the experiments for every number of segmentation thresholds. In summary, the CLNBHEOA is an efficient image-segmentation method.
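The "Friedman Rank" column can be reproduced with a simple mean-rank computation: each algorithm is ranked per problem instance (1 = best, ties receive the average of their positions) and the ranks are averaged over instances. The sketch below is our own illustration; the score lists in it are hypothetical, not the paper's data:

```python
# Illustrative computation of the "Friedman Rank" column: rank algorithms per
# problem (1 = best, ties averaged), then average the ranks across problems.

def friedman_mean_ranks(scores_per_problem):
    """scores_per_problem: list of per-problem score lists (lower is better).
    Returns the mean rank of each algorithm across all problems."""
    n_alg = len(scores_per_problem[0])
    totals = [0.0] * n_alg
    for scores in scores_per_problem:
        order = sorted(range(n_alg), key=lambda a: scores[a])
        i = 0
        while i < n_alg:
            j = i
            while j + 1 < n_alg and scores[order[j + 1]] == scores[order[i]]:
                j += 1                         # extend the run of tied scores
            avg_rank = (i + j) / 2 + 1         # mean of the tied 1-based positions
            for p in range(i, j + 1):
                totals[order[p]] += avg_rank
            i = j + 1
    return [t / len(scores_per_problem) for t in totals]
```
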
In addition, the PSNR results for the six images are shown in Table 8, where “Friedman Rank” denotes the Friedman mean test ranking of the PSNR values over 20 independent runs and “Rank” denotes the final ranking based on “Friedman Rank”; Figure 17 visualizes the Friedman mean test rankings of the PSNR values. As the table shows, the CLNBHEOA took first place in 23 of the 24 image-segmentation experiments with the number of thresholds set to 2, 4, 6 and 8, a winning rate of 95.8%. This indicates lower distortion in the segmented images and better segmentation performance. The last row of the table shows that the CLNBHEOA achieved a Friedman mean test ranking of 2.21, which is 24.3% ahead of the second-ranked QAGO's 2.92. As Figure 17 shows, the CLNBHEOA ranked first in PSNR for every number of thresholds, demonstrating its efficient image-segmentation capability.
Meanwhile, the SSIM results for the six images are shown in Table 9, where “Friedman Rank” denotes the Friedman mean test ranking of the SSIM values over 20 independent runs and “Rank” denotes the final ranking based on “Friedman Rank”; Figure 18 visualizes the Friedman mean test rankings of the SSIM values. As the table shows, the CLNBHEOA took first place in all 24 image-segmentation experiments with the number of thresholds set to 2, 4, 6 and 8, a winning rate of 100% against the comparison algorithms. This means the segmented images retain higher structural similarity to the originals, reflecting strong image-segmentation performance. The last row of the table shows that the CLNBHEOA achieved a Friedman mean test ranking of 1.15, which is 69.8% ahead of the second-ranked QAGO's 3.81. As Figure 18 shows, the CLNBHEOA ranked first in SSIM for every number of thresholds.
Meanwhile, the FSIM results for the six images are shown in Table 10, where “Friedman Rank” denotes the Friedman mean test ranking of the FSIM values over 20 independent runs and “Rank” denotes the final ranking based on “Friedman Rank”; Figure 19 visualizes the Friedman mean test rankings of the FSIM values. As the table shows, the CLNBHEOA took first place in all 24 image-segmentation experiments with the number of thresholds set to 2, 4, 6 and 8, a winning rate of 100% against the comparison algorithms. This implies that the segmented images possess higher feature similarity and that the algorithm has a lower classification error rate, demonstrating strong image-segmentation performance. The last row of the table shows that the CLNBHEOA achieved a Friedman mean test ranking of 1.76, which is 39.9% ahead of the second-ranked QAGO's 2.93. As Figure 19 shows, the CLNBHEOA ranked first in FSIM for every number of thresholds.
In summary, the analysis of the fitness function values, the peak signal-to-noise ratio (PSNR), the structural similarity (SSIM) and the feature similarity (FSIM) confirms that the CLNBHEOA achieves lower image distortion and a lower classification error rate when solving the multi-threshold image-segmentation problem. Because it segments the image while largely preserving image quality, the CLNBHEOA can be considered an effective multi-threshold image-segmentation method.

6. Discussion

In this paper, to address the shortcomings of the HEOA, namely insufficient global exploration, inadequate local exploitation and the resulting imbalance that makes it susceptible to local optima and reduces optimization precision, four mechanisms were introduced: the Chebyshev–Tent chaotic mapping refraction opposites-based learning strategy, the adaptive learning strategy, the nonlinear control factor and the three-point guidance strategy based on Bernstein polynomials, forming the CLNBHEOA. The proposed CLNBHEOA was tested on the CEC2017 functions. The experimental results indicate that, under varying dimensionalities and function evaluation budgets, the CLNBHEOA achieved a win rate exceeding 90% across the different function classes compared with recent advanced variants and the latest improvements of the HEOA, demonstrating its promise as an optimization method. The CLNBHEOA was then employed to solve practical image-segmentation problems. Comparison with six high-performance algorithms, including the champion algorithm IMODE, confirmed that the CLNBHEOA is an effective tool for image segmentation, achieving a win rate exceeding 95% in terms of the fitness function values and the PSNR, SSIM and FSIM metrics and exhibiting superior performance compared with the competing algorithms.

7. Conclusions

In this study, an enhanced HEOA, called the CLNBHEOA, is proposed; it combines four learning strategies to address the poor segmentation performance of the HEOA on multi-threshold image segmentation. Firstly, the Chebyshev–Tent chaotic mapping refraction opposites-based learning strategy gives the initial population a better ability to traverse the solution space. Secondly, an adaptive learning strategy is proposed, which improves the algorithm's ability to escape locally optimal thresholds by learning from information gaps between individuals with different properties while incorporating adaptive factors. In addition, a nonlinear control factor is proposed to better balance the global exploration and local exploitation phases of the algorithm. Finally, a three-point guidance strategy based on Bernstein polynomials is proposed to enhance the algorithm's local exploitation. Optimization tests on the CEC2017 function set confirm that these strategies give the CLNBHEOA higher global search capability and search performance, surpassing the comparison algorithms in more than 90% of cases on the fitness function value metric. The CLNBHEOA was then applied to a multi-threshold segmentation problem on six images; the experiments show that it attains higher structural and feature similarity, achieving a win rate of over 95% in terms of the fitness function values and the PSNR, SSIM and FSIM metrics. However, despite its superior performance in most scenarios, the proposed CLNBHEOA still exhibits certain limitations on specific optimization problems.
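For intuition on the last of these strategies, the second-order Bernstein basis B0(u) = (1-u)^2, B1(u) = 2u(1-u), B2(u) = u^2 sums to 1 for any u, so it can weight three guiding points as a convex combination. The sketch below is only this basis-weighted blend, written as our own illustration with hypothetical point names; the actual CLNBHEOA update rule is the one defined in the method section of the paper.

```python
# Illustrative blend of three guiding points using the second-order Bernstein
# basis B0=(1-u)^2, B1=2u(1-u), B2=u^2 (weights sum to 1 for any u). The point
# names are hypothetical; this is not the paper's exact update rule.

def bernstein_blend(p0, p1, p2, u):
    """Convex combination of three D-dimensional points; u in [0, 1] shifts
    weight from p0 toward p2, passing near the middle guide p1."""
    b0, b1, b2 = (1 - u) ** 2, 2 * u * (1 - u), u ** 2
    return [b0 * a + b1 * b + b2 * c for a, b, c in zip(p0, p1, p2)]
```

At u = 0 the blend returns p0, at u = 1 it returns p2, and intermediate values pull the result toward p1, which is how a Bernstein-weighted combination can steer a candidate solution between three reference points.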
Additionally, the performance of the CLNBHEOA has so far been assessed only on the CEC2017 test suite and multi-threshold image-segmentation problems, which validates its applicability to similar multimodal continuous optimization problems; its applicability to discrete optimization problems and high-dimensional combinatorial optimization remains unverified. To further explore its applicability in other domains, our subsequent work plan is outlined as follows.
In subsequent work, we will focus on the following three points:
1. Proposing targeted learning strategies for specific types of optimization problems to strengthen the advantages of the algorithm.
2. Investigating the applicability of the CLNBHEOA to high-dimensional combinatorial optimization problems such as parameter identification in photovoltaic systems [56,57], thereby addressing the current limitations of its application domains.
3. Extending the CLNBHEOA to multi-objective models to further address the challenges of multi-objective combinatorial optimization problems.

Author Contributions

Conceptualization, L.X.; methodology, L.X. and J.W.; software, L.X.; validation, X.Z. and B.W.; formal analysis, L.X. and J.W.; investigation, X.Z.; resources, J.W.; data curation, L.X.; writing—original draft preparation, L.X. and J.W.; writing—review and editing, L.X., X.Z. and J.W.; visualization, B.W. and X.Z.; supervision, J.W.; project administration, J.W.; funding acquisition, J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data are available from the corresponding author upon request.

Acknowledgments

The authors thank the editorial board and the reviewers for their efforts on this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Cao, W.; Martinis, S.; Plank, S. Automatic SAR-based flood detection using hierarchical tile-ranking thresholding and fuzzy logic. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 5697–5700.
2. Santhos, K.A.; Kumar, A.; Bajaj, V.; Singh, G.K. McCulloch’s algorithm inspired cuckoo search optimizer based mammographic image segmentation. Multimed. Tools Appl. 2020, 79, 30453–30488.
3. Naito, T.; Tsukada, T.; Yamada, K.; Kozuka, K.; Yamamoto, S. Robust license-plate recognition method for passing vehicles under outside environment. IEEE Trans. Veh. Technol. 2000, 49, 2309–2319.
4. Huo, W.; Huang, Y.; Pei, J.; Liu, X.; Yang, J. Virtual SAR target image generation and similarity. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 914–917.
5. Akay, B. A study on particle swarm optimization and artificial bee colony algorithms for multilevel thresholding. Appl. Soft Comput. 2013, 13, 3066–3091.
6. Wang, J.; Wang, N.; Wang, R. Research on medical image segmentation based on FCM algorithm. In Proceedings of the 2015 6th International Conference on Manufacturing Science and Engineering, Guangzhou, China, 28–29 November 2015; pp. 90–93.
7. Heimowitz, A.; Keller, Y. Image segmentation via probabilistic graph matching. IEEE Trans. Image Process. 2017, 25, 4743–4752.
8. Wang, J.; Bei, J.; Song, H.; Zhang, H.; Zhang, P. A whale optimization algorithm with combined mutation and removing similarity for global optimization and multilevel thresholding image segmentation. Appl. Soft Comput. 2023, 137, 110130.
9. Otsu, N. A threshold selection method from gray-level histograms. Automatica 1975, 11, 23–27.
10. Dhanachandra, N.; Chanu, Y.J. An image segmentation approach based on fuzzy c-means and dynamic particle swarm optimization algorithm. Multimed. Tools Appl. 2020, 79, 18839–18858.
11. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 2022, 242, 108320.
12. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73.
13. Rocca, P.; Oliveri, G.; Massa, A. Differential evolution as applied to electromagnetics. IEEE Antennas Propag. Mag. 2011, 53, 38–49.
14. Beyer, H.G.; Schwefel, H.P. Evolution strategies—A comprehensive introduction. Nat. Comput. 2002, 1, 3–52.
15. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948.
16. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2007, 1, 28–39.
17. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
18. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513.
19. Tayarani-N, M.H.; Akbarzadeh-T, M.R. Magnetic-inspired optimization algorithms: Operators and structures. Swarm Evol. Comput. 2014, 19, 82–101.
20. Zhao, W.; Wang, L.; Zhang, Z. Atom search optimization and its application to solve a hydrogeologic parameter estimation problem. Knowl.-Based Syst. 2019, 163, 283–304.
21. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315.
22. Shabani, A.; Asgarian, B.; Salido, M.; Gharebaghi, S.A. Search and rescue optimization algorithm: A new optimization method for solving constrained engineering optimization problems. Expert Syst. Appl. 2020, 161, 113698.
23. Das, B.; Mukherjee, V.; Das, D. Student psychology based optimization algorithm: A new population based optimization algorithm for solving optimization problems. Adv. Eng. Softw. 2020, 146, 102804.
24. Huang, C.; Li, X.; Wen, Y. AN OTSU image segmentation based on fruitfly optimization algorithm. Alex. Eng. J. 2021, 60, 183–188.
25. Ma, G.; Yue, X. An improved whale optimization algorithm based on multilevel threshold image segmentation using the Otsu method. Eng. Appl. Artif. Intell. 2022, 113, 104960.
26. Chen, L.; Gao, J.; Lopes, A.M.; Zhang, Z.; Chu, Z.; Wu, R. Adaptive fractional-order genetic-particle swarm optimization Otsu algorithm for image segmentation. Appl. Intell. 2023, 53, 26949–26966.
27. Qin, J.; Shen, X.; Mei, F.; Fang, Z. An Otsu multi-thresholds segmentation algorithm based on improved ACO. J. Supercomput. 2019, 75, 955–967.
28. Fan, Q.; Ma, Y.; Wang, P.; Bai, F. Otsu Image Segmentation Based on a Fractional Order Moth–Flame Optimization Algorithm. Fractal Fract. 2024, 8, 87.
29. Khairuzzaman, A.K.M.; Chaudhury, S. Multilevel thresholding using grey wolf optimizer for image segmentation. Expert Syst. Appl. 2017, 86, 64–76.
30. Wu, B.; Zhou, J.; Ji, X.; Yin, Y.; Shen, X. An ameliorated teaching–learning-based optimization algorithm based study of image segmentation for multilevel thresholding using Kapur’s entropy and Otsu’s between class variance. Inf. Sci. 2020, 533, 72–107.
31. Abd Elaziz, M.; Lu, S.; He, S. A multi-leader whale optimization algorithm for global optimization and image segmentation. Expert Syst. Appl. 2021, 175, 114841.
32. Lian, J.; Hui, G. Human evolutionary optimization algorithm. Expert Syst. Appl. 2024, 241, 122638.
33. Piotrowski, A.P.; Napiorkowski, J.J.; Piotrowska, A.E. Choice of benchmark optimization problems does matter. Swarm Evol. Comput. 2023, 83, 101378.
34. Li, X.D.; Wang, J.S.; Hao, W.K.; Zhang, M.; Wang, M. Chaotic arithmetic optimization algorithm. Appl. Intell. 2022, 52, 16718–16757.
35. Yang, Y.; Gao, Y.; Tan, S.; Zhao, S.; Wu, J.; Gao, S.; Wang, Y.G. An opposition learning and spiral modelling based arithmetic optimization algorithm for global continuous optimization problems. Eng. Appl. Artif. Intell. 2022, 113, 104981.
36. Zhang, Q.; Gao, H.; Zhan, Z.H.; Li, J.; Zhang, H. Growth Optimizer: A powerful metaheuristic algorithm for solving continuous and discrete global optimization problems. Knowl.-Based Syst. 2023, 261, 110206.
37. Elhoseny, M.; Abdel-salam, M.; El-Hasnony, I.M. An improved multi-strategy Golden Jackal algorithm for real world engineering problems. Knowl.-Based Syst. 2024, 295, 111725.
38. Zhang, X.; Lin, Q. Three-learning strategy particle swarm algorithm for global optimization problems. Inf. Sci. 2022, 593, 289–313.
39. Askr, H.; Abdel-Salam, M.; Hassanien, A.E. Copula entropy-based golden jackal optimization algorithm for high-dimensional feature selection problems. Expert Syst. Appl. 2024, 238, 121582.
40. Gong, C.; Zhou, N.; Xia, S.; Huang, S. Quantum particle swarm optimization algorithm based on diversity migration strategy. Future Gener. Comput. Syst. 2024, 157, 445–458.
41. Bakır, H. Dynamic fitness-distance balance-based artificial rabbits optimization algorithm to solve optimal power flow problem. Expert Syst. Appl. 2024, 240, 122460.
42. Wang, B.; Zhang, Z.; Siarry, P.; Liu, X.; Królczyk, G.; Hua, D.; Li, Z. A nonlinear African vulture optimization algorithm combining Henon chaotic mapping theory and reverse learning competition strategy. Expert Syst. Appl. 2024, 236, 121413.
43. Zhu, F.; Li, G.; Tang, H.; Li, Y.; Lv, X.; Wang, X. Dung beetle optimization algorithm based on quantum computing and multi-strategy fusion for solving engineering problems. Expert Syst. Appl. 2024, 236, 121219.
44. Cheng, J.; De Waele, W. Weighted average algorithm: A novel meta-heuristic optimization algorithm based on the weighted average position concept. Knowl.-Based Syst. 2024, 305, 112564.
45. Wang, Y.; Li, R.; Wang, Y.; Sun, J. Military UCAV 3D Path Planning Based on Multi-strategy Developed Human Evolutionary Optimization Algorithm. IEEE Internet Things J. 2025, early access.
46. Wang, X.; Zhou, S.; Wang, Z.; Xia, X.; Duan, Y. An Improved Human Evolution Optimization Algorithm for Unmanned Aerial Vehicle 3D Trajectory Planning. Biomimetics 2025, 10, 23.
47. Beşkirli, A.; Dağ, İ.; Kiran, M.S. A tree seed algorithm with multi-strategy for parameter estimation of solar photovoltaic models. Appl. Soft Comput. 2024, 167, 112220.
48. Beşkirli, A.; Dağ, İ. I-CPA: An improved carnivorous plant algorithm for solar photovoltaic parameter identification problem. Biomimetics 2023, 8, 569.
49. Huynh-Thu, Q.; Ghanbari, M. Scope of validity of PSNR in image/video quality assessment. Electron. Lett. 2008, 44, 800–801.
50. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
51. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386.
52. Gao, H.; Zhang, Q.; Bu, X.; Zhang, H. Quadruple parameter adaptation growth optimizer with integrated distribution, confrontation, and balance features for optimization. Expert Syst. Appl. 2024, 235, 121218.
53. SeyedGarmroudi, S.; Kayakutlu, G.; Kayalica, M.O.; Çolak, Ü. Improved Pelican optimization algorithm for solving load dispatch problems. Energy 2024, 289, 129811.
54. Jia, H.; Zhou, X.; Zhang, J.; Abualigah, L.; Yildiz, A.R.; Hussien, A.G. Modified crayfish optimization algorithm for solving multiple engineering application problems. Artif. Intell. Rev. 2024, 57, 127.
55. Sallam, K.M.; Elsayed, S.M.; Chakrabortty, R.K.; Ryan, M.J. Improved multi-operator differential evolution algorithm for solving unconstrained problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation, Glasgow, UK, 19–24 July 2020; pp. 1–8.
56. Beşkirli, A.; Dağ, İ. Parameter extraction for photovoltaic models with tree seed algorithm. Energy Rep. 2023, 9, 174–185.
57. Beşkirli, A.; Dağ, İ. An efficient tree seed inspired algorithm for parameter estimation of Photovoltaic models. Energy Rep. 2022, 8, 291–298.
Figure 1. The refraction opposites-based learning strategy simulation diagram.
Figure 2. The basis attenuation term change chart.
Figure 3. The fluctuation adjustment term change chart.
Figure 4. The nonlinear control factor change chart.
Figure 5. The graph of second-order Bernstein polynomials.
Figure 6. Simulation diagram for the three-point guidance strategy based on Bernstein polynomials.
Figure 7. The CLNBHEOA flowchart.
Figure 8. Friedman rank-sum test ranking.
Figure 9. Algorithms for population diversity in CEC2017.
Figure 10. Algorithms’ balance of exploration/exploitation in CEC2017.
Figure 11. Algorithms’ mean ranking for CEC2017.
Figure 12. Algorithms’ box plots for CEC2017.
Figure 13. The algorithm’s Friedman rank values (CEC2017).
Figure 14. Convergence curves of the algorithm (CEC2017).
Figure 15. Six benchmark images.
Figure 16. Friedman rank values for fitness function values of algorithms on multi-threshold image-segmentation problems.
Figure 17. Friedman ranks for PSNR values of the algorithms for multi-threshold image-segmentation problems.
Figure 18. Friedman rank for the SSIM values of algorithms relative to multi-threshold image-segmentation problems.
Figure 19. Friedman rank for the FSIM values of algorithms in the multi-threshold image-segmentation problems.
Table 1. The CEC2017 functions test set.

| Function | Type | Name | Best |
|---|---|---|---|
| CF17_F1 | Unimodal | Shifted and Rotated Bent Cigar Function | 100 |
| CF17_F3 | Unimodal | Shifted and Rotated Zakharov Function | 300 |
| CF17_F4 | Multimodal | Shifted and Rotated Rosenbrock’s Function | 400 |
| CF17_F5 | Multimodal | Shifted and Rotated Rastrigin’s Function | 500 |
| CF17_F6 | Multimodal | Shifted and Rotated Expanded Scaffer’s F6 Function | 600 |
| CF17_F7 | Multimodal | Shifted and Rotated Lunacek Bi-Rastrigin Function | 700 |
| CF17_F8 | Multimodal | Shifted and Rotated Non-Continuous Rastrigin’s Function | 800 |
| CF17_F9 | Multimodal | Shifted and Rotated Levy Function | 900 |
| CF17_F10 | Multimodal | Shifted and Rotated Schwefel’s Function | 1000 |
| CF17_F11 | Hybrid | Hybrid function 1 (N = 3) | 1100 |
| CF17_F12 | Hybrid | Hybrid function 2 (N = 3) | 1200 |
| CF17_F13 | Hybrid | Hybrid function 3 (N = 3) | 1300 |
| CF17_F14 | Hybrid | Hybrid function 4 (N = 4) | 1400 |
| CF17_F15 | Hybrid | Hybrid function 5 (N = 4) | 1500 |
| CF17_F16 | Hybrid | Hybrid function 6 (N = 4) | 1600 |
| CF17_F17 | Hybrid | Hybrid function 7 (N = 5) | 1700 |
| CF17_F18 | Hybrid | Hybrid function 8 (N = 5) | 1800 |
| CF17_F19 | Hybrid | Hybrid function 9 (N = 5) | 1900 |
| CF17_F20 | Hybrid | Hybrid function 10 (N = 6) | 2000 |
| CF17_F21 | Composition | Composition function 1 (N = 3) | 2100 |
| CF17_F22 | Composition | Composition function 2 (N = 3) | 2200 |
| CF17_F23 | Composition | Composition function 3 (N = 4) | 2300 |
| CF17_F24 | Composition | Composition function 4 (N = 4) | 2400 |
| CF17_F25 | Composition | Composition function 5 (N = 5) | 2500 |
| CF17_F26 | Composition | Composition function 6 (N = 5) | 2600 |
| CF17_F27 | Composition | Composition function 7 (N = 6) | 2700 |
| CF17_F28 | Composition | Composition function 8 (N = 6) | 2800 |
| CF17_F29 | Composition | Composition function 9 (N = 3) | 2900 |
| CF17_F30 | Composition | Composition function 10 (N = 3) | 3000 |
Table 2. Comparison algorithms associated with CEC2017.

| Algorithm | Year | Parameters |
|---|---|---|
| Binary Enhanced Golden Jackal Optimization (BEGJO) [39] | 2024 | β = 1.5, c1 = 1.5, E1 linearly decreasing from 1.5 to 0 |
| DMQPSO [40] | 2024 | β linearly decreasing from 1 to 0.5 |
| Fitness-Distance Balance-Based Artificial Rabbits Optimization (FDBARO) [41] | 2024 | No parameters |
| HWEAVOA [42] | 2024 | p1 = 0.6, p2 = 0.4, p3 = 0.4, α = 0.8, β = 0.2, W = 2.5, a = 1.4, b = 0.3 |
| QHDBO [43] | 2024 | R = cos(π·(t/Tmax) + 1)·0.5 |
| Weighted Average Algorithm (WAA) [44] | 2024 | f = (α·rand − 1)·sin(π·it/Maxit) |
| Human Evolutionary Optimization Algorithm (HEOA) | 2024 | w = 0.2·cos((π/2)·(1 − t/Maxiter)) |
| Developed Human Evolutionary Optimization Algorithm (DHEOA) [45] | 2025 | w = 0.2·cos((π/2)·(1 − t/Maxiter)) |
| Improved Human Evolution Optimization Algorithm (IHEOA) [46] | 2025 | w = 0.2·cos((π/2)·(1 − t/Maxiter)) |
| MS-TSA [47] | 2024 | z0 between 0 and 1 |
| I-CPA [48] | 2023 | No parameters |
Table 3. The algorithms’ fitness function values (CEC2017) (Dim = 100, MaxFEs = 30,000).

| Function | Metric | BEGJO | DMQPSO | FDBARO | HWEAVOA | QHDBO | WAA | HEOA | DHEOA | IHEOA | CLNBHEOA |
|---|---|---|---|---|---|---|---|---|---|---|---|
| CF17_F1 | Mean | 5.470E+04 | 3.042E+03 | 3.238E+08 | 1.554E+03 | 2.964E+03 | 3.112E+03 | 2.226E+03 | 4.028E+04 | 7.981E+05 | 1.000E+02 |
| | Std | 2.823E+05 | 3.299E+03 | 1.432E+08 | 1.981E+03 | 3.317E+03 | 3.214E+03 | 2.134E+03 | 2.036E+05 | 2.707E+06 | 4.594E-14 |
| CF17_F3 | Mean | 3.003E+02 | 3.037E+02 | 2.684E+03 | 3.062E+02 | 3.027E+02 | 3.000E+02 | 3.000E+02 | 3.000E+02 | 3.116E+02 | 3.000E+02 |
| | Std | 5.481E-01 | 1.317E+01 | 2.348E+03 | 1.516E+01 | 6.232E+00 | 1.598E-09 | 4.088E-10 | 4.104E-02 | 4.514E+01 | 4.352E-14 |
| CF17_F4 | Mean | 4.126E+02 | 4.067E+02 | 4.336E+02 | 4.024E+02 | 4.072E+02 | 4.078E+02 | 4.034E+02 | 4.095E+02 | 4.309E+02 | 4.000E+02 |
| | Std | 1.754E+01 | 1.198E+01 | 2.297E+01 | 1.268E+00 | 1.278E+01 | 1.441E+01 | 1.423E+00 | 1.599E+01 | 4.480E+01 | 2.396E-08 |
| CF17_F5 | Mean | 5.311E+02 | 5.434E+02 | 5.635E+02 | 5.221E+02 | 5.250E+02 | 5.218E+02 | 5.125E+02 | 5.134E+02 | 5.386E+02 | 5.072E+02 |
| | Std | 1.147E+01 | 1.355E+01 | 9.144E+00 | 8.424E+00 | 9.635E+00 | 8.405E+00 | 5.047E+00 | 5.540E+00 | 1.194E+01 | 2.286E+00 |
| CF17_F6 | Mean | 6.012E+02 | 6.174E+02 | 6.126E+02 | 6.003E+02 | 6.005E+02 | 6.157E+02 | 6.006E+02 | 6.005E+02 | 6.107E+02 | 6.000E+02 |
| | Std | 2.533E+00 | 8.682E+00 | 4.898E+00 | 8.496E-01 | 7.983E-01 | 1.253E+01 | 1.459E+00 | 9.855E-01 | 7.438E+00 | 1.198E-05 |
| CF17_F7 | Mean | 7.430E+02 | 7.625E+02 | 7.796E+02 | 7.334E+02 | 7.404E+02 | 7.348E+02 | 7.240E+02 | 7.272E+02 | 7.402E+02 | 7.176E+02 |
| | Std | 1.193E+01 | 2.254E+01 | 1.380E+01 | 7.288E+00 | 1.286E+01 | 9.414E+00 | 6.465E+00 | 7.060E+00 | 1.132E+01 | 2.921E+00 |
| CF17_F8 | Mean | 8.184E+02 | 8.336E+02 | 8.434E+02 | 8.130E+02 | 8.269E+02 | 8.281E+02 | 8.106E+02 | 8.182E+02 | 8.310E+02 | 8.057E+02 |
| | Std | 4.802E+00 | 8.803E+00 | 1.669E+01 | 3.648E+00 | 1.207E+01 | 1.305E+01 | 4.177E+00 | 7.527E+00 | 1.094E+01 | 2.292E+00 |
| CF17_F9 | Mean | 9.088E+02 | 1.147E+03 | 9.435E+02 | 9.096E+02 | 9.195E+02 | 9.551E+02 | 9.020E+02 | 9.022E+02 | 9.743E+02 | 9.000E+02 |
| | Std | 2.286E+01 | 1.982E+02 | 3.116E+01 | 3.055E+01 | 4.565E+01 | 1.045E+02 | 3.980E+00 | 4.838E+00 | 1.139E+02 | 0.000E+00 |
| CF17_F10 | Mean | 1.554E+03 | 1.870E+03 | 2.581E+03 | 1.639E+03 | 1.852E+03 | 1.898E+03 | 1.986E+03 | 1.734E+03 | 1.995E+03 | 1.299E+03 |
| | Std | 2.869E+02 | 3.439E+02 | 4.125E+02 | 1.752E+02 | 2.865E+02 | 2.944E+02 | 2.801E+02 | 3.176E+02 | 3.067E+02 | 1.419E+02 |
CF17_F11Mean1.124 × 10+031.159 × 10+031.213 × 10+031.109 × 10+031.114 × 10+031.162 × 10+031.118 × 10+031.128 × 10+031.227 × 10+031.101 × 10+03
Std1.006 × 10+015.745 × 10+019.193 × 10+015.848 × 10+001.267 × 10+014.330 × 10+011.898 × 10+012.979 × 10+011.120 × 10+021.298 × 10+00
CF17_F12Mean7.460 × 10+036.561 × 10+052.192 × 10+079.042 × 10+031.829 × 10+062.340 × 10+061.667 × 10+042.972 × 10+041.908 × 10+061.321 × 10+03
Std2.941 × 10+038.157 × 10+051.941 × 10+074.121 × 10+033.708 × 10+061.941 × 10+061.596 × 10+048.597 × 10+043.230 × 10+069.833 × 10+01
CF17_F13Mean9.174 × 10+031.243 × 10+041.792 × 10+063.738 × 10+039.062 × 10+031.461 × 10+047.015 × 10+038.635 × 10+031.174 × 10+041.307 × 10+03
Std3.531 × 10+039.746 × 10+032.050 × 10+062.101 × 10+037.772 × 10+031.274 × 10+045.612 × 10+036.017 × 10+031.151 × 10+043.204 × 10+00
CF17_F14Mean1.488 × 10+032.170 × 10+032.813 × 10+051.504 × 10+031.527 × 10+031.645 × 10+031.478 × 10+031.466 × 10+031.719 × 10+031.402 × 10+03
Std3.098 × 10+019.216 × 10+025.167 × 10+052.896 × 10+016.726 × 10+013.296 × 10+022.411 × 10+016.426 × 10+014.939 × 10+021.071 × 10+00
CF17_F15Mean2.040 × 10+034.224 × 10+033.891 × 10+041.819 × 10+032.142 × 10+033.961 × 10+031.704 × 10+032.519 × 10+032.417 × 10+031.501 × 10+03
Std9.606 × 10+022.692 × 10+036.415 × 10+042.698 × 10+028.484 × 10+022.675 × 10+039.180 × 10+011.076 × 10+039.134 × 10+021.072 × 10+00
CF17_F16Mean1.748 × 10+031.763 × 10+031.870 × 10+031.679 × 10+031.742 × 10+031.755 × 10+031.623 × 10+031.709 × 10+031.754 × 10+031.613 × 10+03
Std1.495 × 10+021.221 × 10+021.415 × 10+026.130 × 10+011.053 × 10+021.119 × 10+024.034 × 10+018.813 × 10+011.212 × 10+023.112 × 10+01
CF17_F17Mean1.768 × 10+031.767 × 10+031.827 × 10+031.742 × 10+031.782 × 10+031.771 × 10+031.741 × 10+031.742 × 10+031.769 × 10+031.703 × 10+03
Std3.236 × 10+013.434 × 10+015.324 × 10+017.993 × 10+003.992 × 10+012.714 × 10+011.266 × 10+012.539 × 10+013.142 × 10+014.960 × 10+00
CF17_F18Mean5.919 × 10+031.516 × 10+048.325 × 10+062.971 × 10+031.052 × 10+042.042 × 10+048.224 × 10+031.306 × 10+041.878 × 10+041.805 × 10+03
Std5.329 × 10+031.214 × 10+041.013 × 10+071.177 × 10+038.408 × 10+031.230 × 10+045.657 × 10+031.063 × 10+041.606 × 10+047.734 × 10+00
CF17_F19Mean1.956 × 10+039.568 × 10+038.819 × 10+052.216 × 10+033.366 × 10+034.883 × 10+032.028 × 10+038.424 × 10+035.161 × 10+031.900 × 10+03
Std4.675 × 10+019.040 × 10+031.899 × 10+065.089 × 10+022.112 × 10+033.225 × 10+031.052 × 10+026.840 × 10+036.896 × 10+033.722 × 10−01
CF17_F20Mean2.046 × 10+032.133 × 10+032.185 × 10+032.032 × 10+032.061 × 10+032.114 × 10+032.022 × 10+032.092 × 10+032.097 × 10+032.001 × 10+03
Std1.831 × 10+017.504 × 10+016.224 × 10+011.590 × 10+014.496 × 10+016.861 × 10+011.060 × 10+015.782 × 10+015.656 × 10+011.151 × 10+00
CF17_F21Mean2.314 × 10+032.271 × 10+032.350 × 10+032.256 × 10+032.322 × 10+032.283 × 10+032.277 × 10+032.286 × 10+032.232 × 10+032.248 × 10+03
Std4.398 × 10+016.876 × 10+012.830 × 10+015.517 × 10+012.455 × 10+015.881 × 10+014.913 × 10+014.787 × 10+015.263 × 10+015.603 × 10+01
CF17_F22Mean2.272 × 10+032.308 × 10+032.338 × 10+032.300 × 10+032.303 × 10+032.297 × 10+032.297 × 10+032.300 × 10+032.310 × 10+032.294 × 10+03
Std4.078 × 10+014.584 × 10+001.613 × 10+011.415 × 10+002.244 × 10+002.359 × 10+011.946 × 10+011.171 × 10+017.111 × 10+002.406 × 10+01
CF17_F23Mean2.644 × 10+032.645 × 10+032.668 × 10+032.613 × 10+032.630 × 10+032.621 × 10+032.615 × 10+032.619 × 10+032.642 × 10+032.610 × 10+03
Std1.779 × 10+011.876 × 10+018.838 × 10+005.935 × 10+001.528 × 10+018.681 × 10+005.536 × 10+008.574 × 10+001.349 × 10+012.528 × 10+00
CF17_F24Mean2.737 × 10+032.765 × 10+032.800 × 10+032.718 × 10+032.758 × 10+032.741 × 10+032.704 × 10+032.744 × 10+032.671 × 10+032.644 × 10+03
Std1.101 × 10+027.559 × 10+011.272 × 10+016.500 × 10+011.164 × 10+014.663 × 10+018.007 × 10+018.141 × 10+001.157 × 10+021.196 × 10+02
CF17_F25Mean2.922 × 10+032.937 × 10+032.950 × 10+032.927 × 10+032.923 × 10+032.917 × 10+032.924 × 10+032.935 × 10+032.941 × 10+032.912 × 10+03
Std2.271 × 10+012.824 × 10+012.121 × 10+012.090 × 10+012.375 × 10+016.672 × 10+012.334 × 10+012.052 × 10+012.637 × 10+012.112 × 10+01
CF17_F26Mean2.947 × 10+033.118 × 10+033.266 × 10+032.907 × 10+033.215 × 10+032.965 × 10+032.941 × 10+033.010 × 10+033.023 × 10+032.905 × 10+03
Std2.533 × 10+023.427 × 10+024.882 × 10+026.786 × 10+013.558 × 10+022.843 × 10+028.060 × 10+012.247 × 10+021.445 × 10+021.798 × 10+01
CF17_F27Mean3.105 × 10+033.108 × 10+033.109 × 10+033.092 × 10+033.110 × 10+033.098 × 10+033.093 × 10+033.097 × 10+033.105 × 10+033.090 × 10+03
Std8.564 × 10+002.311 × 10+011.684 × 10+012.763 × 10+001.199 × 10+011.791 × 10+013.020 × 10+001.173 × 10+011.850 × 10+019.345 × 10−01
CF17_F28Mean3.459 × 10+033.308 × 10+033.381 × 10+033.209 × 10+033.328 × 10+033.273 × 10+033.267 × 10+033.337 × 10+033.300 × 10+033.230 × 10+03
Std1.364 × 10+021.528 × 10+021.103 × 10+021.320 × 10+021.181 × 10+021.474 × 10+021.362 × 10+029.853 × 10+011.018 × 10+021.509 × 10+02
CF17_F29Mean3.206 × 10+033.286 × 10+033.336 × 10+033.193 × 10+033.232 × 10+033.211 × 10+033.181 × 10+033.202 × 10+033.276 × 10+033.152 × 10+03
Std2.861 × 10+017.563 × 10+015.761 × 10+012.078 × 10+016.186 × 10+014.800 × 10+011.967 × 10+014.596 × 10+016.796 × 10+011.063 × 10+01
CF17_F30Mean9.854 × 10+043.599 × 10+052.222 × 10+069.285 × 10+043.151 × 10+054.437 × 10+053.566 × 10+052.669 × 10+056.622 × 10+053.514 × 10+03
Std2.735 × 10+054.322 × 10+053.032 × 10+062.059 × 10+054.422 × 10+056.321 × 10+054.681 × 10+054.822 × 10+057.298 × 10+058.863 × 10+01
Mean Rank | 5.14 | 7.69 | 9.72 | 3.24 | 6.07 | 6.34 | 3.21 | 5.03 | 7.31 | 1.10
Final Rank | 5 | 9 | 10 | 3 | 6 | 7 | 2 | 4 | 8 | 1
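The Mean Rank and Final Rank rows summarise Table 3 by ranking the ten algorithms on each function (rank 1 = lowest mean fitness) and averaging the per-function ranks; the Final Rank then orders those averages. A minimal sketch that ignores tied values (the function name is ours):

```python
def mean_ranks(rows):
    """rows: one list per benchmark function, holding each algorithm's
    mean fitness. Returns the average rank of every algorithm
    (rank 1 = best, i.e. lowest fitness); ties are broken by position."""
    n_alg = len(rows[0])
    totals = [0.0] * n_alg
    for row in rows:
        # indices of the algorithms sorted from best (lowest) to worst
        order = sorted(range(n_alg), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            totals[j] += rank
    return [t / len(rows) for t in totals]
```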
Table 4. The algorithms’ fitness function values on CEC2017 (Dim = 30, MaxFEs = 300,000).
Functions | Metrics | MS-TSA | I-CPA | FDBARO | HWEAVOA | QHDBO | WAA | HEOA | DHEOA | IHEOA | CLNBHEOA
CF17_F1Mean4.370 × 10+036.580 × 10+093.853 × 10+081.792 × 10+032.230 × 10+031.711 × 10+031.350 × 10+032.277 × 10+036.436 × 10+031.000 × 10+02
Std5.600 × 10+034.510 × 10+093.200 × 10+081.923 × 10+032.534 × 10+032.142 × 10+031.606 × 10+032.541 × 10+034.242 × 10+030.000 × 10+00
CF17_F3Mean8.120 × 10+035.220 × 10+042.396 × 10+033.000 × 10+023.000 × 10+023.000 × 10+023.000 × 10+023.000 × 10+023.000 × 10+023.000 × 10+02
Std3.150 × 10+038.930 × 10+032.063 × 10+031.007 × 10−136.064 × 10−143.309 × 10−108.039 × 10−141.185 × 10−132.728 × 10−130.000 × 10+00
CF17_F4Mean4.530 × 10+021.840 × 10+034.272 × 10+024.001 × 10+024.028 × 10+024.001 × 10+024.000 × 10+024.003 × 10+024.104 × 10+024.000 × 10+02
Std3.390 × 10+011.090 × 10+031.720 × 10+012.526 × 10−021.884 × 10+002.380 × 10−012.060 × 10−021.440 × 10−011.528 × 10+010.000 × 10+00
CF17_F5Mean6.240 × 10+027.280 × 10+025.538 × 10+025.147 × 10+025.224 × 10+025.159 × 10+025.100 × 10+025.136 × 10+025.267 × 10+025.066 × 10+02
Std6.570 × 10+014.900 × 10+011.029 × 10+016.363 × 10+008.643 × 10+006.316 × 10+003.606 × 10+006.456 × 10+001.244 × 10+012.283 × 10+00
CF17_F6Mean6.000 × 10+026.420 × 10+026.115 × 10+026.001 × 10+026.003 × 10+026.018 × 10+026.003 × 10+026.002 × 10+026.073 × 10+026.000 × 10+02
Std2.350 × 10−017.320 × 10+002.965 × 10+002.009 × 10−014.866 × 10−012.426 × 10+007.679 × 10−015.185 × 10−016.315 × 10+002.111 × 10−14
CF17_F7Mean8.960 × 10+021.100 × 10+037.770 × 10+027.300 × 10+027.256 × 10+027.279 × 10+027.190 × 10+027.259 × 10+027.439 × 10+027.182 × 10+02
Std5.010 × 10+018.140 × 10+011.099 × 10+017.209 × 10+009.149 × 10+008.671 × 10+004.072 × 10+006.837 × 10+001.197 × 10+013.557 × 10+00
CF17_F8Mean9.280 × 10+029.930 × 10+028.366 × 10+028.090 × 10+028.213 × 10+028.159 × 10+028.093 × 10+028.131 × 10+028.284 × 10+028.053 × 10+02
Std6.630 × 10+013.510 × 10+019.495 × 10+002.629 × 10+006.345 × 10+006.868 × 10+003.108 × 10+004.638 × 10+008.632 × 10+001.597 × 10+00
CF17_F9Mean9.010 × 10+025.470 × 10+039.486 × 10+029.002 × 10+029.020 × 10+029.011 × 10+029.015 × 10+029.017 × 10+029.436 × 10+029.000 × 10+02
Std1.140 × 10+001.600 × 10+034.633 × 10+014.684 × 10−015.830 × 10+002.234 × 10+002.265 × 10+008.098 × 10+006.363 × 10+010.000 × 10+00
CF17_F10Mean7.100 × 10+036.000 × 10+032.377 × 10+031.435 × 10+031.745 × 10+031.645 × 10+031.304 × 10+031.591 × 10+031.790 × 10+031.279 × 10+03
Std1.530 × 10+036.680 × 10+024.389 × 10+021.395 × 10+022.904 × 10+022.182 × 10+021.986 × 10+022.558 × 10+022.855 × 10+021.794 × 10+02
CF17_F11Mean1.170 × 10+033.490 × 10+031.181 × 10+031.104 × 10+031.112 × 10+031.133 × 10+031.111 × 10+031.116 × 10+031.173 × 10+031.101 × 10+03
Std4.890 × 10+011.810 × 10+035.112 × 10+013.404 × 10+008.111 × 10+002.325 × 10+011.009 × 10+018.447 × 10+006.107 × 10+011.207 × 10+00
CF17_F12Mean2.060 × 10+056.370 × 10+082.275 × 10+076.805 × 10+033.021 × 10+051.957 × 10+041.079 × 10+041.188 × 10+045.464 × 10+051.294 × 10+03
Std1.140 × 10+057.140 × 10+083.035 × 10+072.654 × 10+036.095 × 10+051.824 × 10+041.040 × 10+041.091 × 10+041.796 × 10+069.653 × 10+01
CF17_F13Mean1.480 × 10+046.370 × 10+081.155 × 10+062.416 × 10+032.522 × 10+031.052 × 10+041.949 × 10+037.999 × 10+038.345 × 10+031.307 × 10+03
Std1.480 × 10+041.320 × 10+091.439 × 10+068.483 × 10+021.673 × 10+038.300 × 10+036.000 × 10+026.887 × 10+039.312 × 10+033.127 × 10+00
CF17_F14Mean7.180 × 10+031.020 × 10+062.927 × 10+051.423 × 10+031.451 × 10+031.456 × 10+031.425 × 10+031.426 × 10+031.486 × 10+031.402 × 10+03
Std4.330 × 10+037.340 × 10+054.628 × 10+056.130 × 10+002.465 × 10+012.469 × 10+011.196 × 10+011.314 × 10+014.387 × 10+017.221 × 10−01
CF17_F15Mean6.870 × 10+037.210 × 10+066.155 × 10+041.513 × 10+031.541 × 10+031.619 × 10+031.513 × 10+031.524 × 10+031.659 × 10+031.501 × 10+03
Std7.010 × 10+031.950 × 10+079.634 × 10+046.968 × 10+003.751 × 10+015.685 × 10+011.390 × 10+011.894 × 10+019.289 × 10+017.846 × 10−01
CF17_F16Mean2.430 × 10+033.230 × 10+031.857 × 10+031.610 × 10+031.658 × 10+031.646 × 10+031.623 × 10+031.680 × 10+031.697 × 10+031.608 × 10+03
Std4.550 × 10+023.970 × 10+021.733 × 10+022.898 × 10+016.980 × 10+015.531 × 10+014.515 × 10+017.447 × 10+019.699 × 10+012.318 × 10+01
CF17_F17Mean1.950 × 10+032.310 × 10+031.826 × 10+031.732 × 10+031.752 × 10+031.747 × 10+031.733 × 10+031.726 × 10+031.749 × 10+031.703 × 10+03
Std1.750 × 10+022.160 × 10+025.685 × 10+019.655 × 10+004.067 × 10+012.250 × 10+011.003 × 10+011.825 × 10+011.433 × 10+014.944 × 10+00
CF17_F18Mean5.710 × 10+054.360 × 10+065.424 × 10+062.856 × 10+031.006 × 10+041.194 × 10+044.412 × 10+031.577 × 10+041.659 × 10+041.802 × 10+03
Std5.220 × 10+055.170 × 10+064.735 × 10+061.057 × 10+031.204 × 10+049.162 × 10+032.557 × 10+031.077 × 10+041.711 × 10+044.795 × 10+00
CF17_F19Mean8.260 × 10+032.050 × 10+078.285 × 10+051.909 × 10+031.926 × 10+031.936 × 10+031.906 × 10+032.318 × 10+032.082 × 10+031.900 × 10+03
Std8.410 × 10+033.680 × 10+071.619 × 10+063.241 × 10+003.012 × 10+012.384 × 10+012.628 × 10+001.085 × 10+032.712 × 10+024.137 × 10−01
CF17_F20Mean2.230 × 10+032.650 × 10+032.162 × 10+032.016 × 10+032.041 × 10+032.044 × 10+032.021 × 10+032.037 × 10+032.078 × 10+032.000 × 10+03
Std1.500 × 10+021.590 × 10+025.858 × 10+011.087 × 10+014.659 × 10+012.867 × 10+011.132 × 10+014.000 × 10+015.295 × 10+015.700 × 10−02
CF17_F21Mean2.410 × 10+032.510 × 10+032.333 × 10+032.249 × 10+032.323 × 10+032.239 × 10+032.252 × 10+032.295 × 10+032.252 × 10+032.204 × 10+03
Std6.270 × 10+013.420 × 10+014.335 × 10+015.676 × 10+012.471 × 10+016.097 × 10+015.538 × 10+014.315 × 10+015.627 × 10+011.087 × 10+00
CF17_F22Mean2.880 × 10+035.240 × 10+032.332 × 10+032.300 × 10+032.333 × 10+032.305 × 10+032.296 × 10+032.298 × 10+032.309 × 10+032.292 × 10+03
Std1.620 × 10+032.210 × 10+037.154 × 10+003.050 × 10−011.574 × 10+022.188 × 10+002.059 × 10+011.506 × 10+017.373 × 10+002.510 × 10+01
CF17_F23Mean2.730 × 10+033.080 × 10+032.666 × 10+032.612 × 10+032.623 × 10+032.617 × 10+032.614 × 10+032.615 × 10+032.635 × 10+032.610 × 10+03
Std4.560 × 10+011.220 × 10+021.011 × 10+014.234 × 10+008.796 × 10+006.911 × 10+005.506 × 10+006.909 × 10+001.336 × 10+013.366 × 10+00
CF17_F24Mean2.980 × 10+033.220 × 10+032.789 × 10+032.706 × 10+032.749 × 10+032.742 × 10+032.732 × 10+032.744 × 10+032.557 × 10+032.635 × 10+03
Std5.370 × 10+018.790 × 10+015.087 × 10+018.217 × 10+014.814 × 10+011.041 × 10+014.421 × 10+015.488 × 10+001.090 × 10+021.205 × 10+02
CF17_F25Mean2.890 × 10+033.170 × 10+032.949 × 10+032.921 × 10+032.923 × 10+032.920 × 10+032.925 × 10+032.937 × 10+032.931 × 10+032.915 × 10+03
Std8.590 × 10+001.180 × 10+022.167 × 10+012.292 × 10+012.332 × 10+012.323 × 10+012.312 × 10+011.959 × 10+012.596 × 10+012.220 × 10+01
CF17_F26Mean4.620 × 10+037.520 × 10+033.180 × 10+032.875 × 10+033.264 × 10+032.905 × 10+032.954 × 10+032.937 × 10+033.038 × 10+032.882 × 10+03
Std6.550 × 10+021.060 × 10+034.030 × 10+028.561 × 10+014.072 × 10+021.514 × 10+011.291 × 10+021.221 × 10+021.112 × 10+027.702 × 10+01
CF17_F27Mean3.220 × 10+033.590 × 10+033.106 × 10+033.091 × 10+033.106 × 10+033.091 × 10+033.095 × 10+033.097 × 10+033.097 × 10+033.090 × 10+03
Std1.360 × 10+011.430 × 10+026.123 × 10+001.784 × 10+001.077 × 10+012.094 × 10+003.625 × 10+008.231 × 10+004.143 × 10+006.596 × 10−01
CF17_F28Mean3.140 × 10+033.860 × 10+033.358 × 10+033.206 × 10+033.300 × 10+033.180 × 10+033.254 × 10+033.197 × 10+033.224 × 10+033.100 × 10+03
Std5.960 × 10+012.880 × 10+028.745 × 10+011.406 × 10+021.360 × 10+021.352 × 10+021.419 × 10+027.235 × 10+019.211 × 10+011.259 × 10−05
CF17_F29Mean3.510 × 10+034.720 × 10+033.285 × 10+033.164 × 10+033.192 × 10+033.162 × 10+033.160 × 10+033.202 × 10+033.197 × 10+033.143 × 10+03
Std1.820 × 10+024.080 × 10+026.727 × 10+011.743 × 10+013.464 × 10+011.973 × 10+011.875 × 10+014.422 × 10+013.806 × 10+018.508 × 10+00
CF17_F30Mean8.500 × 10+034.530 × 10+071.427 × 10+066.665 × 10+039.272 × 10+047.678 × 10+042.308 × 10+056.174 × 10+044.548 × 10+053.431 × 10+03
Std3.640 × 10+038.510 × 10+071.258 × 10+062.995 × 10+032.472 × 10+052.679 × 10+054.560 × 10+052.064 × 10+058.935 × 10+056.031 × 10+01
Mean Rank | 7.41 | 9.93 | 8.41 | 2.83 | 5.59 | 4.52 | 3.38 | 4.66 | 6.41 | 1.10
Final Rank | 8 | 10 | 9 | 2 | 6 | 4 | 3 | 5 | 7 | 1
Table 5. The algorithms’ Wilcoxon rank-sum test results (CEC2017).
Functions | BEGJO | DMQPSO | FDBARO | HWEAVOA | QHDBO | WAA | HEOA | DHEOA | IHEOA
CF17_F11.212 × 10−12/1.212 × 10−12/1.212 × 10−12/1.212 × 10−12/1.212 × 10−12/1.212 × 10−12/1.212 × 10−12/1.212 × 10−12/1.158 × 10−12/
CF17_F31.212 × 10−12/1.212 × 10−12/1.212 × 10−12/1.212 × 10−12/1.212 × 10−12/1.212 × 10−12/1.271 × 10−05/1.212 × 10−12/1.212 × 10−12/
CF17_F42.705 × 10−11/2.705 × 10−11/2.705 × 10−11/2.705 × 10−11/2.705 × 10−11/2.705 × 10−11/2.705 × 10−11/2.705 × 10−11/2.688 × 10−11/
CF17_F53.897 × 10−11/2.885 × 10−11/2.885 × 10−11/1.047 × 10−10/7.070 × 10−11/6.406 × 10−11/2.813 × 10−06/2.653 × 10−06/2.885 × 10−11/
CF17_F62.559 × 10−11/2.559 × 10−11/2.559 × 10−11/3.132 × 10−11/2.559 × 10−11/2.559 × 10−11/3.829 × 10−11/2.559 × 10−11/2.559 × 10−11/
CF17_F73.338 × 10−11/3.020 × 10−11/3.020 × 10−11/5.494 × 10−11/4.077 × 10−11/4.200 × 10−10/1.996 × 10−05/7.088 × 10−08/5.573 × 10−10/
CF17_F82.894 × 10−11/2.894 × 10−11/2.894 × 10−11/4.445 × 10−10/1.551 × 10−10/2.894 × 10−11/4.334 × 10−07/1.531 × 10−10/2.894 × 10−11/
CF17_F91.212 × 10−12/1.212 × 10−12/1.212 × 10−12/1.212 × 10−12/1.212 × 10−12/1.212 × 10−12/1.641 × 10−11/1.212 × 10−12/4.574 × 10−12/
CF17_F103.770 × 10−04/1.070 × 10−09/3.020 × 10−11/3.197 × 10−09/7.119 × 10−09/8.993 × 10−11/8.101 × 10−10/3.804 × 10−07/4.778 × 10−09/
CF17_F112.644 × 10−11/2.389 × 10−11/2.389 × 10−11/1.067 × 10−10/5.340 × 10−11/2.389 × 10−11/3.956 × 10−11/2.925 × 10−11/2.389 × 10−11/
CF17_F123.020 × 10−11/3.020 × 10−11/3.020 × 10−11/3.020 × 10−11/3.020 × 10−11/3.020 × 10−11/3.020 × 10−11/3.020 × 10−11/3.020 × 10−11/
CF17_F133.018 × 10−11/3.018 × 10−11/3.018 × 10−11/3.018 × 10−11/3.018 × 10−11/3.018 × 10−11/3.018 × 10−11/3.018 × 10−11/3.018 × 10−11/
CF17_F142.879 × 10−11/2.879 × 10−11/2.879 × 10−11/2.879 × 10−11/2.879 × 10−11/2.879 × 10−11/2.879 × 10−11/2.879 × 10−11/2.879 × 10−11/
CF17_F153.020 × 10−11/3.020 × 10−11/3.020 × 10−11/3.020 × 10−11/3.020 × 10−11/3.020 × 10−11/3.020 × 10−11/3.338 × 10−11/3.020 × 10−11/
CF17_F161.010 × 10−08/8.891 × 10−10/5.494 × 10−11/3.646 × 10−08/2.227 × 10−09/7.380 × 10−10/3.368 × 10−05/1.698 × 10−08/1.411 × 10−09/
CF17_F173.014 × 10−11/3.014 × 10−11/3.014 × 10−11/3.014 × 10−11/3.014 × 10−11/3.014 × 10−11/3.014 × 10−11/7.376 × 10−11/1.462 × 10−10/
CF17_F183.020 × 10−11/3.020 × 10−11/3.020 × 10−11/3.020 × 10−11/3.020 × 10−11/3.020 × 10−11/3.020 × 10−11/3.020 × 10−11/3.020 × 10−11/
CF17_F192.951 × 10−11/2.951 × 10−11/2.951 × 10−11/2.951 × 10−11/2.951 × 10−11/2.951 × 10−11/2.951 × 10−11/2.951 × 10−11/2.951 × 10−11/
CF17_F201.263 × 10−11/1.263 × 10−11/1.263 × 10−11/1.263 × 10−11/1.555 × 10−11/1.263 × 10−11/2.351 × 10−11/1.913 × 10−11/1.263 × 10−11/
CF17_F212.772 × 10−09/1.412 × 10−04/6.673 × 10−11/9.468 × 10−03/1.100 × 10−09/8.623 × 10−06/7.943 × 10−03/5.666 × 10−04/6.262 × 10−02/=
CF17_F224.282 × 10−01/=2.214 × 10−10/2.800 × 10−11/7.517 × 10−03/3.928 × 10−10/7.321 × 10−08/4.450 × 10−08/3.941 × 10−09/2.800 × 10−11/
CF17_F237.389 × 10−11/3.020 × 10−11/3.020 × 10−11/1.031 × 10−02/8.153 × 10−11/1.429 × 10−08/4.943 × 10−05/4.113 × 10−07/3.020 × 10−11/
CF17_F249.489 × 10−08/8.367 × 10−10/2.521 × 10−11/3.317 × 10−02/3.936 × 10−10/1.822 × 10−06/1.281 × 10−01/=1.618 × 10−05/3.372 × 10−03/
CF17_F251.486 × 10−04/6.238 × 10−08/4.470 × 10−08/3.118 × 10−05/1.118 × 10−03/1.605 × 10−03/1.074 × 10−04/3.673 × 10−07/5.741 × 10−08/
CF17_F265.029 × 10−02/=4.045 × 10−06/1.122 × 10−10/8.466 × 10−06/4.718 × 10−09/1.001 × 10−07/2.383 × 10−03/1.188 × 10−07/3.253 × 10−07/
CF17_F272.852 × 10−11/1.020 × 10−09/2.852 × 10−11/9.378 × 10−03/2.852 × 10−11/1.421 × 10−07/5.309 × 10−07/6.283 × 10−08/3.154 × 10−11/
CF17_F282.527 × 10−10/4.392 × 10−04/3.343 × 10−06/5.052 × 10−02/=6.388 × 10−04/2.820 × 10−03/2.153 × 10−02/2.436 × 10−04/4.426 × 10−03/
CF17_F297.389 × 10−11/4.077 × 10−11/3.020 × 10−11/5.072 × 10−10/3.497 × 10−09/4.311 × 10−08/3.646 × 10−08/5.092 × 10−08/8.993 × 10−11/
CF17_F303.020 × 10−11/3.020 × 10−11/3.020 × 10−11/3.020 × 10−11/3.010 × 10−11/3.020 × 10−11/3.012 × 10−11/3.020 × 10−11/3.012 × 10−11/
+/−/= (wins/losses/ties vs. CLNBHEOA) | 0/27/2 | 0/29/0 | 0/29/0 | 0/28/1 | 0/29/0 | 0/29/0 | 0/28/1 | 0/29/0 | 0/28/1
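Each cell of Table 5 reports the p-value of a two-sided Wilcoxon rank-sum test comparing the CLNBHEOA's runs against a competitor's at the 0.05 significance level ("=" marks no significant difference). Purely as an illustration of the mechanics, the following self-contained sketch uses the large-sample normal approximation and omits tie correction:

```python
import math

def ranksum_pvalue(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation
    (no tie correction); a sketch of the test behind Table 5."""
    # tag each value with its source sample, then rank the pooled data
    combined = sorted((v, 0 if i < len(x) else 1)
                      for i, v in enumerate(list(x) + list(y)))
    rank_sum_x = sum(r + 1 for r, (_, src) in enumerate(combined) if src == 0)
    n1, n2 = len(x), len(y)
    mu = n1 * (n1 + n2 + 1) / 2.0                      # mean of the rank sum
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)  # its standard deviation
    z = (rank_sum_x - mu) / sigma
    # two-sided p-value from the standard normal CDF
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
```

Two clearly separated samples give a p-value far below 0.05, while interleaved samples do not; the published results instead use the exact test over 30 independent runs.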
Table 6. Comparison of algorithms in multi-threshold image-segmentation problems.
Algorithm | Year | Parameters
HEOA | 2024 | w = 0.2·cos((π/2)(1 − t/Max_iter))
QAGO [52] | 2024 | No parameters
IPOA [53] | 2024 | FPK = 0.4, FGK = 0.6
MCOA [54] | 2024 | C2 = 2 − (FEs/MaxFEs)
IMODE [55] | 2020 | NPinit = 18·D, NPmin = 4, |A| = 2.6·NP, p = 0.11, H = 6
Table 7. The algorithms’ fitness function values for the multi-threshold image-segmentation problems.
Image | TH | CLNBHEOA | HEOA | QAGO | IPOA | MCOA | IMODE
 | | Mean | Std | Mean | Std | Mean | Std | Mean | Std | Mean | Std | Mean | Std
Hunter22.99 × 10+039.33 × 10−132.98 × 10+034.40 × 10+012.98 × 10+034.53 × 10−022.96 × 10+033.35 × 10+012.98 × 10+032.89 × 10+012.96 × 10+032.18 × 10+01
43.19 × 10+038.47 × 10−023.18 × 10+032.25 × 10+013.18 × 10+033.65 × 10−013.14 × 10+032.00 × 10+013.19 × 10+035.62 × 10+003.15 × 10+032.25 × 10+01
63.25 × 10+031.37 × 10−013.24 × 10+031.16 × 10+013.24 × 10+036.26 × 10−013.20 × 10+032.61 × 10+013.24 × 10+035.08 × 10+003.20 × 10+032.17 × 10+01
83.27 × 10+034.47 × 10−013.27 × 10+031.20 × 10+013.26 × 10+039.25 × 10−013.23 × 10+031.45 × 10+013.27 × 10+033.50 × 10+003.23 × 10+031.55 × 10+01
Baboon21.24 × 10+034.67 × 10−131.24 × 10+030.00 × 10+001.23 × 10+037.31 × 10−021.21 × 10+033.78 × 10+011.23 × 10+033.94 × 10+011.20 × 10+032.97 × 10+01
41.38 × 10+034.75 × 10−021.35 × 10+035.07 × 10+011.37 × 10+032.07 × 10−011.33 × 10+032.10 × 10+011.37 × 10+038.20 × 10+001.32 × 10+032.05 × 10+01
61.41 × 10+032.60 × 10−011.41 × 10+039.75 × 10+001.40 × 10+038.95 × 10−011.37 × 10+031.87 × 10+011.41 × 10+033.94 × 10+001.37 × 10+032.28 × 10+01
81.43 × 10+036.94 × 10−011.42 × 10+031.99 × 10+011.42 × 10+035.54 × 10−011.39 × 10+031.90 × 10+011.43 × 10+031.97 × 10+001.39 × 10+031.91 × 10+01
Barbara21.73 × 10+030.00 × 10+001.72 × 10+033.60 × 10+011.71 × 10+031.59 × 10−011.70 × 10+032.76 × 10+011.72 × 10+033.64 × 10+011.68 × 10+036.17 × 10+01
41.92 × 10+035.98 × 10−021.89 × 10+036.06 × 10+011.90 × 10+033.19 × 10−011.85 × 10+033.43 × 10+011.91 × 10+034.92 × 10+001.84 × 10+033.21 × 10+01
61.97 × 10+032.89 × 10−011.96 × 10+032.83 × 10+011.93 × 10+036.42 × 10−011.91 × 10+031.89 × 10+011.96 × 10+036.28 × 10+001.91 × 10+032.54 × 10+01
81.99 × 10+034.82 × 10−011.97 × 10+033.24 × 10+011.98 × 10+035.53 × 10−011.94 × 10+031.94 × 10+011.99 × 10+031.72 × 10+001.95 × 10+039.04 × 10+00
Camera25.12 × 10+039.33 × 10−135.11 × 10+031.87 × 10−125.12 × 10+035.30 × 10−025.09 × 10+033.16 × 10+015.11 × 10+031.01 × 10+015.10 × 10+031.58 × 10+01
45.24 × 10+034.33 × 10−025.21 × 10+038.48 × 10+015.22 × 10+032.02 × 10−015.21 × 10+031.51 × 10+015.23 × 10+038.25 × 10+005.21 × 10+031.55 × 10+01
65.27 × 10+031.39 × 10+005.26 × 10+031.90 × 10+015.27 × 10+031.51 × 10+005.25 × 10+037.35 × 10+005.27 × 10+032.75 × 10+005.25 × 10+039.55 × 10+00
85.30 × 10+039.68 × 10−015.03 × 10+031.18 × 10+035.28 × 10+035.94 × 10−015.27 × 10+036.27 × 10+005.29 × 10+032.92 × 10+005.27 × 10+036.27 × 10+00
Lena21.96 × 10+037.00 × 10−131.95 × 10+032.33 × 10−131.94 × 10+032.72 × 10−011.93 × 10+032.60 × 10+011.93 × 10+031.14 × 10+011.93 × 10+032.80 × 10+01
42.19 × 10+036.48 × 10−022.16 × 10+034.96 × 10+012.16 × 10+031.69 × 10−012.12 × 10+032.85 × 10+012.18 × 10+031.57 × 10+012.13 × 10+032.54 × 10+01
62.24 × 10+032.78 × 10−012.22 × 10+032.60 × 10+012.23 × 10+035.20 × 10−012.19 × 10+031.92 × 10+012.23 × 10+036.37 × 10+002.18 × 10+031.61 × 10+01
82.25 × 10+033.79 × 10−012.24 × 10+031.34 × 10+012.24 × 10+037.71 × 10−012.21 × 10+031.08 × 10+012.23 × 10+032.38 × 10+002.21 × 10+031.28 × 10+01
Woman21.48 × 10+037.00 × 10−131.47 × 10+032.48 × 10+011.47 × 10+031.84 × 10−011.44 × 10+032.52 × 10+011.47 × 10+035.35 × 10+001.44 × 10+033.17 × 10+01
41.61 × 10+034.75 × 10−021.60 × 10+031.97 × 10+011.60 × 10+031.58 × 10−011.56 × 10+032.55 × 10+011.60 × 10+038.50 × 10+001.57 × 10+031.88 × 10+01
61.66 × 10+037.36 × 10−011.65 × 10+037.45 × 10+001.63 × 10+037.27 × 10−011.61 × 10+032.51 × 10+011.65 × 10+035.04 × 10+001.61 × 10+032.19 × 10+01
81.67 × 10+034.95 × 10−011.61 × 10+031.35 × 10+011.63 × 10+037.07 × 10−011.64 × 10+031.03 × 10+011.62 × 10+033.61 × 10+001.63 × 10+032.08 × 10+01
Friedman Rank | 1.62 | 2.88 | 2.25 | 5.35 | 3.49 | 5.42
Final Rank | 1 | 3 | 2 | 5 | 4 | 6
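The fitness maximised in Table 7 is Otsu's between-class variance: the candidate thresholds split the grey-level histogram into classes, and the objective sums w_k·(μ_k − μ_T)² over the classes, where w_k and μ_k are the class probability and mean and μ_T is the global mean. A minimal histogram-based sketch (the function name is ours):

```python
def otsu_fitness(hist, thresholds):
    """Between-class variance for a grey-level histogram split at the given
    thresholds (levels below a threshold fall in the lower class).
    This is the objective the segmentation algorithms maximise."""
    total = float(sum(hist))
    probs = [h / total for h in hist]
    mean_total = sum(i * p for i, p in enumerate(probs))  # global mean level
    bounds = [0] + sorted(thresholds) + [len(hist)]
    var_between = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(probs[lo:hi])                 # class probability
        if w > 0:
            mu = sum(i * probs[i] for i in range(lo, hi)) / w  # class mean
            var_between += w * (mu - mean_total) ** 2
    return var_between
```

On a bimodal histogram, a threshold placed between the two modes scores strictly higher than one placed inside a mode, which is what drives the search toward good segmentations.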
Table 8. The algorithms’ PSNR values for the multi-threshold image-segmentation problems.
Image | TH | CLNBHEOA | HEOA | QAGO | IPOA | MCOA | IMODE
 | | Mean | Std | Mean | Std | Mean | Std | Mean | Std | Mean | Std | Mean | Std
Hunter21.79 × 10+010.00 × 10+001.78 × 10+014.35 × 10−011.79 × 10+017.95 × 10−031.72 × 10+011.31 × 10+001.79 × 10+013.17 × 10−011.76 × 10+014.10 × 10−01
42.22 × 10+013.14 × 10−022.20 × 10+016.15 × 10−012.21 × 10+015.00 × 10−022.00 × 10+012.76 × 10+002.20 × 10+012.80 × 10−012.07 × 10+017.56 × 10−01
62.51 × 10+013.04 × 10−022.46 × 10+017.25 × 10−012.50 × 10+012.37 × 10−012.06 × 10+013.65 × 10+002.45 × 10+014.44 × 10−012.26 × 10+019.21 × 10−01
82.68 × 10+019.58 × 10−022.64 × 10+017.74 × 10−012.67 × 10+011.42 × 10−012.37 × 10+011.17 × 10+002.64 × 10+012.6 × 10−012.37 × 10+019.70 × 10−01
Baboon21.53 × 10+015.47 × 10−151.52 × 10+010.00 × 10+001.52 × 10+014.07 × 10−021.47 × 10+011.30 × 10+001.52 × 10+014.47 × 10−011.49 × 10+011.22 × 10+00
42.05 × 10+014.10 × 10−021.98 × 10+011.86 × 10+002.04 × 10+019.46 × 10−021.80 × 10+013.10 × 10+002.02 × 10+017.08 × 10−011.84 × 10+011.54 × 10+00
62.37 × 10+011.78 × 10−012.35 × 10+011.11 × 10+002.34 × 10+013.54 × 10−012.00 × 10+013.18 × 10+002.35 × 10+016.14 × 10−012.12 × 10+011.74 × 10+00
82.63 × 10+014.80 × 10−012.54 × 10+012.03 × 10+002.61 × 10+012.96 × 10−012.07 × 10+014.04 × 10+002.62 × 10+013.44 × 10−012.25 × 10+011.74 × 10+00
Barbara21.64 × 10+011.88 × 10−021.63 × 10+015.67 × 10−011.63 × 10+013.65 × 10−151.54 × 10+011.86 × 10+001.62 × 10+016.63 × 10−011.56 × 10+017.99 × 10−01
41.98 × 10+012.33 × 10−011.92 × 10+011.35 × 10+001.96 × 10+019.48 × 10−021.87 × 10+011.00 × 10+001.97 × 10+013.52 × 10−021.86 × 10+011.19 × 10+00
62.24 × 10+013.20 × 10−012.21 × 10+011.07 × 10+002.23 × 10+011.84 × 10−011.99 × 10+011.84 × 10+002.23 × 10+011.24 × 10−012.06 × 10+011.05 × 10+00
82.43 × 10+012.23 × 10−012.33 × 10+011.73 × 10+002.41 × 10+011.45 × 10−012.19 × 10+011.55 × 10+002.42 × 10+012.17 × 10−012.24 × 10+011.27 × 10+00
Camera21.54 × 10+011.08 × 10+001.49 × 10+010.00 × 10+001.50 × 10+013.67 × 10−021.50 × 10+010.00 × 10+001.50 × 10+015.03 × 10−011.49 × 10+017.56 × 10−01
42.05 × 10+016.69 × 10−022.00 × 10+011.28 × 10+002.01 × 10+018.26 × 10−021.83 × 10+012.48 × 10+002.02 × 10+017.56 × 10−011.95 × 10+011.10 × 10+00
62.34 × 10+012.33 × 10−012.21 × 10+011.86 × 10+002.31 × 10+013.52 × 10−012.10 × 10+011.95 × 10+002.27 × 10+014.35 × 10−012.10 × 10+011.09 × 10+00
82.57 × 10+013.57 × 10−012.47 × 10+011.55 × 10+002.54 × 10+014.20 × 10−012.20 × 10+011.75 × 10+002.50 × 10+016.05 × 10−012.27 × 10+011.00 × 10+00
Lena21.54 × 10+011.82 × 10−151.53 × 10+011.82 × 10−151.53 × 10+012.61 × 10−021.41 × 10+011.59 × 10+001.52 × 10+014.67 × 10−011.44 × 10+018.25 × 10−01
41.86 × 10+012.55 × 10−021.81 × 10+011.11 × 10+001.87 × 10+012.98 × 10−021.79 × 10+011.13 × 10+001.85 × 10+013.42 × 10−011.78 × 10+018.79 × 10−01
62.08 × 10+011.70 × 10−012.05 × 10+018.75 × 10−012.07 × 10+019.41 × 10−021.97 × 10+011.68 × 10+002.07 × 10+016.10 × 10−012.01 × 10+011.22 × 10+00
82.33 × 10+015.92 × 10−012.28 × 10+011.62 × 10+002.27 × 10+018.34 × 10−011.99 × 10+013.11 × 10+002.30 × 10+018.21 × 10−012.17 × 10+011.11 × 10+00
Woman21.47 × 10+018.75 × 10−011.45 × 10+013.53 × 10−011.46 × 10+014.33 × 10−021.45 × 10+015.49 × 10−011.45 × 10+011.67 × 10−011.46 × 10+010.00 × 10+00
42.16 × 10+017.27 × 10−022.11 × 10+019.23 × 10−012.16 × 10+013.69 × 10−021.86 × 10+011.83 × 10+002.14 × 10+015.02 × 10−011.96 × 10+011.56 × 10+00
62.39 × 10+011.15 × 10−012.33 × 10+017.01 × 10−012.37 × 10+013.31 × 10−012.05 × 10+012.40 × 10+002.40 × 10+017.79 × 10−012.10 × 10+011.61 × 10+00
82.71 × 10+011.15 × 10−012.64 × 10+011.34 × 10+002.70 × 10+011.31 × 10−012.14 × 10+013.48 × 10+002.66 × 10+015.06 × 10−012.31 × 10+011.68 × 10+00
Friedman Rank | 2.21 | 3.22 | 2.92 | 4.89 | 3.01 | 4.75
Final Rank | 1 | 4 | 2 | 6 | 3 | 5
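The PSNR values in Table 8 follow the standard definition PSNR = 10·log10(255² / MSE), where the MSE is computed between the original image and its segmented version; higher values mean the thresholded image preserves more of the original. A minimal sketch over flat lists of pixel intensities:

```python
import math

def psnr(original, segmented, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-sized grey images,
    given as flat sequences of pixel intensities (higher = closer match)."""
    mse = sum((a - b) ** 2 for a, b in zip(original, segmented)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```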
Table 9. The algorithms’ SSIM values for the multi-threshold image-segmentation problems.
Image | TH | CLNBHEOA | HEOA | QAGO | IPOA | MCOA | IMODE
 | | Mean | Std | Mean | Std | Mean | Std | Mean | Std | Mean | Std | Mean | Std
Hunter27.89 × 10−015.37 × 10−037.61 × 10−014.39 × 10−037.61 × 10−015.64 × 10−037.58 × 10−015.86 × 10−037.60 × 10−016.18 × 10−037.63 × 10−015.09 × 10−03
48.31 × 10−016.04 × 10−038.05 × 10−012.18 × 10−038.03 × 10−012.21 × 10−038.06 × 10−012.24 × 10−038.03 × 10−012.19 × 10−038.04 × 10−012.34 × 10−03
68.44 × 10−012.51 × 10−038.26 × 10−013.09 × 10−038.25 × 10−012.52 × 10−038.26 × 10−013.33 × 10−038.26 × 10−012.75 × 10−038.25 × 10−012.77 × 10−03
88.59 × 10−016.34 × 10−038.38 × 10−016.09 × 10−038.38 × 10−015.98 × 10−038.39 × 10−016.33 × 10−038.39 × 10−016.39 × 10−038.40 × 10−015.07 × 10−03
Baboon27.83 × 10−018.90 × 10−037.55 × 10−012.61 × 10−037.55 × 10−012.71 × 10−037.55 × 10−012.86 × 10−037.55 × 10−012.93 × 10−037.55 × 10−012.82 × 10−03
48.22 × 10−015.58 × 10−038.05 × 10−013.13 × 10−038.04 × 10−012.67 × 10−038.06 × 10−012.87 × 10−038.05 × 10−012.70 × 10−038.05 × 10−013.18 × 10−03
68.74 × 10−012.20 × 10−038.65 × 10−013.03 × 10−028.69 × 10−015.15 × 10−037.45 × 10−011.32 × 10−018.65 × 10−011.31 × 10−027.99 × 10−014.76 × 10−02
89.16 × 10−015.31 × 10−038.91 × 10−014.99 × 10−029.15 × 10−012.85 × 10−037.48 × 10−011.69 × 10−019.13 × 10−015.23 × 10−038.31 × 10−014.12 × 10−02
Barbara27.81 × 10−015.76 × 10−037.55 × 10−013.64 × 10−037.54 × 10−012.84 × 10−037.56 × 10−013.18 × 10−037.55 × 10−012.79 × 10−037.55 × 10−013.04 × 10−03
Values are reported as Mean ± Std.

| Image | TH | CLNBHEOA | HEOA | QAGO | IPOA | MCOA | IMODE |
|---|---|---|---|---|---|---|---|
| Barbara | 4 | 8.22e-01 ± 5.97e-03 | 8.05e-01 ± 2.95e-03 | 8.05e-01 ± 3.18e-03 | 8.06e-01 ± 3.31e-03 | 8.04e-01 ± 2.48e-03 | 8.05e-01 ± 3.53e-03 |
| | 6 | 8.40e-01 ± 5.05e-03 | 8.25e-01 ± 2.79e-03 | 8.26e-01 ± 2.77e-03 | 8.19e-01 ± 3.62e-02 | 8.25e-01 ± 3.23e-03 | 8.25e-01 ± 3.26e-03 |
| | 8 | 8.60e-01 ± 5.42e-03 | 8.42e-01 ± 6.39e-03 | 8.40e-01 ± 6.26e-03 | 8.39e-01 ± 5.76e-03 | 8.40e-01 ± 6.53e-03 | 8.42e-01 ± 5.76e-03 |
| Camera | 2 | 7.95e-01 ± 3.02e-03 | 7.61e-01 ± 5.03e-03 | 7.59e-01 ± 6.45e-03 | 7.59e-01 ± 5.60e-03 | 7.60e-01 ± 5.72e-03 | 7.59e-01 ± 5.97e-03 |
| | 4 | 8.30e-01 ± 5.98e-03 | 8.11e-01 ± 6.26e-03 | 8.08e-01 ± 6.20e-03 | 8.09e-01 ± 5.62e-03 | 8.08e-01 ± 5.98e-03 | 8.10e-01 ± 5.83e-03 |
| | 6 | 8.46e-01 ± 2.36e-03 | 8.25e-01 ± 2.74e-03 | 8.25e-01 ± 3.19e-03 | 8.25e-01 ± 2.92e-03 | 8.25e-01 ± 3.04e-03 | 8.25e-01 ± 3.08e-03 |
| | 8 | 8.59e-01 ± 6.04e-03 | 8.35e-01 ± 2.70e-03 | 8.34e-01 ± 2.73e-03 | 8.35e-01 ± 2.93e-03 | 8.35e-01 ± 2.71e-03 | 8.35e-01 ± 2.48e-03 |
| Lena | 2 | 7.94e-01 ± 3.23e-03 | 7.90e-01 ± 6.11e-03 | 7.92e-01 ± 5.92e-03 | 7.78e-01 ± 5.68e-02 | 7.91e-01 ± 6.12e-03 | 7.89e-01 ± 6.46e-03 |
| | 4 | 8.30e-01 ± 7.12e-03 | 8.07e-01 ± 6.06e-03 | 8.10e-01 ± 6.71e-03 | 8.10e-01 ± 5.71e-03 | 8.08e-01 ± 5.90e-03 | 8.13e-01 ± 5.63e-03 |
| | 6 | 8.44e-01 ± 2.72e-03 | 8.25e-01 ± 2.48e-03 | 8.26e-01 ± 2.98e-03 | 8.25e-01 ± 2.87e-03 | 8.24e-01 ± 2.57e-03 | 8.25e-01 ± 3.15e-03 |
| | 8 | 8.63e-01 ± 6.11e-03 | 8.41e-01 ± 5.48e-03 | 8.41e-01 ± 5.62e-03 | 8.39e-01 ± 6.16e-03 | 8.41e-01 ± 6.57e-03 | 8.40e-01 ± 6.39e-03 |
| Woman | 2 | 7.95e-01 ± 2.86e-03 | 7.64e-01 ± 8.70e-03 | 7.66e-01 ± 9.26e-03 | 7.66e-01 ± 8.17e-03 | 7.65e-01 ± 8.74e-03 | 7.63e-01 ± 9.42e-03 |
| | 4 | 8.31e-01 ± 4.93e-03 | 8.09e-01 ± 6.62e-03 | 8.11e-01 ± 6.13e-03 | 8.11e-01 ± 6.98e-03 | 8.10e-01 ± 5.87e-03 | 8.11e-01 ± 6.84e-03 |
| | 6 | 8.45e-01 ± 2.78e-03 | 8.24e-01 ± 2.97e-03 | 8.25e-01 ± 2.98e-03 | 8.24e-01 ± 3.17e-03 | 8.25e-01 ± 2.60e-03 | 8.26e-01 ± 2.96e-03 |
| | 8 | 8.93e-01 ± 3.90e-03 | 8.65e-01 ± 3.00e-03 | 8.64e-01 ± 2.87e-03 | 8.66e-01 ± 3.09e-03 | 8.65e-01 ± 3.22e-03 | 8.66e-01 ± 3.31e-03 |
| Friedman Rank | | 1.15 | 3.88 | 3.81 | 4.06 | 3.98 | 4.13 |
| Final Rank | | 1 | 3 | 2 | 5 | 4 | 6 |
Table 10. Algorithms' FSIM values for the multi-threshold image-segmentation problems.
Values are reported as Mean ± Std.

| Image | TH | CLNBHEOA | HEOA | QAGO | IPOA | MCOA | IMODE |
|---|---|---|---|---|---|---|---|
| Hunter | 2 | 7.96e-01 ± 3.04e-03 | 7.67e-01 ± 1.45e-02 | 7.68e-01 ± 1.35e-02 | 7.69e-01 ± 1.32e-02 | 7.67e-01 ± 1.35e-02 | 7.67e-01 ± 1.76e-02 |
| | 4 | 8.54e-01 ± 2.28e-04 | 8.46e-01 ± 1.98e-02 | 8.52e-01 ± 1.71e-03 | 7.89e-01 ± 8.94e-02 | 8.49e-01 ± 6.68e-03 | 8.13e-01 ± 1.76e-02 |
| | 6 | 9.09e-01 ± 3.70e-04 | 9.00e-01 ± 1.69e-02 | 9.07e-01 ± 1.56e-03 | 8.06e-01 ± 9.06e-02 | 9.01e-01 ± 6.85e-03 | 8.54e-01 ± 2.19e-02 |
| | 8 | 9.39e-01 ± 4.39e-04 | 9.34e-01 ± 1.67e-02 | 9.38e-01 ± 1.69e-03 | 8.79e-01 ± 2.76e-02 | 9.35e-01 ± 5.54e-03 | 8.86e-01 ± 1.75e-02 |
| Baboon | 2 | 7.95e-01 ± 2.61e-03 | 7.64e-01 ± 1.47e-02 | 7.67e-01 ± 1.63e-02 | 7.68e-01 ± 1.32e-02 | 7.68e-01 ± 1.58e-02 | 7.61e-01 ± 1.68e-02 |
| | 4 | 8.43e-01 ± 8.85e-04 | 8.16e-01 ± 6.17e-02 | 8.42e-01 ± 1.75e-03 | 7.69e-01 ± 1.11e-01 | 8.35e-01 ± 1.82e-02 | 7.82e-01 ± 4.18e-02 |
| | 6 | 8.99e-01 ± 2.85e-03 | 8.93e-01 ± 2.38e-02 | 8.96e-01 ± 3.64e-03 | 8.08e-01 ± 1.01e-01 | 8.95e-01 ± 1.26e-02 | 8.51e-01 ± 3.48e-02 |
| | 8 | 9.34e-01 ± 6.87e-03 | 9.17e-01 ± 3.76e-02 | 9.34e-01 ± 4.81e-03 | 8.09e-01 ± 1.29e-01 | 9.34e-01 ± 5.48e-03 | 8.73e-01 ± 3.66e-02 |
| Barbara | 2 | 7.85e-01 ± 2.13e-03 | 7.61e-01 ± 7.18e-03 | 7.66e-01 ± 8.16e-03 | 7.64e-01 ± 8.73e-03 | 7.65e-01 ± 1.48e-02 | 7.66e-01 ± 8.71e-03 |
| | 4 | 8.07e-01 ± 6.96e-04 | 7.89e-01 ± 3.32e-02 | 8.07e-01 ± 6.93e-04 | 7.74e-01 ± 2.36e-02 | 8.06e-01 ± 2.64e-03 | 7.68e-01 ± 2.75e-02 |
| | 6 | 8.63e-01 ± 1.41e-03 | 8.56e-01 ± 2.38e-02 | 8.62e-01 ± 1.11e-03 | 7.85e-01 ± 4.40e-02 | 8.58e-01 ± 7.46e-03 | 8.05e-01 ± 1.97e-02 |
| | 8 | 8.93e-01 ± 7.43e-04 | 8.67e-01 ± 3.89e-02 | 8.92e-01 ± 1.08e-03 | 8.28e-01 ± 3.12e-02 | 8.92e-01 ± 2.84e-03 | 8.47e-01 ± 1.49e-02 |
| Camera | 2 | 7.85e-01 ± 3.06e-03 | 7.65e-01 ± 8.23e-03 | 7.69e-01 ± 8.25e-03 | 7.64e-01 ± 7.83e-03 | 7.64e-01 ± 9.72e-03 | 7.60e-01 ± 8.44e-03 |
| | 4 | 8.28e-01 ± 1.20e-03 | 8.17e-01 ± 2.81e-02 | 8.27e-01 ± 1.09e-03 | 7.93e-01 ± 3.17e-02 | 8.18e-01 ± 2.07e-02 | 7.93e-01 ± 2.08e-02 |
| | 6 | 8.74e-01 ± 1.33e-02 | 8.46e-01 ± 3.89e-02 | 8.73e-01 ± 7.82e-03 | 8.33e-01 ± 4.48e-02 | 8.66e-01 ± 1.53e-02 | 8.31e-01 ± 2.48e-02 |
| | 8 | 9.13e-01 ± 3.20e-03 | 9.01e-01 ± 2.20e-02 | 9.12e-01 ± 3.48e-03 | 8.57e-01 ± 2.12e-02 | 9.03e-01 ± 1.02e-02 | 8.74e-01 ± 1.69e-02 |
| Lena | 2 | 7.91e-01 ± 5.97e-03 | 7.64e-01 ± 8.86e-03 | 7.60e-01 ± 7.93e-03 | 7.65e-01 ± 7.98e-03 | 7.64e-01 ± 8.87e-03 | 7.65e-01 ± 8.40e-03 |
| | 4 | 8.28e-01 ± 5.54e-03 | 8.13e-01 ± 5.33e-03 | 8.10e-01 ± 6.83e-03 | 8.08e-01 ± 5.34e-03 | 8.08e-01 ± 5.53e-03 | 8.10e-01 ± 4.89e-03 |
| | 6 | 8.53e-01 ± 8.15e-04 | 8.42e-01 ± 2.33e-02 | 8.52e-01 ± 8.29e-04 | 7.99e-01 ± 3.39e-02 | 8.44e-01 ± 1.05e-02 | 8.02e-01 ± 2.02e-02 |
| | 8 | 8.71e-01 ± 2.33e-03 | 8.63e-01 ± 2.08e-02 | 8.70e-01 ± 1.50e-03 | 8.02e-01 ± 5.88e-02 | 8.67e-01 ± 3.98e-03 | 8.34e-01 ± 1.62e-02 |
| Woman | 2 | 7.94e-01 ± 2.56e-03 | 7.75e-01 ± 2.92e-03 | 7.74e-01 ± 2.58e-03 | 7.75e-01 ± 2.72e-03 | 7.76e-01 ± 2.66e-03 | 7.74e-01 ± 2.57e-03 |
| | 4 | 8.16e-01 ± 7.67e-04 | 8.05e-01 ± 2.28e-02 | 8.15e-01 ± 9.54e-04 | 7.50e-01 ± 2.51e-02 | 8.08e-01 ± 1.30e-02 | 7.86e-01 ± 1.79e-02 |
| | 6 | 8.84e-01 ± 1.90e-03 | 8.81e-01 ± 1.70e-02 | 8.83e-01 ± 2.89e-03 | 7.86e-01 ± 5.95e-02 | 8.70e-01 ± 1.23e-02 | 8.00e-01 ± 3.38e-02 |
| | 8 | 9.17e-01 ± 1.86e-03 | 9.05e-01 ± 3.04e-02 | 9.16e-01 ± 1.76e-03 | 7.99e-01 ± 7.89e-02 | 9.07e-01 ± 9.87e-03 | 8.37e-01 ± 3.21e-02 |
| Friedman Rank | | 1.76 | 3.05 | 2.93 | 4.88 | 3.43 | 4.94 |
| Final Rank | | 1 | 3 | 2 | 5 | 4 | 6 |
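The Friedman Rank rows summarize how each algorithm ranks, on average, across all image/threshold test cases, with rank 1 being the best. As an illustrative sketch only (not the authors' code), average Friedman ranks for a higher-is-better metric such as FSIM can be computed as follows; ties are assigned averaged ranks, though the authors' exact tie-handling may differ:

```python
import numpy as np
from scipy.stats import rankdata  # assigns averaged ranks to ties


def friedman_ranks(scores):
    """Average Friedman rank per algorithm.

    scores: (n_cases, n_algorithms) array of quality values where
    higher is better (e.g., per-case FSIM means); rank 1 = best.
    """
    scores = np.asarray(scores, dtype=float)
    # rankdata ranks ascending, so negate to give the best score rank 1
    per_case_ranks = np.apply_along_axis(rankdata, 1, -scores)
    return per_case_ranks.mean(axis=0)
```

For instance, `friedman_ranks([[0.9, 0.8, 0.7], [0.95, 0.7, 0.8]])` yields `[1.0, 2.5, 2.5]`: the first algorithm wins both cases, and the other two split ranks 2 and 3.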
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Xiang, L.; Zhao, X.; Wang, J.; Wang, B. An Enhanced Human Evolutionary Optimization Algorithm for Global Optimization and Multi-Threshold Image Segmentation. Biomimetics 2025, 10, 282. https://doi.org/10.3390/biomimetics10050282

