Article

Adaptive Nonlinear Bernstein-Guided Parrot Optimizer for Mural Image Segmentation

1 College of Design, Hanyang University, Ansan 15588, Republic of Korea
2 School of Physics and Electronic Engineering, Xinjiang Normal University, Urumqi 830054, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(8), 482; https://doi.org/10.3390/biomimetics10080482
Submission received: 6 June 2025 / Revised: 15 July 2025 / Accepted: 20 July 2025 / Published: 22 July 2025
(This article belongs to the Special Issue Nature-Inspired Metaheuristic Optimization Algorithms 2025)

Abstract

During long-term preservation, the degradation of mural image information poses significant challenges to the restoration and conservation of world cultural heritage. Mural conservation scholars therefore increasingly rely on image segmentation techniques for mural restoration and protection; however, existing segmentation methods often deliver suboptimal segmentation quality. To improve mural image segmentation, this study proposes an efficient segmentation method termed the Adaptive Nonlinear Bernstein-guided Parrot Optimizer (ANBPO), which integrates an adaptive learning strategy, a nonlinear factor, and a third-order Bernstein-guided strategy into the Parrot Optimizer (PO). First, to address PO's limited global exploration capability, the adaptive learning strategy is introduced. By considering individual information disparities and learning behaviors, this strategy effectively enhances the algorithm's global exploration, enabling a thorough search of the solution space. Second, to mitigate the imbalance between PO's global exploration and local exploitation phases, the nonlinear factor is proposed. Leveraging its adaptability and nonlinear curve characteristics, this factor improves the algorithm's ability to escape locally optimal segmentation thresholds. Finally, to overcome PO's inadequate local exploitation capability, the third-order Bernstein-guided strategy is introduced. By incorporating the weighted properties of third-order Bernstein polynomials, this strategy comprehensively evaluates individuals with diverse characteristics, thereby enhancing the precision of mural image segmentation. ANBPO was applied to segment twelve mural images. The results demonstrate that, compared to the competing algorithms, ANBPO achieves a 91.6% win rate in fitness function values, and its Friedman rankings for the PSNR, SSIM, and FSIM metrics improve on those of the second-ranked competitor by 67.6%, 69.4%, and 69.7%, respectively. These results confirm that ANBPO can effectively segment mural images while preserving the original feature information, and it can therefore be regarded as an efficient mural image segmentation algorithm.

1. Introduction

Murals, as a vital component of world cultural heritage, embody profound cultural and historical value [1]. However, during prolonged preservation, the information within mural images degrades, posing significant challenges to the restoration and conservation of world cultural heritage [2]. Scholars in mural conservation are therefore dedicated to restoring and preserving murals through a wide range of scientific approaches, aiming to safeguard their historical, artistic, and scientific significance [3]. Within the restoration process of mural images, locating degraded details is of critical importance [4]. For instance, Hou et al. utilized hyperspectral imaging to extract hidden information from murals, thereby enhancing the visual value of ancient mural patterns [5]. Wu et al. integrated a generative adversarial network for the digital restoration of murals, contributing to the advancement of mural restoration techniques [6]. Xiao et al. proposed a multi-level residual network for restoring edges and contours in ancient murals, with the core idea of leveraging image segmentation across different frequency domains [7]. Zhou et al. combined a progressive context refinement network to restore texture and structural features in ancient mural images [8]. Yu et al. introduced a mural extraction method based on image enhancement and edge detection to capture edge information from murals, ensuring high-quality image restoration [9]. These existing studies primarily focus on restoring mural images. However, owing to insufficient data support, such neural-network-based methods face inherent limitations; in this context, segmenting mural images also serves as a means of extracting critical feature information. Image segmentation is a key technique for identifying image details: it partitions an image into multiple regions based on its feature information, thereby facilitating the recognition of detailed image elements [10]. Among current methods, metaheuristic-based image segmentation approaches have garnered substantial attention due to their simple structure, low computational complexity, and broad applicability [11].
Metaheuristic algorithms represent a class of algorithms characterized by their simplicity and computational efficiency, primarily developed by simulating biological behaviors and natural phenomena observed in the real world. Common categorizations divide metaheuristic algorithms into four primary types: evolution-based, swarm-based, physics/chemistry-inspired, and human-based [12]. Evolution-based metaheuristic algorithms include Differential Evolution (DE) [13], Genetic Algorithm (GA) [14], and Biogeography-Based Optimization (BBO) [15]. Swarm-based metaheuristic algorithms include Particle Swarm Optimization (PSO) [16], Slime Mold Algorithm (SMA) [17], and Whale Optimization Algorithm (WOA) [18]. Physics/chemistry-inspired metaheuristic algorithms include Multi-verse Optimizer (MVO) [19], Atom Search Optimization (ASO) [20], and Water Evaporation Optimization (WEO) [21]. Human-based metaheuristic algorithms include Teaching–Learning-Based Optimization (TLBO) [22], Poor and Rich Optimization (PRO) [23], and Search And Rescue Optimization (SAR) [24]. Given their computational efficiency and ease of implementation, researchers have proposed numerous metaheuristic-based image segmentation algorithms to enhance the quality of image segmentation.
In recent years, researchers have introduced a variety of metaheuristic-based image segmentation algorithms to address challenges in medical and complex image analysis. For instance, Houssein et al. proposed the Snake Optimization algorithm with Opposition-Based Learning (SO-OBL) to improve CT scan segmentation for liver disease diagnostics. Experimental results revealed that SO-OBL excels in global optimization and multi-level segmentation, outperforming competing metaheuristic algorithms across FSIM, SSIM, and PSNR metrics, thereby validating its efficiency in computer-aided diagnostic systems [25]. Wang et al. proposed an enhanced Northern Goshawk Optimization Algorithm combining three learning strategies to solve the mural image segmentation problem, achieving excellent results on eight standard images [26]. Qiao et al. developed a hybrid Arithmetic Optimization Algorithm and Harris Hawks Optimizer (AOA-HHO) for multi-level threshold segmentation. By evaluating seven threshold levels, AOA-HHO achieved superior performance in segmentation accuracy, fitness function values, PSNR, and SSIM, outperforming baseline algorithms [27]. Yuan et al. proposed the Artemisinin Optimization (AO) Algorithm to tackle medical image segmentation challenges. By balancing global exploration and local exploitation, AO demonstrated strong optimization performance across six-threshold segmentations of 15 breast cancer pathology images, surpassing eight high-performing algorithms in accuracy, feature similarity, PSNR, and SSIM [28]. Chen et al. addressed segmentation inefficiencies with the Poplar Optimization Algorithm (POA), which enhances population diversity to improve optimization. Experiments on six standard images confirmed POA’s superior multi-threshold segmentation performance [29]. Wang et al. mitigated the Whale Optimization Algorithm’s (WOA) limitations such as weak local search and premature convergence by introducing WOA with Crossover and Removal of Similarity (CRWOA). In multi-level thresholding of 10 grayscale images, CRWOA outperformed five comparison algorithms in convergence speed and segmentation quality [30]. Arunita et al. combined the Lévy-Cauchy Arithmetic Optimization Algorithm (LCAOA) with Rough K-Means (RKM) to balance exploration and exploitation via Lévy flight and Cauchy distribution, while opposition-based learning enhanced efficiency. Experiments on diverse image types (e.g., color, oral pathology, and leaf images) showed high feature similarity and accuracy [31]. Wang et al. proposed a multi-threshold segmentation method for breast cancer images using an improved Dandelion Optimization Algorithm. By integrating opposition-based learning, the method achieved more precise lesion segmentation and superior performance metrics, validating its effectiveness in handling complex cellular structures [32].
Previous studies have validated the effectiveness of metaheuristic-based image segmentation methods and demonstrated that integrating learning strategies enhances their segmentation performance. However, existing metaheuristic approaches exhibit limitations when addressing domain-specific segmentation challenges, such as mural image segmentation, where they often converge to suboptimal threshold combinations, resulting in poor retention of feature information and degraded segmentation quality. Motivated by these challenges, there is an urgent need to propose a novel optimization algorithm with efficient segmentation capabilities for mural image segmentation. Fortunately, the Parrot Optimizer (PO) has been proven as an algorithm with robust optimization performance, demonstrating strong scalability and adaptability across various optimization problems [33]. Consequently, this study selects PO for mural image segmentation. However, as mural image complexity increases, the original PO faces inevitable shortcomings, including insufficient global exploration, inadequate local exploitation, and an imbalance between exploration and exploitation phases, which may still lead to suboptimal segmentation performance. To address these issues, this study introduces an enhanced PO algorithm termed ANBPO by integrating three learning strategies: First, an adaptive learning strategy is proposed to overcome PO’s limited global exploration capability. By considering individual information disparities and learning behaviors, this strategy effectively enhances global exploration, enabling the algorithm to thoroughly search the solution space and improve mural image segmentation quality. Second, a nonlinear factor is introduced to balance exploration and exploitation phases. Leveraging its adaptability and nonlinear curve characteristics, this factor mitigates the imbalance, enabling the algorithm to escape local optimal threshold combinations and enhance mural image segmentation quality. Finally, a third-order Bernstein-guided strategy is proposed to improve PO’s local exploitation capability. By incorporating the weighted properties of third-order Bernstein polynomials, this strategy comprehensively evaluates individuals with diverse characteristics, thereby strengthening local exploitation and improving mural image segmentation precision. The primary contributions of this study are as follows:
  • The adaptive learning strategy is proposed to enhance the algorithm’s global exploration capability by accounting for individual information disparities and learning behaviors.
  • The nonlinear factor is introduced to balance the algorithm’s exploration and exploitation phases, leveraging its adaptability and nonlinear curve characteristics.
  • The third-order Bernstein-guided strategy is proposed to strengthen the algorithm’s local exploitation capability by incorporating the weighted properties of third-order Bernstein polynomials, enabling comprehensive evaluation of individuals with diverse characteristics.
  • By integrating these three learning strategies into the PO, an enhanced PO algorithm termed ANBPO is developed.
  • Experimental segmentation of twelve mural images using ANBPO confirms its potential as a promising algorithm for mural image segmentation.
The subsequent work plan of this study is outlined as follows: Section 2 introduces the mathematical model and execution logic of the PO. Section 3 proposes the ANBPO algorithm by integrating three learning strategies into PO. Section 4 applies ANBPO to solve the mural image segmentation problem using twelve mural images to evaluate the algorithm’s performance. Section 5 presents the study conclusions and outlines future work directions.

2. The Mathematical Model of the Parrot Optimizer

This section introduces the concept of the PO. The PO achieves global exploration and local exploitation primarily by simulating the foraging behavior, staying behavior, communicating behavior, and fear-of-strangers behavior of parrots in nature, thereby forming an optimization algorithm with excellent performance. Its mathematical model comprises five components: the population initialization phase, foraging behavior, staying behavior, communicating behavior, and fear-of-strangers behavior. Below, the mathematical models of these five parts are described, followed by the execution logic of the PO. It should be noted that the mathematical models and formulas in this section are sourced from the literature on the original PO algorithm [33].

2.1. Population Initialization Phase

The initialization phase of the PO is designed to generate an initial population for use in the algorithm’s iterative process. Each individual within the population represents a candidate solution to the optimization problem being solved. Every individual is generated within the upper and lower bounds of the variables in the optimization problem, as expressed in Equation (1).
X_i^0 = lb + rand(0,1) \cdot (ub - lb), \quad i = 1, 2, \ldots, N \qquad (1)
where $X_i^0$ represents the information of the $i$-th individual in the initial population; $lb$ and $ub$ denote the lower and upper bounds of the optimization problem, respectively, and are represented as $1 \times dim$ vectors, where $dim$ signifies the dimensionality of the variables in the optimization problem; $rand(0,1)$ indicates a random number generated within the interval [0, 1]; and $N$ represents the size of the population. After generating an initial population of size $N$, solution refinement is conducted by simulating the parrot's foraging behavior, staying behavior, communicating behavior, and fear-of-strangers behavior.
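For illustration, a minimal MATLAB sketch of this initialization step is given below. The N-by-dim matrix layout, the bound values, and the variable names are assumptions made for the example rather than settings prescribed by the original paper.

% Minimal sketch of Equation (1): uniform random initialization within [lb, ub].
N = 40; dim = 8;                                  % e.g., 8 segmentation thresholds
lb = zeros(1, dim); ub = 255*ones(1, dim);        % grayscale threshold bounds
X = repmat(lb, N, 1) + rand(N, dim) .* repmat(ub - lb, N, 1);   % each row is one candidate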

2.2. Foraging Behavior

This section primarily focuses on mathematical modeling of the parrot’s foraging behavior. In this behavior, the parrot primarily moves based on the location of food sources, and its positional update is described by Equation (2).
X_i^{t+1} = \left(X_i^t - X_{best}\right) \cdot Levy(dim) + rand \cdot \left(1 - \frac{t}{Maxiter}\right)^{\frac{2t}{Maxiter}} \cdot X_{mean}^t \qquad (2)
where $X_i^{t+1}$ represents the information of the $i$-th individual at the $(t+1)$-th iteration, $X_i^t$ denotes the information of the $i$-th individual at the $t$-th iteration, and $X_{best}$ signifies the best individual within the population. In this study, $X_{best}$ is defined as the threshold combination that, when used to segment the mural image, yields the maximum inter-class variance; consequently, $X_{best}$ is considered the optimal individual within the population, representing the optimal combination of segmentation thresholds. $Levy(dim)$ indicates a random vector generated from a Lévy distribution with parameter $dim$, $rand$ represents a random number generated within the interval [0, 1], $Maxiter$ denotes the maximum number of iterations, and $X_{mean}^t$ represents the average information of the individuals in the population at the $t$-th iteration, as expressed by Equation (3).
X_{mean}^t = \frac{1}{N}\sum_{i=1}^{N} X_i^t \qquad (3)

2.3. Staying Behavior

This section primarily focuses on mathematical modeling of the parrot’s staying behavior, where the positional update is described by Equation (4).
X_i^{t+1} = X_i^t + X_{best} \cdot Levy(dim) + rand \cdot ones(1, dim) \qquad (4)
where $ones(1, dim)$ represents a vector of all ones with a size of $1 \times dim$.

2.4. Communicating Behavior

This section primarily focuses on mathematical modeling of the parrot’s communicating behavior. The parrot’s communicating behavior mainly involves two approaches: interacting with the group and not interacting with the group. Here, it is assumed that both communication methods occur with equal probability. The parrot’s communicating behavior is described by Equation (5).
X_i^{t+1} = \begin{cases} 0.2 \cdot rand \cdot \left(1 - \dfrac{t}{Maxiter}\right)\left(X_i^t - X_{mean}^t\right), & P \le 0.5 \\[4pt] 0.2 \cdot rand \cdot \exp\left(-\dfrac{t}{rand \cdot Maxiter}\right), & P > 0.5 \end{cases} \qquad (5)
where $\exp(\cdot)$ denotes the exponential operation and $P$ represents a random number generated within the interval [0, 1].

2.5. Fear of Strangers’ Behavior

This section primarily focuses on mathematical modeling of the parrot’s fear of strangers’ behavior. In this behavior, the parrot exhibits fear toward strangers and attempts to move closer to the optimal individual within the population to seek a safer environment. Its positional update is represented by Equation (6).
X_i^{t+1} = X_i^t + rand \cdot \cos\left(0.5\pi \cdot \frac{t}{Maxiter}\right)\left(X_{best} - X_i^t\right) - \cos(rand \cdot \pi)\left(\frac{t}{Maxiter}\right)^{\frac{2}{Maxiter}}\left(X_i^t - X_{best}\right) \qquad (6)
where $\cos(\cdot)$ denotes the cosine function operation.
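A compact MATLAB sketch of the four position updates in Equations (2)-(6) is given below. The Lévy step uses Mantegna's method, which is an assumption on our part; the PO paper only states that $Levy(dim)$ follows a Lévy distribution. Broadcasting the scalar result in the second branch of Equation (5) to all dimensions is likewise an implementation choice.

% Sketch of the four PO behaviors, Equations (2)-(6), for one individual.
dim = 8; t = 1; Maxiter = 100;
Xi = 255*rand(1,dim); Xbest = 255*rand(1,dim); Xmean = 255*rand(1,dim);

% Levy(dim) via Mantegna's method (assumed; not specified in the paper).
beta  = 1.5;
sigma = (gamma(1+beta)*sin(pi*beta/2) / (gamma((1+beta)/2)*beta*2^((beta-1)/2)))^(1/beta);
levy  = 0.01 * (sigma*randn(1,dim)) ./ abs(randn(1,dim)).^(1/beta);

switch randi(4)
    case 1   % foraging behavior, Equation (2)
        Xi = (Xi - Xbest).*levy + rand*(1 - t/Maxiter)^(2*t/Maxiter)*Xmean;
    case 2   % staying behavior, Equation (4)
        Xi = Xi + Xbest.*levy + rand*ones(1,dim);
    case 3   % communicating behavior, Equation (5); rand here plays the role of P
        if rand <= 0.5
            Xi = 0.2*rand*(1 - t/Maxiter)*(Xi - Xmean);
        else
            Xi(:) = 0.2*rand*exp(-t/(rand*Maxiter));   % scalar assigned to every dimension
        end
    case 4   % fear of strangers' behavior, Equation (6)
        Xi = Xi + rand*cos(0.5*pi*t/Maxiter)*(Xbest - Xi) ...
               - cos(rand*pi)*(t/Maxiter)^(2/Maxiter)*(Xi - Xbest);
end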

2.6. Implementation of Parrot Optimizer

When solving practical optimization problems, the PO first generates an initial population with a certain level of optimization capability. Subsequently, it conducts global exploration and local exploitation of the initial solutions by simulating the parrot’s foraging behavior, staying behavior, communicating behavior, and fear of strangers’ behavior. This approach enhances the quality of the solutions, thereby improving the resolution of the optimization problem. Algorithm 1 presents the pseudocode for the PO.
Algorithm 1: Pseudo-code of PO
1:  Initialization: N, ub, lb, dim, Maxiter
2:  Generate the initial population using Equation (1)
3:  for i = 1 : Maxiter do
4:      Calculate the fitness function value of each individual
5:      Save the best individual
6:      for j = 1 : N do
7:          St = randi([1, 4])
8:          if St == 1 then                  % Foraging behavior
9:              Update the individual using Equation (2)
10:         else if St == 2 then             % Staying behavior
11:             Update the individual using Equation (4)
12:         else if St == 3 then             % Communicating behavior
13:             Update the individual using Equation (5)
14:         else if St == 4 then             % Fear of strangers' behavior
15:             Update the individual using Equation (6)
16:         end if
17:     end for
18: end for
19: Return the best individual

3. The Mathematical Model of the ANBPO

The original PO exhibits deficiencies when solving mural image segmentation problems, including insufficient global exploration capability, inadequate local exploitation capability, and an imbalance between the exploration and exploitation phases. These shortcomings often cause the algorithm to converge prematurely to locally optimal segmentation thresholds, thereby degrading the quality of the segmented images. To mitigate these limitations, this section proposes an enhanced variant of PO, termed ANBPO, by integrating an adaptive learning strategy, a nonlinear factor, and a third-order Bernstein-guided strategy. First, to address the algorithm’s inadequate global exploration capability, an adaptive learning strategy is introduced. By accounting for individual differences in information and learning adaptability, this strategy effectively broadens the search space, enabling the algorithm to explore solutions more comprehensively and improve the quality of mural image segmentation. Second, to resolve the imbalance between global exploration and local exploitation, a nonlinear factor is proposed. Leveraging its adaptive nature and nonlinear curve characteristics, this factor dynamically balances the two phases, enhancing the algorithm’s ability to escape local optima and improving the segmentation quality of mural images. Finally, to strengthen the algorithm’s local exploitation capability, a third-order Bernstein-guided strategy is introduced. By incorporating the weighted properties of the third-order Bernstein polynomial, this strategy comprehensively evaluates individuals with diverse characteristics, thereby refining local exploitation and increasing the precision of mural image segmentation. The subsequent sections will detail the implementations of the adaptive learning strategy, the nonlinear factor, and the third-order Bernstein-guided strategy.

3.1. Adaptive Learning Strategy

The original PO algorithm suffers from insufficient global exploration capability when solving mural image segmentation problems, which hinders its ability to effectively search the solution space. To address this issue, this section proposes an adaptive learning strategy to enhance PO's global exploration capability; a schematic diagram of the adaptive learning strategy is shown in Figure 1. The core idea of the adaptive learning strategy is that individuals learn from different types of disparity information while adaptively accounting for their own learning adaptability and the learnability of the various disparities. This approach significantly improves the algorithm's global exploration capability, leading to higher-quality segmentation of mural images. Specifically, the adaptive learning strategy considers four groups of disparities: (1) the disparity between the best individual and better individuals ($Dis_1$), (2) the disparity between the best individual and worse individuals ($Dis_2$), (3) the disparity between better individuals and worse individuals ($Dis_3$), and (4) the disparity between two randomly selected individuals ($Dis_4$). These four groups of disparities are represented using Equation (7).
\begin{aligned} Dis_1 &= X_{best} - X_{better} \\ Dis_2 &= X_{best} - X_{worse} \\ Dis_3 &= X_{better} - X_{worse} \\ Dis_4 &= X_{rand1} - X_{rand2} \end{aligned} \qquad (7)
where $X_{best}$ denotes the best individual in the population, $X_{better}$ represents an individual randomly selected from the 10 individuals with the smallest fitness function values in the population, $X_{worse}$ denotes an individual randomly selected from the 10 individuals with the largest fitness function values in the population, and $X_{rand1}$ and $X_{rand2}$ represent two distinct randomly selected individuals from the population. Considering that each group of information disparities has a different learnability, the learnability of each group is represented using Equation (8).
LF_k = \frac{\left| Dis_k \right|}{\sum_{k=1}^{4}\left| Dis_k \right|}, \quad k = 1, 2, 3, 4 \qquad (8)
where $LF_k$ denotes the learnability of the $k$-th group of information disparities, and $|\cdot|$ represents the modulus operation. In the adaptive learning strategy, individuals possess varying learning capabilities. Considering that higher-quality individuals should reduce their learning intensity while lower-quality individuals should increase theirs, the learning capability of each individual is defined using Equation (9).
SF_i = \frac{fit_i}{fit_{max}}, \quad 1 \le i \le N \qquad (9)
where $SF_i$ denotes the learning capability of the $i$-th individual and $fit_i$ represents its fitness function value. The fitness function value $fit_i$ is obtained as follows: the individual $X_i$ is employed as the threshold combination for segmenting the mural image, the mural image is segmented using this threshold combination, and the resulting inter-class variance is taken as the fitness function value. $fit_{max}$ indicates the maximum fitness function value among the individuals in the population. Subsequently, the learning process of the $i$-th individual from the $k$-th group of information disparities is represented using Equation (10).
KA_k = \arctan\left(2\pi\left(1 - \frac{t}{Maxiter}\right)\right) \cdot SF_i \cdot LF_k \cdot Dis_k, \quad k = 1, 2, 3, 4 \qquad (10)
where $KA_k$ denotes the amount of information acquired by the $i$-th individual through learning from the $k$-th group of disparities, and $\arctan(\cdot)$ represents the arctangent function. Based on Equation (10), the new state generated by the $i$-th individual after learning from the four groups of information disparities is expressed as Equation (11).
X_i^{t+1} = X_i^t + KA_1 + KA_2 + KA_3 + KA_4 \qquad (11)
Subsequently, the individual’s state is preserved using Equation (12).
X_i^{t+1} = \begin{cases} X_i^{t+1}, & \text{if } fit_i^{t+1} < fit_i \\ X_i^t, & \text{otherwise} \end{cases} \qquad (12)
where $fit_i^{t+1}$ represents the fitness function value of the individual $X_i^{t+1}$. By adaptively learning from the different information disparities, the algorithm's global exploration capability is effectively enhanced, ensuring thorough exploration of the solution space and thereby improving the quality of mural image segmentation.
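A MATLAB sketch of one adaptive-learning update, following Equations (7)-(12), is shown below. The bookkeeping (ascending sort of the fitness vector, selection of the 10 best/worst individuals, and greedy acceptance of the trial point) is an implementation assumption; the ranking direction mirrors the paper's definition of $X_{better}$ and $X_{worse}$ and would be reversed if larger fitness values are treated as better.

% Sketch of the adaptive learning strategy, Equations (7)-(12), for individual i.
N = 40; dim = 8; t = 1; Maxiter = 100; i = 1;
X = 255*rand(N, dim); fit = rand(N, 1);               % placeholder population and fitness
[~, order] = sort(fit);                               % smaller fitness ranked first
Xbest   = X(order(1), :);
Xbetter = X(order(randi(10)), :);                     % one of the 10 best individuals
Xworse  = X(order(end - randi(10) + 1), :);           % one of the 10 worst individuals
r       = randperm(N, 2);
Dis = [Xbest - Xbetter;  Xbest - Xworse;              % Equation (7)
       Xbetter - Xworse; X(r(1),:) - X(r(2),:)];
LF  = vecnorm(Dis, 2, 2) / sum(vecnorm(Dis, 2, 2));   % Equation (8)
SF  = fit(i) / max(fit);                              % Equation (9)
KA  = atan(2*pi*(1 - t/Maxiter)) .* SF .* LF .* Dis;  % Equation (10)
Xtrial = X(i,:) + sum(KA, 1);                         % Equation (11)
% Equation (12): keep Xtrial only if its fitness improves on fit(i).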

3.2. Nonlinear Factor

The original PO exhibits an imbalance between the global exploration and local exploitation phases when solving the mural image segmentation problem, leading to a tendency to fall into local suboptimal segmentation threshold traps, thereby compromising the quality of mural image segmentation. To mitigate this issue, this section proposes a nonlinear factor to balance the global exploration and local exploitation phases. By integrating the Sigmoid function, arccosine function, and arctangent function, the balancing capability of the nonlinear factor is enhanced, enabling the algorithm to exhibit better global exploration performance in the early iterations. As iterations progress, local exploitation gradually dominates while retaining a certain level of global exploration capability. This phenomenon effectively reduces the likelihood of the algorithm getting trapped in local optima during mural image segmentation. Specifically, the nonlinear factor comprises three terms: a Sigmoid function term, an arccosine function term, and an arctangent function term. Among these, the Sigmoid function term primarily controls the balance during the mid-iteration phase and is expressed using Equation (13).
NF_{sigmoid} = 1 - \frac{1}{1 + \exp\left(-k\left(\frac{t}{Maxiter} - 0.5\right)\right)} \qquad (13)
where $NF_{sigmoid}$ denotes the Sigmoid function term and $\exp(\cdot)$ represents the exponential operation. In this study, the parameter $k$ is set to 10. The curve depicting the variation of $NF_{sigmoid}$ with the number of iterations is shown in Figure 2. As illustrated in the figure, the Sigmoid function curve approaches equilibrium during the mid-iteration phase, effectively balancing the global exploration and local exploitation phases of the algorithm and thereby guaranteeing a certain level of quality in mural image segmentation. However, it provides insufficient exploitation capability in the early iterations and insufficient exploration capability in the late iterations. To address the deficiency in global exploration capability during the late iterations caused by the Sigmoid function term, an arccosine function term is introduced to sustain the algorithm's global exploration capability in the late iterations, as expressed in Equation (14).
NF_{arccos} = \arccos\left(\frac{t}{Maxiter}\right) \bigg/ \left(\frac{\pi}{2}\right) \qquad (14)
where $NF_{arccos}$ denotes the arccosine function term and $\arccos(\cdot)$ represents the arccosine operation. The curve depicting the variation of $NF_{arccos}$ with the number of iterations is shown in Figure 3. As illustrated in the figure, the arccosine function term provides a greater proportion of global exploration, thereby enhancing the algorithm's global exploration capability during the late iterations. Additionally, to address the deficiency in local exploitation capability during the early iterations caused by the Sigmoid function term, an arctangent function term is introduced to ensure the algorithm's local exploitation capability in the early iterations, as expressed in Equation (15).
NF_{arctan} = \left(\arctan\left(5 - 10 \cdot \frac{t}{Maxiter}\right) + 1\right) \Big/ 2 \qquad (15)
where $NF_{arctan}$ denotes the arctangent function term and $\arctan(\cdot)$ represents the arctangent operation. The curve depicting the variation of $NF_{arctan}$ with the number of iterations is shown in Figure 4. As illustrated in the figure, the arctangent function term provides a greater proportion of local exploitation during the early iterations, thereby enhancing the algorithm's local exploitation capability in this phase. In summary, by integrating the properties of the Sigmoid, arccosine, and arctangent functions, the nonlinear factor proposed in this section achieves a more reasonable balance between the algorithm's global search and local exploitation phases, as expressed in Equation (16).
NF = \frac{NF_{sigmoid} + NF_{arccos} + NF_{arctan}}{3} \qquad (16)
where $NF$ denotes the nonlinear factor proposed in this section, and its variation curve with respect to the number of iterations is shown in Figure 5. As illustrated in the figure, during the early iterations, the algorithm's global exploration phase dominates while a certain level of local exploitation capability is retained. As iterations progress, during the mid-iteration phase, the algorithm's global exploration and local exploitation phases approach equilibrium. In the late iterations, the algorithm's local exploitation capability becomes dominant, yet a relatively high level of global exploration capability is maintained, thereby strengthening the ability to escape locally optimal segmentation thresholds. In summary, the nonlinear factor proposed in this section achieves a more reasonable balance between the exploration and exploitation phases of the algorithm, effectively enhancing the quality of mural image segmentation.
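The nonlinear factor is straightforward to compute; a MATLAB sketch is given below. The signs inside Equations (13) and (15) are taken from the reconstruction above, so this should be read as a sketch under those assumptions.

% Sketch of the nonlinear factor, Equations (13)-(16), at iteration t.
t = 1; Maxiter = 100; k = 10;
x = t / Maxiter;
NFsigmoid = 1 - 1/(1 + exp(-k*(x - 0.5)));       % Equation (13)
NFarccos  = acos(x) / (pi/2);                    % Equation (14)
NFarctan  = (atan(5 - 10*x) + 1) / 2;            % Equation (15)
NF = (NFsigmoid + NFarccos + NFarctan) / 3;      % Equation (16)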

3.3. Third-Order Bernstein-Guided Strategy

The original PO exhibits a deficiency in local exploitation capability when solving the mural image segmentation problem, leading to a loss in segmentation accuracy and an inability to effectively preserve meaningful image information. Zhang et al. [34] pointed out that individuals can effectively enhance the algorithm’s local exploitation by learning from those with superior performance. Additionally, Wu et al. [35] indicated that incorporating Bernstein polynomials to guide individuals can further improve the algorithm’s local exploitation capability. Inspired by these insights, to further enhance the local exploitation of the PO algorithm and achieve higher optimization precision, this section proposes a third-order Bernstein-guided strategy. By leveraging the property that the weights of the third-order Bernstein polynomial sum to 1, individuals with distinct characteristics are weighted to form weighted individuals. These weighted individuals are then used to guide the population, effectively strengthening the algorithm’s local exploitation capability and improving the quality of mural image segmentation. First, the n-order Bernstein polynomial is expressed using Equation (17).
B_{w,n}(p) = C_n^w \, p^w (1-p)^{\,n-w} \qquad (17)
where $0 \le p \le 1$ denotes the probability of an event occurring, $w$ represents the number of successful occurrences of the event, $n$ indicates the total number of independent and non-repetitive trials, and $C_n^w$ is expressed using Equation (18).
C_n^w = \frac{n!}{w!\,(n-w)!} \qquad (18)
where '!' denotes the factorial operation. For a third-order Bernstein polynomial, $n = 3$ and $w = 0, 1, 2, 3$, giving four basis polynomials. By combining Equation (17) and Equation (18), the third-order Bernstein polynomials can be derived as shown in Equation (19).
\begin{aligned} B_{0,3}(p) &= (1-p)^3 \\ B_{1,3}(p) &= 3p(1-p)^2 \\ B_{2,3}(p) &= 3p^2(1-p) \\ B_{3,3}(p) &= p^3 \end{aligned} \qquad (19)
To more intuitively illustrate the properties of the third-order Bernstein polynomial, Figure 6 displays its functional graph. As illustrated in the figure, when $p$ varies within the interval [0, 1], the sum of the four polynomials remains consistently equal to 1. Leveraging this property, the optimal individual, a suboptimal individual, and two randomly selected individuals from the population are weighted to generate weighted individuals. This process is expressed using Equation (20).
X_{weight} = B_{0,3}(p)\,X_{best} + B_{1,3}(p)\,X_{be} + B_{2,3}(p)\,X_{rand1} + B_{3,3}(p)\,X_{rand2} \qquad (20)
where $X_{weight}$ denotes the weighted individual, $X_{be}$ is defined as an individual randomly selected from the subset comprising the top 20% highest-quality individuals in the population, and $X_{rand1}$ and $X_{rand2}$ represent two distinct randomly selected individuals from the population. Subsequently, the generated weighted individual is used to guide other individuals, as expressed using Equation (21).
X_i^{t+1} = X_i^t + \arctan\left(2\pi \cdot \frac{t}{Maxiter}\right)\left(X_{weight} - X_i^t\right) \qquad (21)
To facilitate an intuitive understanding of the third-order Bernstein-guided strategy, Figure 7 presents a schematic diagram of its principles. By applying the third-order Bernstein-guided strategy to the population, the weighted properties of the Bernstein polynomial are effectively utilized, thereby enhancing the algorithm’s local exploitation capability and subsequently improving the quality of mural image segmentation.
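A MATLAB sketch of one Bernstein-guided update, following Equations (19)-(21), is given below. The assumption that the population matrix is sorted so that better individuals occupy the first rows (used to pick the top-20% individual) is an implementation choice made for the example.

% Sketch of the third-order Bernstein-guided strategy, Equations (19)-(21).
N = 40; dim = 8; t = 50; Maxiter = 100; i = 2;
X = 255*rand(N, dim); Xbest = X(1,:);                 % assume rows are sorted best-first
p  = rand;
B  = [(1-p)^3, 3*p*(1-p)^2, 3*p^2*(1-p), p^3];        % Equation (19); entries sum to 1
Xbe = X(randi(ceil(0.2*N)), :);                       % random individual from the top 20%
r   = randperm(N, 2);
Xweight = B(1)*Xbest + B(2)*Xbe + B(3)*X(r(1),:) + B(4)*X(r(2),:);   % Equation (20)
Xnew = X(i,:) + atan(2*pi*t/Maxiter) * (Xweight - X(i,:));           % Equation (21)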

3.4. Implementation of ANBPO

The original PO exhibits deficiencies in global exploration capability, local exploitation capability, and an imbalance between the global exploration and local exploitation phases when solving mural image segmentation problems, resulting in suboptimal segmentation performance. To address these issues, this paper proposes an enhanced PO variant, termed ANBPO, by integrating an adaptive learning strategy, a nonlinear factor, and a third-order Bernstein-guided strategy. ANBPO effectively improves the algorithm’s capability for mural image segmentation, ensuring that the image is segmented effectively while preserving its structural and feature integrity. To facilitate an intuitive understanding of ANBPO’s execution logic, Algorithm 2 provides the pseudocode for its implementation, and Figure 8 illustrates the algorithm’s execution flowchart. In Figure 8, the improved strategies introduced in this study are highlighted with red dashed boxes. When solving the mural image segmentation problem, the process begins with population initialization. Subsequently, global exploration and local exploitation operations, marked by the indicator “St,” are performed to locate the optimal segmentation thresholds. Afterward, once the algorithm reaches the predefined maximum number of iterations, the optimal mural image segmentation thresholds are output, and the loop terminates.
Algorithm 2: Pseudo-code of ANBPO
1:  Initialization: N, ub, lb, dim, Maxiter
2:  Generate the initial population using Equation (1)
3:  for i = 1 : Maxiter do
4:      Calculate the fitness function value of each individual
5:      Save the best individual
6:      for j = 1 : N do
7:          Calculate the value of the nonlinear factor NF
8:          if rand > NF then
9:              if rand < 0.5 then
10:                 St = randi([1, 2])
11:                 if St == 1 then                  % Foraging behavior
12:                     Update the individual using Equation (2)
13:                 else if St == 2 then             % Staying behavior
14:                     Update the individual using Equation (4)
15:                 end if
16:             else
17:                 Update the individual using Equation (21)   % Third-order Bernstein-guided strategy
18:             end if
19:         else
20:             if rand < 0.5 then
21:                 St = randi([3, 4])
22:                 if St == 3 then                  % Communicating behavior
23:                     Update the individual using Equation (5)
24:                 else if St == 4 then             % Fear of strangers' behavior
25:                     Update the individual using Equation (6)
26:                 end if
27:             else
28:                 Update the individual using Equation (11)   % Adaptive learning strategy
29:             end if
30:         end if
31:     end for
32: end for
33: Return the best individual

4. Experimental Results on Mural Image Segmentation

This section evaluates the performance of the proposed ANBPO for mural image segmentation. Specifically, as illustrated in Figure 9, experiments were conducted on twelve mural images sourced from an open-source standard mural image segmentation dataset. This dataset is accessible via the link: https://ww2.mathworks.cn/matlabcentral/fileexchange/181489-mural-image-segmentation-dataset (accessed on 19 July 2025). It comprises notable examples of mural images and has undergone image standardization processing, making it well suited to performance testing in mural image segmentation tasks. To comprehensively assess the algorithm's performance, ANBPO was compared with five state-of-the-art algorithms, whose parameter settings are detailed in Table 1. In each experiment, every algorithm was independently run 30 times, with a population size of 40 and a maximum number of iterations set to 100. A comprehensive evaluation of ANBPO's mural image segmentation performance was conducted by analyzing population diversity, the exploration/exploitation ratio, fitness function values, convergence behavior, peak signal-to-noise ratio (PSNR) [36], structural similarity (SSIM) [37], and feature similarity (FSIM) [38]. To ensure experimental fairness, all experimental code in this study was developed and executed using MATLAB R2022b on the Windows 11 operating system. The hardware configuration included 32 GB of RAM and an Intel(R) Core(TM) Ultra 5 processor.
The following section will provide a detailed introduction to the calculation methods for the PSNR, SSIM, and FSIM metrics. First, the PSNR metric is computed using Equation (22).
PSNR = 10 \log_{10}\left(\frac{255^2}{MSE}\right) \qquad (22)
where $MSE$ represents the mean squared error between the original image $I$ and the segmented image $I'$, calculated using Equation (23).
MSE = \frac{1}{MN}\sum_{j=1}^{M}\sum_{k=1}^{N}\left[I(j,k) - I'(j,k)\right]^2 \qquad (23)
where $M \times N$ denotes the size of the image, $I(j,k)$ is the grayscale value of the original image at position $(j,k)$, and $I'(j,k)$ is the grayscale value of the segmented image at the same position. SSIM is a method for measuring the similarity between two images and is computed using Equation (24).
SSIM(I, I') = \frac{\left(2\mu_I \mu_{I'} + C_1\right)\left(2\sigma_{II'} + C_2\right)}{\left(\mu_I^2 + \mu_{I'}^2 + C_1\right)\left(\sigma_I^2 + \sigma_{I'}^2 + C_2\right)} \qquad (24)
where $\mu_I$ and $\mu_{I'}$ represent the mean grayscale values of the original image and the segmented image, respectively; $\sigma_{II'}$ denotes the covariance between the grayscale values of the two images; and $\sigma_I^2$ and $\sigma_{I'}^2$ are the variances of the grayscale values of the original and segmented images, respectively. The constants are defined as $C_1 = (k_1 L)^2$ and $C_2 = (k_2 L)^2$, with typical values of $L = 256$, $k_1 = 0.01$, and $k_2 = 0.03$. Finally, FSIM is calculated using Equation (25).
FSIM(I, I') = \frac{\sum_{x \in \Omega} S_L(x)\, PC_m(x)}{\sum_{x \in \Omega} PC_m(x)} \qquad (25)
where $x$ represents a pixel, $\Omega$ denotes the entire spatial domain of the image, and $S_L(x)$ is the local similarity at $x$. $PC_m = \max(PC_1, PC_2)$, where $PC_1$ and $PC_2$ are the phase congruency values of the original image and the segmented image, respectively.
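For reference, a minimal MATLAB sketch of the PSNR and SSIM computations is given below. It uses the global (single-window) form of Equation (24); many SSIM implementations instead average a local sliding-window statistic, and FSIM is omitted because it requires a phase-congruency computation not described here. The example inputs are placeholders.

% Sketch of Equations (22)-(24). I and Iseg are same-size grayscale images in [0, 255].
I    = 255*rand(256);          % placeholder for the original mural image
Iseg = 64*floor(I/64);         % placeholder for a (crudely quantized) segmented image
mse      = mean((I(:) - Iseg(:)).^2);                 % Equation (23)
psnr_val = 10 * log10(255^2 / mse);                   % Equation (22)
L = 256; C1 = (0.01*L)^2; C2 = (0.03*L)^2;
muI = mean(I(:));  muS = mean(Iseg(:));
sI2 = var(I(:));   sS2 = var(Iseg(:));
covIS = mean((I(:) - muI) .* (Iseg(:) - muS));
ssim_val = ((2*muI*muS + C1)*(2*covIS + C2)) / ...
           ((muI^2 + muS^2 + C1)*(sI2 + sS2 + C2));   % Equation (24), global form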

4.1. The Concept of Otsu Segmentation Technique

This section focuses on modeling the fitness function for the mural image segmentation problem. In this study, the Otsu image segmentation technique is employed to segment mural images. The core idea of Otsu's method is to achieve image segmentation by maximizing the inter-class variance between different regions of the image. Therefore, when using ANBPO to search for optimal segmentation threshold combinations, the inter-class variance between different image regions is adopted as the objective function. Below, the concept of the Otsu image segmentation technique is introduced in detail. First, assume that the grayscale pixel matrix of the image to be segmented is denoted as $I$ and that the image contains $L$ grayscale levels. Let $n_i$ represent the number of pixels with grayscale level $i$ in the image. Based on these assumptions, the total number of pixels $N$ in the image $I$ is calculated using Equation (26).
N = \sum_{i=0}^{L-1} n_i \qquad (26)
Subsequently, the pixel proportion $P_i$ of grayscale level $i$ in the entire image $I$ is calculated using Equation (27).
P_i = \frac{n_i}{N}, \quad i = 0, 1, \ldots, L-1 \qquad (27)
where $P_i \ge 0$ and $P_0 + P_1 + \cdots + P_{L-1} = 1$. Let the number of thresholds for image segmentation be $k$, and consider first a single segmentation threshold $t$. The image can then be divided into two regions based on $t$: pixels with grayscale values in the interval $[0, t]$ form the object region, and pixels with grayscale values in the interval $[t+1, L-1]$ form the background region. Let the ratio of the number of pixels in the object region to the total number of pixels be $\omega_0$, the average pixel value of the object region be $\mu_0$, the corresponding ratio for the background region be $\omega_1$, and the average pixel value of the background region be $\mu_1$. The average pixel value of the entire image is denoted as $\mu$, and the variance between the different segmented regions is denoted as $v$. Based on these assumptions, $\omega_0$, $\mu_0$, $\omega_1$, $\mu_1$, $\mu$, and $v$ are calculated using Equations (28)-(33), respectively.
\omega_0 = \sum_{i=0}^{t} P_i \qquad (28)
\mu_0 = \frac{1}{\omega_0}\sum_{i=0}^{t} i\,P_i \qquad (29)
\omega_1 = \sum_{i=t+1}^{L-1} P_i \qquad (30)
\mu_1 = \frac{1}{\omega_1}\sum_{i=t+1}^{L-1} i\,P_i \qquad (31)
\mu = \omega_0\mu_0 + \omega_1\mu_1 = \sum_{i=0}^{L-1} i\,P_i \qquad (32)
v(t) = \omega_0(\mu_0 - \mu)^2 + \omega_1(\mu_1 - \mu)^2 = \omega_0\,\omega_1(\mu_0 - \mu_1)^2 \qquad (33)
Subsequently, the optimal segmentation threshold $t_{best}$ for the image is calculated using Equation (34).
t_{best} = \underset{0 \le t \le L-1}{\arg\max}\; v(t) \qquad (34)
Subsequently, the inter-class variance between the different regions when the number of thresholds is $k$ is calculated using Equation (35).
v(t_1, t_2, \ldots, t_k) = \omega_0\omega_1(\mu_0 - \mu_1)^2 + \omega_0\omega_2(\mu_0 - \mu_2)^2 + \cdots + \omega_0\omega_k(\mu_0 - \mu_k)^2 + \omega_1\omega_2(\mu_1 - \mu_2)^2 + \omega_1\omega_3(\mu_1 - \mu_3)^2 + \cdots + \omega_{k-1}\omega_k(\mu_{k-1} - \mu_k)^2 \qquad (35)
where $\omega_i$ and $\mu_i$ are calculated using Equation (36) and Equation (37), respectively.
\omega_{i-1} = \sum_{j=t_{i-1}+1}^{t_i} P_j, \quad 1 \le i \le k+1 \qquad (36)
\mu_{i-1} = \frac{1}{\omega_{i-1}}\sum_{j=t_{i-1}+1}^{t_i} j\,P_j, \quad 1 \le i \le k+1 \qquad (37)
Assume the optimal segmentation threshold combination for the image is denoted as $T_{best} = (t_1^*, t_2^*, \ldots, t_k^*)$. The optimal threshold combination $T_{best}$ is then calculated using Equation (38).
T_{best} = \underset{0 \le t_1 \le t_2 \le \cdots \le t_k}{\arg\max}\; v(t_1, t_2, \ldots, t_k) \qquad (38)
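The pairwise form of Equation (35) is the quantity ANBPO maximizes as its fitness function. A MATLAB sketch of this objective is given below; the function name, the histogram-based interface, and the boundary conventions (grey levels 0 to L-1, thresholds strictly inside that range) are implementation assumptions for the example.

function v = interClassVariance(counts, thresholds)
% Sketch of the multi-threshold Otsu objective, Equations (26)-(37), pairwise form (35).
% counts: L-by-1 histogram of the grayscale image; thresholds: sorted integer vector.
L = numel(counts);
P = counts(:).' / sum(counts);                   % Equation (27): grey-level probabilities
edges = [0, thresholds(:).', L-1];               % class boundaries on 0-based grey levels
k = numel(thresholds);
omega = zeros(1, k+1); mu = zeros(1, k+1);
for c = 1:k+1
    idx = (edges(c) + (c > 1)) : edges(c+1);     % grey levels belonging to class c
    omega(c) = sum(P(idx + 1));                  % Equation (36)
    if omega(c) > 0
        mu(c) = sum(idx .* P(idx + 1)) / omega(c);   % Equation (37)
    end
end
v = 0;
for a = 1:k
    for b = a+1:k+1
        v = v + omega(a)*omega(b)*(mu(a) - mu(b))^2;  % Equation (35)
    end
end
end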

4.2. Discussion of Experimental Results

This section primarily focuses on analyzing the experimental results of applying ANBPO to mural image segmentation when the number of thresholds is set to 2, 4, 6, and 8. The specific image segmentation results are shown in Table 2. The analysis includes metrics such as population diversity, the exploration/exploitation ratio, fitness function values, convergence behavior, and image quality metrics: PSNR, SSIM, and FSIM. Before that, we need to sort out the logic of using the ANBPO algorithm to solve the problem of mural image segmentation. The simplified flowchart for solving the mural image segmentation problem using the ANBPO algorithm is shown in Figure 10. The specific steps are as follows:
Step 1: Based on the execution logic of the ANBPO algorithm, first initialize a population with $N$ individuals, where the $i$-th individual is denoted as $X_i = (x_{i,1}, x_{i,2}, \ldots, x_{i,8})$ and represents the $i$-th threshold combination for mural image segmentation. Taking the case of eight segmentation thresholds as an example, the dimensionality of the individual $X_i$ is 8, indicating that the mural image is segmented using eight thresholds; $x_{i,1}$ denotes the first threshold in the $i$-th threshold combination.
Step 2: Segment the mural image according to the $i$-th threshold combination $X_i$. Simultaneously, calculate the inter-class variance between the segmented regions using Equation (35); this inter-class variance serves as the fitness function value in the ANBPO algorithm.
Step 3: Based on the fitness function values calculated for each individual in Step 2, determine the optimal threshold combination $X_{best}$ for mural image segmentation in the current iteration.
Step 4: Check whether the predefined maximum number of iterations has been reached. If so, terminate the algorithm, return $X_{best}$, and perform the final segmentation of the mural image to obtain the actual segmentation result. Otherwise, proceed to Step 5.
Step 5: Update the population individuals according to the individual update process of the ANBPO algorithm, and then jump back to Step 2 for execution.
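A condensed MATLAB sketch of Steps 1-5 is given below. The helper anbpoUpdate is a hypothetical placeholder for the ANBPO position updates of Algorithm 2, interClassVariance is the Otsu objective sketched in Section 4.1, and the input file name is likewise a placeholder.

% Driver loop corresponding to Steps 1-5 (a sketch; helper functions are placeholders).
img = imread('mural.png');                       % hypothetical input image
if size(img, 3) == 3, img = rgb2gray(img); end   % work on the grayscale image
counts = imhist(img);                            % 256-bin grayscale histogram
N = 40; dim = 8; Maxiter = 100;                  % population size, thresholds, budget
X = randi([1 254], N, dim);                      % Step 1: initial threshold combinations
bestFit = -inf; Tbest = sort(X(1,:)); fit = zeros(N,1);
for t = 1:Maxiter
    for i = 1:N
        thr = sort(round(X(i,:)));                        % thresholds as ordered integers
        fit(i) = interClassVariance(counts, thr);         % Step 2: Equation (35)
        if fit(i) > bestFit, bestFit = fit(i); Tbest = thr; end   % Step 3
    end
    X = anbpoUpdate(X, fit, Tbest, t, Maxiter);  % Step 5: ANBPO position updates (placeholder)
end                                              % Step 4: stop after Maxiter iterations
segmented = imquantize(img, Tbest);              % final segmentation using Tbest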

4.2.1. Parameter Sensitivity Analysis

This section primarily conducts a sensitivity analysis of the operational parameters of ANBPO when solving mural image segmentation problems to determine the operational parameters for this experiment. Specifically, the operational parameters involved in the algorithm include the settings of population size and maximum number of iterations. To make reasonable selections for these two parameters, this section employs the control variable method for experimental investigations.
Firstly, to determine the population size parameter, we defined five scenarios with population sizes of N = 10 , N = 20 , N = 40 , N = 60 , and N = 100 . With the maximum number of iterations fixed at 100, the average rankings of the fitness function values for 12 mural image segmentation problems are shown in Figure 11. As depicted in Figure 11, when the population size is set to 10 and 20, the algorithm’s average fitness values rank 3.75 and 3.42, respectively, indicating weaker performance compared to when the population size is 40. This is primarily because a smaller population search space makes the algorithm more prone to falling into local optima traps when solving mural image segmentation problems, unable to escape them, thereby affecting its image segmentation performance. Additionally, when the population size is set to 60 and 100, the algorithm’s average fitness values rank 2.50 and 4.08, respectively. In comparison, the algorithm performs best in mural image segmentation when the population size is set to 40. This is mainly because a larger population size enhances the algorithm’s search space, but its ability to exploit local regions is compromised, leading to a loss in precision during mural image segmentation. Therefore, setting the population size to 40 as an operational parameter for the ANBPO algorithm is deemed reasonable.
Subsequently, to determine an appropriate setting for the maximum number of iterations, we set the maximum number of iterations to 200 to observe the fitness function curve during mural image segmentation, with the number of segmentation thresholds set to 6. The fitness function variation curves for some mural image segmentation problems are shown in Figure 12. As seen in Figure 12, in most cases, the algorithm achieves a certain level of convergence after the 70th iteration. Subsequently, although there are still minor performance optimizations in subsequent iterations, the curve consistently follows a stable convergence trend. Therefore, to reduce computational costs during mural image segmentation, we set the maximum number of iterations to 100. Under this condition, the problem-solving process exhibits a stable trend, making it meaningful to analyze this phenomenon. In summary, we set the population size and maximum number of iterations to 40 and 100, respectively, as reasonable operational conditions for all subsequent experiments.

4.2.2. Population Diversity Analysis

This section analyzes the population diversity of ANBPO when applied to mural image segmentation. Experiments were conducted on 6 mural images, and the results are illustrated in Figure 13, where the number of iterations is set to 500, the number of segmentation thresholds is set to 6, the Y-axis represents real-time population diversity during algorithm execution, and the X-axis represents the iteration count. Generally, higher population diversity is beneficial for helping the algorithm escape local optima in threshold combinations and enhance global search capability, thereby improving the algorithm’s performance in mural image segmentation. As shown in the figure, under most conditions, the ANBPO algorithm maintains higher population diversity than the PO algorithm, facilitating broader exploration of the solution space and improving the quality of mural image segmentation. This is primarily attributed to the adaptive learning strategy and nonlinear factor proposed in this study, which effectively enhance the algorithm’s global exploration capability, leading to substantial improvements in population diversity.
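The paper does not give an explicit formula for its population-diversity measure, so the short MATLAB sketch below uses one common choice (mean Euclidean distance of the individuals from the population centroid) purely for illustration.

% One possible population-diversity measure at iteration t (an assumption).
X = 255*rand(40, 8);                              % placeholder N-by-dim population
centroid  = mean(X, 1);                           % population centroid
diversity = mean(vecnorm(X - centroid, 2, 2));    % mean distance to the centroid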
Figure 12. Convergence graph of the ANBPO algorithm with 200 iterations.
Figure 13. Population diversity.

4.2.3. Exploration/Exploitation Ratio Analysis

This section primarily analyzes the balance between the exploration and exploitation phases of ANBPO. Typically, a high-performance optimization algorithm first conducts global exploration of the solution space to locate potential optimal solution regions, followed by enhanced local exploitation to refine these regions and improve optimization precision. Thus, achieving a balance between exploration and exploitation is critical. The exploration/exploitation experimental results on 6 mural images are illustrated in Figure 14, where the number of segmentation thresholds is set to 6, the maximum number of iterations is set to 500, the blue line represents ANBPO’s global exploration rate, and the red line represents its local exploitation rate. As shown in the figure, during the early iterations, the global exploration phase dominates, enabling ANBPO to identify more potential optimal threshold combination regions. This is primarily attributed to the adaptive learning strategy proposed in this study, which effectively enhances the algorithm’s global exploration capability. Additionally, the algorithm retains sufficient local exploitation capability to improve optimization precision. As iterations progress, during the mid-stage iterations, the global exploration and local exploitation phases achieve balance, largely due to the nonlinear factor introduced in this study, which effectively balances the two phases and ensures a favorable trade-off across multiple metrics, thereby improving the algorithm’s ability to escape local optima in threshold combinations. During the late iterations, the local exploitation phase dominates, enabling the algorithm to enhance image segmentation precision. This is primarily due to the third-order Bernstein-guided strategy, which significantly strengthens the algorithm’s local exploitation capability. Furthermore, even in the late iterations, ANBPO maintains a certain level of global exploration capability due to the superiority of its strategies, aiding in escaping local optima in threshold combinations. In summary, the proposed ANBPO achieves a robust balance between the global exploration and local exploitation phases, effectively enhancing the quality of mural image segmentation.
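The paper does not state how the exploration and exploitation percentages are computed; a commonly used dimension-wise, diversity-based estimate is sketched below as an assumption, with Divmax denoting the maximum diversity observed over the run.

% A common exploration/exploitation estimate (an assumption, not the paper's formula).
% X is the N-by-dim population at iteration t; Divmax is the maximum of Div over all iterations.
Div = mean(mean(abs(median(X, 1) - X), 1));       % dimension-wise, median-based diversity
exploration  = 100 * Div / Divmax;                % exploration percentage
exploitation = 100 * abs(Div - Divmax) / Divmax;  % exploitation percentage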

4.2.4. Effectiveness of Strategies Analysis

In this section, we primarily analyze the enhancing effects of the adaptive learning strategy, nonlinear factor, and third-order Bernstein-guided strategy proposed in this study on the performance of the PO algorithm, aiming to validate the effectiveness of these three learning strategies. Specifically, we define the APO algorithm by incorporating the adaptive learning strategy into the PO algorithm, the NPO algorithm by introducing the nonlinear factor into the PO algorithm, and the BPO algorithm by employing the third-order Bernstein-guided strategy within the PO algorithm. Subsequently, we apply the PO, APO, NPO, BPO, and ANBPO algorithms to solve 12 mural image segmentation problems and statistically rank their average fitness values to verify the effectiveness of the proposed strategies.
The experimental results are illustrated in Figure 15. As depicted in Figure 15, under four different numbers of segmentation thresholds, the average rankings of the APO, NPO, and BPO algorithms are all superior to those of the PO algorithm. This indicates that the three improvement strategies introduced in this study—the adaptive learning strategy, nonlinear factor, and third-order Bernstein-guided strategy—effectively enhance the mural image segmentation performance of the PO algorithm, confirming the validity of each strategy. Furthermore, the ANBPO algorithm, which incorporates all three learning strategies simultaneously, outperforms the APO, NPO, and BPO algorithms in terms of average ranking. This suggests that when these three improvement strategies are simultaneously introduced into the PO algorithm, they collectively enhance the algorithm’s global exploration capability, local exploitation capability, and overall balance, thereby further improving its performance. In summary, our study confirms that the three learning strategies introduced herein play a significant role in enhancing algorithm performance, and the simultaneous integration of these three strategies can further elevate the mural image segmentation performance of the algorithm.

4.2.5. Fitness Function Values Analysis

This section analyzes the fitness function values obtained by ANBPO for mural image segmentation with thresholds set to 2, 4, 6, and 8, as presented in Table 3. Specifically, “Mean” represents the average fitness function value from 30 independent experimental runs, “Std” denotes the standard deviation of these results, “Friedman” indicates the algorithm’s Friedman rank, and “Final Rank” reflects its overall performance ranking based on the Friedman metric. To visually demonstrate the advantages of ANBPO, Figure 16 illustrates the average fitness function value rankings of the algorithm across different numbers of segmentation thresholds.
As shown in the table, when the number of segmentation thresholds is 2 or 6, ANBPO achieves the optimal fitness function values across all 12 mural images, with a winning rate of 100%, demonstrating significant advantages in mural image segmentation. This performance is primarily attributed to the adaptive learning strategy and the third-order Bernstein-guided strategy proposed in this study, which effectively enhance the algorithm’s global exploration and exploitation capabilities, enabling comprehensive search and precise localization within the threshold combination space, and thereby improving the quality of mural image segmentation. Additionally, the incorporation of nonlinear factors balances the exploration and exploitation phases of the algorithm, enabling it to avoid local optima in threshold selection and enhancing segmentation accuracy. When the number of thresholds is 4 or 8, ANBPO attains the optimal fitness function values in 11 out of the 12 mural images, achieving a winning rate of 91.6%, indicating strong segmentation performance. However, it must be acknowledged that ANBPO’s performance occasionally lags behind that of the comparison algorithms, suggesting room for further improvement. Meanwhile, ANBPO achieves a Friedman ranking of 1.10, which is 71.1% higher than that of the second-ranked PO algorithm. Figure 16 visually confirms that, under varying numbers of segmentation thresholds, ANBPO consistently exhibits lower bar heights, validating its superior mural segmentation performance. Analysis of the fitness function values indicates that, owing to ANBPO’s advanced search strategies, it achieves higher mural segmentation performance compared to the comparison algorithms and can be considered a promising approach for mural image segmentation.

4.2.6. Wilcoxon Rank Sum Test

The previous subsection analyzed the numerical results of the ANBPO algorithm for solving mural image segmentation problems. To comprehensively evaluate the performance of the ANBPO algorithm, this section conducts a non-parametric Wilcoxon rank-sum test on the numerical experimental results of the fitness function, with a significance level set at 0.05. The experimental results are presented in Table 4, where “−” indicates that the algorithm’s performance is significantly inferior to that of the ANBPO algorithm, “+” denotes that the algorithm’s performance is significantly superior to that of the ANBPO algorithm, and “=” signifies no significant difference in performance between the algorithm and the ANBPO algorithm.
As can be seen from the table, among the 48 experiments conducted, the PRO and QHDBO algorithms demonstrate significantly inferior performance compared to the ANBPO algorithm in 47 instances, while the HEOA, PO, and IMODE algorithms show significantly inferior performance to the ANBPO algorithm in all 48 experiments. In total, the ANBPO algorithm outperforms the comparative algorithms in 238 of the 240 pairwise comparisons, a winning ratio of 99.2%. Therefore, it can be concluded that the ANBPO algorithm is a high-performing mural image segmentation algorithm. This advantage primarily stems from the introduction of the adaptive learning strategy, nonlinear factor, and third-order Bernstein-guided strategy in this study, which enhance the algorithm’s performance from the perspectives of global search, local exploitation, and algorithmic balance, thereby enabling the algorithm to achieve superior mural image segmentation performance.
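As an illustration only, the “+/−/=” labels in Table 4 can be produced along the following lines, assuming that each algorithm contributes 30 fitness values per image–threshold combination; the helper name and the use of scipy.stats.ranksums are our own choices and are not taken from the paper.

from scipy.stats import ranksums

def label_against_anbpo(anbpo_runs, other_runs, alpha=0.05):
    # anbpo_runs, other_runs: fitness values from 30 independent runs each.
    # Returns "-" if the competitor is significantly worse than ANBPO,
    # "+" if significantly better, "=" otherwise (fitness is maximized).
    _, p_value = ranksums(other_runs, anbpo_runs)
    if p_value >= alpha:
        return "="
    other_mean = sum(other_runs) / len(other_runs)
    anbpo_mean = sum(anbpo_runs) / len(anbpo_runs)
    return "+" if other_mean > anbpo_mean else "-"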

4.2.7. Convergence Analysis

This section primarily analyzes the convergence behavior of ANBPO when solving the mural image segmentation problem. A high-quality algorithm should not only achieve superior optimization precision but also exhibit faster convergence speed. Figure 17 illustrates the convergence curves of the algorithms when the number of thresholds is set to 6, where the Y-axis represents the fitness function value and the X-axis represents the iteration count. As shown in the figure, all algorithms effectively segment the mural images, and their convergence curves stabilize. However, it is noteworthy that, compared to the competing algorithms, ANBPO gains a significant lead after the 40th iteration, and this advantage gradually widens until convergence. This demonstrates that ANBPO not only achieves higher convergence precision but also exhibits faster convergence speed compared to the competing algorithms. Consequently, ANBPO can be regarded as a highly applicable method for mural image segmentation.
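A convergence curve such as those in Figure 17 is simply the best fitness found so far, recorded after every iteration. The following minimal sketch (hypothetical optimizer interface, maximization assumed) shows one way such curves can be collected and plotted; it is not the authors' experimental code.

import matplotlib.pyplot as plt

def convergence_curve(run_one_iteration, n_iterations=500):
    # run_one_iteration() is a hypothetical callback that advances the optimizer
    # by one iteration and returns the best fitness found in that iteration.
    best_so_far, curve = float("-inf"), []
    for _ in range(n_iterations):
        best_so_far = max(best_so_far, run_one_iteration())
        curve.append(best_so_far)
    return curve

# curves = {"ANBPO": [...], "PO": [...]}  # one recorded curve per algorithm
# for name, curve in curves.items():
#     plt.plot(curve, label=name)
# plt.xlabel("Iteration"); plt.ylabel("Fitness function value"); plt.legend(); plt.show()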

4.2.8. PSNR Analysis

This section primarily analyzes the PSNR performance of ANBPO for mural image segmentation. Table 5 presents the PSNR values obtained for segmentation thresholds of 2, 4, 6, and 8. Here, “Mean” denotes the average PSNR value across 30 independent and non-repetitive experimental runs, “Std” represents the standard deviation of these results, “Friedman” indicates the algorithm’s Friedman rank, and “Final Rank” reflects the algorithm’s ultimate ranking based on the Friedman metric. To visually demonstrate ANBPO’s superiority in terms of PSNR, Figure 18 illustrates the average PSNR rankings of the algorithm across different numbers of segmentation thresholds.
As shown in the table, when the number of segmentation thresholds is set to 2 or 6, ANBPO achieves the optimal Peak Signal-to-Noise Ratio (PSNR) values across all 12 mural images, with a winning rate of 100%, indicating minimal distortion in mural image segmentation. This performance is primarily attributed to the adaptive learning strategy proposed in this study, which effectively enhances the algorithm’s global exploration capability, minimizing noise interference during mural segmentation and preserving the quality of the segmented images. Additionally, the third-order Bernstein-guided strategy strengthens the algorithm’s local exploitation ability, significantly improving the quality of threshold combinations and enhancing segmentation quality. Furthermore, the integration of the nonlinear factor achieves a better balance between the global search and local exploitation phases of the algorithm, bolstering its image denoising capability and improving segmentation accuracy. When the number of thresholds is 4 or 8, ANBPO attains the optimal PSNR values in 11 out of the 12 mural images, achieving a winning rate of 91.6%, demonstrating robust segmentation performance. Although ANBPO occasionally underperforms compared to competing algorithms, its overall PSNR performance is superior. ANBPO achieves a Friedman ranking of 1.15, which is 67.6% better (lower) than that of the second-ranked QHDBO algorithm. Figure 18 visually confirms that, under varying numbers of segmentation thresholds, ANBPO exhibits a relatively low average PSNR ranking, validating its low distortion rate in segmented images. Analysis of the PSNR values indicates that, owing to ANBPO’s enhanced exploration and exploitation capabilities, it achieves minimal image distortion in mural segmentation tasks, ensuring high-quality segmentation. Therefore, ANBPO can be considered a promising approach for mural image segmentation.
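For reference, PSNR is computed between the original grey-level mural image and its thresholded version as PSNR = 10·log10(255²/MSE) [36]; a minimal sketch, assuming 8-bit images stored as NumPy arrays, is given below.

import numpy as np

def psnr(original, segmented, peak=255.0):
    # Peak Signal-to-Noise Ratio in dB between the original image and the
    # segmented image; higher values indicate less distortion.
    diff = original.astype(np.float64) - segmented.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)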

4.2.9. SSIM Analysis

This section primarily analyzes the SSIM performance of ANBPO for mural image segmentation. Table 6 presents the SSIM values obtained for segmentation thresholds of 2, 4, 6, and 8. Here, “Mean” denotes the average SSIM value across 30 independent and non-repetitive experimental runs, “Std” represents the standard deviation of these results, “Friedman” indicates the algorithm’s Friedman rank, and “Final Rank” reflects the algorithm’s ultimate ranking based on the Friedman metric. To visually demonstrate ANBPO’s superiority in terms of SSIM, Figure 19 illustrates the average SSIM rankings of the algorithm across different numbers of segmentation thresholds.
As shown in the table, when the number of segmentation thresholds is set to 2 or 6, ANBPO achieves the optimal Structural Similarity Index (SSIM) values across all 12 mural images, with a winning rate of 100%. This indicates a high degree of structural similarity in the segmented images and demonstrates the preservation of most structural features from the original images. This performance is primarily attributable to the adaptive learning strategy proposed in this study, which effectively enhances the algorithm’s global exploration capability, maximizes the preservation of structural integrity during mural image segmentation, and retains original image information throughout the effective segmentation process. Additionally, the third-order Bernstein-guided strategy strengthens the algorithm’s local exploitation ability, significantly improving the quality of threshold combinations for image segmentation and ensuring the retention of structural information. Furthermore, the integration of the nonlinear factor achieves a better balance between the global search and local exploitation phases of the algorithm, enhancing its multi-objective optimization capability and generating a comprehensive optimal segmentation solution that preserves most of the structural information from the original images. When the number of thresholds is 4 or 8, ANBPO attains the optimal SSIM values in 11 out of the 12 mural images, achieving a winning rate of 91.6%, demonstrating robust structural preservation capability. Although ANBPO occasionally underperforms in terms of SSIM compared to competing algorithms, its overall SSIM performance is superior. ANBPO achieves a Friedman ranking of 1.10, which is 69.4% better (lower) than that of the second-ranked PO algorithm. Figure 19 visually confirms that, under varying numbers of segmentation thresholds, ANBPO exhibits a relatively low average SSIM ranking, validating its higher feature preservation capability in segmented images. Analysis of the SSIM values indicates that, owing to ANBPO’s enhanced ability to balance across multiple metrics, it achieves exceptional feature preservation performance in mural image segmentation tasks, ensuring high-quality segmentation. Therefore, ANBPO can be considered a promising approach for mural image segmentation.
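SSIM compares luminance, contrast, and structure between the original and segmented images [37]. A minimal sketch using the standard scikit-image implementation (which may differ in implementation details from the code used by the authors) follows.

from skimage.metrics import structural_similarity

def ssim_score(original, segmented):
    # Both inputs are uint8 grey-level images with values in [0, 255];
    # the returned score lies in [-1, 1], with 1 meaning identical structure.
    return structural_similarity(original, segmented, data_range=255)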

4.2.10. FSIM Analysis

This section primarily analyzes the FSIM performance of ANBPO for mural image segmentation. Table 7 presents the FSIM values obtained for segmentation thresholds of 2, 4, 6, and 8. Here, “Mean” denotes the average FSIM value across 30 independent and non-repetitive experimental runs, “Std” represents the standard deviation of these results, “Friedman” indicates the algorithm’s Friedman rank, and “Final Rank” reflects the algorithm’s ultimate ranking based on the Friedman metric. To visually demonstrate ANBPO’s superiority in terms of FSIM, Figure 20 illustrates the average FSIM rankings of the algorithm across different numbers of segmentation thresholds.
As shown in the table, when the number of segmentation thresholds is set to 2 or 6, ANBPO achieves the optimal Feature Similarity Index (FSIM) values across all 12 mural images, with a winning rate of 100%. This demonstrates high feature similarity and enhanced visual similarity in the segmented images, thereby reducing distortion levels. This performance is primarily attributable to the adaptive learning strategy proposed in this study, which effectively enhances the algorithm’s global exploration capability, enabling better capture of visual features and minimizing distortion during image segmentation. Additionally, the third-order Bernstein-guided strategy strengthens the algorithm’s local exploitation ability, significantly improving the quality of threshold combinations for mural image segmentation and effectively preserving visual feature information from the original images, thereby enhancing segmentation quality. Furthermore, the integration of the nonlinear factor achieves a better balance between the global search and local exploitation phases of the algorithm, effectively balancing multiple metrics in image segmentation to ensure the preservation of structural and visual features. When the number of thresholds is 4 or 8, ANBPO attains the optimal FSIM values in 11 out of the 12 mural images, achieving a winning rate of 91.6%. This demonstrates strong visual feature preservation capability and reduces distortion in the segmented images. Although ANBPO occasionally underperforms in terms of FSIM compared to competing algorithms, its overall FSIM performance is superior. ANBPO achieves a Friedman ranking of 1.06, which is 69.7% better (lower) than that of the second-ranked IMODE algorithm. Figure 20 visually confirms that, under varying numbers of segmentation thresholds, ANBPO exhibits a relatively low average FSIM ranking, demonstrating its higher visual feature capture capability in segmented images. Analysis of the FSIM values indicates that, compared to competing algorithms, ANBPO effectively segments mural images while maximally preserving original visual features, improving segmentation quality, and reducing distortion, making it a promising approach for mural image segmentation.
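For completeness, FSIM as defined in [38] combines a phase-congruency similarity map and a gradient-magnitude similarity map, weighted by the phase congruency itself:

\[
\mathrm{FSIM}=\frac{\sum_{x\in\Omega} S_{PC}(x)\,S_{G}(x)\,PC_{m}(x)}{\sum_{x\in\Omega} PC_{m}(x)},\qquad
S_{PC}(x)=\frac{2PC_{1}(x)PC_{2}(x)+T_{1}}{PC_{1}^{2}(x)+PC_{2}^{2}(x)+T_{1}},\qquad
S_{G}(x)=\frac{2G_{1}(x)G_{2}(x)+T_{2}}{G_{1}^{2}(x)+G_{2}^{2}(x)+T_{2}},
\]

where PC_1, PC_2 and G_1, G_2 denote the phase congruency and gradient magnitude maps of the original and segmented images, PC_m(x) = max(PC_1(x), PC_2(x)), Ω is the image domain, and T_1, T_2 are small stabilizing constants.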

4.2.11. Runtime Analysis

This section primarily focuses on analyzing the runtime of the ANBPO algorithm when solving mural image segmentation problems. The experimental results are presented in Table 8, where “Mean Rank” indicates the average runtime ranking of the algorithm across 12 mural image segmentation problems, and “Final Rank” represents the algorithm’s final ranking based on the “Mean Rank” metric.
As shown in Table 8, in 43 out of the 48 experiments conducted, the ANBPO algorithm achieved shorter runtimes compared to the competing algorithms, with a winning rate of 89.6%. This is primarily attributed to the ANBPO algorithm’s simpler operational structure, which reduces the time consumed during actual computations. Additionally, as indicated in the last row of the table, the ANBPO algorithm attained an average runtime ranking of 1.13, demonstrating its superiority over the competing algorithms. To visually compare the algorithms’ runtimes, Figure 21 displays a bar chart of the average runtime rankings under different numbers of segmentation thresholds. From the figure, it is evident that the ANBPO algorithm consistently exhibits lower runtime costs under all four operational conditions. In summary, due to its concise operational logic, the ANBPO algorithm holds an advantage in terms of runtime metrics, making it a practical and effective mural image segmentation algorithm.
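The runtimes in Table 8 are presumably wall-clock measurements of complete segmentation runs; the sketch below shows a typical way to collect them. The callable name is hypothetical, and averaging over 30 repetitions in seconds is our assumption, since the table does not state its units.

import time

def mean_runtime(run_segmentation, n_runs=30):
    # run_segmentation() is a hypothetical callable performing one complete
    # ANBPO segmentation of a mural image; returns the mean wall-clock time.
    total = 0.0
    for _ in range(n_runs):
        start = time.perf_counter()
        run_segmentation()
        total += time.perf_counter() - start
    return total / n_runs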

4.2.12. Comprehensive Analysis

The preceding sections conducted separate analyses of the fitness function value, PSNR, SSIM, and FSIM metrics. This section provides a comprehensive analysis of these four indicators. The experimental results are shown in Figure 22, where the Y-axis represents the stacked ranking of the fitness function value, PSNR, SSIM, and FSIM. As depicted, the ANBPO algorithm achieves a final ranking of 1 across all four metrics (fitness function value, PSNR, SSIM, and FSIM), attaining the lowest stacked bar height of 4. Compared to competing algorithms, it demonstrates superior overall performance in mural image segmentation. This is primarily attributed to the three learning strategies proposed in this study, which enable the algorithm to effectively balance multiple metrics and determine a globally optimal threshold segmentation scheme. Consequently, ANBPO ensures high-quality segmentation of mural images, validating its effectiveness as an efficient image segmentation method.

5. Conclusions and Future Works

Due to insufficient data support, existing neural network-based image segmentation models exhibit certain limitations, thereby adversely affecting the performance of mural image segmentation. Consequently, optimization algorithm-based image segmentation methods have gradually gained popularity. This study addresses the issue of PO easily falling into local optimal segmentation threshold traps when solving mural image segmentation problems, thereby degrading the quality of the segmented images. The root cause lies in PO’s deficiencies, including insufficient exploration capability, inadequate exploitation capability, and an imbalance between the exploration and exploitation phases. To mitigate these limitations and improve mural image segmentation quality, this study proposes an enhanced PO variant, termed ANBPO, by integrating an adaptive learning strategy, a nonlinear factor, and a third-order Bernstein-guided strategy into the PO framework. First, to address PO’s limited global exploration capability, an adaptive learning strategy is introduced to effectively enhance the algorithm’s global exploration, enabling it to thoroughly search the solution space and improve mural image segmentation quality. Second, to resolve the imbalance between global exploration and local exploitation, a nonlinear factor is proposed to balance these two phases, thereby strengthening the algorithm’s ability to escape local optimal segmentation thresholds and enhancing the segmentation quality of mural images. Finally, to counter PO’s inadequate exploitation capability, a third-order Bernstein-guided strategy is introduced to boost the algorithm’s local exploitation capability, improving the precision of mural image segmentation. Subsequently, the proposed ANBPO was applied to twelve mural image segmentation problems. Experimental results demonstrate that, compared to competing algorithms, ANBPO achieves a 91.6% win rate in terms of fitness function values, indicating superior segmentation performance. Additionally, ANBPO outperforms competing algorithms by 67.6%, 69.4%, and 69.7% on the PSNR, SSIM, and FSIM metrics, respectively. These results confirm that ANBPO maximally preserves the original feature information of mural images during segmentation, thereby enhancing the quality of mural image segmentation.
Despite ANBPO achieving exceptional overall performance in mural image segmentation, its performance may still exhibit limitations in certain specialized scenarios. Future work will focus on the following directions: (1) developing more efficient search strategies based on ANBPO to further enhance the quality of mural image segmentation; (2) extending ANBPO to solve more complex combinatorial optimization problems, thereby broadening its application domains; and (3) proposing a multi-objective version of ANBPO to address a wider range of complex optimization problems.

Author Contributions

Conceptualization, J.W.; methodology, J.W.; software, J.W.; validation, J.F. and X.Z.; formal analysis, J.W. and B.Q.; investigation, J.F. and X.Z.; resources, B.Q.; data curation, J.W. and J.F.; writing—original draft preparation, J.W.; writing—review and editing, J.W. and J.F.; visualization, X.Z. and B.Q.; supervision, J.F. and X.Z.; project administration, J.F.; funding acquisition, J.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available from the corresponding author upon reasonable request.

Acknowledgments

The authors thank the reviewers and the editorial office for their efforts on this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Li, H. Intelligent restoration of ancient murals based on discrete differential algorithm. J. Comput. Methods Sci. Eng. 2021, 21, 803–814.
2. Jiao, L.J.; Wang, W.J.; Li, B.J.; Zhao, Q.S. Wutai Mountain mural inpainting based on improved block matching algorithm. J. Comput. Aid Des. Comput. Graph. 2019, 31, 119–125.
3. Yu, T.; Lin, C.; Zhang, S.; Ding, X.; Wu, J.; Zhang, J. End-to-end partial convolutions neural networks for Dunhuang grottoes wall-painting restoration. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Republic of Korea, 27–28 October 2019.
4. Han, P.H.; Chen, Y.S.; Liu, I.S.; Jang, Y.P.; Tsai, L.; Chang, A.; Hung, Y.P. A compelling virtual tour of the dunhuang cave with an immersive head-mounted display. IEEE Comput. Graph. Appl. 2019, 40, 40–55.
5. Hou, M.; Cao, N.; Tan, L.; Lyu, S.; Zhou, P.; Xu, C. Extraction of hidden information under sootiness on murals based on hyperspectral image enhancement. Appl. Sci. 2019, 9, 3591.
6. Wu, M.; Li, M.; Zhang, Q. Feature Separation and Fusion to Optimise the Migration Model of Mural Painting Style in Tombs. Appl. Sci. 2024, 14, 2784.
7. Xiao, C.; Chen, Y.; Sun, C.; You, L.; Li, R. AM-ESRGAN: Super-resolution reconstruction of ancient murals based on attention mechanism and multi-level residual network. Electronics 2024, 13, 3142.
8. Zhou, Y.; Guo, M.; Ma, M. Mural image restoration with spatial geometric perception and progressive context refinement. Comput. Graph. 2025, 130, 104266.
9. Yu, Z.; Lyu, S.; Hou, M.; Sun, Y.; Li, L. A new method for extracting refined sketches of ancient murals. Sensors 2024, 24, 2213.
10. Abualigah, L.; Almotairi, K.H.; Elaziz, M.A. Multilevel thresholding image segmentation using meta-heuristic optimization algorithms: Comparative analysis, open challenges and new trends. Appl. Intell. 2023, 53, 11654–11704.
11. Khishe, M.; Mosavi, M.R. Chimp optimization algorithm. Expert Syst. Appl. 2020, 149, 113338.
12. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 2022, 242, 108320.
13. Rocca, P.; Oliveri, G.; Massa, A. Differential evolution as applied to electromagnetics. IEEE Antennas Propag. Mag. 2011, 53, 38–49.
14. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73.
15. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713.
16. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948.
17. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323.
18. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
19. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513.
20. Zhao, W.; Wang, L.; Zhang, Z. Atom search optimization and its application to solve a hydrogeologic parameter estimation problem. Knowl.-Based Syst. 2019, 163, 283–304.
21. Kaveh, A.; Bakhshpoori, T. Water evaporation optimization: A novel physically inspired optimization algorithm. Comput. Struct. 2016, 167, 69–85.
22. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315.
23. Moosavi, S.H.S.; Bardsiri, V.K. Poor and rich optimization algorithm: A new human-based and multi populations algorithm. Eng. Appl. Artif. Intell. 2019, 86, 165–181.
24. Shabani, A.; Asgarian, B.; Salido, M.; Gharebaghi, S.A. Search and rescue optimization algorithm: A new optimization method for solving constrained engineering optimization problems. Expert Syst. Appl. 2020, 161, 113698.
25. Houssein, E.H.; Abdalkarim, N.; Hussain, K.; Mohamed, E. Accurate multilevel thresholding image segmentation via oppositional Snake Optimization algorithm: Real cases with liver disease. Comput. Biol. Med. 2024, 169, 107922.
26. Wang, J.; Bao, Z.; Dong, H. An Improved Northern Goshawk Optimization Algorithm for Mural Image Segmentation. Biomimetics 2025, 10, 373.
27. Qiao, L.; Liu, K.; Xue, Y.; Tang, W.; Salehnia, T. A multi-level thresholding image segmentation method using hybrid Arithmetic Optimization and Harris Hawks Optimizer algorithms. Expert Syst. Appl. 2024, 241, 122316.
28. Yuan, C.; Zhao, D.; Heidari, A.A.; Liu, L.; Chen, Y.; Wu, Z.; Chen, H. Artemisinin optimization based on malaria therapy: Algorithm and applications to medical image segmentation. Displays 2024, 84, 102740.
29. Chen, D.; Ge, Y.; Wan, Y.; Deng, Y.; Chen, Y.; Zou, F. Poplar optimization algorithm: A new meta-heuristic optimization technique for numerical optimization and image segmentation. Expert Syst. Appl. 2022, 200, 117118.
30. Wang, J.; Bei, J.; Song, H.; Zhang, H.; Zhang, P. A whale optimization algorithm with combined mutation and removing similarity for global optimization and multilevel thresholding image segmentation. Appl. Soft Comput. 2023, 137, 110130.
31. Das, A.; Namtirtha, A.; Dutta, A. Lévy–Cauchy arithmetic optimization algorithm combined with rough K-means for image segmentation. Appl. Soft Comput. 2023, 140, 110268.
32. Wang, Z.; Yu, F.; Wang, D.; Liu, T.; Hu, R. Multi-threshold segmentation of breast cancer images based on improved dandelion optimization algorithm. J. Supercomput. 2024, 80, 3849–3874.
33. Lian, J.; Hui, G.; Ma, L.; Zhu, T.; Wu, X.; Heidari, A.A.; Chen, H. Parrot optimizer: Algorithm and applications to medical problems. Comput. Biol. Med. 2024, 172, 108064.
34. Zhang, X.; Lin, Q. Three-learning strategy particle swarm algorithm for global optimization problems. Inf. Sci. 2022, 593, 289–313.
35. Wu, F.; Li, S.; Zhang, J.; Xie, R.; Yang, M. Bernstein-based oppositional-multiple learning and differential enhanced exponential distribution optimizer for real-world optimization problems. Eng. Appl. Artif. Intell. 2024, 138, 109370.
36. Huynh-Thu, Q.; Ghanbari, M. Scope of validity of PSNR in image/video quality assessment. Electron. Lett. 2008, 44, 800–801.
37. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
38. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386.
39. Taheri, A.; RahimiZadeh, K.; Beheshti, A.; Baumbach, J.; Rao, R.V.; Mirjalili, S.; Gandomi, A.H. Partial reinforcement optimizer: An evolutionary optimization algorithm. Expert Syst. Appl. 2024, 238, 122070.
40. Lian, J.; Hui, G. Human evolutionary optimization algorithm. Expert Syst. Appl. 2024, 241, 122638.
41. Zhu, F.; Li, G.; Tang, H.; Li, Y.; Lv, X.; Wang, X. Dung beetle optimization algorithm based on quantum computing and multi-strategy fusion for solving engineering problems. Expert Syst. Appl. 2024, 236, 121219.
42. Sallam, K.M.; Elsayed, S.M.; Chakrabortty, R.K.; Ryan, M.J. Improved multi-operator differential evolution algorithm for solving unconstrained problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8.
Figure 1. Schematic of adaptive learning strategy.
Figure 2. Sigmoid function term.
Figure 3. Inverse cosine function term.
Figure 4. Arctangent function term.
Figure 5. Nonlinear factor.
Figure 6. Third-order Bernstein polynomial.
Figure 7. Third-order Bernstein-guided strategy schematic.
Figure 8. ANBPO process diagram.
Figure 9. Mural image information.
Figure 10. Using the ANBPO algorithm to solve mural image segmentation.
Figure 11. Average ranking under different population sizes.
Figure 14. Exploration/exploitation ratio.
Figure 15. Average ranking of algorithms under different strategy guidance.
Figure 16. Ranking of fitness function values.
Figure 17. Convergence curve of fitness function value.
Figure 18. Ranking of PSNR values.
Figure 19. Ranking of SSIM values.
Figure 20. Ranking of FSIM values.
Figure 21. Average ranking of algorithm runtime.
Figure 22. Stacking rank.
Table 1. Baseline algorithms information.

Algorithm | Year | Parameter Settings
PRO [39] | 2024 | τ = t/T
HEOA [40] | 2024 | ω = 0.2·cos((π/2)·(1 − t/T))
PO [33] | 2024 | No parameters
QHDBO [41] | 2024 | R_DB = 6, E_FDBO = 13, S_DB = 11
IMODE [42] | 2020 | D = 2, arch_rate = 2.6
Table 2. Actual segmentation results of mural images.

(Image-based table: for each mural image M1–M12, the segmented results at nTH = 2, 4, 6, and 8 thresholds are shown; the image content cannot be reproduced in text form.)
Table 3. Fitness function value.

Fun | nTH | PRO Mean | PRO Std | HEOA Mean | HEOA Std | PO Mean | PO Std | QHDBO Mean | QHDBO Std | IMODE Mean | IMODE Std | ANBPO Mean | ANBPO Std
M121.813 × 10+038.352 × 10−011.729 × 10+033.675 × 10−011.850 × 10+032.879 × 10−011.806 × 10+033.554 × 10−011.878 × 10+033.003 × 10−011.968 × 10+033.856 × 10−03
41.960 × 10+033.968 × 10−011.974 × 10+034.176 × 10−011.977 × 10+035.018 × 10−011.914 × 10+032.319 × 10−031.957 × 10+039.386 × 10−022.055 × 10+036.688 × 10−03
62.100 × 10+039.918 × 10−012.002 × 10+034.686 × 10−012.087 × 10+039.665 × 10−012.089 × 10+037.851 × 10−012.079 × 10+036.825 × 10−012.187 × 10+032.458 × 10−03
82.137 × 10+031.707 × 10−012.125 × 10+036.466 × 10−012.171 × 10+033.681 × 10−012.175 × 10+036.164 × 10−022.177 × 10+039.645 × 10−012.281 × 10+034.850 × 10−03
M221.862 × 10+035.941 × 10−011.878 × 10+037.653 × 10−011.802 × 10+032.357 × 10−011.887 × 10+032.251 × 10−011.871 × 10+035.155 × 10−011.901 × 10+039.242 × 10−03
41.998 × 10+031.871 × 10−011.958 × 10+035.132 × 10−021.994 × 10+037.300 × 10−011.927 × 10+031.509 × 10−011.959 × 10+038.456 × 10−012.036 × 10+038.011 × 10−03
62.051 × 10+035.163 × 10−012.090 × 10+035.548 × 10−022.051 × 10+034.892 × 10−012.027 × 10+031.213 × 10−012.069 × 10+032.477 × 10−012.172 × 10+033.464 × 10−03
82.161 × 10+035.903 × 10−012.188 × 10+033.963 × 10−022.177 × 10+032.244 × 10−012.187 × 10+037.401 × 10−012.161 × 10+031.428 × 10−012.281 × 10+031.847 × 10−03
M321.704 × 10+035.725 × 10−011.702 × 10+035.769 × 10−011.757 × 10+034.049 × 10−011.763 × 10+037.642 × 10−011.722 × 10+033.311 × 10−011.952 × 10+032.416 × 10−03
41.910 × 10+034.311 × 10−011.961 × 10+035.306 × 10−011.943 × 10+032.542 × 10−021.945 × 10+033.695 × 10−011.967 × 10+032.824 × 10−012.069 × 10+032.703 × 10−01
62.051 × 10+036.846 × 10−012.035 × 10+035.228 × 10−012.027 × 10+033.230 × 10−012.093 × 10+035.396 × 10−012.042 × 10+031.183 × 10−012.110 × 10+039.911 × 10−01
82.107 × 10+037.664 × 10−012.111 × 10+035.327 × 10−012.107 × 10+035.773 × 10−012.160 × 10+035.689 × 10−012.104 × 10+037.490 × 10−012.230 × 10+039.125 × 10−01
M421.915 × 10+031.423 × 10−012.072 × 10+032.723 × 10−012.067 × 10+032.441 × 10−012.061 × 10+037.754 × 10−021.924 × 10+034.893 × 10−012.118 × 10+034.486 × 10−03
42.109 × 10+035.008 × 10−012.198 × 10+036.679 × 10−012.138 × 10+032.440 × 10−012.106 × 10+031.184 × 10−012.161 × 10+036.653 × 10−012.212 × 10+033.122 × 10−03
62.254 × 10+032.135 × 10−012.220 × 10+033.077 × 10−012.296 × 10+034.107 × 10−012.220 × 10+031.161 × 10−012.265 × 10+032.023 × 10−022.302 × 10+035.954 × 10−03
82.340 × 10+036.163 × 10−012.359 × 10+036.875 × 10−012.316 × 10+037.406 × 10−012.357 × 10+033.099 × 10−012.326 × 10+039.467 × 10−012.488 × 10+037.470 × 10−03
M521.965 × 10+034.318 × 10−011.914 × 10+039.641 × 10−012.068 × 10+036.459 × 10−011.951 × 10+032.746 × 10−022.069 × 10+038.577 × 10−012.184 × 10+033.591 × 10−03
42.183 × 10+039.105 × 10−012.168 × 10+034.511 × 10−012.122 × 10+039.968 × 10−012.154 × 10+031.662 × 10−012.141 × 10+036.976 × 10−012.140 × 10+033.207 × 10−03
62.210 × 10+038.396 × 10−012.214 × 10+031.822 × 10−012.279 × 10+032.783 × 10−022.294 × 10+035.123 × 10−012.294 × 10+033.414 × 10−012.335 × 10+037.437 × 10−03
82.380 × 10+033.985 × 10−012.379 × 10+034.268 × 10−022.397 × 10+038.447 × 10−012.385 × 10+038.494 × 10−022.355 × 10+038.251 × 10−022.446 × 10+035.010 × 10−03
M622.146 × 10+039.882 × 10−012.233 × 10+034.068 × 10−012.140 × 10+038.554 × 10−012.172 × 10+035.330 × 10−012.215 × 10+034.111 × 10−012.382 × 10+038.480 × 10−03
42.321 × 10+033.821 × 10−012.327 × 10+031.676 × 10−012.385 × 10+037.740 × 10−012.363 × 10+035.560 × 10−012.390 × 10+039.804 × 10−012.484 × 10+039.184 × 10−03
62.418 × 10+039.068 × 10−012.438 × 10+036.645 × 10−012.462 × 10+037.520 × 10−012.486 × 10+039.283 × 10−012.489 × 10+034.272 × 10−012.582 × 10+036.756 × 10−03
82.502 × 10+032.733 × 10−012.541 × 10+037.706 × 10−012.527 × 10+034.829 × 10−012.527 × 10+031.037 × 10−012.543 × 10+035.780 × 10−012.582 × 10+031.078 × 10−03
M721.830 × 10+039.584 × 10−011.878 × 10+039.725 × 10−021.884 × 10+034.415 × 10−031.867 × 10+039.208 × 10−011.823 × 10+032.722 × 10−011.991 × 10+034.926 × 10−03
41.937 × 10+038.134 × 10−011.919 × 10+035.443 × 10−011.996 × 10+038.147 × 10−011.935 × 10+031.655 × 10−011.926 × 10+039.502 × 10−012.092 × 10+032.248 × 10−03
62.086 × 10+037.735 × 10−012.031 × 10+033.954 × 10−012.031 × 10+031.736 × 10−012.064 × 10+035.458 × 10−012.012 × 10+036.190 × 10−012.125 × 10+039.585 × 10−03
82.122 × 10+033.275 × 10−012.119 × 10+032.701 × 10−012.129 × 10+031.146 × 10−012.151 × 10+036.068 × 10−012.136 × 10+037.348 × 10−012.150 × 10+033.670 × 10−01
M822.193 × 10+038.043 × 10−012.168 × 10+032.752 × 10−012.174 × 10+039.430 × 10−012.246 × 10+039.186 × 10−012.256 × 10+031.558 × 10−012.381 × 10+036.996 × 10−01
42.346 × 10+036.934 × 10−012.308 × 10+037.018 × 10−012.397 × 10+039.771 × 10−022.384 × 10+038.133 × 10−012.306 × 10+036.269 × 10−012.460 × 10+033.807 × 10−03
62.429 × 10+032.730 × 10−012.411 × 10+036.778 × 10−012.403 × 10+034.274 × 10−012.439 × 10+037.378 × 10−012.466 × 10+032.616 × 10−012.577 × 10+032.923 × 10−03
82.516 × 10+034.396 × 10−012.522 × 10+038.564 × 10−012.527 × 10+036.899 × 10−012.543 × 10+033.740 × 10−012.548 × 10+038.250 × 10−012.578 × 10+032.502 × 10−04
M921.899 × 10+036.740 × 10−011.720 × 10+034.882 × 10−011.803 × 10+035.393 × 10−011.751 × 10+035.164 × 10−011.703 × 10+032.144 × 10−011.990 × 10+031.143 × 10−03
41.936 × 10+032.621 × 10−011.918 × 10+038.270 × 10−011.938 × 10+032.595 × 10−011.928 × 10+033.427 × 10−011.992 × 10+037.983 × 10−012.072 × 10+039.555 × 10−03
62.080 × 10+034.779 × 10−012.091 × 10+035.835 × 10−012.091 × 10+033.184 × 10−012.076 × 10+032.208 × 10−012.044 × 10+036.504 × 10−012.116 × 10+037.134 × 10−03
82.188 × 10+033.904 × 10−012.110 × 10+031.766 × 10−012.148 × 10+036.578 × 10−012.117 × 10+033.469 × 10−012.109 × 10+038.407 × 10−012.293 × 10+037.798 × 10−03
M1021.719 × 10+031.438 × 10−021.743 × 10+037.830 × 10−011.884 × 10+036.177 × 10−011.895 × 10+036.832 × 10−011.870 × 10+037.778 × 10−021.969 × 10+036.079 × 10−03
41.909 × 10+033.621 × 10−011.910 × 10+034.601 × 10−011.912 × 10+034.820 × 10−011.956 × 10+034.957 × 10−011.916 × 10+032.702 × 10−012.051 × 10+036.521 × 10−03
62.081 × 10+031.636 × 10−012.054 × 10+038.964 × 10−012.037 × 10+031.358 × 10−012.008 × 10+035.350 × 10−012.066 × 10+032.550 × 10−012.154 × 10+033.527 × 10−03
82.125 × 10+034.139 × 10−012.121 × 10+034.987 × 10−012.170 × 10+032.339 × 10−012.162 × 10+033.894 × 10−022.132 × 10+034.937 × 10−012.249 × 10+031.102 × 10−03
M1121.787 × 10+035.036 × 10−011.848 × 10+032.007 × 10−011.750 × 10+038.128 × 10−031.817 × 10+031.156 × 10−011.845 × 10+037.805 × 10−011.973 × 10+033.508 × 10−03
41.988 × 10+037.509 × 10−011.998 × 10+038.576 × 10−021.963 × 10+034.031 × 10−011.928 × 10+032.261 × 10−011.932 × 10+036.414 × 10−012.048 × 10+034.671 × 10−01
62.077 × 10+037.419 × 10−022.068 × 10+031.906 × 10−012.064 × 10+033.102 × 10−012.025 × 10+032.401 × 10−022.036 × 10+032.432 × 10−012.182 × 10+039.253 × 10−01
82.165 × 10+038.111 × 10−012.150 × 10+031.194 × 10−012.168 × 10+031.836 × 10−012.116 × 10+033.910 × 10−012.156 × 10+031.398 × 10−012.228 × 10+038.540 × 10−01
M1221.923 × 10+036.196 × 10−012.098 × 10+034.279 × 10−011.925 × 10+037.119 × 10−012.085 × 10+033.927 × 10−012.060 × 10+037.114 × 10−022.169 × 10+033.564 × 10−03
42.152 × 10+035.787 × 10−012.142 × 10+039.525 × 10−012.164 × 10+033.607 × 10−012.131 × 10+033.459 × 10−012.132 × 10+035.931 × 10−012.277 × 10+033.058 × 10−03
62.284 × 10+032.190 × 10−022.233 × 10+036.926 × 10−012.275 × 10+033.104 × 10−012.295 × 10+039.227 × 10−012.209 × 10+033.474 × 10−012.331 × 10+034.266 × 10−03
82.363 × 10+031.109 × 10−012.367 × 10+031.743 × 10−012.337 × 10+034.451 × 10−012.332 × 10+035.746 × 10−012.330 × 10+035.481 × 10−012.414 × 10+032.103 × 10−03
Friedman | 4.15 | 4.15 | 3.81 | 3.85 | 3.94 | 1.10
Final Rank | 5 | 5 | 2 | 3 | 4 | 1
Table 4. Wilcoxon rank sum test results.

Fun | nTH | PRO | HEOA | PO | QHDBO | IMODE
M121.3642 × 10−10/−1.9289 × 10−10/−5.0678 × 10−10/−3.1608 × 10−10/−1.4736 × 10−10/−
45.8498 × 10−04/−9.9219 × 10−10/−6.5623 × 10−10/−9.7275 × 10−10/−4.5822 × 10−10/−
62.0318 × 10−05/−5.0506 × 10−06/−6.2763 × 10−06/−2.6400 × 10−06/−8.6537 × 10−10/−
84.9936 × 10−06/−7.4739 × 10−06/−5.2109 × 10−10/−4.9712 × 10−10/−2.7877 × 10−10/−
M226.4912 × 10−05/−4.4042 × 10−06/−1.6101 × 10−10/−7.7586 × 10−10/−5.8618 × 10−10/−
45.5287 × 10−10/−7.9693 × 10−06/−2.5759 × 10−05/−8.4656 × 10−05/−6.6049 × 10−10/−
61.1824 × 10−10/−5.2306 × 10−06/−2.1946 × 10−05/−5.0859 × 10−05/−9.1871 × 10−08/−
83.3066 × 10−10/−3.7303 × 10−06/−2.4485 × 10−05/−6.7861 × 10−05/−4.7468 × 10−07/−
M326.5368 × 10−04/−3.4066 × 10−05/−5.5765 × 10−05/−2.0654 × 10−04/−2.1267 × 10−07/−
41.5440 × 10−04/−1.6304 × 10−07/−4.6252 × 10−05/−8.6085 × 10−04/−9.9296 × 10−07/−
61.7762 × 10−04/−6.0955 × 10−05/−7.8324 × 10−05/−1.6428 × 10−04/−7.1721 × 10−08/−
83.8749 × 10−04/−3.0658 × 10−05/−7.4577 × 10−05/−7.7117 × 10−04/−2.8969 × 10−07/−
M424.6158 × 10−04/−3.3361 × 10−05/−6.6339 × 10−05/−3.5820 × 10−04/−6.7440 × 10−07/−
48.6778 × 10−04/−4.1070 × 10−05/−8.4311 × 10−10/−7.9695 × 10−05/−5.6552 × 10−07/−
66.3869 × 10−04/−1.1849 × 10−05/−8.9667 × 10−10/−2.5410 × 10−05/−1.8045 × 10−07/−
83.2806 × 10−04/−4.0863 × 10−05/−7.1984 × 10−10/−7.5516 × 10−05/−6.2633 × 10−07/−
M521.0676 × 10−04/−7.1504 × 10−05/−7.0213 × 10−10/−5.4913 × 10−06/−1.6192 × 10−07/−
44.5783 × 10−04/+8.6690 × 10−05/−2.2766 × 10−10/−1.9205 × 10−05/−5.6007 × 10−10/−
68.0291 × 10−04/−3.3926 × 10−05/−7.4347 × 10−10/−9.6063 × 10−06/−1.8141 × 10−10/−
89.0346 × 10−04/−2.4084 × 10−10/−5.1583 × 10−10/−5.1951 × 10−05/−9.5299 × 10−10/−
M621.9730 × 10−10/−6.7358 × 10−10/−7.3108 × 10−10/−8.3355 × 10−10/−5.3721 × 10−10/−
47.3229 × 10−08/−6.2939 × 10−10/−9.3810 × 10−10/−2.9181 × 10−10/−5.3625 × 10−10/−
62.0756 × 10−10/−8.3014 × 10−10/−9.7863 × 10−10/−4.1487 × 10−05/−7.7428 × 10−10/−
87.5736 × 10−10/−8.6631 × 10−10/−4.7379 × 10−10/−6.5172 × 10−05/−1.8356 × 10−10/−
M725.5886 × 10−10/−3.1267 × 10−10/−3.5775 × 10−08/−3.5684 × 10−05/−3.1033 × 10−10/−
46.5454 × 10−06/−3.1429 × 10−10/−3.0008 × 10−07/−3.9868 × 10−05/−2.6549 × 10−05/−
65.8425 × 10−06/−9.1127 × 10−10/−8.8427 × 10−07/−8.7889 × 10−05/−6.2331 × 10−07/−
88.0178 × 10−06/−5.7071 × 10−10/−3.7543 × 10−07/−9.2456 × 10−05/+3.5564 × 10−05/−
M822.7575 × 10−06/−5.4833 × 10−10/−7.3774 × 10−07/−2.0791 × 10−05/−1.0050 × 10−05/−
42.8384 × 10−06/−2.7683 × 10−10/−1.8976 × 10−07/−9.0112 × 10−05/−8.3328 × 10−05/−
64.5143 × 10−06/−9.6078 × 10−10/−7.7697 × 10−07/−7.2491 × 10−05/−8.4497 × 10−05/−
88.2140 × 10−06/−7.4315 × 10−10/−3.3812 × 10−08/−2.1328 × 10−05/−4.3536 × 10−05/−
M925.8162 × 10−06/−1.4464 × 10−10/−4.1826 × 10−07/−4.6928 × 10−05/−1.7966 × 10−05/−
46.6693 × 10−06/−9.2293 × 10−10/−9.3306 × 10−07/−9.2167 × 10−05/−3.5767 × 10−05/−
67.5149 × 10−06/−5.8577 × 10−10/−1.7713 × 10−10/−1.1862 × 10−05/−6.5782 × 10−05/−
88.7296 × 10−06/−3.8752 × 10−10/−6.5129 × 10−10/−7.0887 × 10−05/−9.5371 × 10−05/−
M1027.1784 × 10−06/−9.0082 × 10−10/−7.6108 × 10−10/−5.3151 × 10−05/−2.5902 × 10−05/−
44.3945 × 10−06/−2.1765 × 10−10/−2.7176 × 10−10/−1.8635 × 10−04/−7.7174 × 10−10/−
69.0192 × 10−10/−4.7446 × 10−10/−1.6493 × 10−10/−5.5588 × 10−04/−4.8579 × 10−10/−
82.4785 × 10−05/−8.3432 × 10−10/−5.4173 × 10−06/−8.6634 × 10−04/−7.9227 × 10−05/−
M1121.6496 × 10−05/−6.7877 × 10−10/−4.7899 × 10−07/−2.4490 × 10−04/−9.8824 × 10−05/−
45.7133 × 10−05/−2.4253 × 10−10/−3.2596 × 10−07/−2.6633 × 10−04/−3.2522 × 10−05/−
63.7551 × 10−05/−9.8707 × 10−10/−8.6215 × 10−08/−4.4853 × 10−04/−8.3948 × 10−05/−
82.8655 × 10−05/−3.4994 × 10−10/−3.2291 × 10−07/−1.0395 × 10−05/−5.1242 × 10−05/−
M1225.0533 × 10−05/−5.9923 × 10−10/−7.5117 × 10−07/−9.5975 × 10−04/−2.0433 × 10−05/−
41.4586 × 10−05/−7.3389 × 10−10/−2.5219 × 10−10/−7.1579 × 10−04/−4.2362 × 10−05/−
61.9028 × 10−10/−7.9507 × 10−10/−1.8715 × 10−10/−4.1668 × 10−04/−4.3946 × 10−05/−
88.1143 × 10−10/−6.4769 × 10−10/−3.2535 × 10−10/−1.2204 × 10−10/−8.4073 × 10−10/−
+/−/= | NA | 1/47/0 | 0/48/0 | 0/48/0 | 1/47/0 | 0/48/0
Table 5. PSNR values.

Fun | nTH | PRO Mean | PRO Std | HEOA Mean | HEOA Std | PO Mean | PO Std | QHDBO Mean | QHDBO Std | IMODE Mean | IMODE Std | ANBPO Mean | ANBPO Std
M1218.9486.431 × 10−0318.8715.434 × 10−0318.1797.396 × 10−0318.2826.507 × 10−0318.9966.192 × 10−0319.7377.942 × 10−03
423.0292.215 × 10−0323.5093.781 × 10−0323.7105.963 × 10−0323.6739.685 × 10−0323.4158.716 × 10−0324.0615.242 × 10−03
625.1109.285 × 10−0325.3762.059 × 10−0325.1863.874 × 10−0325.9849.170 × 10−0325.8232.748 × 10−0326.6692.993 × 10−03
826.1985.062 × 10−0326.1064.016 × 10−0326.4901.290 × 10−0326.8268.936 × 10−0326.9836.533 × 10−0327.3921.871 × 10−03
M2218.3984.722 × 10−0318.9603.664 × 10−0318.5998.896 × 10−0318.9359.255 × 10−0318.4207.506 × 10−0319.0812.716 × 10−03
423.7498.245 × 10−0323.5987.703 × 10−0323.7588.988 × 10−0323.2963.130 × 10−0323.4905.546 × 10−0324.6437.931 × 10−03
625.7903.760 × 10−0325.7086.644 × 10−0325.5772.420 × 10−0325.9463.913 × 10−0325.1001.942 × 10−0326.9249.338 × 10−03
827.4518.446 × 10−0327.0707.790 × 10−0327.9341.613 × 10−0327.5797.811 × 10−0327.8251.000 × 10−0328.5662.770 × 10−03
M3218.7614.736 × 10−0318.9146.974 × 10−0318.0523.960 × 10−0318.7473.189 × 10−0318.3561.548 × 10−0319.0872.699 × 10−03
423.5266.419 × 10−0323.4257.065 × 10−0323.0061.769 × 10−0323.8147.351 × 10−0323.3689.427 × 10−0324.2964.209 × 10−03
625.9154.281 × 10−0325.1831.710 × 10−0325.1647.487 × 10−0325.7563.281 × 10−0325.2074.932 × 10−0326.7593.714 × 10−03
827.1861.770 × 10−0327.7853.982 × 10−0327.5935.922 × 10−0327.7039.438 × 10−0327.0248.584 × 10−0328.8804.234 × 10−03
M4219.7642.542 × 10−0319.6134.726 × 10−0319.5568.408 × 10−0319.8073.400 × 10−0319.1304.268 × 10−0321.4244.715 × 10−03
424.5704.402 × 10−0324.0559.940 × 10−0324.4905.727 × 10−0324.9078.926 × 10−0324.7497.706 × 10−0326.2485.814 × 10−03
626.6717.628 × 10−0326.9667.511 × 10−0326.7967.843 × 10−0326.2437.350 × 10−0326.0554.836 × 10−0328.1479.914 × 10−03
827.8004.984 × 10−0327.7245.930 × 10−0327.5025.802 × 10−0327.5284.162 × 10−0327.8605.981 × 10−0329.4765.049 × 10−03
M5219.6969.531 × 10−0319.5253.142 × 10−0319.2948.580 × 10−0319.8661.327 × 10−0319.8082.690 × 10−0320.1645.219 × 10−03
424.9772.613 × 10−0324.6714.560 × 10−0324.5138.682 × 10−0324.4407.126 × 10−0324.1851.014 × 10−0324.3773.418 × 10−03
626.8386.689 × 10−0326.0613.680 × 10−0326.4622.890 × 10−0326.3421.909 × 10−0326.8214.198 × 10−0328.1221.387 × 10−03
827.5932.704 × 10−0327.5967.082 × 10−0327.5601.604 × 10−0327.8318.555 × 10−0327.8586.422 × 10−0329.4019.179 × 10−03
M6216.4631.833 × 10−0316.1782.184 × 10−0316.4597.405 × 10−0316.7723.672 × 10−0316.0482.002 × 10−0317.4161.503 × 10−03
423.7477.515 × 10−0323.9969.112 × 10−0323.7195.363 × 10−0323.2044.019 × 10−0323.2187.900 × 10−0324.7505.118 × 10−03
625.7321.062 × 10−0325.3291.404 × 10−0325.1011.534 × 10−0325.4462.963 × 10−0325.4397.117 × 10−0326.0176.295 × 10−03
827.5208.208 × 10−0327.5804.437 × 10−0327.0297.639 × 10−0327.7443.822 × 10−0327.3109.557 × 10−0328.5301.587 × 10−03
M7216.4201.003 × 10−0316.2535.090 × 10−0316.5199.744 × 10−0316.2554.468 × 10−0316.1215.017 × 10−0317.7955.527 × 10−03
425.1571.569 × 10−0325.1914.550 × 10−0325.9688.599 × 10−0325.9285.191 × 10−0325.6724.932 × 10−0325.9852.810 × 10−03
626.7417.496 × 10−0326.9481.184 × 10−0326.3588.283 × 10−0326.7362.786 × 10−0326.3969.040 × 10−0327.5029.672 × 10−03
827.3143.608 × 10−0327.7455.075 × 10−0327.5644.140 × 10−0327.7588.071 × 10−0327.1092.013 × 10−0327.4513.811 × 10−03
M8219.2706.595 × 10−0319.1966.562 × 10−0319.4178.707 × 10−0319.6341.081 × 10−0319.1546.760 × 10−0320.6613.827 × 10−03
423.6696.055 × 10−0323.0499.503 × 10−0323.7693.038 × 10−0323.2153.552 × 10−0323.6482.374 × 10−0324.0831.071 × 10−03
625.9614.647 × 10−0325.8742.147 × 10−0325.4379.950 × 10−0325.8244.299 × 10−0325.2463.990 × 10−0326.0699.972 × 10−03
827.7935.016 × 10−0327.2715.019 × 10−0327.2797.955 × 10−0327.7554.211 × 10−0327.9046.577 × 10−0328.9091.244 × 10−03
M9218.239 6.615 × 10−0318.390 5.847 × 10−0318.144 1.292 × 10−0318.154 4.918 × 10−0318.578 8.464 × 10−0319.475 3.923 × 10−03
423.685 2.169 × 10−0323.965 8.662 × 10−0323.499 8.968 × 10−0323.582 1.862 × 10−0323.266 2.991 × 10−0324.987 2.964 × 10−03
625.158 7.461 × 10−0325.592 7.900 × 10−0325.014 2.560 × 10−0325.734 3.023 × 10−0325.537 8.125 × 10−0326.813 6.944 × 10−03
826.661 6.750 × 10−0326.199 9.208 × 10−0326.876 7.285 × 10−0326.624 6.151 × 10−0326.574 4.887 × 10−0327.656 5.911 × 10−03
M10218.880 4.788 × 10−0318.430 5.841 × 10−0318.753 7.874 × 10−0318.051 7.757 × 10−0318.789 1.816 × 10−0319.566 1.263 × 10−03
423.831 6.837 × 10−0323.802 5.846 × 10−0323.355 5.962 × 10−0323.966 7.022 × 10−0323.268 2.739 × 10−0324.146 7.982 × 10−03
625.884 2.186 × 10−0325.702 2.248 × 10−0325.812 1.882 × 10−0325.735 8.936 × 10−0325.338 9.046 × 10−0326.532 4.592 × 10−03
827.423 1.888 × 10−0327.891 7.120 × 10−0327.854 4.631 × 10−0327.743 4.522 × 10−0327.964 5.321 × 10−0328.334 7.484 × 10−03
M11218.666 8.069 × 10−0318.624 4.246 × 10−0318.063 2.530 × 10−0318.816 4.687 × 10−0318.210 3.716 × 10−0319.615 5.701 × 10−03
423.492 2.167 × 10−0323.333 3.818 × 10−0323.981 4.918 × 10−0323.020 5.781 × 10−0323.367 5.340 × 10−0324.994 3.561 × 10−03
625.346 5.592 × 10−0325.502 3.352 × 10−0325.593 5.235 × 10−0325.829 4.435 × 10−0325.671 4.580 × 10−0326.035 2.118 × 10−03
827.458 6.636 × 10−0327.048 2.853 × 10−0327.250 4.665 × 10−0327.044 9.680 × 10−0327.800 1.171 × 10−0328.530 2.115 × 10−03
M12219.477 5.922 × 10−0319.043 5.443 × 10−0319.995 9.523 × 10−0319.220 9.256 × 10−0319.090 2.938 × 10−0321.642 6.402 × 10−03
424.152 6.922 × 10−0324.981 8.576 × 10−0324.824 9.627 × 10−0324.792 4.818 × 10−0324.723 6.732 × 10−0326.757 7.012 × 10−03
626.892 6.377 × 10−0326.936 4.513 × 10−0326.493 9.232 × 10−0326.586 1.399 × 10−0326.785 4.522 × 10−0328.944 1.292 × 10−03
827.645 2.737 × 10−0327.478 1.118 × 10−0327.275 4.327 × 10−0327.815 5.651 × 10−0327.362 8.295 × 10−0329.324 7.322 × 10−03
Friedman | 3.65 | 4.02 | 4.29 | 3.56 | 4.33 | 1.15
Final Rank | 3 | 4 | 5 | 2 | 6 | 1
Table 6. SSIM values.

Fun | nTH | PRO Mean | PRO Std | HEOA Mean | HEOA Std | PO Mean | PO Std | QHDBO Mean | QHDBO Std | IMODE Mean | IMODE Std | ANBPO Mean | ANBPO Std
M120.7522.643 × 10−030.7566.077 × 10−030.7523.976 × 10−040.7518.020 × 10−050.7541.180 × 10−040.7781.588 × 10−07
40.7841.237 × 10−030.7825.389 × 10−030.7832.103 × 10−030.7883.215 × 10−050.7886.708 × 10−040.8011.714 × 10−07
60.8227.748 × 10−030.8206.726 × 10−030.8296.552 × 10−030.8227.201 × 10−050.8269.211 × 10−040.8441.706 × 10−07
80.8783.964 × 10−030.8728.733 × 10−030.8743.454 × 10−030.8723.717 × 10−050.8794.729 × 10−040.8821.636 × 10−07
M220.7889.816 × 10−030.7819.051 × 10−030.7845.497 × 10−030.7837.158 × 10−050.7885.055 × 10−040.8021.648 × 10−07
40.8046.565 × 10−030.8043.465 × 10−030.8017.470 × 10−030.8012.362 × 10−060.8024.222 × 10−040.8341.069 × 10−07
60.8222.464 × 10−030.8278.876 × 10−030.8206.397 × 10−030.8276.781 × 10−050.8239.681 × 10−040.8581.718 × 10−07
80.8717.032 × 10−030.8802.774 × 10−030.8744.130 × 10−030.8729.117 × 10−050.8757.286 × 10−050.8951.422 × 10−07
M320.7147.558 × 10−030.7194.954 × 10−030.7149.531 × 10−030.7113.333 × 10−050.7189.142 × 10−040.7301.470 × 10−07
40.7591.682 × 10−030.7548.266 × 10−030.7582.875 × 10−030.7518.104 × 10−050.7509.519 × 10−040.7831.839 × 10−07
60.8008.176 × 10−030.8093.518 × 10−040.8085.856 × 10−030.8029.555 × 10−050.8092.005 × 10−040.8381.463 × 10−07
80.8551.303 × 10−030.8595.973 × 10−030.8539.529 × 10−030.8571.264 × 10−050.8502.556 × 10−040.8741.323 × 10−07
M420.7395.422 × 10−030.7396.932 × 10−030.7319.682 × 10−040.7324.499 × 10−050.7367.911 × 10−040.7511.499 × 10−07
40.7829.262 × 10−030.7804.459 × 10−030.7898.671 × 10−030.7872.758 × 10−050.7899.956 × 10−040.8011.613 × 10−07
60.8139.479 × 10−030.8165.415 × 10−030.8144.823 × 10−030.8125.477 × 10−050.8107.253 × 10−040.8471.233 × 10−07
80.8597.569 × 10−030.8574.458 × 10−030.8529.781 × 10−030.8559.194 × 10−060.8608.859 × 10−040.8851.395 × 10−07
M520.7141.600 × 10−040.7175.490 × 10−030.7149.559 × 10−030.7184.458 × 10−050.7115.508 × 10−040.7431.807 × 10−07
40.7871.346 × 10−030.7885.936 × 10−030.7884.118 × 10−030.7885.605 × 10−050.7834.025 × 10−040.7831.563 × 10−07
60.8174.812 × 10−030.8122.412 × 10−030.8165.138 × 10−030.8121.639 × 10−050.8171.855 × 10−040.8601.412 × 10−07
80.8637.429 × 10−030.8697.705 × 10−030.8646.743 × 10−030.8656.225 × 10−050.8707.001 × 10−040.8821.527 × 10−07
M620.6968.681 × 10−030.6989.945 × 10−030.7007.385 × 10−030.6979.679 × 10−050.6918.415 × 10−040.7281.071 × 10−07
40.7594.026 × 10−030.7535.587 × 10−030.7511.084 × 10−030.7524.436 × 10−050.7565.586 × 10−040.7631.571 × 10−07
60.8063.782 × 10−030.8098.803 × 10−030.8084.708 × 10−040.8025.244 × 10−050.8012.218 × 10−040.8151.268 × 10−07
80.8519.798 × 10−030.8591.112 × 10−030.8567.189 × 10−030.8591.745 × 10−050.8576.549 × 10−040.8631.117 × 10−07
M720.7827.635 × 10−030.7897.779 × 10−040.7828.831 × 10−030.7872.032 × 10−050.7848.895 × 10−040.8111.374 × 10−07
40.8248.008 × 10−030.8306.911 × 10−030.8245.685 × 10−030.8216.014 × 10−050.8235.882 × 10−040.8551.602 × 10−07
60.8513.468 × 10−030.8523.604 × 10−030.8512.190 × 10−030.8558.776 × 10−050.8519.348 × 10−050.8841.372 × 10−07
80.8824.518 × 10−030.8845.880 × 10−030.8891.814 × 10−030.8829.813 × 10−050.8837.566 × 10−040.8871.580 × 10−07
M820.7988.703 × 10−030.7925.266 × 10−030.7965.415 × 10−030.7962.599 × 10−050.7979.361 × 10−050.8141.176 × 10−07
40.8259.483 × 10−030.8214.083 × 10−030.8285.120 × 10−030.8289.692 × 10−050.8268.452 × 10−040.8471.526 × 10−07
60.8742.405 × 10−030.8776.367 × 10−030.8804.223 × 10−030.8789.456 × 10−050.8792.382 × 10−040.8891.010 × 10−07
80.8995.310 × 10−030.8935.681 × 10−030.8947.969 × 10−030.8984.193 × 10−050.8929.386 × 10−040.9071.189 × 10−07
M920.756 8.144 × 10−030.754 6.554 × 10−030.757 2.265 × 10−040.752 8.115 × 10−050.756 6.495 × 10−040.777 1.255 × 10−07
40.782 2.582 × 10−030.785 9.374 × 10−030.787 4.008 × 10−030.781 2.158 × 10−050.781 1.551 × 10−040.805 1.265 × 10−07
60.820 8.710 × 10−030.824 6.838 × 10−030.828 3.086 × 10−030.822 3.669 × 10−050.826 9.403 × 10−040.844 1.407 × 10−07
80.874 1.014 × 10−030.873 1.129 × 10−030.879 5.269 × 10−030.878 1.959 × 10−050.875 3.415 × 10−040.889 1.035 × 10−07
M1020.787 5.769 × 10−030.787 4.003 × 10−030.786 5.706 × 10−030.784 7.099 × 10−050.780 7.754 × 10−040.796 1.661 × 10−07
40.808 9.965 × 10−030.806 9.339 × 10−040.808 3.055 × 10−030.800 3.720 × 10−050.805 8.917 × 10−040.832 1.115 × 10−07
60.821 3.505 × 10−030.821 4.432 × 10−030.827 7.472 × 10−040.823 4.702 × 10−050.821 4.791 × 10−040.855 1.879 × 10−07
80.879 9.525 × 10−030.880 7.301 × 10−040.872 9.392 × 10−030.876 2.024 × 10−050.878 3.000 × 10−060.895 1.778 × 10−07
M1120.712 1.516 × 10−030.719 1.697 × 10−030.710 3.505 × 10−030.716 1.506 × 10−050.711 6.611 × 10−050.731 1.670 × 10−07
40.758 3.075 × 10−030.760 3.079 × 10−030.760 4.222 × 10−030.757 6.337 × 10−050.754 2.883 × 10−040.793 1.296 × 10−07
60.801 9.692 × 10−030.808 9.521 × 10−030.809 9.285 × 10−030.800 1.703 × 10−050.805 6.038 × 10−040.838 1.538 × 10−07
80.855 8.048 × 10−030.858 5.214 × 10−030.856 6.452 × 10−040.855 1.718 × 10−060.850 4.815 × 10−040.876 1.002 × 10−07
M1220.734 8.004 × 10−030.734 6.843 × 10−030.737 4.436 × 10−030.736 4.736 × 10−050.730 8.503 × 10−040.754 1.204 × 10−07
40.788 4.352 × 10−030.783 5.876 × 10−030.790 7.261 × 10−030.788 3.007 × 10−050.788 6.040 × 10−040.806 1.892 × 10−07
60.817 8.133 × 10−030.815 5.369 × 10−040.814 4.129 × 10−030.811 9.668 × 10−050.820 7.695 × 10−040.849 1.184 × 10−07
80.859 4.379 × 10−030.856 8.538 × 10−030.859 8.583 × 10−030.854 2.483 × 10−050.856 6.861 × 10−040.889 1.203 × 10−07
Friedman | 4.06 | 3.71 | 3.60 | 4.38 | 4.15 | 1.10
Final Rank | 4 | 3 | 2 | 6 | 5 | 1
Table 7. FSIM values.

Fun | nTH | PRO Mean | PRO Std | HEOA Mean | HEOA Std | PO Mean | PO Std | QHDBO Mean | QHDBO Std | IMODE Mean | IMODE Std | ANBPO Mean | ANBPO Std
M120.7987.671 × 10−100.7981.427 × 10−100.7992.433 × 10−100.8008.985 × 10−100.7932.913 × 10−100.8019.544 × 10−10
40.8252.919 × 10−100.8226.869 × 10−100.8274.706 × 10−100.8271.268 × 10−100.8281.297 × 10−100.8311.410 × 10−10
60.8525.967 × 10−100.8601.121 × 10−100.8565.529 × 10−100.8532.067 × 10−100.8598.796 × 10−100.8636.392 × 10−10
80.8783.223 × 10−100.8738.520 × 10−100.8701.153 × 10−100.8721.914 × 10−100.8797.789 × 10−100.8815.637 × 10−10
M220.7942.639 × 10−100.7964.973 × 10−100.7908.604 × 10−100.7936.794 × 10−100.7966.055 × 10−100.8054.785 × 10−10
   | 4 | 0.820 (8.796 × 10−10) | 0.824 (2.833 × 10−10) | 0.824 (2.461 × 10−10) | 0.830 (3.970 × 10−10) | 0.826 (6.208 × 10−10) | 0.842 (9.479 × 10−10)
   | 6 | 0.856 (5.793 × 10−10) | 0.859 (1.674 × 10−10) | 0.854 (7.757 × 10−10) | 0.855 (1.905 × 10−10) | 0.859 (4.073 × 10−10) | 0.873 (7.561 × 10−10)
   | 8 | 0.874 (5.700 × 10−10) | 0.880 (1.008 × 10−10) | 0.873 (7.701 × 10−10) | 0.880 (3.857 × 10−10) | 0.877 (3.013 × 10−10) | 0.897 (2.136 × 10−10)
M3 | 2 | 0.801 (2.604 × 10−10) | 0.803 (8.470 × 10−10) | 0.808 (4.405 × 10−10) | 0.803 (6.634 × 10−10) | 0.809 (3.888 × 10−10) | 0.817 (6.414 × 10−10)
   | 4 | 0.831 (5.514 × 10−10) | 0.834 (2.320 × 10−10) | 0.832 (9.171 × 10−10) | 0.839 (3.347 × 10−10) | 0.832 (7.324 × 10−10) | 0.859 (6.433 × 10−10)
   | 6 | 0.863 (9.629 × 10−10) | 0.862 (3.916 × 10−10) | 0.868 (2.226 × 10−10) | 0.862 (9.032 × 10−10) | 0.863 (7.993 × 10−10) | 0.882 (9.140 × 10−10)
   | 8 | 0.876 (3.382 × 10−10) | 0.872 (3.255 × 10−10) | 0.877 (4.873 × 10−10) | 0.876 (8.729 × 10−10) | 0.878 (2.738 × 10−10) | 0.893 (6.079 × 10−10)
M4 | 2 | 0.771 (6.064 × 10−10) | 0.774 (9.973 × 10−10) | 0.780 (1.845 × 10−10) | 0.771 (3.158 × 10−10) | 0.778 (1.177 × 10−10) | 0.802 (5.641 × 10−10)
   | 4 | 0.804 (8.604 × 10−10) | 0.800 (2.854 × 10−10) | 0.801 (3.434 × 10−10) | 0.801 (4.915 × 10−10) | 0.802 (8.067 × 10−10) | 0.829 (5.578 × 10−10)
   | 6 | 0.835 (5.676 × 10−10) | 0.835 (1.105 × 10−10) | 0.836 (3.662 × 10−10) | 0.830 (7.323 × 10−10) | 0.839 (8.092 × 10−10) | 0.859 (3.150 × 10−10)
   | 8 | 0.858 (2.787 × 10−10) | 0.857 (6.825 × 10−10) | 0.859 (3.111 × 10−10) | 0.860 (4.983 × 10−10) | 0.854 (8.341 × 10−10) | 0.893 (9.650 × 10−10)
M5 | 2 | 0.781 (8.391 × 10−10) | 0.790 (6.617 × 10−10) | 0.783 (6.852 × 10−10) | 0.783 (4.982 × 10−10) | 0.786 (6.889 × 10−10) | 0.800 (5.863 × 10−10)
   | 4 | 0.813 (1.329 × 10−10) | 0.817 (5.602 × 10−10) | 0.819 (3.138 × 10−10) | 0.819 (1.645 × 10−10) | 0.816 (6.383 × 10−10) | 0.818 (4.627 × 10−10)
   | 6 | 0.831 (1.607 × 10−10) | 0.832 (4.486 × 10−10) | 0.838 (8.340 × 10−10) | 0.834 (7.353 × 10−10) | 0.831 (2.477 × 10−10) | 0.874 (8.210 × 10−10)
   | 8 | 0.854 (9.556 × 10−10) | 0.857 (2.336 × 10−10) | 0.856 (4.751 × 10−10) | 0.852 (7.427 × 10−10) | 0.859 (1.125 × 10−10) | 0.897 (3.600 × 10−10)
M6 | 2 | 0.752 (3.889 × 10−10) | 0.760 (9.458 × 10−10) | 0.757 (1.757 × 10−10) | 0.759 (3.092 × 10−10) | 0.753 (2.092 × 10−10) | 0.773 (2.091 × 10−10)
   | 4 | 0.795 (3.121 × 10−10) | 0.796 (5.610 × 10−10) | 0.796 (1.543 × 10−10) | 0.792 (4.536 × 10−10) | 0.792 (4.435 × 10−10) | 0.818 (9.195 × 10−10)
   | 6 | 0.828 (1.979 × 10−10) | 0.829 (6.555 × 10−10) | 0.829 (6.996 × 10−10) | 0.824 (9.506 × 10−10) | 0.827 (4.817 × 10−10) | 0.842 (1.939 × 10−10)
   | 8 | 0.855 (9.146 × 10−10) | 0.853 (9.433 × 10−10) | 0.855 (6.145 × 10−10) | 0.860 (5.015 × 10−10) | 0.860 (6.151 × 10−10) | 0.889 (4.784 × 10−10)
M7 | 2 | 0.811 (3.174 × 10−10) | 0.811 (7.280 × 10−10) | 0.810 (4.794 × 10−10) | 0.812 (7.886 × 10−10) | 0.818 (5.175 × 10−10) | 0.821 (9.999 × 10−10)
   | 4 | 0.839 (1.771 × 10−10) | 0.832 (1.306 × 10−10) | 0.836 (2.525 × 10−10) | 0.831 (5.607 × 10−10) | 0.837 (1.601 × 10−10) | 0.841 (8.062 × 10−10)
   | 6 | 0.869 (9.444 × 10−10) | 0.861 (4.849 × 10−10) | 0.868 (7.737 × 10−10) | 0.861 (4.825 × 10−10) | 0.860 (1.632 × 10−10) | 0.872 (5.767 × 10−10)
   | 8 | 0.876 (5.568 × 10−10) | 0.872 (9.277 × 10−10) | 0.870 (4.472 × 10−10) | 0.875 (1.156 × 10−10) | 0.880 (3.822 × 10−10) | 0.878 (9.195 × 10−10)
M8 | 2 | 0.826 (8.971 × 10−10) | 0.823 (7.145 × 10−10) | 0.828 (8.958 × 10−10) | 0.826 (3.129 × 10−10) | 0.827 (9.896 × 10−10) | 0.834 (5.600 × 10−10)
   | 4 | 0.849 (1.531 × 10−10) | 0.842 (6.552 × 10−10) | 0.844 (5.370 × 10−10) | 0.846 (4.742 × 10−10) | 0.846 (6.971 × 10−10) | 0.875 (4.977 × 10−10)
   | 6 | 0.868 (1.478 × 10−10) | 0.860 (6.614 × 10−10) | 0.869 (3.402 × 10−10) | 0.864 (8.251 × 10−10) | 0.863 (5.424 × 10−10) | 0.883 (6.214 × 10−10)
   | 8 | 0.891 (7.119 × 10−10) | 0.893 (8.060 × 10−10) | 0.892 (6.556 × 10−10) | 0.891 (3.667 × 10−10) | 0.891 (4.384 × 10−10) | 0.907 (5.019 × 10−10)
M9 | 2 | 0.793 (9.986 × 10−10) | 0.797 (7.894 × 10−10) | 0.792 (4.612 × 10−10) | 0.790 (4.694 × 10−10) | 0.797 (1.118 × 10−10) | 0.801 (1.455 × 10−10)
   | 4 | 0.821 (7.033 × 10−10) | 0.821 (6.354 × 10−10) | 0.823 (8.126 × 10−10) | 0.825 (3.741 × 10−10) | 0.822 (8.662 × 10−10) | 0.831 (1.137 × 10−10)
   | 6 | 0.856 (7.681 × 10−10) | 0.857 (1.720 × 10−10) | 0.855 (9.693 × 10−10) | 0.852 (1.155 × 10−10) | 0.852 (6.394 × 10−10) | 0.865 (2.102 × 10−10)
   | 8 | 0.872 (1.839 × 10−10) | 0.879 (1.873 × 10−10) | 0.870 (9.710 × 10−10) | 0.879 (5.005 × 10−10) | 0.879 (3.250 × 10−10) | 0.883 (5.175 × 10−10)
M10 | 2 | 0.800 (7.026 × 10−10) | 0.795 (2.194 × 10−10) | 0.791 (7.115 × 10−10) | 0.795 (8.402 × 10−10) | 0.792 (9.003 × 10−10) | 0.802 (2.652 × 10−10)
    | 4 | 0.824 (3.238 × 10−10) | 0.825 (1.209 × 10−10) | 0.830 (3.773 × 10−10) | 0.826 (2.237 × 10−10) | 0.829 (1.837 × 10−10) | 0.850 (4.144 × 10−10)
    | 6 | 0.853 (3.525 × 10−10) | 0.855 (4.891 × 10−10) | 0.859 (9.868 × 10−10) | 0.856 (5.215 × 10−10) | 0.859 (4.371 × 10−10) | 0.875 (1.995 × 10−10)
    | 8 | 0.879 (9.666 × 10−10) | 0.878 (8.274 × 10−10) | 0.876 (1.575 × 10−10) | 0.874 (5.380 × 10−10) | 0.872 (6.101 × 10−10) | 0.897 (1.206 × 10−10)
M11 | 2 | 0.801 (6.377 × 10−10) | 0.804 (3.643 × 10−10) | 0.807 (4.624 × 10−10) | 0.804 (2.646 × 10−10) | 0.806 (3.460 × 10−10) | 0.810 (1.991 × 10−10)
    | 4 | 0.833 (2.882 × 10−10) | 0.836 (8.051 × 10−10) | 0.836 (5.154 × 10−10) | 0.837 (2.010 × 10−10) | 0.839 (2.509 × 10−10) | 0.857 (3.079 × 10−10)
    | 6 | 0.863 (1.676 × 10−10) | 0.867 (1.715 × 10−10) | 0.869 (1.669 × 10−10) | 0.863 (1.729 × 10−10) | 0.869 (1.669 × 10−10) | 0.889 (7.239 × 10−10)
    | 8 | 0.872 (7.825 × 10−10) | 0.871 (4.413 × 10−10) | 0.873 (2.579 × 10−10) | 0.870 (8.019 × 10−10) | 0.871 (6.703 × 10−10) | 0.899 (5.731 × 10−10)
M12 | 2 | 0.774 (6.153 × 10−10) | 0.779 (3.294 × 10−10) | 0.776 (3.350 × 10−10) | 0.771 (6.066 × 10−10) | 0.777 (8.168 × 10−10) | 0.805 (4.239 × 10−10)
    | 4 | 0.806 (2.949 × 10−10) | 0.801 (9.813 × 10−10) | 0.806 (4.758 × 10−10) | 0.802 (7.147 × 10−10) | 0.800 (6.686 × 10−10) | 0.822 (1.175 × 10−10)
    | 6 | 0.833 (7.769 × 10−10) | 0.832 (7.424 × 10−10) | 0.837 (7.030 × 10−10) | 0.837 (2.140 × 10−10) | 0.836 (4.043 × 10−10) | 0.851 (2.262 × 10−10)
    | 8 | 0.856 (3.042 × 10−10) | 0.855 (8.737 × 10−10) | 0.858 (6.157 × 10−10) | 0.851 (6.032 × 10−10) | 0.860 (1.685 × 10−10) | 0.897 (4.434 × 10−10)
Friedman | 4.44 | 4.15 | 3.67 | 4.19 | 3.50 | 1.06
Final Rank | 6 | 4 | 3 | 5 | 2 | 1
Table 8. Run time results.
Fun | n | THPRO | HEOA | PO | QHDBO | IMODE | ANBPO
    |   | Mean | Mean | Mean | Mean | Mean | Mean
M1 | 2 | 28.418 | 27.732 | 27.207 | 35.044 | 39.903 | 15.872
   | 4 | 24.512 | 28.908 | 33.846 | 36.770 | 39.979 | 16.794
   | 6 | 23.464 | 24.300 | 30.909 | 32.291 | 38.638 | 19.305
   | 8 | 22.784 | 20.074 | 26.257 | 35.075 | 34.071 | 21.193
M2 | 2 | 27.419 | 29.395 | 26.448 | 32.333 | 39.982 | 19.741
   | 4 | 25.149 | 27.124 | 34.035 | 35.442 | 35.360 | 20.360
   | 6 | 24.935 | 22.816 | 32.923 | 30.494 | 36.075 | 20.240
   | 8 | 26.584 | 21.500 | 30.867 | 36.214 | 37.853 | 15.106
M3 | 2 | 24.426 | 25.703 | 34.552 | 36.414 | 35.329 | 19.073
   | 4 | 25.513 | 24.394 | 28.710 | 31.650 | 38.250 | 17.598
   | 6 | 20.143 | 21.771 | 32.414 | 36.254 | 34.201 | 17.671
   | 8 | 20.169 | 29.250 | 28.480 | 32.327 | 34.022 | 21.456
M4 | 2 | 26.482 | 24.191 | 25.499 | 33.426 | 37.421 | 17.294
   | 4 | 23.438 | 21.553 | 34.666 | 30.731 | 35.716 | 19.112
   | 6 | 22.007 | 26.673 | 33.439 | 36.911 | 34.275 | 15.070
   | 8 | 24.364 | 23.343 | 33.860 | 35.526 | 39.444 | 17.701
M5 | 2 | 26.879 | 20.605 | 29.611 | 32.220 | 39.017 | 18.892
   | 4 | 27.305 | 26.592 | 29.953 | 32.992 | 33.499 | 20.750
   | 6 | 27.992 | 23.241 | 31.705 | 32.117 | 38.287 | 17.666
   | 8 | 24.145 | 21.654 | 26.104 | 31.038 | 36.422 | 20.282
M6 | 2 | 22.203 | 24.699 | 25.765 | 34.510 | 36.696 | 21.163
   | 4 | 28.377 | 26.593 | 26.356 | 30.962 | 36.618 | 17.723
   | 6 | 25.073 | 20.860 | 33.474 | 36.140 | 35.659 | 20.983
   | 8 | 22.162 | 22.441 | 34.902 | 35.936 | 36.514 | 20.026
M7 | 2 | 29.083 | 23.346 | 27.788 | 34.974 | 36.710 | 19.411
   | 4 | 23.179 | 23.147 | 30.020 | 34.872 | 38.901 | 15.789
   | 6 | 29.370 | 27.238 | 30.134 | 36.375 | 36.557 | 19.072
   | 8 | 26.347 | 28.823 | 29.418 | 35.208 | 39.064 | 20.600
M8 | 2 | 28.969 | 28.479 | 33.897 | 30.082 | 34.456 | 19.406
   | 4 | 28.659 | 27.214 | 33.084 | 30.700 | 33.066 | 15.193
   | 6 | 22.041 | 22.837 | 31.072 | 32.710 | 38.444 | 17.133
   | 8 | 24.313 | 29.987 | 33.631 | 31.249 | 37.243 | 21.258
M9 | 2 | 25.127 | 29.235 | 27.664 | 33.707 | 38.235 | 18.514
   | 4 | 20.308 | 23.832 | 27.896 | 32.329 | 35.982 | 19.858
   | 6 | 21.316 | 20.951 | 30.412 | 30.335 | 34.930 | 21.389
   | 8 | 26.414 | 20.061 | 29.903 | 31.622 | 39.414 | 20.975
M10 | 2 | 26.904 | 25.833 | 34.601 | 31.322 | 39.103 | 16.341
    | 4 | 26.366 | 29.611 | 33.333 | 34.433 | 38.777 | 15.381
    | 6 | 29.743 | 23.551 | 28.855 | 35.384 | 34.092 | 19.018
    | 8 | 21.983 | 29.155 | 32.242 | 32.647 | 35.574 | 18.382
M11 | 2 | 27.982 | 26.065 | 26.284 | 31.345 | 39.354 | 18.038
    | 4 | 21.640 | 21.216 | 26.877 | 34.835 | 37.051 | 20.997
    | 6 | 27.838 | 29.623 | 25.736 | 35.382 | 33.683 | 17.983
    | 8 | 20.425 | 24.790 | 25.219 | 30.543 | 35.112 | 18.741
M12 | 2 | 21.569 | 29.425 | 33.211 | 32.090 | 38.819 | 20.157
    | 4 | 22.223 | 26.895 | 28.659 | 31.042 | 35.850 | 21.930
    | 6 | 24.816 | 29.507 | 27.613 | 33.344 | 39.436 | 20.267
    | 8 | 29.805 | 20.535 | 25.413 | 34.566 | 34.154 | 17.624
Friedman | 2.69 | 2.52 | 3.85 | 5.02 | 5.79 | 1.13
Final Rank | 3 | 2 | 4 | 5 | 6 | 1
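The "Friedman" and "Final Rank" rows in the tables above summarize the per-case comparisons: each image/threshold case ranks the six algorithms, and the mean rank across all cases yields the Friedman score (lower is better). The Python snippet below is a minimal sketch, not the authors' code, of how such mean ranks can be reproduced; it assumes the columns follow the order shown in Table 8 (THPRO, HEOA, PO, QHDBO, IMODE, ANBPO) and uses only two runtime rows from Table 8 as example data, whereas the paper averages over all 48 image/threshold combinations. For the quality-metric tables, where higher values are better, the same ranking would be applied to the negated values.

```python
# Minimal sketch (assumed workflow, not the authors' implementation):
# reproduce Friedman mean ranks from per-case run times.
import numpy as np
from scipy.stats import rankdata

algorithms = ["THPRO", "HEOA", "PO", "QHDBO", "IMODE", "ANBPO"]

# Two example cases copied from Table 8 (image M1 with n = 2 and n = 4);
# the full study uses all 48 image/threshold combinations.
runtimes = np.array([
    [28.418, 27.732, 27.207, 35.044, 39.903, 15.872],
    [24.512, 28.908, 33.846, 36.770, 39.979, 16.794],
])

# Rank each case independently (rank 1 = fastest, since lower run time is
# better), then average the ranks column-wise to get the Friedman mean rank.
ranks = np.apply_along_axis(rankdata, 1, runtimes)
mean_ranks = ranks.mean(axis=0)

for name, r in sorted(zip(algorithms, mean_ranks), key=lambda t: t[1]):
    print(f"{name}: mean rank {r:.2f}")
```

On this toy subset ANBPO receives mean rank 1.00, consistent with its reported Friedman value of 1.13 over the full set of cases; the final rank row simply orders the algorithms by these mean ranks.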