Article

An Improved Northern Goshawk Optimization Algorithm for Mural Image Segmentation

1 College of Design, Hanyang University, Ansan 15588, Republic of Korea
2 College of Art, Sungkyunkwan University, Seoul 03063, Republic of Korea
3 College of Media and Communication, Jiangsu Second Normal University, Nanjing 210003, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(6), 373; https://doi.org/10.3390/biomimetics10060373
Submission received: 20 April 2025 / Revised: 28 May 2025 / Accepted: 30 May 2025 / Published: 5 June 2025

Abstract

In the process of mural protection and restoration, using optimization algorithms for image segmentation is a common method for restoring mural details. However, existing optimization-based image segmentation methods often yield insufficient segmentation quality. To alleviate this issue, this paper proposes a mural image segmentation algorithm, OPBNGO, built by integrating the Northern Goshawk Optimization (NGO) algorithm with an off-center learning strategy, a partitioned learning strategy, and a Bernstein-weighted learning strategy. In OPBNGO, firstly, the off-center learning strategy is proposed, which effectively improves the global search ability of the algorithm by utilizing off-center individuals. Secondly, the partitioned learning strategy is introduced, which achieves a better balance between the exploration and exploitation phases by applying diverse learning methods to the population. Finally, the Bernstein-weighted learning strategy is proposed, which effectively improves the algorithm’s exploitation performance. Subsequently, the OPBNGO algorithm is applied to solve the image segmentation problem for eight mural images. Experimental results show that it achieves a winning rate of over 96.87% in terms of fitness function value and over 93.75% in terms of FSIM, SSIM, and PSNR metrics, and can be considered a promising mural image segmentation algorithm.

1. Introduction

As an indispensable component of world cultural heritage, murals embody profound historical and cultural connotations and values [1,2,3,4,5,6,7,8,9,10]. However, during the process of long-term preservation and maintenance, mural image information undergoes degradation, posing significant challenges to the scientific conservation of cultural heritage [11,12,13,14,15,16,17,18,19,20,21]. Currently, mural conservation scholars are dedicated to scientifically and effectively preserving and restoring murals through various means to inherit their historical, artistic, and scientific values [22,23,24,25,26,27]. In the process of mural image restoration, the identification of degraded details in images is of paramount importance [28]. Image segmentation is a common technique that effectively identifies inconspicuous details by dividing an image into multiple regions based on different regional attributes [29]. Currently, metaheuristic-based image segmentation algorithms have garnered considerable attention due to their structural simplicity, practicality, and computational efficiency [30].
Metaheuristic algorithms are computationally lightweight algorithms with efficient search capabilities, inspired by various natural phenomena and biological behaviors observed in nature. Common classification methods categorize them into four types: evolution-based algorithms, swarm-based algorithms, human-based algorithms, and physics and chemistry-based algorithms [31]. Among recent evolution-based algorithms, notable examples include the Liver Cancer Algorithm (LCA) [32], Electric Eel Foraging Optimization (EEFO) algorithm [33], Barnacles Mating Optimizer (BMO) algorithm [34], and Greylag Goose Optimization (GGO) algorithm [35]. These algorithms primarily update optimal solutions by simulating natural evolutionary behaviors of organisms, leveraging their inherent advantage of high search parallelism. This parallelism has enabled the application of LCA, EEFO, BMO, and GGO to solve diverse engineering problems, including medical feature selection, hydropower plant gate control, optimal reactive power dispatch, and UCL feature selection. Due to their low computational complexity, these algorithms have demonstrated excellent engineering applicability. Recent population-based algorithms include the Snake Optimizer (SO) [31], Genghis Khan Shark Optimizer (GKSO) [36], Sled Dog Optimizer (SDO) [37], and Zebra Optimization Algorithm (ZOA) [38]. The key advantage of these algorithms lies in their collective search mechanism, which maintains high population diversity during the search process, enabling thorough exploration of the solution space. Leveraging these strengths, SO, GKSO, SDO, and ZOA have been successfully applied to mechanical parameter design, constrained optimization problems in real-world scenarios, robotic path planning, and photovoltaic parameter optimization. These applications demonstrate their exceptional solution stability and high-precision optimization capabilities. Recent human-inspired algorithms include the Aitken optimizer (ATK) [39], Divine Religions Algorithm (DRA) [40], and Quadratic Interpolation Optimization (QIO) algorithm [41]. The primary advantage of these algorithms lies in their simulation of human-related cognitive processes, which endows them with adaptive capabilities. This adaptability accelerates the localization of optimal solutions during the search process. Currently, ATK, DRA, and QIO have been successfully applied to specific engineering tasks, including engineering design, neural network parameter tuning, and operation management of energy storage systems in microgrids. These applications demonstrate their high optimization stability. Recent physics and chemistry-inspired algorithms include the Tornado Optimizer with Coriolis Force (TOC) [42], Snow Ablation Optimizer (SAO) [43], Light Spectrum Optimizer (LSO) [44], and Chernobyl Disaster Optimizer (CDO) [45]. These algorithms draw inspiration from physical and chemical phenomena, offering advantages such as simple structures, low computational costs, and strong local exploitation capabilities. They have been successfully applied to industrial process planning, photovoltaic parameter identification, engineering design problems, and high-dimensional real-world optimization tasks. Experimental results demonstrate their superior solution speed and precision. Due to their excellent search efficiency, researchers have extensively applied these algorithms to solve image segmentation problems.
In recent years, researchers have developed numerous efficient image segmentation algorithms based on metaheuristic algorithms. For instance, Houssein et al. proposed a Snake Optimization Algorithm with Opposition-Based Learning (SO-OBL) to address deficiencies in CT scan image segmentation for liver diseases. Subsequent experiments demonstrated that SO-OBL excels in global optimization and multi-level image segmentation, achieving excellent results in FSIM, SSIM, and PSNR metrics and outperforming various metaheuristic algorithms, thereby proving its efficiency and accuracy in computer-aided diagnostic systems for images [46]. Lian et al., aiming to enhance segmentation similarity and image quality in image segmentation problems, introduced a novel Parrot Optimizer (PO). Inspired by the behavior of African Gray Parrots, PO possesses robust global and local search capabilities. Experiments on disease diagnosis and medical image segmentation problems confirmed that PO is a promising and competitive image segmentation algorithm, significantly improving image segmentation quality [47]. Qiao et al. proposed a hybrid algorithm combining the Arithmetic Optimization Algorithm and Harris Hawks Optimizer (AOA-HHO) for solving multi-level threshold image segmentation problems. Through image segmentation at seven different threshold levels, AOA-HHO demonstrated superior performance over comparison algorithms in terms of segmentation accuracy, fitness function value, peak signal-to-noise ratio, and structural similarity, making it a promising image segmentation method [48]. Addressing challenges in medical image segmentation, Yuan et al. proposed an efficient metaheuristic algorithm called the Artemisinin Optimization (AO) Algorithm. By balancing global search and local exploitation capabilities, AO exhibits strong optimization performance. Experiments on six threshold level segmentations of fifteen real breast cancer pathology images confirmed that AO outperforms eight high-performing algorithms in segmentation accuracy, feature similarity index, peak signal-to-noise ratio, and structural similarity index values, making it an efficient image segmentation method [49]. To address deficiencies in current image segmentation performance, Chen et al. proposed a novel Poplar Optimization Algorithm (POA) to alleviate shortcomings in image segmentation problems. By enhancing population diversity, the POA significantly improves optimization performance. Experiments on multi-threshold segmentations of six standard images confirmed that the POA is a metaheuristic algorithm with excellent image segmentation performance [50]. Wang et al., addressing the limitations of the Whale Optimization Algorithm (WOA) in solving image segmentation problems, such as its weak local search capability, susceptibility to local optima, and inability to balance exploration and exploitation, proposed a WOA with Crossover and Removal of Similarity (CRWOA). Using the CRWOA for multi-level threshold segmentation of 10 grayscale images, experimental results showed that the CRWOA outperformed five comparison algorithms in convergence and segmentation quality, demonstrating the superiority of the proposed algorithm [51]. Arunita et al. proposed an image segmentation method combining the Lévy–Cauchy Arithmetic Optimization Algorithm (LCAOA) and Rough K-Means (RKM). 
By introducing Lévy flight and the Cauchy distribution to balance the exploration and exploitation phases and employing opposition-based learning to enhance algorithm efficiency, the LCAOA–RKM method demonstrated, in experiments on multiple images, excellent segmentation performance across various image types, particularly for traditional color images, oral pathology images, and leaf images, with high feature similarity and accuracy [52]. Wang et al. proposed a multi-threshold segmentation method for breast cancer images based on an improved Dandelion Optimization Algorithm (IDOA) to address deficiencies of traditional threshold segmentation methods in complex structures and fuzzy cell boundaries. Experiments on multi-threshold segmentation of breast cancer images showed that the method, by combining opposition-based learning and an IDOA, achieves more accurate lesion area segmentation and performs well on multiple performance metrics, validating the effectiveness and superiority of the proposed method [53]. Mostafa et al. proposed an Improved Chameleon Swarm Algorithm (ICSA) to address medical image segmentation challenges. By integrating an optimal stochastic mutation strategy utilizing Lévy, Gaussian, and Cauchy distribution functions, ICSA mitigates premature convergence and local optima entrapment issues commonly encountered in medical image segmentation, thereby enhancing the quality of segmented images. However, the algorithm’s runtime efficiency was not explicitly addressed [54]. Hashim et al. introduced a modified Exponential Distribution Optimizer (mEDO) to improve multilevel image segmentation. The mEDO overcomes the limitations of the original EDO in handling complex image segmentation tasks by incorporating phasor operators and an adaptive optimal mutation strategy. Experimental results indicate that mEDO outperforms other optimizers in terms of convergence speed and accuracy, demonstrating superior performance in multi-threshold image segmentation tasks [55]. The detailed information on image segmentation algorithms is summarized in Table 1.
The aforementioned image segmentation methods based on optimization algorithms demonstrate the effectiveness of optimization algorithms in the field of image segmentation and also confirm that the introduction of improvement strategies has a positive effect on enhancing image segmentation performance. However, existing image segmentation methods based on optimization algorithms still face issues such as becoming trapped in locally optimal segmentation threshold combinations when addressing specific image segmentation problems, such as mural image segmentation, leading to poor image segmentation quality. Therefore, there is an urgent need to propose a novel and robust image segmentation tool with efficient image segmentation performance to alleviate this deficiency. Fortunately, the Northern Goshawk Optimization (NGO) algorithm [56] is a novel optimization algorithm with efficient search performance. Hence, in this paper, the NGO algorithm is introduced into the field of image segmentation to achieve better mural image segmentation results. However, with the increasing dimension and complexity of optimization problems and the growing complexity of image information, the original NGO algorithm exhibits issues such as insufficient exploration capability, insufficient exploitation capability, and an imbalance between the exploration and exploitation phases, resulting in decreased optimization accuracy when solving complex optimization problems and mural image segmentation problems. Based on the aforementioned issues, this paper proposes an improved NGO algorithm with efficient performance, called OPBNGO, by integrating the off-center learning strategy, partitioned learning strategy, and Bernstein-weighted learning strategy. In the OPBNGO algorithm, firstly, to address the issue of insufficient exploration capability of the NGO algorithm when solving mural image segmentation problems, the off-center learning strategy is introduced. By combining the fitness function values of different individuals, the off-center individuals of the population are identified. Subsequently, the population is guided by these off-center individuals, enhancing the algorithm’s global search capability. Secondly, to tackle the imbalance between the exploration phase and the exploitation phase of the NGO algorithm when solving mural image segmentation and high-dimensional optimization problems, the partitioned learning strategy is proposed. The population is divided into exploration-phase individuals and exploitation-phase individuals based on their fitness function values, and diverse learning methods are applied to them, achieving a better balance between the exploration and exploitation phases and improving the algorithm’s ability to escape from local suboptimal solutions. Finally, to address the issue of decreased optimization accuracy caused by the insufficient exploitation capability of the NGO algorithm when solving mural image segmentation problems, the Bernstein-weighted learning strategy is proposed. By leveraging the weighted properties of Bernstein polynomials, individuals with different properties are weighted to form weighted individuals, which are then used to guide the population individuals, effectively improving the algorithm’s exploitation performance and promoting optimization accuracy. The main contributions of this paper are as follows:
  • The off-center learning strategy was integrated by incorporating the fitness function values of individuals, specifically by guiding the population with off-center individuals, effectively enhancing the algorithm’s global search capability.
  • The partitioned learning strategy was included by integrating fitness function values, and by applying diverse learning methods to the population, a better balance between the exploration and exploitation phases was achieved, thereby improving the algorithm’s ability to escape from local suboptimal solutions.
  • The Bernstein-weighted learning strategy was applied by leveraging the weighted properties of second-order Bernstein polynomials, and by guiding the population with weighted individuals, the algorithm’s exploitation performance was effectively improved.
  • Building upon the NGO algorithm, an enhanced NGO algorithm named OPBNGO was proposed by integrating the aforementioned three learning strategies.
  • The OPBNGO algorithm was employed to solve multi-threshold segmentation problems on eight mural images, achieving remarkable results in terms of fitness function values, PSNR, SSIM, and FSIM metrics, thereby confirming that the OPBNGO algorithm is a promising image segmentation method.
The remainder of this paper is organized as follows. Section 2 introduces the mathematical model and implementation logic of the NGO algorithm. In Section 3, the off-center learning strategy, partitioned learning strategy, and Bernstein-weighted learning strategy are proposed, and based on these, the implementation logic of the OPBNGO algorithm is presented. Section 4 applies the OPBNGO algorithm to solve the multi-threshold segmentation problem of eight mural images, verifying its potential in the field of mural image segmentation. Section 5 provides the research conclusions of this paper and outlines future work plans.

2. Mathematical Model of Northern Goshawk Optimization

This section focuses on the mathematical modeling of the NGO algorithm. The NGO algorithm is a novel optimization algorithm inspired by the hunting behavior of the northern goshawk, which consists of two main behaviors. In the first behavior, the goshawk locates its prey over a wide area and moves quickly towards it. In the second behavior, after reaching an area very close to the prey, it attacks the prey to complete the hunt. In the NGO algorithm, the exploration phase is formed by simulating the first behavior, and the exploitation phase is formed by simulating the second behavior. In the following subsections, the exploration and exploitation phases of the NGO algorithm are mathematically modeled in detail, and the execution logic of the NGO algorithm for solving optimization problems is given.

2.1. Population Initialization

When solving an optimization problem using the NGO algorithm, a population initialization operation is first required. Each individual in the population represents a candidate solution to the optimization problem, and at the beginning of the iteration, individuals are generated randomly within the upper and lower bounds of the decision variables. The initialized population is represented by Equation (1):

$$X = \begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix}_{N \times Dim} = \begin{bmatrix} x_{1,1} & \cdots & x_{1,j} & \cdots & x_{1,Dim} \\ \vdots & & \vdots & & \vdots \\ x_{i,1} & \cdots & x_{i,j} & \cdots & x_{i,Dim} \\ \vdots & & \vdots & & \vdots \\ x_{N,1} & \cdots & x_{N,j} & \cdots & x_{N,Dim} \end{bmatrix}_{N \times Dim} \tag{1}$$

where $N$ denotes the population size, $Dim$ denotes the dimension of the variable to be optimized, $X_i$ denotes the $i$-th individual, and $x_{i,j}$ denotes the $j$-th dimension of the $i$-th individual. The value of $x_{i,j}$ is randomly generated in the interval $[lb, ub]$ at the beginning of the iteration, where $lb$ and $ub$ denote the lower and upper bound constraints of the variable to be optimized, respectively. As mentioned earlier, an individual in the population represents a solution to the problem to be optimized; fitness function values are used to differentiate the quality of the solutions, and each individual corresponds to a fitness function value, denoted as Equation (2):

$$F = \begin{bmatrix} F_1 = F(X_1) \\ \vdots \\ F_i = F(X_i) \\ \vdots \\ F_N = F(X_N) \end{bmatrix}_{N \times 1} \tag{2}$$

where $F(\cdot)$ denotes the objective function of the problem to be optimized, and $F_i$ denotes the fitness function value corresponding to the $i$-th individual.
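To make the notation above concrete, the following is a minimal NumPy sketch of the initialization of Equations (1) and (2); the function names, the uniform sampling in $[lb, ub]$, and the placeholder sphere objective are illustrative assumptions rather than the authors' reference implementation.

```python
import numpy as np

def initialize_population(N, Dim, lb, ub, rng=None):
    """Equation (1): sample N candidate solutions uniformly in [lb, ub]."""
    rng = np.random.default_rng() if rng is None else rng
    return lb + rng.random((N, Dim)) * (ub - lb)

def evaluate_population(X, objective):
    """Equation (2): one fitness value per individual (row of X)."""
    return np.array([objective(x) for x in X])

# Example usage with a placeholder objective (sphere function).
X = initialize_population(N=40, Dim=10, lb=-100.0, ub=100.0)
F = evaluate_population(X, objective=lambda x: np.sum(x**2))
```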

2.2. Exploration Phase

During the population initialization phase, a set of initial candidate solutions is generated. Subsequently, the hunting behavior of the northern goshawk is emulated to optimize the solutions, making them more suitable for solving optimization problems. In the hunting behavior, the first step is to locate the prey area in a vast space, which corresponds to the exploration phase in algorithm implementation. The primary goal of the exploration phase is to identify regions that may contain potential optimal solutions. During this phase, the positions of individuals are updated using Equation (3):
$$x_{i,j}^{new} = \begin{cases} x_{i,j} + r \cdot \left(P_j - I \cdot x_{i,j}\right), & F_P < F_i \\ x_{i,j} + r \cdot \left(x_{i,j} - P_j\right), & F_P \ge F_i \end{cases} \tag{3}$$

where $x_{i,j}^{new}$ represents the new state of the $j$-th dimension information of the $i$-th individual after being updated through the exploration phase, $x_{i,j}$ represents the $j$-th dimension information of the $i$-th individual, $r$ represents a random number generated within the interval [0, 1], $I$ represents a constant randomly selected from the set {1, 2}, $P$ represents a random individual different from individual $X_i$, $P_j$ represents the $j$-th dimension information of the randomly selected individual $P$, $F_P$ represents the fitness function value corresponding to the randomly selected individual $P$, and $F_i$ represents the fitness function value corresponding to the $i$-th individual. After undergoing the update of their new states during the exploration phase, individuals obtain a new individual $X_i^{new}$. Subsequently, it is necessary to retain the generated new state to ensure that the quality of the individuals is not compromised. Equation (4) is used to preserve individual information:

$$X_i = \begin{cases} X_i^{new}, & F_i^{new} < F_i \\ X_i, & F_i^{new} \ge F_i \end{cases} \tag{4}$$
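The exploration update of Equation (3) combined with the greedy preservation rule of Equation (4) can be sketched as follows; for simplicity this illustration draws one random vector $r$ and one constant $I$ per individual and omits bound handling, which are assumptions rather than details specified by the original NGO description.

```python
import numpy as np

def exploration_step(X, F, objective, rng=None):
    """One NGO exploration pass: Equation (3) per dimension, Equation (4) greedy keep."""
    rng = np.random.default_rng() if rng is None else rng
    N, Dim = X.shape
    for i in range(N):
        p_idx = rng.choice([k for k in range(N) if k != i])   # prey: a random other individual
        P, F_P = X[p_idx], F[p_idx]
        r = rng.random(Dim)
        I = rng.integers(1, 3)                                 # randomly 1 or 2
        if F_P < F[i]:
            X_new = X[i] + r * (P - I * X[i])
        else:
            X_new = X[i] + r * (X[i] - P)
        F_new = objective(X_new)
        if F_new < F[i]:                                       # Equation (4): keep only improvements
            X[i], F[i] = X_new, F_new
    return X, F
```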

2.3. Exploitation Phase

During the exploration phase of the aforementioned algorithm, the optimal solution region was identified. Subsequently, the second phase of simulating the hunting behavior of the northern goshawk, which involves attacking the prey at a very short distance, is utilized to implement the exploitation behavior of the algorithm. This allows the algorithm to further exploit potential local optimal regions, thereby ensuring the precision of the algorithm when solving optimization problems. Assuming that the northern goshawk attacks the prey when the distance to the prey is R , Equation (5) is used to represent the update method of individual positions during the exploitation phase of the algorithm:
$$x_{i,j}^{new} = x_{i,j} + R \cdot \left(2 \cdot r - 1\right) \cdot x_{i,j} \tag{5}$$

where $x_{i,j}^{new}$ represents the new state of the $j$-th dimension information of the $i$-th individual after being updated during the exploitation phase, $r$ denotes a random number generated within the interval [0, 1], and $R$ signifies the distance from the northern goshawk to its prey, as represented by Equation (6):

$$R = 0.02 \cdot \left(1 - \frac{t}{T}\right) \tag{6}$$

where $t$ represents the current number of algorithm iterations, and $T$ represents the maximum number of algorithm iterations. After updating the new state of individuals during the exploitation phase, a new individual $X_i^{new}$ is obtained. Subsequently, it is necessary to retain the generated new state to ensure that the quality of the individuals is enhanced. Equation (4) is used to preserve individual information.
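A corresponding sketch of the exploitation update of Equations (5) and (6), again followed by the greedy preservation of Equation (4), is given below; bound handling is omitted as in the previous sketch.

```python
import numpy as np

def exploitation_step(X, F, t, T, objective, rng=None):
    """One NGO exploitation pass: Equations (5)-(6), then Equation (4) greedy keep."""
    rng = np.random.default_rng() if rng is None else rng
    N, Dim = X.shape
    R = 0.02 * (1.0 - t / T)                        # Equation (6): attack radius shrinks over iterations
    for i in range(N):
        r = rng.random(Dim)
        X_new = X[i] + R * (2.0 * r - 1.0) * X[i]   # Equation (5)
        F_new = objective(X_new)
        if F_new < F[i]:
            X[i], F[i] = X_new, F_new
    return X, F
```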

2.4. Implementation of the NGO Algorithm

In the previous section, detailed mathematical modeling was conducted for the initialization phase, exploration phase, and exploitation phase involved in the NGO algorithm. Consequently, in this section, the execution logic of the NGO algorithm when addressing practical optimization problems is summarized, integrating the aforementioned theoretical knowledge. The execution pseudocode is represented as Algorithm 1, and the execution flowchart is depicted in Figure 1a.
Algorithm 1: Pseudo code for NGO algorithm
Input: Population size ($N$), dimension of the optimization problem ($Dim$), upper bound ($Ub$) and lower bound ($Lb$) of the optimization problem, maximum number of iterations ($T$).
Output: Global best solution ($X_{best}$).
1. Initialize the population using Equation (1) and calculate the individual fitness function values of the population.
2. for $t = 1 : T$
3.    for $i = 1 : N$
4.       Exploration phase
5.       for $j = 1 : Dim$
6.          Calculate the $j$-th dimensional new state of the $i$-th individual using Equation (3).
7.       end for
8.       Use Equation (4) to preserve the new state of individual $X_i$.
9.       Exploitation phase
10.      for $j = 1 : Dim$
11.         Calculate the $j$-th dimensional new state of the $i$-th individual using Equation (5).
12.      end for
13.      Use Equation (4) to preserve the new state of individual $X_i$.
14.   end for
15.   Save the global best solution $X_{best}$.
16. end for
17. Output the global best solution $X_{best}$ obtained by solving the optimization problem using the NGO algorithm.

3. Mathematical Model of Improved Northern Goshawk Optimization

The previous section mainly reviewed the mathematical model and execution logic of the NGO algorithm. However, when solving high-dimensional complex optimization problems and image segmentation issues, the NGO algorithm, due to the limitations of its strategy, suffers from insufficient exploration capability, insufficient exploitation capability, and an imbalance between the exploration and exploitation phases. This leads to a lack of population diversity and a tendency to fall into local suboptimal solutions when solving optimization problems, resulting in poor optimization accuracy and local stagnation. To effectively enhance the performance of the NGO algorithm in solving high-dimensional complex optimization problems and image segmentation issues, this section addresses its existing deficiencies and proposes corresponding improvement strategies to enhance the algorithm’s optimization performance. First, to address the insufficient exploration capability when solving image segmentation issues, this section introduces the off-center learning strategy. By considering the fitness function values of different individuals, the off-center individuals of the population are identified. Guiding the population through these off-center individuals enhances population diversity during the execution process and strengthens the algorithm’s global search capability. Second, to address the imbalance between the exploration and exploitation phases when solving image segmentation and high-dimensional optimization problems, this section proposes the partitioned learning strategy. By segmenting the population into exploration and exploitation individuals based on fitness function values and updating them through different learning methods, better balance between exploration and exploitation phases is achieved, enhancing the algorithm’s ability to escape local suboptimal solutions. Finally, to address the insufficient exploitation capability leading to decreased optimization accuracy when solving image segmentation issues, this section introduces the Bernstein-weighted learning strategy. Using the weighted properties of Bernstein polynomials, individuals of different natures are weighted to form weighted individuals, which are then used to guide the population, effectively improving the algorithm’s exploitation performance and promoting optimization accuracy. The following section will detail the mathematical models of the off-center learning strategy, partitioned learning strategy, and Bernstein-weighted learning strategy proposed in this section, and propose an improved NGO algorithm (OPBNGO) based on the above strategies.

3.1. Off-Center Learning Strategy

The original NGO algorithm faces a deficiency in global exploration capability when solving image segmentation and high-dimensional complex optimization problems, leading to a decrease in population diversity during the iterative process. This is detrimental to the further expansion of algorithm performance and application extension. To alleviate this issue, there is an urgent need to propose a novel and efficient global search strategy. Deng et al. [57] pointed out that guiding the population using the average point of individuals within the population can effectively enhance the algorithm’s global exploration performance, providing significant inspiration for the global exploration strategy proposed in this section. In this section, to further enhance the algorithm’s global exploration capability, an off-center learning strategy is proposed based on the aforementioned idea of average point guidance. Traditional average point guidance does not consider the nature of different individuals; thus, although the algorithm has strong global search capabilities, the local exploitation capability during the iterative process is not guaranteed, resulting in certain deficiencies in optimization accuracy. Based on this consideration, in this section, different fitness function values of individuals are utilized, with the main idea being that the mean point of individuals in the population should be closer to high-quality individuals. This ensures that when guiding the population through the off-center point, both the global search capability of the population and the local exploitation capability of the algorithm are maintained. Combining the aforementioned theoretical ideas forms the off-center weighted point, represented by Equation (7):
$$X_{off} = \sum_{i=1}^{N} \frac{F_{max} - F_i + \varepsilon}{\sum_{j=1}^{N} \left(F_{max} - F_j + \varepsilon\right)} \cdot X_i \tag{7}$$

where $X_{off}$ represents the off-center weighted point, $F_{max}$ denotes the maximum fitness function value among the individuals in the population, and $\varepsilon$ signifies a very small positive number whose purpose is to ensure that the denominator is not zero; in this section, $\varepsilon = 10^{-15}$. The traditional average point does not consider individual attributes, which can easily lead to insufficient convergence behavior of the algorithm in the later stages of iteration. In contrast, the off-center weighted point proposed in this section tends to explore areas with higher-quality individuals while also incorporating the concept of the average point. This effectively enhances the algorithm’s global search capability and overcomes the deficiency of insufficient convergence behavior in the later stages of iteration. Additionally, Equation (7) also shows that higher-quality individuals, corresponding to smaller fitness function values, receive greater weights, favoring exploration of high-quality areas.
The aforementioned theoretical analysis highlighted the advantages of the off-center weighted point. Subsequently, this off-center weighted point is used to guide individual positions to enhance the algorithm’s actual global exploration capability. During the guidance process, considering that global exploration behavior should gradually decrease as the iteration progresses to better improve the algorithm’s convergence speed and accuracy, an adaptive factor $D_{factor}$ is incorporated into the guidance mechanism. The resulting off-center learning strategy is represented by Equation (8):

$$x_{i,j}^{new} = x_{i,j} + \cos\left(\left(1 - \frac{t}{T}\right) \cdot \pi\right) \cdot D_{factor} \cdot \left(x_{off,j} - x_{i,j}\right) \tag{8}$$

where $x_{off,j}$ denotes the $j$-th dimensional information of the off-center weighted point. The adaptive factor $D_{factor}$ is represented by Equation (9), and its curve is visualized in Figure 2:

$$D_{factor} = \left(1 - \frac{t}{T}\right) \cdot \left(1 + A \cdot \sin\left(\omega \cdot \frac{t}{T}\right)\right)^{2} \tag{9}$$

where the values of $A$ and $\omega$ are 0.3 and $10\pi$, respectively.
From the figure, it can be observed that as iterations progress, D f a c t o r decreases from 1 to 0 in an adaptively decaying manner. This property ensures that the off-center learning strategy effectively enhances the algorithm’s global search capability during the early iterations. As iterations continue, to ensure the convergence accuracy of the algorithm, this learning process gradually diminishes. In summary, the off-center learning strategy proposed in this section can effectively strengthen the algorithm’s global search ability, thereby improving the algorithm’s performance in solving image segmentation problems. Unlike conventional global exploration strategies, the off-center learning strategy proposed in this section is guided not by a global average point but by an off-center point. Moreover, instead of relying on ordinary linear factor control, this strategy introduces a novel sinusoidal nonlinear factor for guidance. This innovation enables adaptive modulation of the guidance process, thereby enhancing the algorithm’s global search capabilities.
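As an illustration of Equations (7)–(9), the off-center weighted point and the guided update can be sketched as follows; the exact squared form of $D_{factor}$ is reconstructed from Equation (9) and the description of its decay from 1 to 0, so the sketch should be read as an assumption rather than a definitive implementation.

```python
import numpy as np

def off_center_point(X, F, eps=1e-15):
    """Equation (7): fitness-weighted centre biased toward better (lower-fitness) individuals."""
    w = (F.max() - F) + eps
    w = w / w.sum()
    return w @ X                      # weighted sum of the rows of X

def off_center_update(x_i, x_off, t, T, A=0.3, omega=10 * np.pi):
    """Equations (8)-(9): pull an individual toward the off-centre point with a decaying factor."""
    D_factor = (1.0 - t / T) * (1.0 + A * np.sin(omega * t / T)) ** 2
    return x_i + np.cos((1.0 - t / T) * np.pi) * D_factor * (x_off - x_i)
```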

3.2. Partitioned Learning Strategy

The original NGO algorithm exhibits an imbalance between the exploration and exploitation phases when addressing image segmentation and high-dimensional optimization problems. This imbalance hinders the algorithm’s ability to effectively escape from local suboptimal traps, leading to deficiencies in the selection of threshold combinations and the precision of optimization problems. Wu et al. [58] indicated that categorizing individuals in the population based on fitness function values into multiple groups can achieve a certain balance between the exploration and exploitation phases of the algorithm. Inspired by this, a novel partitioned learning strategy is proposed in this section, aimed at reasonably balancing the exploration and exploitation phases of the algorithm to enhance its ability to escape local optimal traps. In the partitioned learning strategy, first, the entire population is divided into four individual sets, P 1 , P 2 , P 3 , and P 4 , based on fitness function values. Subsequently, individuals in P 4 learn from those in P 3 , and individuals in P 3 learn from those in P 2 , primarily enhancing the algorithm’s exploitation capability. Individuals in P 2 learn from those in P 1 and random individuals in the population, while individuals in P 1 undergo small-scale Gaussian–Cauchy perturbations and learn from random individuals, mainly ensuring the algorithm’s global exploration capability. By applying different learning methods to individuals of different natures, a good balance between the global search and local exploitation phases of the algorithm is achieved. To visually represent this learning process, the execution of the partitioned learning strategy is visualized in Figure 3. Subsequently, the four learning methods are introduced in detail below.
Firstly, individuals from segment $P_4$ learn from those in segment $P_3$, a process represented by Equation (10):

$$x_{i,j}^{new} = x_{i,j} + rand \cdot \left(1 - \frac{t}{T}\right) \cdot \left(x_{P_3,j} - x_{i,j}\right) \tag{10}$$

where $rand$ denotes a random number within the interval [0, 1], and $x_{P_3,j}$ represents the $j$-th dimensional value of an individual from segment $P_3$. Secondly, individuals from segment $P_3$ learn from those in segment $P_2$, a process represented by Equation (11):

$$x_{i,j}^{new} = x_{i,j} + rand \cdot e^{l} \cdot \cos(2\pi l) \cdot \left(x_{P_2,j} - x_{i,j}\right) \tag{11}$$

where $l$ represents a value randomly selected from the set {−1, 1}, and $x_{P_2,j}$ denotes the $j$-th dimensional value of an individual from segment $P_2$. Thirdly, individuals from segment $P_2$ learn from those in segment $P_1$, as well as from random individuals within the population, a process represented by Equation (12):

$$x_{i,j}^{new} = x_{i,j} + 0.5 \cdot \left(1 - \frac{t}{T}\right) \cdot \left(1 + 0.3 \cdot \sin(10\pi \cdot t)\right) \cdot \left(x_{P_1,j} - x_{i,j}\right) + 0.5 \cdot rand \cdot \left(x_{rand,j} - x_{i,j}\right) \tag{12}$$

where $x_{P_1,j}$ denotes the $j$-th dimensional value of an individual from segment $P_1$, and $x_{rand,j}$ represents the $j$-th dimensional value of a random individual different from $X_i$ within the population. Finally, individuals from segment $P_1$ learn from random individuals in the population and undergo Gaussian–Cauchy perturbations, a process represented by Equation (13):

$$x_{i,j}^{new} = \begin{cases} x_{i,j} + \left(1 + 0.3 \cdot \arctan(10\pi \cdot t)\right) \cdot \left(x_{rand,j} - x_{i,j}\right), & \text{if } rand < 0.5 \\ x_{i,j} \cdot \left(1 + \tau_1 \cdot \text{cauchy}(0, \sigma^2) + \tau_2 \cdot \text{gaussian}(0, \sigma^2)\right), & \text{otherwise} \end{cases} \tag{13}$$

where $\text{cauchy}(0, \sigma^2)$ represents a random number drawn from a Cauchy distribution with parameters $(0, \sigma^2)$, $\text{gaussian}(0, \sigma^2)$ represents a random number drawn from a Gaussian (normal) distribution with parameters $(0, \sigma^2)$, $\tau_1$ takes the value of $1 - (t/T)^2$, and $\tau_2$ takes the value of $(t/T)^2$. The parameter $\sigma$ is represented by Equation (14):

$$\sigma = \frac{F_{min} - F_r}{F_r} \tag{14}$$

where $F_{min}$ denotes the minimum fitness function value corresponding to an individual in the population, while $F_r$ represents the fitness function value of a randomly selected individual within the population. By segmenting individuals in the population based on fitness function values and applying diverse learning methods, the algorithm achieves a good balance between the global exploration phase and the local exploitation phase, enhancing its ability to escape from local optimal traps. Unlike conventional balance-oriented strategies, the partitioned learning strategy proposed in this section divides the population into multiple regions based on fitness function values. Subsequently, it applies adaptive learning to individuals with distinct characteristics. This approach offers greater rationality, thereby enhancing the algorithm’s balance and improving its ability to escape local optima traps.
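A compact sketch of the partitioned learning strategy of Equations (10)–(14) is given below; the equal four-way split of the sorted population, the use of $|F_{min} - F_r|$ as the perturbation scale, and the omission of bound handling are illustrative assumptions.

```python
import numpy as np

def partitioned_update(X, F, t, T, rng=None):
    """Partitioned learning sketch: split by fitness into P1..P4 and apply Equations (10)-(14)."""
    rng = np.random.default_rng() if rng is None else rng
    N, Dim = X.shape
    order = np.argsort(F)                       # best individuals first (minimization)
    P1, P2, P3, P4 = np.array_split(order, 4)   # four fitness-based segments
    X_new = X.copy()
    for seg, src in ((P4, P3), (P3, P2)):       # Equations (10)-(11): learn from the next-better segment
        for i in seg:
            x_src = X[rng.choice(src)]
            if src is P3:                       # Equation (10)
                X_new[i] = X[i] + rng.random(Dim) * (1 - t / T) * (x_src - X[i])
            else:                               # Equation (11)
                l = rng.choice([-1.0, 1.0])
                X_new[i] = X[i] + rng.random(Dim) * np.exp(l) * np.cos(2 * np.pi * l) * (x_src - X[i])
    for i in P2:                                # Equation (12): learn from P1 and a random individual
        x_p1 = X[rng.choice(P1)]
        x_rnd = X[rng.choice([k for k in range(N) if k != i])]
        X_new[i] = (X[i]
                    + 0.5 * (1 - t / T) * (1 + 0.3 * np.sin(10 * np.pi * t)) * (x_p1 - X[i])
                    + 0.5 * rng.random(Dim) * (x_rnd - X[i]))
    for i in P1:                                # Equations (13)-(14): random learning or Gaussian-Cauchy perturbation
        x_rnd = X[rng.choice([k for k in range(N) if k != i])]
        F_r = F[rng.integers(N)]
        sigma = abs(F.min() - F_r) / (abs(F_r) + 1e-15)
        if rng.random() < 0.5:
            X_new[i] = X[i] + (1 + 0.3 * np.arctan(10 * np.pi * t)) * (x_rnd - X[i])
        else:
            tau1, tau2 = 1 - (t / T) ** 2, (t / T) ** 2
            X_new[i] = X[i] * (1 + tau1 * sigma * rng.standard_cauchy(Dim)
                                 + tau2 * sigma * rng.normal(size=Dim))
    return X_new
```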

3.3. Bernstein-Weighted Learning Strategy

The original NGO algorithm exhibits a deficiency in local exploitation when tackling high-dimensional optimization and image segmentation challenges. This shortcoming prevents the algorithm from further refining local optima once they are identified, resulting in a loss of optimization precision. Consequently, there is an urgent need in this section to introduce a high-quality local exploitation strategy to enhance the NGO algorithm’s capabilities and improve its segmentation and optimization accuracy. Zhang et al. [59] have noted that individuals can effectively strengthen the algorithm’s local exploitation by learning from those with superior performance. Inspired by this, to further boost the NGO algorithm’s local exploitation and achieve higher optimization precision, a strategy based on the weighting of candidates using second-order Bernstein polynomials is proposed in this section. This strategy weights individuals from a pool of better candidates and random individuals and then uses these weighted individuals to guide the search of the current individual. The pool of better candidates is defined as the set of individuals with fitness function values lower than that of the current individual X i . To provide an intuitive understanding of this concept, the strategy based on second-order Bernstein polynomial weighting is visualized as shown in Figure 4, and the formation process of this strategy will be detailed subsequently.
Firstly, the $n$-th-order Bernstein polynomial is defined by Equation (15):

$$B_{w,n}(p) = C_n^w \cdot p^w \cdot (1 - p)^{n - w} \tag{15}$$

where $p$ represents the probability of success in a trial, with $0 \le p \le 1$; $n$ denotes the total number of trials; and $w$ signifies the number of successful trials, where $w = 0, 1, \dots, n$. $C_n^w$ represents the binomial coefficient, calculated using Equation (16):

$$C_n^w = \frac{n!}{w! \, (n - w)!} \tag{16}$$

where $!$ denotes the factorial operation. For the second-order Bernstein polynomial, $n$ takes the value of 2, and $w$ can be 0, 1, or 2. According to Equations (15) and (16), when $w = 0$ and $n = 2$, it can be derived that $B_{0,2}(p) = (1 - p)^2$; when $w = 1$ and $n = 2$, it can be derived that $B_{1,2}(p) = 2 \cdot p \cdot (1 - p)$; and when $w = 2$ and $n = 2$, it can be derived that $B_{2,2}(p) = p^2$. In summary, the second-order Bernstein polynomial consists of three polynomials, as shown in Equation (17):

$$\begin{cases} B_{0,2}(p) = (1 - p)^2 \\ B_{1,2}(p) = 2 \cdot p \cdot (1 - p) \\ B_{2,2}(p) = p^2 \end{cases} \tag{17}$$

To better analyze the properties of the second-order Bernstein polynomials, they are visualized in Figure 5. The horizontal axis represents $p$ ranging from 0 to 1, and the vertical axis represents the values of the Bernstein polynomial functions as $p$ changes. From the curves, it can be observed that when $0 \le p \le 1$, the values of $B_{0,2}(p)$, $B_{1,2}(p)$, and $B_{2,2}(p)$ are also confined between 0 and 1. Moreover, for any value of $p$, the sum of the function values of these three polynomials always equals 1. This property is then utilized to generate Bernstein-weighted individuals, defining $p = (t/T)^2$, where $p$ nonlinearly increases from 0 to 1 as the iterations proceed.
The process of generating Bernstein-weighted individuals is as follows. Firstly, individuals in the pool of superior candidate solutions are evenly divided into two segments based on their fitness function values, ordered from smallest to largest; these segments are defined as $PS_1$ and $PS_2$. Subsequently, one random individual is drawn from each of these segments, denoted as $X_{PS_1}$ and $X_{PS_2}$. Then, another random individual is selected from the population, excluding the pool of superior candidate solutions, defined as $X_{for}$. The Bernstein polynomials $B_{0,2}(p)$, $B_{1,2}(p)$, and $B_{2,2}(p)$ are assigned as weighting coefficients to individuals $X_{PS_1}$, $X_{PS_2}$, and $X_{for}$, respectively, to generate the weighted individual $X_{Weight}$, represented by Equation (18):

$$X_{Weight} = B_{0,2}(p) \cdot X_{PS_1} + B_{1,2}(p) \cdot X_{PS_2} + B_{2,2}(p) \cdot X_{for} \tag{18}$$

Subsequently, the generated Bernstein-weighted individual $X_{Weight}$ is used to guide the individuals, as represented in Equation (19):

$$x_{i,j}^{new} = x_{i,j} + \left(1 - \frac{t}{T}\right) \cdot \left(1 + 0.3 \cdot \arctan(10\pi \cdot t)\right)^{2} \cdot \left(x_{Weight,j} - x_{i,j}\right) \tag{19}$$
By employing the Bernstein-weighted learning strategy to guide individuals, the local exploitation capability of the algorithm is significantly enhanced. Additionally, the introduction of a certain degree of randomness in this strategy reduces the probability of the algorithm falling into local optimal traps, further optimizing convergence precision. Unlike conventional strategies that rely solely on the optimal individual to guide the population and enhance local exploitation capabilities, the Bernstein-weighted learning strategy proposed in this section leverages the parameter-weighted nature of Bernstein polynomial factors to generate representative weighted individuals for guiding the population. The advantage of this approach lies in its ability to utilize the broader representational capacity of weighted individuals when the optimal individual is trapped in a local optimum. By doing so, the weighted individuals can guide the algorithm to escape local optima, thereby simultaneously improving local exploitation efficiency and preserving global search capabilities. This dual enhancement ultimately leads to higher convergence accuracy.
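The following sketch illustrates Equations (15)–(19); the fallback used when fewer than two better candidates exist and the random draw of $X_{for}$ from outside the candidate pool are implementation assumptions for edge cases not detailed above.

```python
import numpy as np

def bernstein_weights(p):
    """Equation (17): the three second-order Bernstein polynomials; non-negative and summing to 1."""
    return (1 - p) ** 2, 2 * p * (1 - p), p ** 2

def bernstein_weighted_update(X, F, i, t, T, rng=None):
    """Equations (18)-(19): guide individual i with a Bernstein-weighted composite individual."""
    rng = np.random.default_rng() if rng is None else rng
    p = (t / T) ** 2
    better = np.where(F < F[i])[0]              # pool of candidates better than X[i]
    if better.size < 2:                         # fallback: move toward the current best (assumption)
        return X[i] + rng.random(X.shape[1]) * (X[np.argmin(F)] - X[i])
    better = better[np.argsort(F[better])]
    PS1, PS2 = np.array_split(better, 2)        # split the pool into two halves by fitness
    outside = np.setdiff1d(np.arange(len(F)), better)
    x_ps1, x_ps2 = X[rng.choice(PS1)], X[rng.choice(PS2)]
    x_for = X[rng.choice(outside)] if outside.size else X[rng.integers(len(F))]
    b0, b1, b2 = bernstein_weights(p)
    x_weight = b0 * x_ps1 + b1 * x_ps2 + b2 * x_for                 # Equation (18)
    step = (1 - t / T) * (1 + 0.3 * np.arctan(10 * np.pi * t)) ** 2
    return X[i] + step * (x_weight - X[i])                          # Equation (19)
```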

3.4. Implementation of the OPBNGO Algorithm

The initial NGO algorithm encounters several issues when solving high-dimensional optimization and image segmentation problems, such as a deficiency in global exploration, an imbalance between the global exploration and local exploitation phases, and insufficient local exploitation. These issues result in a loss of population diversity, a propensity to fall into local optima, and a decrease in optimization accuracy. To mitigate these challenges, this paper introduces an off-center learning strategy to enhance the algorithm’s global search ability, leveraging off-center individuals based on their fitness values. Additionally, a partitioned learning strategy is proposed to balance the exploration and exploitation phases by segmenting the population and applying distinct learning methods. Lastly, a Bernstein-weighted learning strategy is presented to improve the algorithm’s exploitation performance and optimize accuracy by weighting individuals based on Bernstein polynomials. These strategies are integrated into an enhanced NGO algorithm, termed OPBNGO, with Algorithm 2 providing its pseudocode and Figure 1b depicting its execution flowchart.
Algorithm 2: Pseudo code for OPBNGO algorithm
Input: Population size ($N$), dimension of the optimization problem ($Dim$), upper bound ($Ub$) and lower bound ($Lb$) of the optimization problem, maximum number of iterations ($T$).
Output: Global best solution ($X_{best}$).
1. Initialize the population using Equation (1) and calculate the individual fitness function values of the population.
2. for $t = 1 : T$
3.    for $i = 1 : N$
4.       Exploration phase
5.       for $j = 1 : Dim$
6.          if $rand < 0.5$
7.             Calculate the $j$-th dimensional new state of the $i$-th individual using Equation (3).
8.          else
9.             Calculate the $j$-th dimensional new state of the $i$-th individual using Equation (8).
10.         end if
11.      end for
12.      Use Equation (4) to preserve the new state of individual $X_i$.
13.      Balance phase
14.      for $j = 1 : Dim$
15.         Calculate the $j$-th dimensional new state of the $i$-th individual using Equations (10)–(13).
16.      end for
17.      Use Equation (4) to preserve the new state of individual $X_i$.
18.      Exploitation phase
19.      for $j = 1 : Dim$
20.         if $rand < 0.5$
21.            Calculate the $j$-th dimensional new state of the $i$-th individual using Equation (5).
22.         else
23.            Calculate the $j$-th dimensional new state of the $i$-th individual using Equation (19).
24.         end if
25.      end for
26.      Use Equation (4) to preserve the new state of individual $X_i$.
27.   end for
28.   Save the global best solution $X_{best}$.
29. end for
30. Output the global best solution $X_{best}$ obtained by solving the optimization problem using the OPBNGO algorithm.
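Putting the pieces together, the driver below mirrors the structure of Algorithm 2 and reuses the helper functions sketched in the previous subsections (initialize_population, evaluate_population, off_center_point, off_center_update, partitioned_update, and bernstein_weighted_update); the 0.5 branching probabilities follow the pseudocode, while the explicit bound clipping is an added assumption.

```python
import numpy as np

def opbngo(objective, Dim, lb, ub, N=40, T=100, seed=None):
    """Sketch of the OPBNGO loop of Algorithm 2 (greedy keep of Equation (4) after every phase)."""
    rng = np.random.default_rng(seed)
    X = initialize_population(N, Dim, lb, ub, rng)
    F = evaluate_population(X, objective)

    def keep_if_better(i, x_new):
        x_new = np.clip(x_new, lb, ub)           # simple bound handling (assumption)
        f_new = objective(x_new)
        if f_new < F[i]:
            X[i], F[i] = x_new, f_new

    for t in range(1, T + 1):
        x_off = off_center_point(X, F)
        for i in range(N):
            # Exploration phase: Equation (3) or the off-center rule of Equation (8).
            if rng.random() < 0.5:
                p = rng.choice([k for k in range(N) if k != i])
                r, I = rng.random(Dim), rng.integers(1, 3)
                x_new = X[i] + r * (X[p] - I * X[i]) if F[p] < F[i] else X[i] + r * (X[i] - X[p])
            else:
                x_new = off_center_update(X[i], x_off, t, T)
            keep_if_better(i, x_new)
        # Balance phase: partitioned learning, Equations (10)-(14).
        X_part = partitioned_update(X, F, t, T, rng)
        for i in range(N):
            keep_if_better(i, X_part[i])
        for i in range(N):
            # Exploitation phase: Equation (5) or the Bernstein-weighted rule of Equation (19).
            if rng.random() < 0.5:
                R = 0.02 * (1 - t / T)
                x_new = X[i] + R * (2 * rng.random(Dim) - 1) * X[i]
            else:
                x_new = bernstein_weighted_update(X, F, i, t, T, rng)
            keep_if_better(i, x_new)
    best = np.argmin(F)
    return X[best], F[best]
```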

3.5. Time Complexity of the OPBNGO Algorithm

In this section, the time complexity of the OPBNGO algorithm is analyzed. The OPBNGO algorithm mainly includes the population initialization stage and the algorithm iteration stage, and the basic operation of the algorithm is the evaluation of an individual's fitness function value. The computational complexity of the initialization stage is therefore $O(N)$. The iteration stage requires a total of $T$ iterations, each consisting of the exploration stage, the partitioned learning stage, and the exploitation stage, so the fitness function is evaluated $3N$ times per iteration, and the computational complexity of the iteration stage is $O(3N \cdot T)$. Consequently, the overall computational complexity of the OPBNGO algorithm is $O(N \cdot (1 + 3T))$.

4. Results and Discussion on Mural Image Segmentation

In this section, we primarily assess the performance of the proposed OPBNGO algorithm in solving multi-threshold segmentation problems for mural images. Thresholding mural images primarily involves separating different regions within the mural image, such as the mural itself, the background, and damaged areas. This is beneficial for promoting the conservation and restoration of murals. Through multi-threshold segmentation, areas with different attributes are separated to better capture image information, allowing researchers to more effectively restore and protect the images. In this section, we mainly use the Otsu method as the objective function to identify the optimal image segmentation thresholds. The core idea is to maximize the inter-class variance between different regions of the image to confirm the optimal threshold combination. Therefore, the objective function here refers to the core concept of the Otsu method, which is to maximize the inter-class variance. By searching with the OPBNGO algorithm, we obtain the threshold combination that maximizes the inter-class variance between different image regions, effectively segmenting the different areas of the mural image. In the following sections, we will detail the Otsu method and evaluate the performance of the OPBNGO algorithm based on the Otsu method in solving multi-threshold segmentation problems for mural images.

4.1. Concept of Otsu Thresholding Technique

In this section, the concept of the Otsu method is primarily introduced. The core idea behind the Otsu method for image segmentation is to maximize the inter-class variance between different regions of the image, effectively separating distinct areas. This section will detail the concept further. Assuming the pixel matrix of the image to be segmented is represented as I , and there are L grayscale levels within the image, with n i being the number of pixels at grayscale level i , the total number of pixels N in image I can be calculated using Equation (20):
$$N = \sum_{i=0}^{L-1} n_i \tag{20}$$

Subsequently, the probability distribution $P_i$ for grayscale level $i$ is calculated using Equation (21):

$$P_i = \frac{n_i}{N}, \quad i = 0, 1, \dots, L-1 \tag{21}$$

where $P_i \ge 0$ and $P_0 + P_1 + \dots + P_{L-1} = 1$. The number of thresholds used for image segmentation is set to $k$. Assuming the threshold for image segmentation is $t$, the threshold $t$ divides the image into two regions: the region with pixel grayscale values in the interval $[0, t]$ is classified as the target region, and the region with pixel grayscale values in the interval $[t+1, L-1]$ is classified as the background region. The ratio of the number of pixels in the target region to the total number of pixels in the image is defined as $\omega_0$, and the average grayscale value of this region is $\mu_0$. The ratio of the number of pixels in the background region to the total number of pixels in the image is defined as $\omega_1$, and the average grayscale value of this region is $\mu_1$. The average grayscale value of the entire image is $\mu$, and the variance between the different segmentation regions is $v$. Based on the above assumptions, $\omega_0$, $\mu_0$, $\omega_1$, $\mu_1$, $\mu$, and $v$ are calculated using Equations (22)–(27), respectively:
$$\omega_0 = \sum_{i=0}^{t} P_i \tag{22}$$

$$\mu_0 = \frac{\sum_{i=0}^{t} i \cdot P_i}{\omega_0} \tag{23}$$

$$\omega_1 = \sum_{i=t+1}^{L-1} P_i \tag{24}$$

$$\mu_1 = \frac{\sum_{i=t+1}^{L-1} i \cdot P_i}{\omega_1} \tag{25}$$

$$\mu = \omega_0 \mu_0 + \omega_1 \mu_1 = \sum_{i=0}^{L-1} i \cdot P_i \tag{26}$$

$$v(t) = \omega_0 (\mu_0 - \mu)^2 + \omega_1 (\mu_1 - \mu)^2 = \omega_0 \, \omega_1 (\mu_0 - \mu_1)^2 \tag{27}$$
Subsequently, the optimal image segmentation threshold $t_{best}$ is calculated using Equation (28):

$$t_{best} = \arg\max_{0 \le t \le L-1} v(t) \tag{28}$$

Subsequently, the inter-class variance for $k$ thresholds is calculated using Equation (29):

$$\begin{aligned} v(t_1, t_2, \dots, t_k) = {} & \omega_0 \omega_1 (\mu_0 - \mu_1)^2 + \omega_0 \omega_2 (\mu_0 - \mu_2)^2 + \dots + \omega_0 \omega_k (\mu_0 - \mu_k)^2 \\ & + \omega_1 \omega_2 (\mu_1 - \mu_2)^2 + \omega_1 \omega_3 (\mu_1 - \mu_3)^2 + \dots + \omega_{k-1} \omega_k (\mu_{k-1} - \mu_k)^2 \end{aligned} \tag{29}$$

where $\omega_i$ and $\mu_i$ are calculated using Equations (30) and (31), respectively:

$$\omega_{i-1} = \sum_{j=t_{i-1}+1}^{t_i} P_j, \quad 1 \le i \le k+1 \tag{30}$$

$$\mu_{i-1} = \frac{\sum_{j=t_{i-1}+1}^{t_i} j \cdot P_j}{\omega_{i-1}}, \quad 1 \le i \le k+1 \tag{31}$$

Assuming the best segmentation threshold combination for the image is $T_{best} = (t_1, t_2, \dots, t_k)$, the optimal threshold combination is calculated using Equation (32):

$$T_{best} = \arg\max_{0 \le t_1 < t_2 < \dots < t_k \le L-1} v(t_1, t_2, \dots, t_k) \tag{32}$$
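For reference, a minimal NumPy version of the multi-threshold Otsu objective of Equations (29)–(31) is sketched below; it returns the between-class variance for a candidate threshold vector, which OPBNGO then maximizes, and the handling of empty classes is an implementation assumption.

```python
import numpy as np

def otsu_between_class_variance(hist, thresholds):
    """Equations (29)-(31): between-class variance for thresholds t_1 < ... < t_k on an L-bin histogram."""
    P = hist / hist.sum()                        # Equation (21): grayscale probabilities
    levels = np.arange(len(P))
    t = np.concatenate(([-1], np.sort(thresholds).astype(int), [len(P) - 1]))
    omegas, mus = [], []
    for a, b in zip(t[:-1], t[1:]):              # class covering grey levels a+1 .. b
        Pc = P[a + 1:b + 1]
        w = Pc.sum()
        omegas.append(w)
        mus.append((levels[a + 1:b + 1] * Pc).sum() / w if w > 0 else 0.0)
    v = 0.0
    for i in range(len(omegas)):                 # Equation (29): sum over all class pairs
        for j in range(i + 1, len(omegas)):
            v += omegas[i] * omegas[j] * (mus[i] - mus[j]) ** 2
    return v

# Usage: maximize this value over threshold combinations, e.g. with OPBNGO
# (rounding continuous candidate positions to integer grey levels).
# hist = np.bincount(gray_image.ravel(), minlength=256)
```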

4.2. Experimental Results Analysis of Mural Image Segmentation

In this section, the performance of the Otsu-based OPBNGO algorithm in solving the multi-threshold segmentation problem for mural images is evaluated. The performance of the proposed algorithm is comprehensively assessed through segmentation experiments on eight mural images, whose information is shown in Figure 6. Meanwhile, to ensure fairness, the maximum number of iterations is set to 100, the population size is set to 40, and each experiment is executed in 30 independent runs. In addition, to better evaluate the performance of the Otsu-based OPBNGO algorithm in mural image segmentation, it is compared with six highly efficient algorithms; the specific parameter settings of the comparison algorithms are shown in Table 2. Subsequently, the commonly used evaluation indicators in the field of image segmentation, namely the peak signal-to-noise ratio (PSNR) [60], the structural similarity index measure (SSIM) [61], and the feature similarity index measure (FSIM) [62], are analyzed statistically, mainly through the mean and standard deviation of the results, so as to comprehensively evaluate the performance of the Otsu-based OPBNGO algorithm in multi-threshold segmentation of mural images. Higher PSNR and SSIM values indicate lower distortion of the segmented image, and a higher FSIM value indicates a lower segmentation error rate. These evaluation indicators are analyzed in detail below. Furthermore, to offer an intuitive visualization of the final segmentation outcomes and streamline the subsequent analysis of segmentation performance metrics, Table 3 displays the segmentation results achieved by OPBNGO when the number of segmentation thresholds is set to 2, 4, 6, and 8, respectively.

4.2.1. Strategy Effectiveness Analysis of Mural Image Segmentation

This section primarily focuses on validating the effectiveness of the three learning strategies incorporated into the OPBNGO algorithm to demonstrate that each strategy can effectively enhance the algorithm’s performance. First, we define the ONGO algorithm as the NGO algorithm augmented with the off-center learning strategy, the PNGO algorithm as the NGO algorithm integrated with the partitioned learning strategy, and the BNGO algorithm as the NGO algorithm enhanced with the Bernstein-weighted learning strategy. Based on these definitions, we employ NGO, ONGO, PNGO, BNGO, and OPBNGO algorithms to solve the mural image segmentation problems for eight images with threshold counts of 2, 4, 6, and 8, respectively, thereby analyzing the effectiveness of the strategies. The experimental results are illustrated in Figure 7, where “Mean Rank” denotes the average ranking based on the mean fitness values across the eight mural image segmentation tasks. As depicted in the figure, under varying threshold conditions, ONGO, PNGO, and BNGO all exhibit lower bar heights compared to NGO, indicating that the incorporation of a single learning strategy effectively enhances the performance of mural image segmentation, thereby validating the effectiveness of the strategies. Furthermore, it is noteworthy that OPBNGO, which integrates all three learning strategies simultaneously, demonstrates even lower bar heights than ONGO, PNGO, and BNGO. This suggests that OPBNGO achieves superior performance in mural image segmentation, confirming that the simultaneous integration of the three learning strategies into NGO further elevates the algorithm’s segmentation capabilities. In summary, these findings demonstrate that introducing each learning strategy individually into the NGO algorithm effectively enhances its performance, while integrating all three strategies concurrently maximizes the algorithm’s performance gains.

4.2.2. Fitness Value Metric Analysis of Mural Image Segmentation

This section primarily focuses on analyzing the fitness function values of the OPBNGO algorithm for mural image segmentation. The detailed experimental results are presented in Supplementary Table S1, encompassing the mean values and standard deviations derived from 30 independent trials. To provide a more intuitive demonstration of the algorithm’s superiority, Figure 8 illustrates the ranking of the algorithm’s fitness function values across the segmented mural images, while Figure 9 depicts the algorithm’s average ranking of fitness function values for varying numbers of segmentation thresholds. Specifically, in Figure 8, subplots (a), (b), (c), and (d) correspond to the rankings of the algorithm’s fitness function values when the number of segmentation thresholds is set to two, four, six, and eight, respectively.
As illustrated in Figure 8, when the number of segmentation thresholds is set to two, the OPBNGO algorithm achieves the optimal fitness function values for seven out of eight mural image segmentation tasks, yielding a success rate of 87.5%. For segmentation thresholds of four, six, and eight, the OPBNGO algorithm attains the optimal fitness function values across all eight mural image segmentation tasks, achieving a 100% success rate. These experimental results confirm that the OPBNGO algorithm benefits from the advanced strategies integrated into its design, enabling efficient exploration of the threshold combination space and ensuring a high degree of exploratory capability in the solution space. Furthermore, as depicted in Figure 9, when the number of segmentation thresholds is two, four, six, or eight, OPBNGO exhibits lower bar heights (indicating superior performance) and achieves a lower average ranking of fitness function values compared to the benchmark algorithms. In summary, the integration of these strategies into the OPBNGO algorithm facilitates effective exploration of segmentation threshold combinations, enhancing its performance in mural image segmentation. The algorithm demonstrates superior segmentation outcomes compared to the benchmarks, positioning it as a promising approach for mural image segmentation.

4.2.3. PSNR Metric Analysis of Mural Image Segmentation

In this section, we primarily analyze the peak signal-to-noise ratio (PSNR) achieved by the OPBNGO algorithm for mural image segmentation to evaluate the distortion level post segmentation. The detailed experimental results for the PSNR are presented in Supplementary Table S2, encompassing the mean values and standard deviations derived from 30 independent trials. To facilitate the analysis of the PSNR metric, Figure 10 illustrates the ranking of the algorithm’s PSNR values when addressing the segmentation of eight mural images, while Figure 11 depicts the algorithm’s average ranking of PSNRs across varying numbers of segmentation thresholds. Specifically, in Figure 10, subplots (a), (b), (c), and (d) correspond to the rankings of the algorithm’s PSNR values when the number of segmentation thresholds is set to two, four, six, and eight, respectively.
As depicted in Figure 10, when the number of segmentation thresholds is set to two, the OPBNGO algorithm achieves the optimal PSNR values for seven out of eight mural image segmentation tasks, yielding a success rate of 87.5%. For a segmentation threshold count of four, the OPBNGO algorithm attains the optimal PSNR values across all eight mural image segmentation tasks, achieving a 100% success rate. When the number of segmentation thresholds is six, OPBNGO again achieves the optimal PSNR values for seven out of eight tasks, with an 87.5% success rate. Finally, for a segmentation threshold count of eight, OPBNGO secures the optimal PSNR values for all eight mural image segmentation tasks, reaching a 100% success rate. Concurrently, as shown in Figure 11, across experiments with varying numbers of segmentation thresholds, OPBNGO exhibits lower bar heights (indicating superior performance) and achieves a lower average ranking of PSNR values compared to the benchmark algorithms. These experimental results confirm that, when addressing mural image segmentation tasks, the OPBNGO algorithm demonstrates a lower image distortion rate compared to the benchmarks. It effectively captures image details while maximizing the quality of the segmented images. Furthermore, these findings validate that the OPBNGO algorithm is more precise in balancing noise suppression and detail preservation when identifying optimal segmentation parameters, thereby enhancing the overall mural segmentation performance.
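For reference, PSNR is derived from the mean squared error between the original mural and its segmented counterpart (here assumed to be the image with each pixel replaced by a representative grey level of its class). A minimal sketch for 8-bit images follows.

```python
import numpy as np

def psnr(original, segmented, max_val=255.0):
    """Peak signal-to-noise ratio in dB between original and segmented images."""
    original = original.astype(np.float64)
    segmented = segmented.astype(np.float64)
    mse = np.mean((original - segmented) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```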

4.2.4. SSIM Metric Analysis of Mural Image Segmentation

In this section, we primarily analyze the structural similarity index measure (SSIM) achieved by the OPBNGO algorithm for mural image segmentation to evaluate the structural similarity between the segmented images and their original counterparts. The detailed experimental results for the SSIM are presented in Supplementary Table S3, encompassing the mean values and standard deviations derived from 30 independent trials. To facilitate the analysis of the SSIM metric, Figure 12 illustrates the ranking of the algorithm’s SSIM values when addressing the segmentation of eight mural images, while Figure 13 depicts the algorithm’s average ranking of SSIM across varying numbers of segmentation thresholds. Specifically, in Figure 12, subplots (a), (b), (c), and (d) correspond to the rankings of the algorithm’s SSIM values when the number of segmentation thresholds is set to two, four, six, and eight, respectively.
As illustrated in Figure 12, when the number of segmentation thresholds is set to two and four, the OPBNGO algorithm achieves the optimal SSIM values across all eight mural image segmentation tasks, attaining a 100% success rate compared to the benchmark algorithms. For segmentation threshold counts of six and eight, OPBNGO secures the optimal SSIM values for seven out of eight tasks, yielding an 87.5% success rate. Concurrently, as depicted in Figure 13, across experiments with varying numbers of segmentation thresholds, OPBNGO exhibits lower bar heights (indicating superior performance) and achieves a lower average ranking of SSIM values compared to the benchmark algorithms. These experimental results demonstrate that the OPBNGO algorithm effectively searches for optimal combinations of segmentation thresholds when addressing mural image segmentation tasks, achieving a favorable trade-off among evaluation metrics. It outperforms the benchmark algorithms by obtaining the highest SSIM values, which implies that the segmented images exhibit high structural similarity to the original images in terms of key visual features such as structural information, contrast, and luminance. This indicates that OPBNGO effectively preserves the boundaries, textures, and details of objects within the images, thereby validating it as a promising approach for mural image segmentation.
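A minimal sketch of the SSIM computation is given below using scikit-image's structural_similarity; the library choice is ours for illustration, while the metric itself follows the standard definition of Wang et al. [61].

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim_score(original, segmented):
    """SSIM between the original grayscale mural and its segmented version."""
    return structural_similarity(
        original.astype(np.uint8),
        segmented.astype(np.uint8),
        data_range=255,   # grey-level dynamic range for 8-bit images
    )
```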

4.2.5. FSIM Metric Analysis of Mural Image Segmentation

In this section, we primarily analyze the feature similarity index measure (FSIM) achieved by the OPBNGO algorithm for mural image segmentation to evaluate the feature similarity between the segmented images and their original counterparts. The detailed experimental results for the FSIM are presented in Supplementary Table S4, encompassing the mean values and standard deviations derived from 30 independent trials. To facilitate the analysis of the FSIM metric, Figure 14 illustrates the ranking of the algorithm’s FSIM values when addressing the segmentation of eight mural images, while Figure 15 depicts the algorithm’s average FSIM ranking across varying numbers of segmentation thresholds. Specifically, in Figure 14, subplots (a), (b), (c), and (d) correspond to the rankings of the algorithm’s FSIM values when the number of segmentation thresholds is set to two, four, six, and eight, respectively.
As depicted in Figure 14, when the number of segmentation thresholds is set to two and six, the OPBNGO algorithm achieves the optimal FSIM values for seven out of eight mural image segmentation tasks, yielding an 87.5% success rate compared to the benchmark algorithms. For segmentation threshold counts of four and eight, OPBNGO attains the optimal FSIM values across all eight tasks, achieving a 100% success rate. Concurrently, as illustrated in Figure 15, across experiments with varying numbers of segmentation thresholds, OPBNGO exhibits lower bar heights (indicating superior performance) and achieves a lower average ranking of FSIM values compared to the benchmark algorithms. These experimental results demonstrate that the OPBNGO algorithm is capable of achieving high-quality segmentation of mural images, effectively extracting intricate details while maximizing the preservation of the original image’s feature information. It strikes a favorable balance between detail extraction and retention. Compared to the benchmark algorithms, OPBNGO achieves the highest FSIM values, signifying greater feature similarity and lower distortion between the segmented images and their original counterparts. This indicates superior retention of critical structural elements within the images, along with enhanced preservation of edge clarity and texture definition, thereby minimizing information loss post segmentation. Collectively, these findings validate OPBNGO as an effective image segmentation methodology.
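For reference, FSIM (Zhang et al. [62]) pools the pointwise similarities of the phase congruency (PC) and gradient magnitude (G) maps of the two images, weighted by the maximum phase congruency, where T1 and T2 are small stabilizing constants:

```latex
S_{PC}(x) = \frac{2\,PC_1(x)\,PC_2(x) + T_1}{PC_1^2(x) + PC_2^2(x) + T_1}, \qquad
S_G(x) = \frac{2\,G_1(x)\,G_2(x) + T_2}{G_1^2(x) + G_2^2(x) + T_2},
\qquad
\mathrm{FSIM} = \frac{\sum_{x \in \Omega} S_{PC}(x)\, S_G(x)\, PC_m(x)}{\sum_{x \in \Omega} PC_m(x)},
\quad PC_m(x) = \max\big(PC_1(x),\, PC_2(x)\big).
```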
In summary, the analysis of the fitness function value, PSNR, SSIM, and FSIM in multi-threshold image segmentation confirms that the OPBNGO algorithm proposed in this paper, owing to the advanced nature of its search strategies, achieves outstanding results in all four metrics. It can effectively segment mural images with the minimum image distortion rate and the highest structural similarity, making it a promising method for multi-threshold image segmentation.

4.2.6. Convergence Analysis of Mural Image Segmentation

The preceding analysis of the key performance metrics for the OPBNGO algorithm in addressing mural image segmentation tasks has validated its potential as a promising approach for mural image segmentation. However, beyond segmentation accuracy, the convergence characteristics of an algorithm during image segmentation are equally critical. A robust algorithm should exhibit a faster convergence rate to ensure its practical applicability. Consequently, this section focuses on analyzing the convergence behavior of the OPBNGO algorithm when solving mural image segmentation problems. The experimental results for a segmentation threshold count of eight are illustrated in Figure 16, where the X-axis denotes the number of iterations, and the Y-axis represents the fitness function value.
As illustrated in the figure, each algorithm effectively enhances the fitness function value and, in most cases, attains a stable convergence state, thereby validating the efficacy of each algorithm in solving mural image segmentation problems. Notably, the OPBNGO algorithm establishes a notable lead after 30 iterations in the majority of scenarios. This is primarily attributed to the integration of three learning strategies, which significantly bolster the algorithm’s exploitation capabilities, enabling it to secure an early advantage during the initial iterations. As the iterations progress, the OPBNGO algorithm’s lead widens further, largely due to its superior global search capabilities, which allow it to effectively escape local optima traps in segmentation threshold combinations during the later stages of iteration. Additionally, it is evident that the OPBNGO algorithm achieves a stable convergence state after approximately 60 iterations, demonstrating a faster convergence rate compared to the benchmark algorithms. These experimental observations substantiate that the proposed OPBNGO algorithm, owing to its advanced strategies, not only achieves higher convergence accuracy but also exhibits a faster convergence speed, thereby qualifying it as a practical and effective approach for mural image segmentation.
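As an illustration of how convergence curves such as those in Figure 16 are typically recorded, the sketch below logs the best-so-far fitness at every iteration; the optimizer callback and its parameters are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np
import matplotlib.pyplot as plt

def track_convergence(optimize_one_iteration, n_iterations=500):
    """Run a (hypothetical) iterative optimizer and log the best-so-far fitness."""
    best_so_far = -np.inf
    history = []
    for _ in range(n_iterations):
        iteration_best = optimize_one_iteration()   # best fitness found this iteration
        best_so_far = max(best_so_far, iteration_best)
        history.append(best_so_far)
    return history

# Dummy optimizer standing in for OPBNGO on one mural image.
rng = np.random.default_rng(1)
history = track_convergence(lambda: rng.normal(loc=2000.0, scale=50.0))
plt.plot(history)
plt.xlabel("Iterations")
plt.ylabel("Fitness function value")
plt.show()
```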

4.2.7. Computational Time Analysis of Mural Image Segmentation

In this section, we primarily focus on quantifying the actual runtime performance of the algorithm when addressing mural image segmentation tasks. The detailed numerical statistics are presented in Supplementary Table S5. To facilitate discussion, Figure 17 illustrates the ranking of algorithms’ runtime metrics when solving the segmentation problem for eight mural images, while Figure 18 depicts the average runtime rankings of the algorithms across varying numbers of segmentation thresholds. Specifically, in Figure 17, subplots (a), (b), (c), and (d) correspond to the runtime rankings of the algorithm when the number of segmentation thresholds is set to two, four, six, and eight, respectively.
As illustrated in Figure 17, the OPBNGO algorithm achieves the top runtime ranking across all eight mural image segmentation tasks when the number of segmentation thresholds is set to two and six, attaining a 100% success rate. For threshold counts of four and eight, OPBNGO ranks first in seven out of eight tasks, yielding an 87.5% success rate. Furthermore, Figure 18 demonstrates that, across varying numbers of segmentation thresholds, the OPBNGO algorithm consistently exhibits lower bar heights, indicating shorter actual runtimes. These experimental analyses substantiate that the proposed OPBNGO algorithm achieves faster execution speeds when solving mural image segmentation problems, effectively reducing computational overhead and minimizing resource wastage. Consequently, it qualifies as a promising methodology for mural image segmentation.
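The runtime reported here corresponds to wall-clock time per independent run; a minimal measurement sketch (run_segmentation is a hypothetical callable standing in for one full segmentation run) is shown below.

```python
import time

def timed_run(run_segmentation, repeats=30):
    """Average wall-clock runtime (in seconds) over independent runs."""
    durations = []
    for _ in range(repeats):
        start = time.perf_counter()
        run_segmentation()                  # one full segmentation run
        durations.append(time.perf_counter() - start)
    return sum(durations) / len(durations)
```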

5. Conclusions and Future Works

In this paper, an enhanced NGO algorithm, OPBNGO, is proposed to address the decreased optimization accuracy caused by the original NGO algorithm’s insufficient global exploration capability, inadequate exploitation capability, and imbalance between the exploration and exploitation phases when solving mural image segmentation problems. OPBNGO integrates three learning strategies that significantly improve the algorithm’s optimization performance. Firstly, to tackle the NGO algorithm’s inadequate exploration capability, an off-center learning strategy is introduced, which enhances population diversity during the algorithm’s execution and strengthens its global search capability. Secondly, to address the imbalance between the exploration and exploitation phases, a partitioned learning strategy is proposed, which updates individual information through different learning methods, achieving a better balance between the two phases and improving the algorithm’s ability to escape local suboptimal solutions. Finally, to address the reduced optimization accuracy caused by the NGO algorithm’s inadequate exploitation capability, a Bernstein-weighted learning strategy is proposed, which uses the weighted properties of Bernstein polynomials to guide population individuals with weighted individuals, effectively enhancing the algorithm’s exploitation performance and improving optimization accuracy. Subsequently, the OPBNGO algorithm was applied to address the segmentation problem for eight mural images. The experimental results demonstrate the following:
(1) Compared to high-performing benchmark algorithms, the OPBNGO algorithm achieves a 96.87% win rate in terms of the fitness function value. (2) It attains a 93.75% win rate across the peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and feature similarity index measure (FSIM) metrics. These outcomes highlight the algorithm’s robust performance in mural image segmentation, as it maximizes inter-class variance in the images while preserving structural, edge, and luminance information to the greatest extent possible. Consequently, the OPBNGO algorithm yields superior image segmentation quality and reduces distortion rates, qualifying it as a promising methodology for mural image segmentation.
While the OPBNGO algorithm proposed in this study has shown promising segmentation performance on mural images, there is still room for improvement, particularly on specific images where performance limitations exist. Therefore, future work will focus on the following aspects: (1) developing tailored improvement strategies, such as leveraging reinforcement learning for dynamic strategy selection, to enhance segmentation efficacy on these challenging cases; (2) expanding the evaluation of OPBNGO beyond mural images to broader image segmentation domains for comprehensive performance testing and validation; and (3) developing a multi-objective variant to address segmentation tasks across diverse application fields, since the current OPBNGO is designed for single-objective segmentation, thereby advancing its versatility and applicability.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/biomimetics10060373/s1, Table S1: The fitness function value of algorithms in mural image segmentation; Table S2: The PSNR value of algorithms in mural image segmentation; Table S3: The SSIM value of algorithms in mural image segmentation; Table S4: The FSIM value of algorithms in mural image segmentation; Table S5: The runtime of algorithms in mural image segmentation.

Author Contributions

Conceptualization, J.W.; methodology, J.W. and Z.B.; software, J.W.; validation, Z.B., J.W. and H.D.; formal analysis, J.W.; investigation, J.W. and H.D.; resources, Z.B. and H.D.; data curation, J.W.; writing—original draft preparation, J.W.; writing—review and editing, J.W., Z.B. and H.D.; visualization, J.W. and Z.B.; supervision, H.D.; project administration, H.D.; funding acquisition, H.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data supporting the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

We extend our deepest gratitude to the reviewers and the editorial team for their invaluable efforts and time invested in thoroughly reviewing our manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Li, H. Intelligent restoration of ancient murals based on discrete differential algorithm. J. Comput. Methods Sci. Eng. 2021, 21, 803–814.
2. Krishnan, B.; Ganesan, A.R.; Balasubramani, R.; Nguyen, D.D.; Chang, S.W. Chrysoeriol ameliorates hyperglycemia by regulating the carbohydrate metabolic enzymes in streptozotocin-induced diabetic rats. Food Sci. Hum. Wellness 2020, 9, 346–354.
3. Teshita, A.; Khan, W.; Ullah, A.; Iqbal, B.; Ahmad, N. Soil Nematodes in Agroecosystems: Linking Cropping System’s Rhizosphere Ecology to Nematode Structure and Function. J. Soil Sci. Plant Nutr. 2024, 24, 6467–6482.
4. Yuexia, C.; Long, C.; Ruochen, W.; Xing, X.; Yujie, S. Modeling and test on height adjustment system of electrically-controlled air suspension for agricultural vehicles. Int. J. Agric. Biol. Eng. 2016, 9, 40–47.
5. Qiu, H.; Gao, L.; Wang, J.; Pan, J.; Yan, Y.; Zhang, X. A precise and efficient detection of Beta-Cyfluthrin via fluorescent molecularly imprinted polymers with ally fluorescein as functional monomer in agricultural products. Food Chem. 2017, 217, 620–627.
6. Mahmood, A.; Hu, Y.; Tanny, J.; Asante, E.A. Effects of shading and insect-proof screens on crop microclimate and production: A review of recent advances. Sci. Hortic. 2018, 241, 241–251.
7. Du, Z.; Hu, Y.; Ali Buttar, N.; Mahmood, A. X-ray computed tomography for quality inspection of agricultural products: A review. Food Sci. Nutr. 2019, 7, 3146–3160.
8. Wang, Y.; Xu, J.; Qiu, Y.; Li, P.; Liu, B.; Yang, L. Highly specific monoclonal antibody and sensitive quantum dot beads-based fluorescence immunochromatographic test strip for tebuconazole assay in agricultural products. J. Agric. Food Chem. 2019, 67, 9096–9103.
9. Shen, L.; Xiong, X.; Zhang, D.; Zekrumah, M.; Hu, Y.; Gu, X. Optimization of betacyanins from agricultural by-products using pressurized hot water extraction for antioxidant and in vitro oleic acid-induced steatohepatitis inhibitory activity. J. Food Biochem. 2019, 43, e13044.
10. Mao, Y.; Sun, M.; Hong, X.; Chakraborty, S.; Du, D. Semi-quantitative and quantitative detection of ochratoxin A in agricultural by-products using a self-assembling immunochromatographic strip. J. Sci. Food Agric. 2021, 101, 1659–1665.
11. Jiao, L.J.; Wang, W.J.; Li, B.J.; Zhao, Q.S. Wutai mountain mural inpainting based on improved block matching algorithm. J. Comput. Aid Des. Comput. Graph 2019, 31, 119–125.
12. Zhang, J.; Fu, W. Sponge effect of aerated concrete on phosphorus adsorption-desorption from agricultural drainage water in rainfall. Soil Water Res. 2020, 15, 220–227.
13. Osae, R.; Essilfie, G.; Alolga, R.N.; Akaba, S.; Song, X.; Owusu-Ansah, P.; Zhou, C. Application of non-thermal pretreatment techniques on agricultural products prior to drying: A review. J. Sci. Food Agric. 2020, 100, 2585–2599.
14. Jin, Y.; Liu, J.; Xu, Z.; Yuan, S.; Li, P.; Wang, J. Development status and trend of agricultural robot technology. Int. J. Agric. Biol. Eng. 2021, 14, 1–19.
15. Ahmed, S.; Qiu, B.; Ahmad, F.; Kong, C.W.; Xin, H. A state-of-the-art analysis of obstacle avoidance methods from the perspective of an agricultural sprayer UAV’s operation scenario. Agronomy 2021, 11, 1069.
16. Hu, Y.; Chen, Y.; Wei, W.; Hu, Z.; Li, P. Optimization design of spray cooling fan based on CFD simulation and field experiment for horticultural crops. Agriculture 2021, 11, 566.
17. Guo, Z.; Chen, P.; Yosri, N.; Chen, Q.; Elseedi, H.R.; Zou, X.; Yang, H. Detection of heavy metals in food and agricultural products by surface-enhanced Raman spectroscopy. Food Rev. Int. 2023, 39, 1440–1461.
18. Awais, M.; Li, W.; Hussain, S.; Cheema, M.J.M.; Li, W.; Song, R.; Liu, C. Comparative evaluation of land surface temperature images from unmanned aerial vehicle and satellite observation for agricultural areas using in situ data. Agriculture 2022, 12, 184.
19. Ma, J.; Liu, K.; Dong, X.; Chen, C.; Qiu, B.; Zhang, S. Effects of leaf surface roughness and contact angle on in vivo measurement of droplet retention. Agronomy 2022, 12, 2228.
20. Yu, Y.; Hao, S.; Guo, S.; Tang, Z.; Chen, S. Motor torque distribution strategy for different tillage modes of agricultural electric tractors. Agriculture 2022, 12, 1373.
21. Zhu, Z.; Zeng, L.; Chen, L.; Zou, R.; Cai, Y. Fuzzy adaptive energy management strategy for a hybrid agricultural tractor equipped with HMCVT. Agriculture 2022, 12, 1986.
22. Yu, T.; Lin, C.; Zhang, S.; Ding, X.; Wu, J.; Zhang, J. End-to-end partial convolutions neural networks for Dunhuang grottoes wall-painting restoration. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Republic of Korea, 27–28 October 2019.
23. Cui, M.; Guo, Y.; Chen, J. Influence of transfer plot area and location on chemical input reduction in agricultural production: Evidence from China. Agriculture 2023, 13, 1794.
24. Jiang, Y.; Zhang, Y.; Li, H. Research progress and analysis on comprehensive utilization of livestock and poultry biogas slurry as agricultural resources. Agriculture 2023, 13, 2216.
25. Pan, Q.; Lu, Y.; Hu, H.; Hu, Y. Review and research prospects on sprinkler irrigation frost protection for horticultural crops. Sci. Hortic. 2024, 326, 112775.
26. Chen, L.; Zhang, Z.; Li, H.; Zhang, X. Maintenance skill training gives agricultural socialized service providers more advantages. Agriculture 2023, 13, 135.
27. Wang, B.; Du, X.; Wang, Y.; Mao, H. Multi-machine collaboration realization conditions and precise and efficient production mode of intelligent agricultural machinery. Int. J. Agric. Biol. Eng. 2024, 17, 27–36.
28. Han, P.H.; Chen, Y.S.; Liu, I.S.; Jang, Y.P.; Tsai, L.; Chang, A.; Hung, Y.P. A compelling virtual tour of the dunhuang cave with an immersive head-mounted display. IEEE Comput. Graph. Appl. 2019, 40, 40–55.
29. Abualigah, L.; Almotairi, K.H.; Elaziz, M.A. Multilevel thresholding image segmentation using meta-heuristic optimization algorithms: Comparative analysis, open challenges and new trends. Appl. Intell. 2023, 53, 11654–11704.
30. Khishe, M.; Mosavi, M.R. Chimp optimization algorithm. Expert Syst. Appl. 2020, 149, 113338.
31. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 2022, 242, 108320.
32. Houssein, E.H.; Oliva, D.; Samee, N.A.; Mahmoud, N.F.; Emam, M.M. Liver cancer algorithm: A novel bio-inspired optimizer. Comput. Biol. Med. 2023, 165, 107389.
33. Zhao, W.; Wang, L.; Zhang, Z.; Fan, H.; Zhang, J.; Mirjalili, S.; Cao, Q. Electric eel foraging optimization: A new bio-inspired optimizer for engineering applications. Expert Syst. Appl. 2024, 238, 122200.
34. Sulaiman, M.H.; Mustaffa, Z.; Saari, M.M.; Daniyal, H. Barnacles mating optimizer: A new bio-inspired algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2020, 87, 103330.
35. El-Kenawy, E.S.M.; Khodadadi, N.; Mirjalili, S.; Abdelhamid, A.A.; Eid, M.M.; Ibrahim, A. Greylag goose optimization: Nature-inspired optimization algorithm. Expert Syst. Appl. 2024, 238, 122147.
36. Hu, G.; Guo, Y.; Wei, G.; Abualigah, L. Genghis Khan shark optimizer: A novel nature-inspired algorithm for engineering optimization. Adv. Eng. Inform. 2023, 58, 102210.
37. Hu, G.; Cheng, M.; Houssein, E.H.; Hussien, A.G.; Abualigah, L. SDO: A novel sled dog-inspired optimizer for solving engineering problems. Adv. Eng. Inform. 2024, 62, 102783.
38. Trojovská, E.; Dehghani, M.; Trojovský, P. Zebra optimization algorithm: A new bio-inspired optimization algorithm for solving optimization algorithm. IEEE Access 2022, 10, 49445–49473.
39. Zhao, Y.; Fu, S.; Zhang, L.; Huang, H. Aitken optimizer: An efficient optimization algorithm based on the Aitken acceleration method. J. Supercomput. 2025, 81, 264.
40. Mozhdehi, A.T.; Khodadadi, N.; Aboutalebi, M.; El-kenawy, E.S.M.; Hussien, A.G.; Zhao, W.; Mirjalili, S. Divine Religions Algorithm: A novel social-inspired metaheuristic algorithm for engineering and continuous optimization problems. Clust. Comput. 2025, 28, 253.
41. Zhao, W.; Wang, L.; Zhang, Z.; Mirjalili, S.; Khodadadi, N.; Ge, Q. Quadratic Interpolation Optimization (QIO): A new optimization algorithm based on generalized quadratic interpolation and its applications to real-world engineering problems. Comput. Methods Appl. Mech. Eng. 2023, 417, 116446.
42. Braik, M.; Al-Hiary, H.; Alzoubi, H.; Hammouri, A.; Azmi Al-Betar, M.; Awadallah, M.A. Tornado optimizer with Coriolis force: A novel bio-inspired meta-heuristic algorithm for solving engineering problems. Artif. Intell. Rev. 2025, 58, 123.
43. Deng, L.; Liu, S. Snow ablation optimizer: A novel metaheuristic technique for numerical optimization and engineering design. Expert Syst. Appl. 2023, 225, 120069.
44. Abdel-Basset, M.; Mohamed, R.; Sallam, K.M.; Chakrabortty, R.K. Light spectrum optimizer: A novel physics-inspired metaheuristic optimization algorithm. Mathematics 2022, 10, 3466.
45. Shehadeh, H.A. Chernobyl disaster optimizer (CDO): A novel meta-heuristic method for global optimization. Neural Comput. Appl. 2023, 35, 10733–10749.
46. Houssein, E.H.; Abdalkarim, N.; Hussain, K.; Mohamed, E. Accurate multilevel thresholding image segmentation via oppositional Snake Optimization algorithm: Real cases with liver disease. Comput. Biol. Med. 2024, 169, 107922.
47. Lian, J.; Hui, G.; Ma, L.; Zhu, T.; Wu, X.; Heidari, A.A.; Chen, H. Parrot optimizer: Algorithm and applications to medical problems. Comput. Biol. Med. 2024, 172, 108064.
48. Qiao, L.; Liu, K.; Xue, Y.; Tang, W.; Salehnia, T. A multi-level thresholding image segmentation method using hybrid Arithmetic Optimization and Harris Hawks Optimizer algorithms. Expert Syst. Appl. 2024, 241, 122316.
49. Yuan, C.; Zhao, D.; Heidari, A.A.; Liu, L.; Chen, Y.; Wu, Z.; Chen, H. Artemisinin optimization based on malaria therapy: Algorithm and applications to medical image segmentation. Displays 2024, 84, 102740.
50. Chen, D.; Ge, Y.; Wan, Y.; Deng, Y.; Chen, Y.; Zou, F. Poplar optimization algorithm: A new meta-heuristic optimization technique for numerical optimization and image segmentation. Expert Syst. Appl. 2022, 200, 117118.
51. Wang, J.; Bei, J.; Song, H.; Zhang, H.; Zhang, P. A whale optimization algorithm with combined mutation and removing similarity for global optimization and multilevel thresholding image segmentation. Appl. Soft Comput. 2023, 137, 110130.
52. Das, A.; Namtirtha, A.; Dutta, A. Lévy–Cauchy arithmetic optimization algorithm combined with rough K-means for image segmentation. Appl. Soft Comput. 2023, 140, 110268.
53. Wang, Z.; Yu, F.; Wang, D.; Liu, T.; Hu, R. Multi-threshold segmentation of breast cancer images based on improved dandelion optimization algorithm. J. Supercomput. 2024, 80, 3849–3874.
54. Mostafa, R.R.; Houssein, E.H.; Hussien, A.G.; Singh, B.; Emam, M.M. An enhanced chameleon swarm algorithm for global optimization and multi-level thresholding medical image segmentation. Neural Comput. Appl. 2024, 36, 8775–8823.
55. Hashim, F.A.; Hussien, A.G.; Bouaouda, A.; Samee, N.A.; Khurma, R.A.; Alamro, H.; Al-Betar, M.A. An enhanced exponential distribution optimizer and its application for multi-level medical image thresholding problems. Alex. Eng. J. 2024, 93, 142–188.
56. Dehghani, M.; Hubálovský, Š.; Trojovský, P. Northern goshawk optimization: A new swarm-based algorithm for solving optimization problems. IEEE Access 2021, 9, 162059–162080.
57. Deng, H.; Peng, L.; Zhang, H.; Yang, B.; Chen, Z. Ranking-based biased learning swarm optimizer for large-scale optimization. Inf. Sci. 2019, 493, 120–137.
58. Wu, G.; Mallipeddi, R.; Suganthan, P.N.; Wang, R.; Chen, H. Differential evolution with multi-population based ensemble of mutation strategies. Inf. Sci. 2016, 329, 329–345.
59. Zhang, X.; Lin, Q. Three-learning strategy particle swarm algorithm for global optimization problems. Inf. Sci. 2022, 593, 289–313.
60. Huynh-Thu, Q.; Ghanbari, M. Scope of validity of PSNR in image/video quality assessment. Electron. Lett. 2008, 44, 800–801.
61. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
62. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386.
63. Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 1658–1665.
64. Sallam, K.M.; Elsayed, S.M.; Chakrabortty, R.K.; Ryan, M.J. Improved multi-operator differential evolution algorithm for solving unconstrained problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; IEEE: New York, NY, USA, 2020; pp. 1–8.
65. Ren, L.; Heidari, A.A.; Cai, Z.; Shao, Q.; Liang, G.; Chen, H.L.; Pan, Z. Gaussian kernel probability-driven slime mould algorithm with new movement mechanism for multi-level image segmentation. Measurement 2022, 192, 110884.
66. Ma, G.; Yue, X. An improved whale optimization algorithm based on multilevel threshold image segmentation using the Otsu method. Eng. Appl. Artif. Intell. 2022, 113, 104960.
67. Yuan, C.; Zhao, D.; Heidari, A.A.; Liu, L.; Chen, Y.; Chen, H. Polar lights optimizer: Algorithm and applications in image segmentation and feature selection. Neurocomputing 2024, 607, 128427.
Figure 1. Execution flowchart of algorithm. (a). NGO algorithm. (b). OPBNGO algorithm.
Figure 2. The change chart of the adaptive factor.
Figure 3. The schematic diagram of the partitioned learning strategy.
Figure 4. The schematic diagram of the Bernstein-weighted learning strategy.
Figure 5. The change chart of second-order Bernstein polynomials.
Figure 6. The information of eight mural images.
Figure 7. Figure illustrating results of strategy ablation experiments.
Figure 8. Ranking plot of fitness function values for mural image segmentation. (a). nTH = 2. (b). nTH = 4. (c). nTH = 6. (d). nTH = 8.
Figure 9. Bar chart of average rankings of fitness function values for mural image segmentation.
Figure 10. Ranking plot of PSNR values for mural image segmentation. (a). nTH = 2. (b). nTH = 4. (c). nTH = 6. (d). nTH = 8.
Figure 11. Bar chart of average rankings of PSNR values for mural image segmentation.
Figure 12. Ranking plot of SSIM values for mural image segmentation. (a). nTH = 2. (b). nTH = 4. (c). nTH = 6. (d). nTH = 8.
Figure 13. Bar chart of average rankings of SSIM values for mural image segmentation.
Figure 14. Ranking plot of FSIM values for mural image segmentation. (a). nTH = 2. (b). nTH = 4. (c). nTH = 6. (d). nTH = 8.
Figure 15. Bar chart of average rankings of FSIM values for mural image segmentation.
Figure 16. Convergence plot for mural image segmentation algorithms. (a). M1. (b). M2. (c). M3. (d). M4. (e). M5. (f). M6. (g). M7. (h). M8.
Figure 17. Ranking plot of runtime for mural image segmentation. (a). nTH = 2. (b). nTH = 4. (c). nTH = 6. (d). nTH = 8.
Figure 18. Bar chart of average rankings of runtime for mural image segmentation.
Table 1. Summary of current research on image segmentation algorithms.

Authors | Year | Algorithms | Main Strategies | Results
Houssein et al. [46] | 2024 | SO-OBL | Opposition-based learning | High search performance
Lian et al. [47] | 2024 | PO | Innovative development strategies | High image segmentation performance
Qiao et al. [48] | 2024 | AOA-HHO | Hybrid algorithm | Low image distortion rate
Yuan et al. [49] | 2024 | AO | Innovative exploration strategies | High structural similarity
Chen et al. [50] | 2022 | POA | Innovative exploration strategies | High feature similarity
Wang et al. [51] | 2023 | CRWOA | Crossover and similarity removal strategies | Excellent fitness function value
Das et al. [52] | 2023 | LCAOA | Lévy–Cauchy variation | High feature similarity
Wang et al. [53] | 2024 | IDOA | Opposition-based learning | High image segmentation performance
Mostafa et al. [54] | 2024 | ICSA | Lévy, Gaussian, and Cauchy perturbation strategies | High feature similarity
Hashim et al. [55] | 2024 | mEDO | Phasor operators and an adaptive optimal mutation strategy | Low image distortion rate
Table 2. Parameter settings of the compared algorithms for mural image segmentation.

Name | Year | Parameter Settings
LSHADE [63] | 2014 | Control parameters: NP_init = 18·D, NP_min = 4, |A| = 2.6·NP, p = 0.11, H = 6
IMODE [64] | 2020 | Control parameters: D = 2, arch_rate = 2.6
MGSMA [65] | 2022 | Variable: WEP = WEP_min + t·(WEP_max - WEP_min)/T
NGO [56] | 2021 | Variable: R = 0.02·(1 - t/T)
RAVWOA [66] | 2022 | Variable: a = 2·(1 - t/T)
PLO [67] | 2024 | Variables: W1 = 2/(1 + e^(-2·(t/T)^4)) - 1, W2 = e^(-(2t/T)^3)
Table 3. Visualization of mural image segmentation results.

(Image panels: segmented versions of mural images M1–M8 at nTH = 2, 4, 6, and 8; panels not reproduced here.)
