Article

Forest Canopy Image Segmentation Based on the Parametric Evolutionary Barnacle Optimization Algorithm

1. School of Electrical Engineering, Suihua University, Suihua 152000, China
2. College of Computer and Control Engineering, Northeast Forestry University, Harbin 150040, China
* Author to whom correspondence should be addressed.
Forests 2025, 16(3), 419; https://doi.org/10.3390/f16030419
Submission received: 26 January 2025 / Revised: 17 February 2025 / Accepted: 22 February 2025 / Published: 25 February 2025
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)

Abstract

Forest canopy imaging is a necessary technical means for obtaining canopy parameters, and image segmentation is an essential factor affecting the accurate extraction of those parameters. To address the mis-segmentation of forest canopy images caused by their complex structure, this study proposes a forest canopy image segmentation method based on the parametric evolutionary barnacle optimization algorithm (PEBMO). The PEBMO algorithm uses a wide range of nonlinearly increasing penis coefficients to better balance its exploration and exploitation processes and dynamically decreasing reproduction coefficients in place of the Hardy-Weinberg law coefficients to improve its exploitation ability; in addition, the parent generation of barnacle particles (at pl = 0.5) undergoes Chebyshev chaotic perturbation to keep the algorithm from premature convergence. Four types of canopy images were used as segmentation objects. With Kapur entropy as the fitness function, the PEBMO algorithm selects the optimal thresholds. The segmentation performance of each algorithm is comprehensively evaluated by the fitness value, standard deviation, structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and feature similarity index (FSIM). The PEBMO algorithm outperformed the comparison algorithms in 91.67%, 55.56%, 62.5%, 69.44%, and 63.89% of cases for the respective evaluation metrics. The experimental results show that the PEBMO algorithm can effectively improve the segmentation accuracy and quality of forest canopy images.

1. Introduction

Image segmentation is the process of dividing the pixels of an original image into different sub-regions according to attributes such as color, texture, and intensity, thereby extracting the target in the image [1]. Standard segmentation methods include thresholding [2,3,4,5], region-based techniques [6], the watershed algorithm [7], and deep learning [8,9,10]. Region-based segmentation performs poorly on aggregated, adhering particles [11]. The watershed algorithm suffers from over-segmentation and from reflected light interfering with the image [12]. Deep learning is widely used in forest canopy image segmentation owing to its high segmentation accuracy, robustness, adaptability, and ability to handle large-scale data; U-Net, Mask R-CNN, and Transformers are classical segmentation architectures in deep learning. However, deep-learning-based segmentation methods suffer from high training costs and long training times, are sensitive to noise, and are not well suited to segmenting small-sample images with little feature information and high randomness. In contrast, the threshold segmentation method uses the information in the image histogram to find the thresholds of the image and distinguishes different target and background regions by setting multiple thresholds [13]. Because it relies on simple mathematical operations, it outperforms complex deep learning models in processing speed. The key to threshold segmentation is selecting an appropriate method to determine the segmentation thresholds. Threshold selection methods based on information entropy have attracted considerable attention owing to their clear mathematical foundations. Otsu entropy [14], Renyi entropy [15], and Kapur entropy [16] are typical information entropy methods.
The Otsu method selects segmentation thresholds that maximize the variance between the target and background classes. It is suitable when the areas occupied by the target and the background are close in size; when the two areas are disparate, Otsu segmentation becomes invalid [17]. The essence of Renyi entropy is to use a membership function to convert the image histogram information into a fuzzy domain and then obtain the optimal threshold based on the maximum entropy principle [18]. The adjustable parameter α is essential in Renyi entropy, and its value determines the form of the entropy. Although different segmentation effects can be obtained by adjusting α, selecting the optimal parameter is challenging, and improper selection may lead to over-segmentation or under-segmentation. The segmentation principle of Kapur entropy is to maximize the total entropy of the target and background classes. Because Kapur entropy computes the entropy from intra-class gray-level probabilities, it is not sensitive to the sizes of the segmented target and background, better retains small targets in the image [19], and is widely used in color image segmentation.
Forest canopy images have complex, varied backgrounds with widely varying gray levels and therefore require multiple thresholds for accurate segmentation. Traditional image segmentation methods use enumeration: when performing threshold segmentation, each threshold combination must be tested individually to select the optimum, and the computational complexity grows exponentially with the number of thresholds. To overcome these limitations, scholars treat finding the optimal thresholds in multi-threshold segmentation as a constrained optimization problem, and heuristic optimization algorithms search for the optimal thresholds to reduce the computational complexity. Intelligent optimization algorithms such as the golden jackal optimization algorithm [20], elephant herding optimization algorithm [21], sand cat optimization algorithm [22], and dung beetle optimization algorithm [23] have been successfully applied to image segmentation. Studies have shown that the performance of the intelligent algorithm significantly affects the segmentation accuracy and efficiency; improving intelligent algorithms to enhance their performance and thereby the accuracy and efficiency of image segmentation has become a research focus. Sukanta Nama proposed a quasi-reflected slime mould algorithm (QRSMA) combining the slime mould algorithm with a quasi-reflective learning mechanism; on the CEC2020 benchmark functions, QRSMA outperformed the slime mould algorithm in statistical, convergence, and diversity measures, and it also outperformed other existing methods in segmenting COVID-19 X-ray images [24]. Jiquan Wang et al. proposed a whale optimization algorithm combining mutation and de-similarity (CRWOA) to address the tendency of the whale optimization algorithm to fall into local optima; multilevel threshold segmentation was performed on ten grayscale images using Kapur and Otsu entropy as the objective functions.
The experimental results showed that CRWOA outperformed the five compared algorithms in convergence and segmentation quality [25]. Essam H. Houssein et al. proposed an improved chimp optimization algorithm (IChOA) based on contrastive learning and Lévy flight; IChOA was tested on an infrared image dataset using the Otsu and Kapur methods, and the experimental results show that it is robust for segmenting various positive and negative cases [26]. Laith Abualigah et al. proposed a novel hybrid algorithm (MPASSA) that determines the optimal thresholds in multi-threshold segmentation by borrowing properties of the marine predators and salp swarm algorithms. Multiple benchmark images were used to verify the performance of MPASSA, and the experimental results show that it avoids falling into local optima and has a strong solution capability for image segmentation problems [27].
To address the poor segmentation accuracy of forest canopy images caused by complex structure and uneven illumination, Wu et al. proposed a segmentation method based on a differential evolutionary whale optimization algorithm [28]; Tong et al. proposed a segmentation method based on an improved northern goshawk algorithm using multi-strategy fusion [29]; and Wang et al. proposed a segmentation method based on improved tuna swarm optimization [30]. The experimental results show that, compared with the original algorithms, the improved algorithms obtain higher segmentation accuracy on forest canopy images, proving the effectiveness and necessity of improved intelligent algorithms in multi-threshold segmentation of forest canopy images. In recent years, scholars have been enthusiastic about algorithmic innovation, and many intelligent algorithms with excellent performance have appeared, bringing new vitality to research on multi-threshold image segmentation. BMO is a novel bionic evolutionary optimization algorithm proposed by Sulaiman et al. (2018) that solves optimization problems by simulating the mating behavior of barnacles in nature [31]. Since its proposal, it has received extensive attention from scholars. In [32], quasi-inverse dyadic learning and logistic chaotic mapping were used to address the premature convergence of the BMO algorithm. In [33], a random walk was used to replace the random number rand in the remote insemination iteration formula, and the parameter p was replaced with a logistic chaotic mapping operator to mitigate premature convergence. In [34], the penis coefficient pl and the remote insemination iterative formula were improved using a logistic model to achieve adaptive parameter transformation and to avoid falling into local optima.
In [35], cubic chaos initialization was used to improve the diversity of the initial population, the parameter p was replaced with a hyperbolic sinusoidal modulation factor to improve the convergence speed, and Gaussian and Cauchy mutations were introduced to address the tendency of BMO to fall into local optima. The work in [36] integrates barnacle larvae settlement and adhesion behavior to increase population diversity and improve the global exploration performance of the algorithm, and uses positive and negative decline casting strategies to improve the local exploitation ability.
The BMO algorithm exhibits excellent performance with few control parameters. However, these parameters significantly affect its performance, and most improvements to BMO have centered on them. This is embodied in the following aspects: (1) A fixed penis coefficient leaves the exploration and exploitation processes unbalanced; although [34] realized adaptive conversion of pl, the range of variation of pl was small and differed little from a fixed penis coefficient. (2) The Hardy-Weinberg law coefficient p plays a decisive role in the exploitation ability of the BMO algorithm, and fixed coefficients cannot maintain population diversity, which weakens exploitation. Reference [33] used logistic chaos coefficients instead of the parameter p; although logistic chaos coefficients have strong ergodicity, their iteration sequences are unevenly distributed, which does not help increase population diversity. Reference [35] designed a nonlinearly decreasing parameter p in the interval [0, 1] to accelerate convergence, but p tends to 0 in the late iterations, which also loses population diversity. In addition, to improve the ability of the BMO algorithm to escape local optima, ref. [35] added a double perturbation, which effectively enhances that ability but also slows down convergence.
This paper proposes the PEBMO algorithm and applies it to multi-threshold segmentation of forest canopy images. The main contributions of this study are as follows: (a) to address the limitations of the BMO algorithm, a wide range of nonlinearly increasing penis coefficients, dynamically decreasing reproduction coefficients, and a Chebyshev chaotic perturbation strategy are employed to improve the exploration-exploitation balance, the exploitation ability, and the resistance to premature convergence of the BMO algorithm; (b) the PEBMO algorithm is combined with Kapur entropy to determine the optimal segmentation thresholds, improving the segmentation efficiency and accuracy of forest canopy images; (c) the performance of PEBMO and the PEBMO-based segmentation method is investigated on the CEC2017 test functions and forest canopy image problems, respectively; and (d) PEBMO is compared with five BMO variant algorithms and performs better than all of them. The forest canopy image segmentation results show that the PEBMO-based segmentation method outperforms the artificial bee colony algorithm (ABC) [37], chimp optimization algorithm (ChOA) [38], cuckoo search (CS) [39], flower pollination algorithm (FPA) [40], mayfly algorithm (MA) [41], improved mayfly algorithm (IMA) [42], and BMO.

2. Materials and Methods

2.1. Materials

The canopy images used in the experiment were obtained from the Liangshui Experimental Forest Farm of Northeast Forestry University. They were acquired randomly with a Panasonic DMC-LX5 camera (Panasonic Company, Kadoma City, Osaka, Japan) fitted with a Samyang AE 8/3.5 Aspherical IF MC fisheye lens (Samyang Optics, Changwon, Gyeongsangnam-do, Republic of Korea). The Liangshui Experimental Forest Farm is located in the southern section of the Xiaoxing'anling Mountain Range (eastern slope of the Dali Belt Range branch) in the eastern mountains of northeast China. The geographic coordinates are 128°47′8″–128°57′19″ E, 47°6′49″–47°16′10″ N. The location, on the eastern edge of the Eurasian continent, is deeply influenced by the oceanic climate and has prominent temperate continental monsoon characteristics. In winter, under the control of metamorphic continental air masses, the climate is cold, dry, and windy. In summer, under the influence of subtropical metamorphic oceanic air masses, precipitation is concentrated (June–August accounts for more than 60% of annual precipitation) and temperatures are high. The climate in spring and fall is variable: spring is windy with little precipitation and is prone to drought, while in fall the temperature drops sharply and early frosts occur often. The soils in the region are divided into four soil orders (leached, semi-hydromorphic, hydromorphic, and organic), four soil types (dark brown loam, meadow soil, swamp soil, and peat soil), and 14 subtypes. Because the altitude of the area is not high and there are no prominent high mountains, the soils show no apparent vertical distribution, only a zonal distribution pattern. Red pine, spruce, and birch forests dominate the forest farm, accounting for 63.7%, 11.4%, and 6.4% of the forested area, respectively. The forest cover rate is 98%.

2.2. Basic BMO

Initialization: In the BMO algorithm, barnacles are candidate solutions and the following matrix represents their population vectors:
X = \begin{bmatrix} x_1^1 & \cdots & x_1^N \\ \vdots & \ddots & \vdots \\ x_n^1 & \cdots & x_n^N \end{bmatrix}
N is the number of control variables (problem dimensions) and n is the population size or the number of barnacles. The upper-and lower-bound constraints for each control variable are as follows:
ub = [ ub_1, \ldots, ub_N ]
lb = [ lb_1, \ldots, lb_N ]
where ub_i and lb_i denote the upper and lower bounds of the ith variable, respectively. First, the initial population X is evaluated; then the fitness value of each barnacle is ranked, and the barnacle with the best current fitness value is placed at the top of X.
Selection: Randomly selecting the father and mother to be mated from the barnacle population, the formulae for selection are shown in (4) and (5):
x d = randperm ( n )
x m = randperm ( n )
where x d denotes the father to be mated and x m denotes the mother to be mated.
Reproduction: The value p l is essential to determine the exploration and exploitation processes. The value p l in the basic BMO algorithm is set to 7. If p l < 7 , the barnacle algorithm reproduces the offspring according to the Hardy-Weinberg theorem, which is the exploitation phase of the BMO algorithm. The barnacle mating iteration formula is given by (6).
x_i^{N\_new} = p x_d^N + q x_m^N
where p and q are the percentages of the father's and mother's characteristics in the next generation: p is a random number uniformly distributed in [0, 1] and q = (1 − p). x_d^N and x_m^N are the variables of the father and mother selected by Equations (4) and (5), respectively.
If p l > 7 , the offspring are produced by remote insemination, and this process can be considered as an exploration phase with the iterative formula shown in (7):
x_i^{n\_new} = rand() \times x_m^n
where rand() is a random number within [0, 1]. Equation (7) shows that the mother alone determines the generation of offspring during remote insemination.
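To make the update rules above concrete, the following minimal NumPy sketch implements one generation of the basic BMO. Two details are modeling assumptions for illustration only: pl is interpreted as the maximum index distance between the two selected parents (parents closer than pl mate by the Hardy-Weinberg rule, otherwise sperm-cast via Equation (7)), and survivors are chosen by greedy elitism over parents plus offspring.

```python
import numpy as np

def bmo_step(X, fitness, lb, ub, pl=7, rng=None):
    """One generation of a basic Barnacle Mating Optimizer (minimization).

    X       : (n, N) population, any row order
    fitness : callable mapping a (N,) vector to a scalar
    pl      : penis length; assumed here to bound the index distance
              between mating parents (an illustrative interpretation)
    """
    rng = np.random.default_rng() if rng is None else rng
    n, _ = X.shape
    dad = rng.permutation(n)                      # Eq. (4)
    mum = rng.permutation(n)                      # Eq. (5)
    off = np.empty_like(X)
    for i in range(n):
        if abs(int(dad[i]) - int(mum[i])) <= pl:  # exploitation, Eq. (6)
            p = rng.random()                      # Hardy-Weinberg share
            off[i] = p * X[dad[i]] + (1.0 - p) * X[mum[i]]
        else:                                     # exploration, Eq. (7)
            off[i] = rng.random() * X[mum[i]]
    off = np.clip(off, lb, ub)
    # elitist survivor selection: keep the best n of parents + offspring
    pool = np.vstack([X, off])
    order = np.argsort([fitness(x) for x in pool])
    return pool[order[:n]]
```

With this elitist selection the best fitness in the population can never worsen from one generation to the next, which is what the ranking step in the BMO description ("the barnacle with the best current fitness value is ranked at the top") relies on.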

2.3. PEBMO

2.3.1. Large Range of Non-Linear Increasing Penis Coefficients

The penis coefficient pl directly determines the transition between the exploration and exploitation of the BMO algorithm. In the basic BMO algorithm, pl is a fixed value of 7. A fixed penis coefficient defeats the original purpose of metaheuristic algorithms: a dynamic balance between exploration and exploitation. Therefore, it is necessary to design a dynamic, adaptive penis coefficient to improve the flexibility between exploration and exploitation. The improved pl parameter is given by Equation (8).
p_l^{new} = p_{l1} + p_{l2} + p_{l3}
p_{l1} = 27
p_{l2} = -e^{3(T - 2t)/T}
p_{l3} = \tan(t / T)
where t is the current number of iterations, and T is the maximum number of iterations.
As can be seen from Equation (8), p_l^{new} consists of three parts: p_{l1} is the range-adjustment part that determines the overall range of p_l^{new}. The design exploits nonlinearity to enhance the balance between exploration and exploitation, and the nonlinearity of the exponential function meets this requirement. The term p_{l2} increases nonlinearly, driving p_{l1} + p_{l2} across the interval [7, 26.95], so that p_l^{new} is small at the beginning of the iteration and gradually increases toward the end; p_{l2} thus determines the overall trend of p_l^{new} and is its core part. However, p_{l2} approaches its maximum near the end of the iteration and then barely changes, and a p_l^{new} that no longer changes increases the probability of the algorithm falling into a local optimum. It is therefore necessary to let p_l^{new} keep changing slightly at the end to enhance population diversity, while a large increase would hinder fast convergence. The nonlinearly increasing p_{l3}, in the interval [0, 1.5506], compensates for these shortcomings.
A p_l^{new} parameter ranging over the interval [7, 28] and increasing nonlinearly with the number of iterations was thus designed. At the beginning of the iteration, the p_l^{new} value is small, and most barnacle particle pairs fall outside the p_l^{new} range, fully exploring the solution space. As the number of iterations increases, p_l^{new} grows nonlinearly, and the algorithm gradually transitions from the exploration phase to the exploitation phase. At the end of the iteration, p_l^{new} reaches approximately 28, at which point most particles are in the exploitation stage, while a small number of barnacle particles still perform global exploration to find possible global optima. This design not only balances the exploration and exploitation processes of the BMO algorithm well and speeds up convergence but also increases the adaptivity of the algorithm. The proposed optimization mechanism is illustrated in Figure 1.
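The schedule above can be sketched numerically. The closed form below is a reconstruction consistent with the stated ranges, and it should be treated as an assumption rather than the authors' exact formula: p_{l1} = 27, p_{l2} = −e^{3(T−2t)/T} (spanning roughly [−20.09, −0.05], so that p_{l1} + p_{l2} runs over about [6.91, 26.95]), and p_{l3} = tan(t/T) (spanning about [0, 1.56]).

```python
import math

def pl_new(t, T):
    """Nonlinearly increasing penis coefficient, Eq. (8).

    Reconstruction (an assumption): pl1 = 27, pl2 = -exp(3(T - 2t)/T),
    pl3 = tan(t/T).  At t = 0 this gives about 6.91 (close to 7); at
    t = T it gives about 28.5 (close to the stated upper end of 28).
    """
    pl1 = 27.0
    pl2 = -math.exp(3.0 * (T - 2.0 * t) / T)   # rises from -e^3 to -e^-3
    pl3 = math.tan(t / T)                      # small late-stage increase
    return pl1 + pl2 + pl3
```

Because both p_{l2} and p_{l3} are strictly increasing in t, the schedule is monotone: exploration dominates early (small pl) and exploitation dominates late (large pl), as described in the text.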

2.3.2. Dynamically Decreasing Reproduction Factor

In the exploitation phase, the parameter p significantly affects the exploitation capability of the BMO algorithm. In the basic BMO algorithm, p is a fixed value of 0.6 for each iteration; the fixed parameter decreases the diversity of offspring propagated by Equation (6), resulting in weak local exploitation and a slow convergence rate.
The dynamically decreasing coefficients in the white-crowned chicken optimization algorithm [43] provide a faster convergence rate and stronger exploitation capability. Inspired by that algorithm, this study uses a dynamic parameter in place of p in the exploitation phase of the BMO algorithm. The iterative formula is as follows:
p_{new} = \left(2 - \frac{t}{T}\right) R \cos(2 R_1 \pi)
where t is the current number of iterations; T is the maximum number of iterations; R_1 is a random number in [−1, 1]; and R is a random number in the interval (0, 1). Figure 2a shows that R cos(2R_1π) fluctuates uniformly in the interval (−1, 1); its introduction increases the variability of p_new. From Figure 2b, the value of p_new contracts gradually from the interval (−2, 2) to (−1, 1). Even at the end of the iteration, p_new still fluctuates in (−1, 1), which fully retains population diversity and improves the exploitation ability of the algorithm. The factor (2 − t/T) decreases linearly with the number of iterations over the interval (1, 2), and comparing Figure 2a,b shows that adding it not only widens the fluctuation range of p_new but also accelerates convergence. The improved iteration formula is as follows:
x_i^{N\_new} = p_{new} x_d^N + q x_m^N
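Equation (12) can be sketched directly; the sampled coefficient lies strictly inside (−2, 2) at the first iteration and contracts to (−1, 1) by the last one, matching the behavior described above.

```python
import math
import random

def p_new(t, T, rng=random):
    """Dynamically decreasing reproduction coefficient, Eq. (12).

    (2 - t/T) shrinks linearly from 2 to 1 over the run, while
    R * cos(2 * R1 * pi) adds variability in (-1, 1), so the product
    contracts from the interval (-2, 2) to (-1, 1).
    """
    R = rng.random()             # R in [0, 1)
    R1 = rng.uniform(-1.0, 1.0)  # R1 in [-1, 1]
    return (2.0 - t / T) * R * math.cos(2.0 * R1 * math.pi)
```

Unlike a fixed p = 0.6, every offspring drawn with this coefficient mixes the parents differently, which is the source of the retained population diversity in the exploitation phase.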

2.3.3. Optimal Neighbourhood Chebyshev Chaotic Perturbations

When the algorithm stalls, there are two cases: (1) the algorithm has found the global optimum and the search is complete; or (2) the algorithm is stuck in a local optimum. Perturbing the algorithm merely because it stagnates is therefore not sufficiently discriminating. Some scholars add a perturbation at the optimal solution, which increases population diversity near the optimum and prevents the algorithm from converging prematurely there; however, it also affects the convergence speed. Therefore, in this study, we add a Chebyshev chaotic perturbation to the mother generation of barnacle particles at pl = 0.5. The iterative formula of the Chebyshev chaotic map is shown below:
x_{n+1} = \cos(k \arccos x_n), \quad x_n \in [-1, 1]
where k is the order.
When p l = 0.5 , the perturbation equation is as follows:
x_i^{N\_new} = x_{n+1} + x_m^N
Figure 3 shows ten common chaotic maps. The figure shows that the Chebyshev chaotic perturbation has good randomness, with a perturbation range of [−1, 1] that is wider than those of the other chaotic maps, producing a broader global distribution of particles with more comprehensive information. Adding the perturbation at pl = 0.5 maximally retains possible optimal solutions in the late iterations and increases the algorithm's ability to escape local optima.
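The Chebyshev map of Equation (14) is a short iteration; the sketch below (with an illustrative order k, a free parameter not fixed by the text) shows that the sequence always stays in [−1, 1], which is the perturbation range cited above.

```python
import math

def chebyshev_seq(x0, k=4, n=10):
    """Iterate the Chebyshev chaotic map x_{n+1} = cos(k * arccos(x_n)).

    For any x0 in [-1, 1] every iterate remains in [-1, 1], since it is
    the cosine of a real angle.  k is the order of the map.
    """
    assert -1.0 <= x0 <= 1.0
    xs = [x0]
    for _ in range(n):
        xs.append(math.cos(k * math.acos(xs[-1])))
    return xs
```

A useful sanity check: for k = 2 the map reduces to the double-angle identity, so starting from x0 = cos(θ) one step yields cos(2θ), e.g. 0.5 maps to −0.5.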

2.3.4. Convergence Rate Analysis

The PEBMO algorithm is designed to achieve improved performance by improving the algorithm parameters. The large range of nonlinear increasing penis coefficients, dynamic decreasing reproduction coefficients, and optimal neighborhood Chebyshev chaotic perturbations all have some effect on the convergence speed. It is analyzed explicitly as follows:
In intelligent algorithm optimization, the balance between exploration and exploitation has a significant impact on convergence speed. If the algorithm is too biased toward exploration (global search), it may waste time in irrelevant regions, resulting in slow convergence; if it is too biased toward exploitation (local search), it may fall into a local optimum prematurely and fail to find the global optimum. The parameter pl regulates the balance between exploration and exploitation in the BMO algorithm, where it is a fixed value of 7. A fixed search range may prevent the algorithm from covering the global space at the initial stage or from refining solutions at a later stage. Therefore, this study designs a large range of nonlinearly increasing penis coefficients to better balance the exploration and exploitation processes of the BMO algorithm. At the beginning of the iteration, the pl value is small, and the vast majority of barnacle particles are in the exploration stage, exploring the solution space. As the number of iterations increases, the pl value grows, and most of the particles enter the exploitation stage, finely exploiting the searched region. The large range of nonlinearly increasing penis coefficients enables the algorithm to find the best trade-off between global search and local refinement, thus finding the optimal solution more efficiently and significantly accelerating convergence.
The parameter p of the BMO algorithm in the exploitation phase directly determines the behavior of the algorithm in the local search and optimization process. If the exploitation intensity is too great, it may lead to a decrease in population diversity, and the algorithm falls into a local optimum; if the exploitation intensity is too low, it cannot fully utilize the regional information, and the convergence speed is slow. In the BMO algorithm, p is a random number between (0, 1). Although the random number can increase the population diversity in the exploitation phase, over-reliance on random exploration will lead to unclear search direction and slow convergence. The dynamic decreasing reproduction coefficient varies in a large range in the early period of iteration, giving the algorithm a strong ability to fully exploit the explored region in search of a possible optimal solution. As the number of iterations increases, the range of p decreases, avoiding over-exploitation and speeding up convergence. The design of the dynamically decreasing reproduction coefficient better balances the exploitation intensity and thus accelerates the convergence rate.
In order to improve the ability of the algorithm to jump out of the local optimal solution, adding perturbations to the optimal solution to increase the diversity of the population is an effective strategy for improvement. However, it also brings the defect of slow convergence speed of the algorithm. Therefore, in this study, the ability of the algorithm to jump out of the local optimal solution is increased by adding the Chebyshev chaotic perturbation at p l = 0.5. The limitation of excessive perturbation at the optimal solution, which leads to non-convergence of the algorithm, is avoided.

2.4. PEBMO Algorithm Design

In this study, the exploration and exploitation processes of the BMO algorithm are better balanced by designing a large range of penis coefficients; the dynamically decreasing reproduction coefficients increase population diversity and improve the exploitation capability of the algorithm; and the Chebyshev chaotic perturbation at pl = 0.5 increases the algorithm's ability to escape local optima. The flowchart of PEBMO is shown in Figure 4.
The basic steps of the PEBMO algorithm are as follows:
1. Randomly generate an initial population of size n. Initialize the position of each barnacle particle using Equation (1) and set the parameters required for the algorithm: N, n, T, and p_{l1}.
2. Calculate the fitness value of each barnacle particle and rank them, with the optimal solution at the top (B_best = the current optimal solution).
3. Calculate the p_l^{new} value according to Equation (8), the p_new value according to Equation (12), and the x_{n+1} value according to Equation (14).
4. Select the father according to Equation (4) and the mother according to Equation (5).
5. If the penis length of the chosen father is less than p l , the offspring are generated according to Equation (13); otherwise, the offspring are generated according to Equation (7).
6. If the penis length of the selected parent is 0.5, offspring are generated according to Equation (15).
7. The fitness value of each barnacle particle is calculated.
8. Sort the population and update B_best.
9. Determine whether the iteration termination condition is met; if so, exit the loop; otherwise, proceed to Step 2.
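The steps above can be assembled into a compact sketch. Several points are assumptions needed to make the sketch runnable and are not prescribed by the text: the reconstructed p_l^{new} schedule (27 − e^{3(T−2t)/T} + tan(t/T)), pl interpreted as a bound on the parents' index distance, the "pl = 0.5" Chebyshev perturbation applied with probability 0.5 per offspring, and elitist survivor selection.

```python
import math
import numpy as np

def pebmo(fitness, lb, ub, dim, n=30, T=200, k=4, seed=0):
    """Minimal PEBMO sketch (minimization); see lead-in for assumptions."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))                 # step 1: init
    fit = np.array([fitness(x) for x in X])
    order = np.argsort(fit)
    X, fit = X[order], fit[order]                     # step 2: rank
    x_cheb = rng.uniform(-1.0, 1.0)                   # chaotic state
    history = [float(fit[0])]
    for t in range(1, T + 1):
        # step 3: parameter schedules (reconstructed forms)
        pl = 27.0 - math.exp(3.0 * (T - 2.0 * t) / T) + math.tan(t / T)
        dad, mum = rng.permutation(n), rng.permutation(n)   # step 4
        off = np.empty_like(X)
        for i in range(n):
            # Chebyshev map, Eq. (14); clamp guards rounding error
            x_cheb = math.cos(k * math.acos(max(-1.0, min(1.0, x_cheb))))
            if abs(int(dad[i]) - int(mum[i])) <= pl:  # step 5: Eq. (13)
                p = (2.0 - t / T) * rng.random() * \
                    math.cos(2.0 * rng.uniform(-1.0, 1.0) * math.pi)
                off[i] = p * X[dad[i]] + (1.0 - p) * X[mum[i]]
            else:                                     # Eq. (7)
                off[i] = rng.random() * X[mum[i]]
            if rng.random() < 0.5:                    # step 6 (assumed)
                off[i] = x_cheb + X[mum[i]]           # Eq. (15)
        off = np.clip(off, lb, ub)
        # steps 7-8: evaluate, sort, keep the best n (elitism)
        pool = np.vstack([X, off])
        pool_fit = np.concatenate([fit, [fitness(x) for x in off]])
        keep = np.argsort(pool_fit)[:n]
        X, fit = pool[keep], pool_fit[keep]
        history.append(float(fit[0]))
    return X[0], float(fit[0]), history
```

Because of the elitist selection, the best fitness recorded in `history` is non-increasing, which is a convenient invariant to check when experimenting with the parameter schedules.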

2.5. Forest Canopy Image Segmentation Based on the PEBMO Algorithm

2.5.1. Multi-Level Threshold Image Segmentation

Threshold segmentation can be classified into second-level and multilevel thresholding according to the number of thresholds [44]. Second-level segmentation divides an image into two parts, target and background, using a single threshold, and is suitable for segmenting simple images. Multilevel threshold segmentation divides an image into regions with different gray levels using several thresholds, which is ideal for segmenting complex multi-target images. Canopy images contain more information, and single-threshold segmentation is insufficient to complete the segmentation task. For multilevel threshold segmentation, suppose the n thresholds are t_1, t_2, …, t_n; the gray-level mapping is
f = \begin{cases} l_0, & 0 \le f \le t_1 \\ l_1, & t_1 < f \le t_2 \\ \vdots \\ l_{n-1}, & t_{n-1} < f \le t_n \\ l_n, & t_n < f \le L - 1 \end{cases}
where l_0, l_1, …, l_n are the n + 1 gray levels of the segmented image and L = 256. Equation (16) shows that the key step of threshold segmentation is selecting the optimal thresholds, which determine the segmentation quality. Kapur entropy was used as the objective function in this study.
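The mapping of Equation (16) is exactly a binning operation and can be sketched with `np.digitize`. The choice of output levels is free in Equation (16); the interval midpoints used as the default here are an illustrative choice, not one prescribed by the paper.

```python
import numpy as np

def apply_thresholds(img, thresholds, levels=None):
    """Map gray values to n+1 output levels as in Eq. (16).

    img        : 2-D uint8 array with values in [0, 255]
    thresholds : increasing thresholds t1 < t2 < ... < tn
    levels     : optional n+1 output gray levels; defaults to interval
                 midpoints (an illustrative choice)
    """
    t = sorted(int(x) for x in thresholds)
    edges = [0] + t + [256]
    if levels is None:
        levels = [(lo + hi - 1) // 2 for lo, hi in zip(edges, edges[1:])]
    # np.digitize returns, for each pixel, the index of its interval:
    # 0 for f < t1, k for t_k <= f < t_{k+1}, n for f >= tn
    idx = np.digitize(img, t)
    return np.take(np.asarray(levels, dtype=np.uint8), idx)
```

Note that `np.digitize` places a pixel equal to t_k in the upper interval (t_k ≤ f < t_{k+1}); the boundary convention at exact threshold values differs slightly from the "≤" form of Equation (16) but affects only pixels exactly on a threshold.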

2.5.2. Kapur Entropy

Kapur entropy is a more commonly used information-entropy-based threshold selection method that is widely used in the multi-threshold segmentation of complex images. The principle of multi-threshold image segmentation based on Kapur entropy is as follows:
Assuming that the image gray level is L and the gray range is [ 0 , L 1 ] , the probability of a pixel in this gray level can be expressed as
p_i = \frac{n_i}{N}
\sum_{i=0}^{L-1} p_i = 1
where n i represents the sum of the number of pixels with gray level i and N represents the total number of pixels in the image. The Kapur entropy objective function for multi-thresholding is defined as:
H(t_1, t_2, \ldots, t_n) = H_0 + H_1 + \cdots + H_n
Among them,
H_0 = -\sum_{j=0}^{t_1 - 1} \frac{p_j}{\omega_0} \ln \frac{p_j}{\omega_0}, \quad \omega_0 = \sum_{j=0}^{t_1 - 1} p_j
H_1 = -\sum_{j=t_1}^{t_2 - 1} \frac{p_j}{\omega_1} \ln \frac{p_j}{\omega_1}, \quad \omega_1 = \sum_{j=t_1}^{t_2 - 1} p_j
\vdots
H_n = -\sum_{j=t_n}^{L-1} \frac{p_j}{\omega_n} \ln \frac{p_j}{\omega_n}, \quad \omega_n = \sum_{j=t_n}^{L-1} p_j
where $H_n$ is the entropy of each class after segmentation, $\omega_n$ is the probability of pixels occurring in that class, and $p_j$ is the probability of pixels with grey value $j$.
The optimal combination of thresholds is selected according to Equation (23).
$$f_{\mathrm{Kapur}} = \arg\max H(t_1, t_2, \ldots, t_n) \tag{23}$$
The combination that maximizes the entropy is the optimal threshold set. Exhaustive search over all threshold combinations is computationally expensive, so an intelligent optimization algorithm can be used to select the optimal thresholds, effectively addressing the low efficiency and accuracy of traditional methods. In this study, PEBMO was used to select the optimal thresholds.
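A minimal sketch of this objective, computed directly from an image histogram, is shown below; the function name `kapur_entropy` and the uniform-histogram example are illustrative, not from the paper:

```python
import numpy as np

def kapur_entropy(hist, thresholds, L=256):
    """Kapur entropy (Equation (19)) of a gray-level histogram for a
    candidate threshold set t_1 < t_2 < ... < t_n."""
    p = hist / hist.sum()                    # p_i = n_i / N, Equation (17)
    edges = [0] + sorted(thresholds) + [L]   # class boundaries [0,t1), [t1,t2), ...
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                   # class probability omega_k
        if w <= 0:
            continue                         # empty class contributes nothing
        q = p[lo:hi] / w
        q = q[q > 0]                         # skip zero bins to avoid log(0)
        total += -np.sum(q * np.log(q))      # class entropy H_k, Equations (20)-(22)
    return total

# Uniform histogram: each of the 4 classes of 64 bins contributes ln(64).
h = kapur_entropy(np.ones(256), [64, 128, 192])
```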

2.5.3. Implementation of the PEBMO-Based Segmentation Algorithm

To improve the segmentation performance on canopy images, Kapur entropy serves as the fitness function, and the PEBMO algorithm searches for the optimal thresholds to segment the canopy image accurately. Figure 5 shows the flowchart of the proposed multi-threshold segmentation of the canopy image based on the PEBMO algorithm. The framework of the proposed algorithm is as follows:
1. Input the image to be segmented.
2. Generate the population and parameters of PEBMO.
3. Calculate the probability of each grey level using Equation (17).
4. Calculate the fitness value of each search agent using the Kapur entropy of Equation (19).
5. The PEBMO algorithm finds the optimal thresholds and segments the canopy image.
6. Output the segmentation result.
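The steps above can be sketched as follows; note that a plain random search stands in for the PEBMO update rules, so this is only a structural illustration of the pipeline, not the proposed optimizer:

```python
import numpy as np

def segment_by_threshold_search(image, n_thresholds=4, n_agents=30, iters=50, seed=0):
    """Steps 1-6 above, with a simple random search standing in for PEBMO."""
    rng = np.random.default_rng(seed)
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                          # step 3: grey-level probabilities

    def kapur(t):                                  # step 4: Kapur entropy fitness
        edges = [0] + list(t) + [256]
        total = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            w = p[lo:hi].sum()
            if w > 0:
                q = p[lo:hi][p[lo:hi] > 0] / w
                total += -np.sum(q * np.log(q))
        return total

    best_t, best_f = None, -np.inf
    for _ in range(iters * n_agents):              # step 5: threshold search
        t = np.sort(rng.choice(np.arange(1, 256), size=n_thresholds, replace=False))
        f = kapur(t)
        if f > best_f:
            best_t, best_f = t, f
    return best_t, best_f                          # step 6

img = np.random.default_rng(1).integers(0, 256, size=(64, 64)).astype(np.uint8)
thresholds, fit = segment_by_threshold_search(img, n_thresholds=4)
```

The actual segmented image would then be produced by mapping each pixel to its class gray level, as in Equation (16).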

3. Results

This section consists of two parts: the testing and validation of the PEBMO algorithm’s performance and the testing and validation of the PEBMO-based segmentation algorithm.

3.1. Performance Testing and Analysis of the PEBMO Algorithm

3.1.1. CEC2017 Test Functions

CEC2017 is a collection of test functions used in the 2017 IEEE Congress on Evolutionary Computation (CEC) competition; the essential information is shown in Table 1. Among them, $F_1$–$F_3$ are single-peak functions containing only one global optimum, used to test the convergence speed and local search ability of an algorithm. $F_2$ shows unstable behavior, especially in higher dimensions, with significant performance variation across runs of the same algorithm, and is therefore usually excluded from high-dimensional tests; since this study does not involve high dimensions, $F_2$ is also tested. $F_4$–$F_{10}$ are multi-peak functions containing multiple local optima, used to test the exploration ability of algorithms and their ability to escape local optima; such functions are suited to testing performance on complex multi-peak optimization problems. $F_{11}$–$F_{20}$ are hybrid functions that combine multiple functions to create variety and complexity. $F_{21}$–$F_{30}$ are composite functions, combinations of several simple functions that together form a high-dimensional composite function. Hybrid and composite functions are used for large-scale optimization problems and test an algorithm's potential for solving complex real-world problems.

3.1.2. Experimental Setup

The experiment was conducted on a Dell Alienware G15 5530 computer (Dell, USA; purchased in Suihua, China) configured with an Intel CPU @ 2.50 GHz, 16 GB of RAM, and the Windows 11 operating system. The PEBMO algorithm was compared with five BMO variants to verify its performance. The algorithm parameters are listed in Table 2. The population size ($N$) was set to 30, and the maximum number of iterations ($T$) was 500. Each benchmark function was independently run 30 times to avoid chance errors in the optimization search. The fitness value, standard deviation, and convergence curve were used as evaluation indexes. The best results are shown in bold font.

3.1.3. Sensitivity Analysis

To objectively analyze the influence of the parameters $N$ and $T$ on the accuracy and stability of the PEBMO algorithm's optimization search, a sensitivity analysis was carried out. The functions in Table 1 were used as test functions, with the average fitness and standard deviation values as evaluation indices: the average fitness value reflects the optimization accuracy of the algorithm, and the standard deviation reflects its stability. The optimization accuracy and stability of the PEBMO algorithm were verified at different values of $N$ and $T$, respectively. Figure 6a shows the number of optimal average fitness values and standard deviation values obtained by the PEBMO algorithm at different values of $N$. The figure shows that the PEBMO algorithm obtains 13, 10, 6, and 4 optimal average fitness values and 11, 10, 3, and 6 optimal standard deviation values when $N$ is 30, 40, 50, and 60, respectively. No optimal fitness or standard deviation values were obtained for $N$ of 5, 10, and 20. This is because the algorithm cannot fully explore the solution space when $N$ is small, while the computational complexity increases when $N$ is large; therefore, neither a too large nor a too small $N$ is conducive to the algorithm's search. It is worth noting that there are three sets of tied optimal values among the average fitness values, so the total number of optimal averages exceeds 30. Figure 6b shows the number of optimal average fitness values and standard deviation values obtained by the PEBMO algorithm at different values of $T$. From the figure, it can be seen that different $T$ values affect the average fitness values more significantly than the standard deviation values. Thirteen optimal average fitness values and eight optimal standard deviation values were obtained at $T$ = 500, which is better than the other $T$ values.
To summarize, the parameters $N$ and $T$ affect the evaluation of the algorithm's performance, and values that are too large or too small are not conducive to the performance test. The experimental results show that the PEBMO algorithm obtains the most optimal fitness and standard deviation values when $N$ = 30, accounting for 43.33% and 36.67% of the totals, respectively, better than the other particle numbers. As for the number of iterations, the PEBMO algorithm obtained 43.33% of the optimal average fitness values and 26.67% of the optimal standard deviation values when $T$ = 500, better than the other $T$ values. Therefore, in the algorithm performance testing experiments, the population size was set to $N$ = 30 and the number of iterations to $T$ = 500.

3.1.4. Analysis of Optimization Accuracy and Stability Score

Figure 7 and Figure 8 show histograms of the numbers of optimal mean fitness values and optimal standard deviation values obtained by each BMO variant algorithm on the CEC2017 test functions. The average fitness value reflects the optimization accuracy of an algorithm, and the standard deviation value reflects its stability.
Figure 7 and Figure 8 show that, in comparison with the BMO variant algorithms, the PEBMO algorithm performs well in terms of optimization accuracy: it is 96.67% better than the BMO variants and 100% better than the base BMO algorithm. This shows that the PEBMO algorithm has high optimization accuracy and that each improvement strategy is effective. The PEBMO algorithm outperformed the comparison algorithms by 100% in optimization accuracy on the single-peak functions and by 66.67% in stability. This is owing to the introduction of the dynamically decreasing reproduction coefficient $p$, which effectively improves the exploitation capability of the BMO algorithm. IBMO [33] and IBMO [35] also modified the parameter $p$ to improve exploitation capability, but as the test results in Table 3 show, the improvement is modest and not as good as that of the other BMO variants.
The PEBMO algorithm has stronger optimization accuracy on the multi-peak functions, outperforming the comparison algorithms by 85.7%, indicating strong global search ability and the ability to escape local optima. The BMO algorithm also performs well on the multi-peak functions, indicating that BMO itself has good global search ability. To improve the global search ability of the BMO algorithm, IBMO [34] replaced $rand()$ with the chaos vector obtained from the logistic chaotic map, and IBMO [36] added coefficients to $rand()$. The test results in Table 3 show that, for the multi-peak functions, the optimization accuracy of IBMO [34] and IBMO [36] is not as good as that of the BMO algorithm, which is why this study does not modify the exploration phase of the BMO algorithm. The PEBMO algorithm outperforms the BMO algorithm because the extensive range of the incremental penis length coefficient $p_l$ provides a better balance between exploration and exploitation.
The Chebyshev chaotic perturbation applied at $p_l = 0.5$ allows the algorithm to escape local optima. Although this perturbation gives the PEBMO algorithm higher optimization accuracy on the multi-peak functions, it also makes the algorithm less stable than IBMO [35]. IBMO [35] obtained optimal standard deviation values for $F_5$, $F_6$, and $F_7$, but its average fitness values there were poor, showing that it fails to escape local optima on these three functions and falls into premature convergence. In addition, IBMO [35] has a large stability gap across functions: it achieves optimal standard deviation values for $F_3$, $F_5$, $F_6$, $F_7$, $F_{20}$, and $F_{26}$, while being nearly the worst on other functions, especially $F_2$, $F_{13}$, $F_{15}$, $F_{19}$, and $F_{30}$. This phenomenon also shows that the instability caused by its double perturbation is significant. Moreover, whether the perturbation of IBMO [35] is strong or weak, its optimization accuracy is not as good as that of the other variants.
The PEBMO algorithm has a significant advantage on the hybrid and composite functions: it is 100% better than the comparison algorithms in optimization accuracy, 60% better in stability, and 75% better than the base BMO algorithm. This shows that the PEBMO algorithm is well-rounded, with high optimization accuracy and stability when optimizing large-scale and realistic problems.

3.1.5. Statistical Analysis

The Wilcoxon rank-sum test is a nonparametric statistical method for comparing differences between two independent groups of samples. Compared with the t-test, Fisher's exact test, and the Kruskal–Wallis H test, it has the advantages of not requiring the data to obey a specific probability distribution, being robust to outliers, and being simple and intuitive. It is therefore widely used to evaluate performance differences between algorithms. With a significance level of 0.05, a p-value greater than 0.05 indicates that the difference between the two groups is not significant, and a p-value less than 0.05 indicates a significant difference. The Wilcoxon rank-sum test values for each algorithm are shown in Figure 9, where the black horizontal line marks the significance level of 0.05. From the figure, it can be seen that the p-values obtained by the PEBMO algorithm when compared with the other algorithms are all less than 0.05, indicating that the PEBMO algorithm is significantly improved. In addition, no values are plotted for some functions (e.g., $F_1$–$F_4$) because the p-values obtained there are far smaller than 0.05 and cannot be represented at the scale of the graph.
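For reference, the two-sided rank-sum test under the normal approximation can be sketched as below; this is a hand-rolled version without tie correction, whereas in practice a library routine such as SciPy's `ranksums` would be used:

```python
import numpy as np
from math import erf, sqrt

def ranksum_pvalue(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation,
    as used to compare two algorithms' independent-run results."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    allv = np.concatenate([x, y])
    order = allv.argsort()
    ranks = np.empty(len(allv))
    ranks[order] = np.arange(1, len(allv) + 1)
    for v in np.unique(allv):          # midranks for tied values
        m = allv == v
        ranks[m] = ranks[m].mean()
    w = ranks[:n1].sum()               # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2        # mean of W under H0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
```

Two clearly separated samples yield a p-value below 0.05, while identical samples yield a p-value of 1.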

3.1.6. Convergence Analysis

The convergence curve of the function can intuitively reflect the convergence speed, accuracy, and ability of each algorithm to jump out of the local optimal solution. Figure 10 shows the convergence curve of some functions, in which the horizontal axis indicates the number of iterations and the vertical axis indicates the value of the optimal fitness.
On the convergence curves of the single-peak functions, the red PEBMO curve shows apparent fluctuations in the middle and late iterations, indicating that the population diversity is not lost even late in the run and the algorithm can still escape local optima. On the convergence curve of the multi-peak function $F_6$, the fitness value of the PEBMO algorithm is not as good as that of IBMO [30], which is consistent with the data in Table 3. For $F_{10}$, the convergence curve of the PEBMO algorithm lies at the bottom of all curves, indicating that its fitness value is better than those of the comparison algorithms. The PEBMO curve also lies at the bottom of the hybrid and composite function convergence curves and continues to descend slowly in the late iterations, indicating that the PEBMO algorithm has high optimization accuracy and good stability.

3.2. Experimental Results and Analysis of Canopy Segmentation

3.2.1. Data Analyses

The forest canopy is intricate and complex, and canopy images obtained with a fisheye lens often have quality problems such as uneven illumination and loss of detail, which limit segmentation accuracy. Therefore, before segmentation, this paper uses the method of [45] to enhance the different canopy images. The enhanced canopy images are shown in Figure 11, Figure 12, Figure 13 and Figure 14. Figure 11 shows two different types of nonuniform canopy images. Among them, the overall brightness of the LS1–LS3 images is low, and only a small part is overexposed. By contrast, the brightness of the LB1–LB3 images is not as high as that of LS1–LS3, but the bright area is large; accurately segmenting the boundaries of the bright regions is therefore an excellent test for a segmentation algorithm. Figure 12 shows canopy images acquired under low-illumination conditions; the tone is dark, especially at the low tree trunks, which are easily misclassified as background. Figure 13 shows canopy images acquired under strong sunlight, where reflections blur the treetops bordering the sky, and faint foliage is very easily misclassified as sky. Figure 14 shows canopy images acquired under normal lighting conditions; however, because of the acquisition method (an upward elevation shot) and forest gaps, the low tree trunks show low brightness and unclear details even under ideal lighting.

3.2.2. Experimental Setup

In this section, segmentation experiments are conducted on Figure 11, Figure 12, Figure 13 and Figure 14 to evaluate the effectiveness of the proposed PEBMO algorithm for the multi-threshold segmentation of forest canopy images. The population size ($N$) was set to 30, the maximum number of iterations ($T$) was 500, and the number of thresholds (dim) was set to 4, 8, 12, and 16. To verify the effectiveness of PEBMO, its segmentation results were compared with those of ABC, ChOA, CS, FPA, MA, IMA, and BMO. The structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), feature similarity index (FSIM), and subjective evaluation metrics were used to assess the quality of the segmentation results.

3.2.3. Image Evaluation Metrics

SSIM considers the characteristics of the human visual system by measuring the similarity between the original and segmented images in terms of brightness, contrast, and structure [46]. The SSIM value lies in the interval [0, 1]; the larger its value, the more similar the segmented image is to the original image. The formula for calculating SSIM is as follows:
$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)} \tag{24}$$
$$\mu_x = \frac{1}{RC} \sum_{i=1}^{R} \sum_{j=1}^{C} X(i, j) \tag{25}$$
$$\sigma_x^2 = \frac{1}{RC - 1} \sum_{i=1}^{R} \sum_{j=1}^{C} \left[ X(i, j) - \mu_x \right]^2 \tag{26}$$
$$\sigma_{xy} = \frac{1}{RC - 1} \sum_{i=1}^{R} \sum_{j=1}^{C} \left[ X(i, j) - \mu_x \right] \left[ Y(i, j) - \mu_y \right] \tag{27}$$
where $\mu_x$ and $\sigma_x$ denote the mean and standard deviation of the original image; $\mu_y$ and $\sigma_y$ denote those of the segmented image; $\sigma_{xy}$ is the covariance between the original and segmented images; and $C_1$, $C_2$, and $C_3$ are constants, with $C_1 = (0.01L)^2$, $C_2 = (0.03L)^2$, $C_3 = C_2/2$, and $L = 255$.
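A direct transcription of Equations (24)–(27) into NumPy might look as follows; note this is the global-statistics form given above, whereas practical SSIM implementations usually operate on local windows:

```python
import numpy as np

def global_ssim(x, y, L=255):
    """SSIM from the global image statistics of Equations (24)-(27)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(ddof=1), y.var(ddof=1)               # Eq. (26), unbiased
    cov_xy = ((x - mu_x) * (y - mu_y)).sum() / (x.size - 1)   # Eq. (27)
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
```

An image compared with itself gives an SSIM of exactly 1, and the value falls as the images diverge.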
PSNR is the ratio of an image's maximum possible pixel value to the pixel differences caused by noise [47]; its magnitude depends directly on the image intensities. The larger the PSNR value, the more significant the noise-removal effect and the lower the image distortion [48]. The PSNR value is calculated as follows:
$$\mathrm{PSNR} = 20 \cdot \log_{10}\!\left(\frac{255}{\mathrm{RMSE}}\right) \tag{28}$$
$$\mathrm{RMSE} = \sqrt{\frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} \left[ I(i, j) - I'(i, j) \right]^2} \tag{29}$$
where $I(i, j)$ and $I'(i, j)$ are the original and segmented images, respectively, and $H$ and $W$ are the image height and width.
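Equations (28) and (29) translate directly into code; the `inf` return for identical images is a conventional convenience, not part of the formula:

```python
import numpy as np

def psnr(original, segmented):
    """PSNR from the RMSE of Equation (29), for 8-bit images."""
    a = np.asarray(original, float)
    b = np.asarray(segmented, float)
    rmse = np.sqrt(np.mean((a - b) ** 2))     # Equation (29)
    if rmse == 0:
        return float('inf')                   # identical images: no distortion
    return 20 * np.log10(255.0 / rmse)        # Equation (28)
```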
FSIM is another feature-similarity assessment method proposed after SSIM [49]. Unlike SSIM, it measures image quality by evaluating the feature similarity between the original and segmented images; the better the image quality, the closer the value is to 1. The formula for FSIM is shown in Equation (30).
$$\mathrm{FSIM} = \frac{\sum_{x \in \Omega} S_L(x) \cdot PC_m(x)}{\sum_{x \in \Omega} PC_m(x)} \tag{30}$$
where $\Omega$ is the spatial domain of the entire image, $S_L(x)$ denotes the similarity value, and $PC_m(x)$ represents the phase consistency measure.
$$PC_m(x) = \max\left( PC_1(x), PC_2(x) \right) \tag{31}$$
P C 1 ( x ) and P C 2 ( x ) denote the phase coherence of the reference and segmented images, respectively.
$$S_L(x) = \left[ S_{PC}(x) \right]^{\alpha} \cdot \left[ S_G(x) \right]^{\beta} \tag{32}$$
$$S_{PC}(x) = \frac{2 PC_1(x) \cdot PC_2(x) + T_1}{PC_1^2(x) + PC_2^2(x) + T_1} \tag{33}$$
$$S_G(x) = \frac{2 G_1(x) \cdot G_2(x) + T_2}{G_1^2(x) + G_2^2(x) + T_2} \tag{34}$$
where $S_{PC}(x)$ denotes the phase-congruency similarity of the image; $S_G(x)$ denotes the gradient similarity; $G_1(x)$ and $G_2(x)$ represent the gradient magnitudes of the reference and segmented images, respectively; and $\alpha$, $\beta$, $T_1$ and $T_2$ are constants.
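Assuming the phase-congruency maps $PC_1$, $PC_2$ and gradient magnitudes $G_1$, $G_2$ have already been computed (e.g., by a log-Gabor filter bank and a gradient operator, which are outside the scope of this sketch), Equations (30)–(34) combine as follows; the default constants $T_1 = 0.85$ and $T_2 = 160$ follow the original FSIM paper and are assumptions here:

```python
import numpy as np

def fsim_from_maps(pc1, pc2, g1, g2, T1=0.85, T2=160.0, alpha=1.0, beta=1.0):
    """FSIM of Equations (30)-(34), given precomputed phase-congruency
    maps (pc1, pc2) and gradient-magnitude maps (g1, g2)."""
    s_pc = (2 * pc1 * pc2 + T1) / (pc1 ** 2 + pc2 ** 2 + T1)   # Eq. (33)
    s_g = (2 * g1 * g2 + T2) / (g1 ** 2 + g2 ** 2 + T2)        # Eq. (34)
    s_l = (s_pc ** alpha) * (s_g ** beta)                      # Eq. (32)
    pc_m = np.maximum(pc1, pc2)                                # Eq. (31)
    return (s_l * pc_m).sum() / pc_m.sum()                     # Eq. (30)
```

When the two images' maps coincide, every similarity term equals 1 and the FSIM is exactly 1.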

3.2.4. Image Segmentation Accuracy Analysis

The line graph of the fitness values obtained by each algorithm is shown in Figure 15. The fitness value effectively reflects the segmentation accuracy of the image: the larger its value, the higher the accuracy. From the figure, it can be seen that the fitness line approximates a triangular wave, indicating that the segmentation accuracy of each algorithm increases with the number of thresholds. When the number of thresholds is 4, the fitness value of each algorithm is low and the differences are small, indicating that none of the algorithms can segment the image accurately. As the number of thresholds increases, the gap between the algorithms gradually becomes apparent. The PEBMO algorithm has a clear advantage and lies at the top, indicating that its segmentation accuracy is better than that of the comparison algorithms. As shown in Table A8 in Appendix A, the improved algorithms IMA and PEBMO obtain 22 and 29 optimal fitness values, respectively, accounting for 92.73%, while MA obtains the other two optimal values. In addition, compared with the basic MA and BMO algorithms, IMA and PEBMO improved segmentation accuracy by 81.36% and 90.9%, respectively, showing that improving algorithm performance can effectively improve the segmentation accuracy of canopy images. In the comparison between IMA and PEBMO, the PEBMO algorithm is 56.36% better than IMA, indicating that the segmentation accuracy of the PEBMO algorithm is better than that of the IMA algorithm.

3.2.5. Image Segmentation Stability Analysis

The line graph of the standard deviation values obtained by each algorithm is shown in Figure 16. The standard deviation reflects the stability of an algorithm: the lower its value, the higher the stability. As can be seen from the graph, the PEBMO algorithm has an apparent advantage; except for a noticeable fluctuation on the S4 image with eight thresholds, it lies at the lowest position on all other images, indicating strong stability in segmenting the canopy images. As can be seen from Table A9, PEBMO obtains 55.56% of the optimal standard deviation values. This is due to the effective performance improvements, which enable the PEBMO algorithm to segment all types of canopy images accurately. IMA, also an improved algorithm, obtained only 23.61% of the optimal values, but outperformed MA by 79.17%, indicating that the improvement of the IMA algorithm is also effective. The ABC, ChOA, CS, FPA, MA and BMO algorithms performed mediocrely, with optimal-value counts of 1, 1, 2, 3, and 3, respectively.

3.2.6. Structural Similarity Analysis

The line graph of SSIM values obtained by each algorithm is shown in Figure 17. The SSIM value reflects the structural similarity between the segmented image and the original image: the larger its value, the better the quality of the segmented image. As the number of thresholds increases, the SSIM value increases, but the stability of the algorithms is poor on the N2 image. The FPA, MA, IMA, BMO and PEBMO algorithms all show higher SSIM values at low thresholds than at high thresholds there, because the branches and leaves in the N2 image are intricate and difficult to segment, which is more challenging for the algorithms; unstable fluctuations therefore occur during segmentation. The ABC, ChOA, and CS algorithms do not show fluctuations, but their SSIM values are lower, indicating that they fall into local optima and fail to segment the image accurately. Overall, the IMA and PEBMO algorithms have apparent advantages. As shown in Table A10, the IMA and PEBMO algorithms together obtain 63.89% of the optimal SSIM values. The advantages of IMA are concentrated on S1–S4 and N1–N4. The PEBMO algorithm outperforms the comparison algorithms on the LS1–LS3, LB1–LB3, and L1–L4 images, whereas it is significantly inferior to IMA on the S1–S4 and N1–N4 images, indicating that PEBMO is not as good as IMA at segmenting strong-light and normal-light canopy images.

3.2.7. Analysis of Image Distortion Level

The line graph of PSNR values obtained by each algorithm is shown in Figure 18. The PSNR values reflect the degree of distortion of the segmented image. From the graph, it can be seen that the IMA and PEBMO algorithms have a clear advantage and lie at the top, indicating that their segmented canopy images are less distorted and of better quality. This excellent performance is due to the effective improvement of the MA and BMO algorithms. On the L1–L4 images, the segmentation stability of the algorithms is high, with no apparent fluctuation. In addition, as shown in Table A11, the IMA and PEBMO algorithms improved segmentation quality by 72.22% and 76.39% relative to their original algorithms, respectively. IMA obtained only three optimal values, although it is 75% better than the MA algorithm, which indicates that IMA is not good at segmenting non-uniform canopy images.

3.2.8. Feature Similarity Analysis

The line graph of FSIM values obtained by each algorithm is shown in Figure 19. From the figure, it is clear that the algorithms obtain high FSIM values on all images: higher than 0.83 on the LS1–LB3 images, 0.94 on the L1–L4 images, 0.93 on the S1–S4 images, and 0.95 on the N1–N4 images. This stems from the fact that the images already have good quality prior to segmentation, and hence enhancement before segmentation is necessary. As the number of thresholds increases, the FSIM value also increases. The advantage of the PEBMO algorithm is obvious: it lies at the top in almost all cases, indicating that its segmented images are more similar to the original in structure and of better quality. In addition, as shown in Table A12, the optimal FSIM values are mainly distributed among the MA, BMO, IMA and PEBMO algorithms, accounting for 81.94%. Among them, MA, BMO, IMA and PEBMO obtained 9.72%, 15.28%, 30.56% and 33.33% of the optimal values, while ABC, ChOA, CS and FPA obtained 6.94%, 2.78%, 2.78% and 5.56%, respectively.

3.2.9. Segmentation Efficiency Analysis

The computation times of each algorithm for the various types of canopy images are shown in Figure 20, Figure 21, Figure 22 and Figure 23. Computation time reflects the segmentation efficiency of each algorithm. Overall, the computation times rank as FPA < CS < ChOA < BMO < PEBMO < ABC < MA < IMA. The computation time of the PEBMO algorithm is at a medium level and inferior to that of BMO, because, to increase the probability of jumping out of locally optimal solutions, the parent barnacle particles at $p_l = 0.5$ are subjected to the Chebyshev chaotic perturbation; this step inevitably increases the running time of the PEBMO algorithm. However, the difference between the computation times of the PEBMO and BMO algorithms is not significant. In addition, the time increase for all algorithms from 4 to 16 thresholds is no more than 5 s, which also proves that introducing intelligent algorithms can effectively overcome the inefficiency of multithreshold segmentation.

3.3. Comparison of PEBMO and Deep Learning

3.3.1. Hyperparametric Analysis

In order to evaluate the PEBMO algorithm objectively and comprehensively, its segmentation results are compared with a deep learning segmentation method in terms of fitness values, standard deviation values, SSIM, PSNR, FSIM, and computation time. A Transformer segmentation module was used as the deep learning method. To analyze the effect of different hyperparameters (num_layers and $lr$) on model performance, hyperparameter analysis experiments were designed. Figure 24 shows the distribution of fitness values, PSNR, SSIM, FSIM, computation time and standard deviation values under different $lr$ and num_layers conditions. The analysis of Figure 24 is summarized below:
(1) Effect of num_layers and learning rate ( l r ) on the Loss function
As num_layers increases, the Loss value shows an overall increasing trend: when num_layers = 2 the Loss is lower, while at num_layers = 8 it is higher. Increasing the number of layers may lead to overfitting, especially when the amount of data is limited. A smaller learning rate ($lr$ = 0.0001) usually leads to lower Loss values.
(2) Effect of num_layers and l r on PSNR
The value of PSNR is higher when num_layers = 2 and decreases as the number of layers increases. When num_layers = 2, the average PSNR value is greater than 62 for $lr$ = 0.0001; it decreases for num_layers = 8. Smaller learning rates ($lr$ = 0.0001) usually result in higher PSNR values: the PSNR values for $lr$ = 0.0001 are generally higher than those for $lr$ = 0.01 and 0.001.
(3) Effect of num_layers and l r on SSIM
Similar to PSNR, SSIM is higher for num_layers = 2; it gradually decreases as the number of layers increases, and the value of SSIM is generally higher when the learning rate l r = 0.0001.
(4) Effect of num_layers and l r on FSIM
In contrast to PSNR and SSIM, FSIM is lower for num_layers = 2; it gradually increases as the number of layers increases, and the value of FSIM is generally higher when the learning rate $lr$ = 0.01.
(5) Effect of num_layers and lr on computation time
The learning rate has a negligible impact on the computation time; the average computation time is mainly affected by num_layers, and the more layers there are, the longer the computation time.
(6) Standard deviation analysis
The standard deviation of Loss: A smaller l r (e.g., l r = 0.0001) usually results in a lower standard deviation, indicating more stable model training.
When num_layers is increased, the model may overfit, leading to an increase in Loss and decreases in PSNR and SSIM. A smaller learning rate (e.g., $lr$ = 0.0001) usually leads to better performance and more stable training. Therefore, to balance computation time and model performance, this study uses the parameter combination num_layers = 4, $lr$ = 0.0001.

3.3.2. Analysis of Comparative Results

The relevant settings are listed in Table 3. The obtained segmentation results are shown in Figure 25, Figure 26, Figure 27, Figure 28 and Figure 29, respectively. It is worth noting that the gap between the deep learning method and the PEBMO algorithm is large in all the evaluation metrics; therefore, Figure 25, Figure 26, Figure 27, Figure 28 and Figure 29 are all double vertical axis bar charts. The left side is the vertical axis of the PEBMO algorithm and the right side is the vertical axis of deep learning.
As can be seen from these figures, there are significant differences between the deep learning segmentation algorithm and the PEBMO segmentation algorithm on several performance metrics. The advantages and disadvantages of the deep learning and PEBMO algorithms are as follows:
1. Advantages of Deep Learning
(a) Deep learning segmentation methods generally have higher SSIM and PSNR values than the PEBMO algorithm at all thresholds. This indicates that deep learning algorithms are better able to capture the structural and feature information of an image, thereby providing higher-quality segmentation results.
(b) Deep learning is highly adaptable, can handle complex scene data, and has strong generalization ability. This makes it more advantageous for complex image segmentation tasks.
2. The shortcomings of Deep learning:
(a) It is less stable and susceptible to parameters that prevent it from maintaining consistent performance with different inputs.
(b) Although it outperforms the PEBMO algorithm in terms of computation time for individual images, it takes longer to compute at high thresholds and requires high-performance hardware support. This may limit their use in real-time applications.
(c) The data dependency is strong and requires a large amount of labelled data for training, which requires high data quality. If the data is insufficient or inaccurately labelled, the segmentation accuracy may be affected.
3. Advantages of the PEBMO algorithm:
(a) High segmentation accuracy: the optimal fitness value and FSIM value of the PEBMO algorithm under all thresholds are higher than that of the deep learning method, indicating that its segmented image has high segmentation accuracy and is more similar to the original image in terms of characterization.
(b) High stability: the PEBMO algorithm outperforms deep learning in terms of standard deviation values for all numbers of thresholds, indicating strong segmentation stability.
(c) Computationally efficient: shorter computation time, suitable for real-time applications.
(d) Simple and easy to implement: It does not require a large amount of training data suitable for simple scenarios.
4. Shortcomings of the PEBMO algorithm:
(a) Poor adaptability: significant performance degradation in low-dimensional or complex scenarios.
(b) The high degree of distortion: under all dimensions, the SSIM and PSNR values are generally lower than those of the deep learning algorithms, indicating that their segmentation results are not as good as those of the deep learning methods in terms of the degree of distortion.

3.3.3. Subjective Evaluation Analysis

Subjective evaluation provides value that machines cannot replace and directly reflects how people perceive images in the real world. The segmentation results of each algorithm at threshold numbers 4, 8, 12 and 16 are shown in Figure 30, Figure 31, Figure 32 and Figure 33, respectively. Because the canopy structure is intricate, it is difficult to judge segmentation quality from the full images alone. Therefore, the regions containing segmentation difficulties and error-prone details are cropped from the segmentation result images for detailed analysis. In addition, owing to space limitations, typical images from the four types of canopy images are selected for presentation.
From Figure 30, it can be seen that the segmentation quality of every algorithm is poor when the number of thresholds is 4. For the non-uniform canopy images LS1 and LB3, the difficulty lies at the junction between light and dark regions; segmenting this junction accurately is a demanding test for the algorithms. All the algorithms showed mis-segmentation, with some tiny branches and leaves wrongly assigned to the sky. The same mis-segmentation also occurs in the N4 image: although N4 is a normal-illumination canopy image, the upward shooting angle (elevation) introduces some unevenness of illumination, so even this nominally uniform sample deviates, to a greater or lesser extent, from ideal uniform lighting. On the low-illumination canopy image L2, the algorithms misclassify the low-lying trees and branches into the background. The strong-illumination image S3 has the worst segmentation quality, appearing blurred and low in contrast. The FPA algorithm performs worst of all and shows severe colour distortion on both LB3 and S3.
The segmentation quality in Figure 31 is visually much better than in Figure 30, although some minute details are still not segmented successfully. The FPA algorithm shows a significant improvement on LB3, but colour distortion still occurs on S3. The strong-light image S3 remains challenging: the ChOA, CS and FPA algorithms all show colour distortion at a threshold number of 8. The segmentation results for thresholds 12 and 16 are subjectively almost indistinguishable, which is consistent with the objective evaluation indices. As the number of thresholds increases, the computational complexity grows exponentially, which is more challenging for the intelligent algorithms. The weaker CS and FPA algorithms still cannot segment the LB3 and S3 images accurately at high thresholds, whereas the other algorithms improve markedly, accurately segmenting the details hidden in glare and darkness. For example, on the L2 image, the trees in the darkest areas are segmented accurately, and the details of tree trunks, leaves, and ground are clearly visible.
The PEBMO algorithm demonstrates better segmentation quality and stability when segmenting different types of canopy images. Especially at low thresholds, it obtains good segmentation results even when all the comparison algorithms show serious segmentation errors, and it exhibits no colour distortion at any threshold.

4. Discussion

The subjective and objective evaluations of each algorithm on the four types of forest canopy images show that the proposed PEBMO algorithm has clear advantages over both the original and the improved intelligent algorithms, demonstrating the effectiveness of its improvements. However, no algorithm is perfect. Canopy images differ greatly in target complexity, light intensity, and image clarity, which poses a significant challenge to the accuracy and stability of image segmentation.
Image segmentation accuracy: 17 ties in the optimal fitness values occurred, of which 94.12% arose when the number of thresholds was 4. This stems from the fact that no algorithm can segment the canopy image accurately when the number of thresholds is small, so the differences between the fitness values obtained are insignificant. As the number of thresholds increases, the differences between the algorithms become gradually noticeable. The PEBMO and IMA algorithms have clear advantages, outperforming the other algorithms by 40.28% and 30.56%, and the BMO and MA algorithms by 90.9% and 81.36%, respectively. This shows that improving the intelligent algorithms is necessary for improving the segmentation accuracy of canopy images. Although the PEBMO algorithm outperforms IMA by 56.36% overall, the IMA algorithm outperforms PEBMO by 69.23% and 66.67% in segmenting strong-light and normal-light canopy images. This indicates that the PEBMO algorithm achieves higher accuracy on non-uniform and low-illumination canopy images, while IMA excels on strong-light and normal-light canopy images. The other algorithms perform poorly in segmentation accuracy; only MA obtains two optimal fitness values.
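The fitness values discussed above come from maximizing Kapur entropy over candidate threshold vectors. As an illustration only (the paper does not publish its implementation), a minimal sketch of the Kapur-entropy objective on a grayscale histogram might look like this; the bimodal histogram and the threshold values below are toy data:

```python
import numpy as np

def kapur_entropy(hist, thresholds):
    """Kapur-entropy fitness of a candidate threshold vector.

    hist: 256-bin grayscale histogram; thresholds: threshold levels.
    Multilevel thresholding maximizes this value, so it serves
    directly as the fitness function for a metaheuristic optimizer.
    """
    p = hist / hist.sum()                     # normalized histogram
    bounds = [0] + sorted(int(t) for t in thresholds) + [256]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()                    # probability mass of the class
        if w <= 0:
            continue
        q = p[lo:hi] / w                      # within-class distribution
        q = q[q > 0]
        total += -(q * np.log(q)).sum()       # Shannon entropy of the class
    return total

# toy bimodal histogram: a threshold between the two modes scores higher
hist = np.zeros(256)
hist[40:60] = 100.0
hist[180:200] = 100.0
print(kapur_entropy(hist, [120]) > kapur_entropy(hist, [50]))  # True
```

In this formulation the optimizer's particles encode threshold vectors, and the best particle after the final iteration yields the thresholds used to quantize the image.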
Image segmentation stability: the PEBMO algorithm has better stability in segmenting the forest canopy images, obtaining 55.56% of the optimal standard deviation values. The IMA algorithm is 79.17% better than the MA algorithm, although it obtains only 23.61% of the optimal standard deviation values. This shows that the IMA algorithm’s improvement strategy effectively improves the MA algorithm’s segmentation stability. The ABC, ChOA, CS, FPA, MA, and BMO algorithms perform moderately and fluctuate more when segmenting non-uniform and strongly illuminated canopy images. This is because such images are difficult to segment: uneven illumination and details hidden under strong light easily cause poorly performing algorithms to fall into local optima, reducing their stability on these images.
Structural similarity analysis: the optimal SSIM values obtained by ABC, ChOA, CS, FPA, MA, IMA, BMO, and PEBMO are 5, 3, 4, 3, 4, 19, 7, and 27, respectively. Of these, 1, 1, 2, 2, 1, 1, 1, 3, and 13 are on the non-uniform canopy images; 1, 0, 1, 1, 1, 1, 3, 1, 8 on the low-illumination canopy images; 2, 0, 1, 0, 1, 6, 2, 4 on the strong-light canopy images; and 1, 2, 0, 0, 1, 1, 9, 1, 2 on the normal-illumination canopy images. Regarding the distribution of the optimal SSIM values, the IMA and PEBMO algorithms have a clear overall advantage, together obtaining 63.89% of the optimal SSIM values. IMA’s strength is concentrated on strong- and normal-illumination canopy images, while the PEBMO algorithm outperforms the comparison algorithms on non-uniform and low-illumination canopy images. The performance of the other algorithms is rather mediocre.
Degree of image distortion: the ABC, ChOA, CS, FPA, MA, IMA, BMO, and PEBMO algorithms obtained 4, 2, 4, 3, 5, 19, 6, and 29 optimal PSNR values, respectively. The IMA and PEBMO algorithms outperformed the others, indicating that their segmented canopy images were less distorted and of better quality. On the non-uniform canopy images the algorithms perform more evenly; the PEBMO algorithm obtains 10 of the 24 optimal values there, slightly better than the comparison algorithms. The PEBMO algorithm has a clear advantage on the low-illumination canopy images, obtaining 62.5% of the optimal PSNR values, whereas the MA and CS algorithms are slightly inferior, each obtaining two optimal values. On the strong- and normal-illumination images, the IMA algorithm obtained 14 optimal values and the PEBMO algorithm 8, indicating that the IMA algorithm produces better-quality segmentations there.
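For reference, the PSNR metric analysed above, together with a simplified single-window variant of SSIM, can be sketched as follows. This is an illustrative approximation: the standard SSIM index averages the statistic over local sliding windows, and the gradient "image" below is toy data:

```python
import numpy as np

def psnr(ref, seg, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    mse = np.mean((ref.astype(np.float64) - seg.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(ref, seg, peak=255.0):
    """Single-window SSIM; a coarse stand-in for the windowed index."""
    x = ref.astype(np.float64)
    y = seg.astype(np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2  # stabilizing constants
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2)
    return num / den

# toy 16x16 gradient image and a uniformly brightened copy
ref = np.tile(np.arange(0, 256, 16, dtype=np.uint8), (16, 1))
noisy = (ref.astype(np.int64) + 5).astype(np.uint8)   # stays below 255
print(round(psnr(ref, noisy), 2))   # 34.15 dB (MSE = 25)
print(ssim_global(ref, ref))        # 1.0 for identical images
```

Comparing the quantized segmentation output against the original image with these metrics yields the distortion and similarity scores reported in the tables.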
Feature similarity analysis: ABC, ChOA, CS, FPA, MA, BMO, IMA, and PEBMO obtained 6.94%, 2.78%, 2.78%, 5.56%, 9.72%, 15.28%, 30.56%, and 33.33% of the optimal FSIM values, respectively. The MA and BMO algorithms outperform ABC, ChOA, CS, and FPA, indicating that they also have good optimization performance. IMA and PEBMO improve the FSIM values of MA and BMO by 73.61%. Therefore, improving the performance of the intelligent algorithms is essential for increasing the feature similarity between the segmented canopy image and the original one.
Image segmentation efficiency: analysis of the computation time of each algorithm shows that no algorithm takes more than 5 s as the number of thresholds increases from 4 to 16, overcoming the limitation that the computational complexity of multi-threshold segmentation grows exponentially with the number of thresholds. The FPA, CS, and ChOA algorithms have a clear advantage, while the MA and IMA algorithms have the lowest segmentation efficiency. The gap between the BMO and PEBMO algorithms is small, which suggests that the improvements in PEBMO do not reduce the segmentation efficiency of the BMO algorithm. The efficiency of ABC lies between that of PEBMO and MA.
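The exponential growth mentioned above can be made concrete: an exhaustive search over k thresholds drawn from the 256 grey levels must evaluate on the order of C(255, k) candidate vectors, whereas a population-based optimizer evaluates only N × T candidates (population size times iterations). A quick sketch; the N and T values here are illustrative, not the paper's exact settings:

```python
from math import comb

# exhaustive multilevel thresholding must test every k-subset of the
# 255 interior grey levels, which grows combinatorially with k
for k in (4, 8, 12, 16):
    print(f"k = {k:2d}: {comb(255, k):.3e} candidate threshold vectors")

# a population-based optimizer evaluates only N * T candidates;
# N and T below are illustrative values, not the paper's settings
N, T = 30, 500
print(f"metaheuristic budget: {N * T} fitness evaluations")
```

Even at k = 4 the exhaustive space already exceeds 10^8 candidates, which is why metaheuristics such as PEBMO keep the per-image cost below a few seconds at high threshold counts.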
Comparison with deep learning segmentation methods: the PEBMO algorithm outperforms the deep learning methods in fitness value, standard deviation, FSIM, and computation time, while deep learning outperforms the PEBMO algorithm in SSIM and PSNR. Each approach has advantages and disadvantages. Deep learning has a clear advantage in segmentation quality but is limited by poor stability and high data dependency; the PEBMO algorithm, by contrast, is stable, computationally efficient, and easy to implement, but suffers from significant performance degradation in low-dimensional or complex cases and from higher distortion.
Regarding further research: overall, the PEBMO algorithm is better than the IMA algorithm, but its advantage is concentrated in non-uniform and low-illumination canopy images, while the IMA algorithm is 53.126% better than the PEBMO algorithm in segmenting normal- and strong-illumination images. Therefore, further improvement of the BMO algorithm is needed to raise its segmentation accuracy on high-brightness canopy images. In addition, this study provides a comprehensive comparison with deep learning segmentation, in which both approaches have strengths and weaknesses: deep learning has clear advantages in segmentation quality but suffers from poor stability and excessive data dependence, while the PEBMO algorithm offers good stability, high computational efficiency, and ease of implementation. Different segmentation methods can therefore be chosen according to the application requirements.

5. Conclusions

This paper proposes a method for segmenting forest canopy images using Kapur entropy combined with the PEBMO algorithm. Compared with ABC, ChOA, CS, FPA, MA, IMA, and BMO, the PEBMO algorithm is less efficient than ChOA, CS, and FPA but has clear advantages in segmentation accuracy and stability. The PEBMO and BMO algorithms do not differ significantly in segmentation efficiency, indicating that the improvements in PEBMO do not increase the complexity of the BMO algorithm. In addition, the PEBMO algorithm was comprehensively compared with a deep learning segmentation method: PEBMO is superior in segmentation accuracy, stability, feature similarity, and efficiency, whereas the deep learning method is superior in structural similarity and image distortion. Both segmentation methods therefore have shortcomings. Because this study focused only on small samples of canopy images, it lacks practical validation on large-scale datasets, where deep learning-based segmentation methods have the advantage.

Author Contributions

X.Z. and L.Z. contributed to the idea of this paper; X.Z. performed the experiments; W.X. and A.M.E.M. analyzed data; X.Z. wrote the manuscript; X.Z. and L.Z. contributed to the revision of this study. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Basic Research Expenses of Heilongjiang Provincial Universities (Grant No. YWF10236240135) and the National Key R&D Program of China (Grant No. 2021YFC2202502).

Informed Consent Statement

This article does not contain studies with human participants or animals. Statement of informed consent is not applicable since the manuscript does not contain any patient data.

Data Availability Statement

The data used to support the findings of this study are available from https://pan.baidu.com/s/1pxT8oE5Rb5m3inopu0O7Gw?pwd=pn9b (accessed on 23 January 2025), password: c395.

Acknowledgments

The authors thank the anonymous reviewers for their useful comments that improved the quality of the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Mean fitness values for different N values.
Function PEBMO (N = 5) PEBMO (N = 10) PEBMO (N = 20) PEBMO (N = 30) PEBMO (N = 40) PEBMO (N = 50) PEBMO (N = 60)
F 1 1.19 E + 11 3.91 E + 10 5.41 E + 09 2.83 E + 09 3.13 E + 09 3.43 E + 09 4.20 E + 09
F 2 4.04 E + 57 4.37 E + 46 9.55 E + 38 4.97 E + 33 9.26 E + 35 1.35 E + 34 4.44 E + 32
F 3 6.27 E + 09 4.70 E + 07 1.21 E + 05 8.26 E + 04 9.00 E + 04 8.19 E + 04 8.04 E + 04
F 4 4.91 E + 04 6.82 E + 03 1.12 E + 03 8.54 E + 02 8.70 E + 02 9.68 E + 02 1.12 E + 03
F 5 1.19 E + 03 8.70 E + 02 7.43 E + 02 7.16 E + 02 7.39 E + 02 7.41 E + 02 7.35 E + 02
F 6 7.43 E + 02 6.83 E + 02 6.44 E + 02 6.37 E + 02 6.34 E + 02 6.33 E + 02 6.39 E + 02
F 7 3.83 E + 03 2.03 E + 03 1.09 E + 03 1.04 E + 03 1.04 E + 03 1.05 E + 03 1.06 E + 03
F 8 1.41 E + 03 1.17 E + 03 1.02 E + 03 9.83 E + 02 9.95 E + 02 9.91 E + 02 9.94 E + 02
F 9 4.30 E + 04 2.50 E + 04 9.15 E + 03 5.64 E + 03 6.51 E + 03 5.88 E + 03 6.01 E + 03
F 10 1.10 E + 04 9.81 E + 03 8.05 E + 03 5.71 E + 03 5.73 E + 03 5.67 E + 03 5.72 E + 03
F 11 1.71 E + 05 3.28 E + 04 1.38 E + 04 4.54 E + 03 5.61 E + 03 3.83 E + 03 3.97 E + 03
F 12 2.68 E + 10 3.40 E + 09 1.35 E + 08 6.48 E + 07 6.28 E + 07 7.94 E + 07 9.62 E + 07
F 13 2.24 E + 10 1.52 E + 09 9.06 E + 07 6.85 E + 05 5.95 E + 05 6.87 E + 05 1.61 E + 06
F 14 1.05 E + 08 8.20 E + 06 1.20 E + 06 7.15 E + 05 1.13 E + 06 6.89 E + 05 5.66 E + 05
F 15 7.42 E + 09 2.80 E + 08 2.02 E + 04 7.99 E + 03 9.78 E + 03 1.84 E + 04 1.41 E + 04
F 16 9.57 E + 03 4.56 E + 03 3.69 E + 03 3.18 E + 03 3.37 E + 03 3.14 E + 03 3.20 E + 03
F 17 2.92 E + 05 3.07 E + 03 2.59 E + 03 2.31 E + 03 2.39 E + 03 2.30 E + 03 2.38 E + 03
F 18 6.55 E + 08 1.21 E + 08 9.19 E + 06 3.17 E + 06 3.49 E + 06 3.70 E + 06 2.38 E + 06
F 19 5.85 E + 09 4.66 E + 08 8.66 E + 04 1.32 E + 04 3.00 E + 04 2.62 E + 04 3.10 E + 04
F 20 3.94 E + 03 3.56 E + 03 2.98 E + 03 2.64 E + 03 2.67 E + 03 2.69 E + 03 2.69 E + 03
F 21 2.90 E + 03 2.69 E + 03 2.55 E + 03 2.48 E + 03 2.51 E + 03 2.49 E + 03 2.50 E + 03
F 22 1.21 E + 04 1.04 E + 04 3.79 E + 03 2.87 E + 03 2.75 E + 03 2.76 E + 03 2.90 E + 03
F 23 3.57 E + 03 3.05 E + 03 2.89 E + 03 2.84 E + 03 2.84 E + 03 2.85 E + 03 2.86 E + 03
F 24 3.68 E + 03 3.20 E + 03 3.06 E + 03 3.02 E + 03 3.02 E + 03 3.02 E + 03 3.03 E + 03
F 25 2.05 E + 04 7.11 E + 03 3.19 E + 03 3.09 E + 03 3.08 E + 03 3.09 E + 03 3.13 E + 03
F 26 1.54 E + 04 9.32 E + 03 6.18 E + 03 4.88 E + 03 5.50 E + 03 5.32 E + 03 5.08 E + 03
F 27 4.53 E + 03 3.42 E + 03 3.27 E + 03 3.25 E + 03 3.25 E + 03 3.26 E + 03 3.26 E + 03
F 28 1.29 E + 04 6.57 E + 03 3.82 E + 03 3.59 E + 03 3.60 E + 03 3.64 E + 03 3.70 E + 03
F 29 1.43 E + 05 5.84 E + 03 4.44 E + 03 4.10 E + 03 2.47 E + 02 4.19 E + 03 4.27 E + 03
F 30 4.05 E + 09 1.77 E + 08 1.49 E + 06 5.43 E + 05 4.02 E + 05 7.98 E + 05 7.45 E + 05
Table A2. Standard deviation values for different N values.
Function PEBMO (N = 5) PEBMO (N = 10) PEBMO (N = 20) PEBMO (N = 30) PEBMO (N = 40) PEBMO (N = 50) PEBMO (N = 60)
F 1 3.19 E + 10 2.27 E + 10 3.26 E + 09 1.74 E + 09 1.02 E + 09 1.59 E + 09 7.07 E + 08
F 2 1.68 E + 58 1.84 E + 47 4.59 E + 39 1.39 E + 33 6.50 E + 34 4.58 E + 36 1.87 E + 34
F 3 2.34 E + 10 2.33 E + 08 7.26 E + 04 6.78 E + 03 6.52 E + 03 1.24 E + 04 6.91 E + 03
F 4 2.20 E + 04 4.39 E + 03 4.56 E + 02 3.64 E + 02 1.47 E + 02 1.53 E + 02 1.78 E + 02
F 5 1.43 E + 02 7.33 E + 01 6.59 E + 01 3.52 E + 01 3.06 E + 01 4.68 E + 01 3.03 E + 01
F 6 2.62 E + 01 2.84 E + 01 1.80 E + 01 1.68 E + 01 1.56 E + 01 1.68 E + 01 1.83 E + 01
F 7 6.16 E + 02 3.92 E + 02 9.11 E + 01 4.48 E + 01 3.31 E + 01 5.27 E + 01 4.10 E + 01
F 8 9.41 E + 01 8.05 E + 01 5.95 E + 01 2.56 E + 01 2.07 E + 01 2.94 E + 01 3.17 E + 01
F 9 1.05 E + 04 9.60 E + 03 3.72 E + 03 8.47 E + 02 1.45 E + 03 2.20 E + 03 1.84 E + 03
F 10 5.28 E + 02 1.28 E + 03 1.78 E + 03 3.35 E + 02 2.24 E + 02 9.15 E + 02 2.71 E + 02
F 11 4.88 E + 05 1.89 E + 04 7.80 E + 03 1.56 E + 03 1.06 E + 03 3.06 E + 03 2.00 E + 03
F 12 1.09 E + 10 3.86 E + 09 1.31 E + 08 4.75 E + 07 4.02 E + 07 4.72 E + 07 3.64 E + 07
F 13 1.66 E + 10 2.65 E + 09 3.81 E + 08 1.86 E + 06 5.39 E + 05 1.00 E + 06 9.74 E + 05
F 14 1.79 E + 08 6.92 E + 06 1.32 E + 06 4.71 E + 05 6.84 E + 05 1.09 E + 06 6.79 E + 05
F 15 4.60 E + 09 6.30 E + 08 1.56 E + 04 7.41 E + 03 3.18 E + 04 1.08 E + 04 4.68 E + 03
F 16 6.15 E + 03 6.45 E + 02 5.30 E + 02 2.89 E + 02 3.67 E + 02 5.38 E + 02 3.76 E + 02
F 17 6.40 E + 05 9.87 E + 02 3.62 E + 02 2.25 E + 02 2.53 E + 02 2.99 E + 02 2.41 E + 02
F 18 9.36 E + 08 2.25 E + 08 1.27 E + 07 2.61 E + 06 4.81 E + 06 3.97 E + 06 3.05 E + 06
F 19 4.74 E + 09 7.98 E + 08 1.25 E + 05 3.00 E + 04 2.63 E + 04 8.13 E + 04 1.56 E + 04
F 20 2.68 E + 02 3.85 E + 02 4.08 E + 02 1.95 E + 02 2.40 E + 02 3.21 E + 02 2.46 E + 02
F 21 1.17 E + 02 8.51 E + 01 7.09 E + 01 4.67 E + 01 2.90 E + 01 5.74 E + 01 2.79 E + 01
F 22 8.36 E + 02 2.08 E + 03 1.85 E + 03 1.51 E + 02 1.23 E + 02 1.33 E + 02 6.27 E + 02
F 23 3.04 E + 02 1.11 E + 02 5.29 E + 01 2.65 E + 01 2.76 E + 01 3.27 E + 01 2.93 E + 01
F 24 2.76 E + 02 6.09 E + 01 4.51 E + 01 2.74 E + 01 3.17 E + 01 4.90 E + 01 2.93 E + 01
F 25 8.71 E + 03 3.35 E + 03 1.07 E + 02 3.74 E + 01 4.65 E + 01 4.73 E + 01 5.35 E + 01
F 26 4.47 E + 03 2.25 E + 03 1.10 E + 03 8.90 E + 02 9.79 E + 02 1.03 E + 03 9.75 E + 02
F 27 8.16 E + 02 8.94 E + 01 2.69 E + 01 1.64 E + 01 1.35 E + 01 1.65 E + 01 1.55 E + 01
F 28 4.58 E + 03 2.31 E + 03 2.35 E + 02 1.59 E + 02 1.47 E + 02 1.31 E + 02 1.18 E + 02
F 29 4.19 E + 05 1.67 E + 03 3.20 E + 02 3.11 E + 02 2.77 E + 02 2.70 E + 02 2.78 E + 02
F 30 3.50 E + 09 3.22 E + 08 1.58 E + 06 4.77 E + 05 5.62 E + 05 3.03 E + 05 5.56 E + 05
Table A3. Mean fitness values for different T-values.
Function PEBMO (T = 100) PEBMO (T = 200) PEBMO (T = 300) PEBMO (T = 400) PEBMO (T = 500) PEBMO (T = 800) PEBMO (T = 1000)
F 1 3.10 E + 09 2.15 E + 08 1.96 E + 07 1.62 E + 06 5.86 E + 03 1.80 E + 05 7.17 E + 03
F 2 2.05 E + 33 1.96 E + 27 7.89 E + 22 2.06 E + 21 1.59 E + 14 1.42 E + 19 3.82 E + 16
F 3 8.16 E + 04 7.74 E + 04 6.93 E + 04 6.52 E + 04 4.24 E + 04 5.74 E + 04 5.15 E + 04
F 4 9.06 E + 02 5.76 E + 02 5.48 E + 02 5.19 E + 02 4.94 E + 02 5.12 E + 02 5.01 E + 02
F 5 7.33 E + 02 6.86 E + 02 6.93 E + 02 7.03 E + 02 7.05 E + 02 6.90 E + 02 7.04 E + 02
F 6 6.37 E + 02 6.32 E + 02 6.38 E + 02 6.38 E + 02 6.41 E + 02 6.42 E + 02 6.45 E + 02
F 7 1.03 E + 03 9.70 E + 02 9.58 E + 02 9.56 E + 02 1.01 E + 03 1.01 E + 03 1.00 E + 03
F 8 9.78 E + 02 9.55 E + 02 9.51 E + 02 9.54 E + 02 9.62 E + 02 9.55 E + 02 9.63 E + 02
F 9 5.43 E + 03 4.60 E + 03 4.90 E + 03 4.76 E + 03 4.60 E + 03 4.34 E + 03 4.68 E + 03
F 10 5.63 E + 03 5.14 E + 03 5.10 E + 03 5.15 E + 03 5.30 E + 03 5.07 E + 03 5.16 E + 03
F 11 4.10 E + 03 2.09 E + 03 1.63 E + 03 1.40 E + 03 1.20 E + 03 1.33 E + 03 1.21 E + 03
F 12 5.57 E + 07 5.25 E + 06 4.06 E + 06 2.20 E + 06 1.29 E + 06 2.62 E + 06 1.84 E + 06
F 13 6.87 E + 05 2.25 E + 04 1.86 E + 04 2.14 E + 04 1.99 E + 04 2.01 E + 04 1.67 E + 05
F 14 7.38 E + 05 8.00 E + 05 5.96 E + 05 5.09 E + 05 1.55 E + 05 4.13 E + 05 3.39 E + 05
F 15 8.10 E + 03 3.61 E + 03 3.97 E + 03 2.98 E + 03 4.08 E + 03 3.08 E + 03 3.37 E + 03
F 16 3.15 E + 03 2.84 E + 03 2.81 E + 03 2.83 E + 03 2.77 E + 03 2.87 E + 03 2.78 E + 03
F 17 2.34 E + 03 2.27 E + 03 2.29 E + 03 2.26 E + 03 2.21 E + 03 2.21 E + 03 2.24 E + 03
F 18 3.50 E + 06 1.29 E + 06 1.19 E + 06 1.38 E + 06 7.28 E + 05 1.02 E + 06 9.18 E + 05
F 19 1.49 E + 04 8.71 E + 03 6.64 E + 03 4.99 E + 03 6.60 E + 03 7.93 E + 03 7.82 E + 03
F 20 2.69 E + 03 2.62 E + 03 2.63 E + 03 2.61 E + 03 2.59 E + 03 2.61 E + 03 2.62 E + 03
F 21 2.48 E + 03 2.43 E + 03 2.43 E + 03 2.43 E + 03 2.42 E + 03 2.43 E + 03 2.43 E + 03
F 22 2.76 E + 03 2.37 E + 03 2.32 E + 03 2.31 E + 03 2.64 E + 03 2.30 E + 03 2.63 E + 03
F 23 2.84 E + 03 2.77 E + 03 2.77 E + 03 2.77 E + 03 2.78 E + 03 2.77 E + 03 2.77 E + 03
F 24 3.01 E + 03 2.94 E + 03 2.94 E + 03 2.93 E + 03 2.93 E + 03 2.93 E + 03 2.93 E + 03
F 25 3.08 E + 03 2.97 E + 03 2.94 E + 03 2.92 E + 03 2.90 E + 03 2.91 E + 03 2.90 E + 03
F 26 5.51 E + 03 3.73 E + 03 3.98 E + 03 3.69 E + 03 3.69 E + 03 3.91 E + 03 3.87 E + 03
F 27 3.25 E + 03 3.23 E + 03 3.23 E + 03 3.22 E + 03 3.23 E + 03 3.23 E + 03 3.23 E + 03
F 28 3.60 E + 03 3.37 E + 03 3.32 E + 03 3.29 E + 03 3.23 E + 03 3.27 E + 03 3.25 E + 03
F 29 4.29 E + 03 3.93 E + 03 3.89 E + 03 3.88 E + 03 3.91 E + 03 3.95 E + 03 4.00 E + 03
F 30 5.83 E + 05 5.64 E + 04 1.96 E + 04 2.05 E + 04 1.82 E + 04 1.53 E + 04 1.27 E + 04
Table A4. Standard deviation values for different T-values.
Function PEBMO (T = 100) PEBMO (T = 200) PEBMO (T = 300) PEBMO (T = 400) PEBMO (T = 500) PEBMO (T = 800) PEBMO (T = 1000)
F 1 1.30 E + 08 1.55 E + 07 1.00 E + 09 2.08 E + 06 6.82 E + 03 6.53 E + 03 1.80 E + 05
F 2 6.61 E + 27 1.93 E + 23 6.09 E + 33 6.43 E + 21 1.28 E + 17 2.60 E + 14 5.67 E + 19
F 3 8.15 E + 03 7.65 E + 03 8.76 E + 03 7.58 E + 03 7.63 E + 03 5.36 E + 03 6.18 E + 03
F 4 3.23 E + 01 5.30 E + 01 1.66 E + 02 2.72 E + 01 2.06 E + 01 2.54 E + 01 2.48 E + 01
F 5 3.94 E + 01 4.49 E + 01 3.37 E + 01 3.66 E + 01 3.58 E + 01 4.97 E + 01 4.01 E + 01
F 6 1.87 E + 01 2.14 E + 01 1.78 E + 01 1.84 E + 01 1.59 E + 01 1.52 E + 01 1.76 E + 01
F 7 6.73 E + 01 9.46 E + 01 4.39 E + 01 1.05 E + 02 9.13 E + 01 9.75 E + 01 9.95 E + 01
F 8 2.65 E + 01 2.74 E + 01 2.49 E + 01 2.85 E + 01 2.69 E + 01 2.79 E + 01 2.03 E + 01
F 9 1.10 E + 03 7.72 E + 02 1.32 E + 03 1.35 E + 03 6.51 E + 02 6.13 E + 02 9.58 E + 02
F 10 4.29 E + 02 3.44 E + 02 4.34 E + 02 3.79 E + 02 3.10 E + 02 4.03 E + 02 3.88 E + 02
F 11 9.23 E + 02 3.84 E + 02 1.73 E + 03 3.01 E + 02 4.07 E + 01 4.25 E + 01 1.71 E + 02
F 12 3.11 E + 06 3.39 E + 06 2.20 E + 07 1.72 E + 06 1.55 E + 06 7.44 E + 05 2.37 E + 06
F 13 1.27 E + 04 9.41 E + 03 1.70 E + 06 1.27 E + 04 8.07 E + 05 1.38 E + 04 1.80 E + 04
F 14 1.07 E + 06 7.13 E + 05 7.24 E + 05 6.31 E + 05 3.70 E + 05 1.81 E + 05 4.15 E + 05
F 15 2.30 E + 03 2.08 E + 03 3.80 E + 03 1.53 E + 03 2.16 E + 03 3.08 E + 03 2.34 E + 03
F 16 3.62 E + 02 2.79 E + 02 3.46 E + 02 3.65 E + 02 3.35 E + 02 4.00 E + 02 3.34 E + 02
F 17 2.35 E + 02 2.23 E + 02 2.52 E + 02 2.36 E + 02 2.67 E + 02 2.41 E + 02 2.78 E + 02
F 18 1.55 E + 06 1.41 E + 06 6.42 E + 06 1.32 E + 06 1.07 E + 06 6.47 E + 05 1.32 E + 06
F 19 8.11 E + 03 4.82 E + 03 1.26 E + 04 2.44 E + 03 6.37 E + 03 5.18 E + 03 6.64 E + 03
F 20 2.32 E + 02 2.53 E + 02 2.27 E + 02 2.56 E + 02 1.95 E + 02 2.41 E + 02 2.26 E + 02
F 21 2.99 E + 01 3.55 E + 01 2.89 E + 01 5.09 E + 01 3.46 E + 01 5.75 E + 01 3.78 E + 01
F 22 2.58 E + 01 3.76 E + 00 2.03 E + 02 3.24 E + 00 1.29 E + 03 1.28 E + 03 1.39 E + 00
F 23 3.55 E + 01 4.61 E + 01 2.74 E + 01 4.25 E + 01 3.88 E + 01 3.49 E + 01 3.96 E + 01
F 24 3.65 E + 01 4.40 E + 01 3.21 E + 01 2.63 E + 01 2.50 E + 01 3.07 E + 01 3.20 E + 01
F 25 3.78 E + 01 2.75 E + 01 4.39 E + 01 1.88 E + 01 1.75 E + 01 1.91 E + 01 2.48 E + 01
F 26 7.60 E + 02 1.28 E + 03 9.81 E + 02 1.17 E + 03 1.65 E + 03 1.46 E + 03 1.52 E + 03
F 27 1.43 E + 01 1.66 E + 01 1.72 E + 01 1.23 E + 01 1.40 E + 01 1.58 E + 01 1.41 E + 01
F 28 4.40 E + 01 3.06 E + 01 1.38 E + 02 2.65 E + 01 2.48 E + 01 2.50 E + 01 2.57 E + 01
F 29 2.55 E + 02 2.49 E + 02 2.78 E + 02 2.35 E + 02 2.95 E + 02 2.92 E + 02 2.69 E + 02
F 30 8.51 E + 04 1.26 E + 04 8.43 E + 05 1.66 E + 04 9.61 E + 03 2.44 E + 04 1.10 E + 04
Table A5. Mean fitness value of each algorithm.
Function IBMO [32] IBMO [33] IBMO [34] IBMO [35] IBMO [36] BMO PEBMO
F 1 1.57 E + 09 5.99 E + 08 2.44 E + 09 6.20 E + 10 4.40 E + 09 1.86 E + 09 4.69 E + 05
F 2 1.23 E + 28 1.97 E + 43 3.77 E + 31 1.70 E + 55 6.72 E + 34 5.09 E + 29 3.55 E + 21
F 3 7.53 E + 04 4.36 E + 05 7.84 E + 04 9.08 E + 04 8.81 E + 04 7.83 E + 04 6.09 E + 04
F 4 7.17 E + 02 6.30 E + 02 6.88 E + 02 2.13 E + 04 1.02 E + 03 6.27 E + 02 5.15 E + 02
F 5 7.06 E + 02 7.07 E + 02 7.52 E + 02 9.25 E + 02 7.52 E + 02 7.29 E + 02 7.05 E + 02
F 6 6.49 E + 02 6.28 E + 02 6.55 E + 02 6.92 E + 02 6.58 E + 02 6.50 E + 02 6.43 E + 02
F 7 1.08 E + 03 1.07 E + 03 1.14 E + 03 1.41 E + 03 1.12 E + 03 1.14 E + 03 1.00 E + 03
F 8 9.66 E + 02 1.02 E + 03 9.75 E + 02 1.14 E + 03 1.01 E + 03 9.69 E + 02 9.44 E + 02
F 9 5.42 E + 03 6.81 E + 03 5.52 E + 03 1.25 E + 04 6.18 E + 03 5.11 E + 03 4.58 E + 03
F 10 5.45 E + 03 8.64 E + 03 5.37 E + 03 8.79 E + 03 5.84 E + 03 5.44 E + 03 5.16 E + 03
F 11 2.67 E + 03 2.33 E + 04 2.90 E + 03 1.01 E + 04 4.90 E + 03 2.14 E + 03 1.38 E + 03
F 12 1.13 E + 07 2.30 E + 07 3.93 E + 07 2.05 E + 10 1.54 E + 08 1.84 E + 07 2.94 E + 06
F 13 6.87 E + 05 2.12 E + 06 6.09 E + 04 2.64 E + 10 3.01 E + 05 2.77 E + 05 2.24 E + 04
F 14 9.87 E + 05 5.16 E + 06 1.08 E + 06 1.15 E + 08 8.63 E + 05 6.89 E + 05 6.46 E + 05
F 15 1.12 E + 04 9.63 E + 03 1.16 E + 04 2.24 E + 09 2.90 E + 04 7.00 E + 03 3.77 E + 03
F 16 2.88 E + 03 3.51 E + 03 2.91 E + 03 7.45 E + 03 3.11 E + 03 2.88 E + 03 2.81 E + 03
F 17 2.40 E + 03 2.71 E + 03 2.38 E + 03 4.53 E + 04 2.49 E + 03 2.42 E + 03 2.31 E + 03
F 18 2.55 E + 06 2.99 E + 07 1.88 E + 06 4.94 E + 08 4.27 E + 06 2.09 E + 06 1.18 E + 06
F 19 7.29 E + 03 2.94 E + 06 7.79 E + 03 2.66 E + 09 8.22 E + 04 1.47 E + 04 6.22 E + 03
F 20 2.65 E + 03 3.19 E + 03 2.77 E + 03 3.20 E + 03 2.79 E + 03 2.65 E + 03 2.63 E + 03
F 21 2.47 E + 03 2.54 E + 03 2.48 E + 03 2.90 E + 03 2.49 E + 03 2.47 E + 03 2.45 E + 03
F 22 2.67 E + 03 9.62 E + 03 2.90 E + 03 9.97 E + 03 3.25 E + 03 2.74 E + 03 2.65 E + 03
F 23 2.83 E + 03 2.86 E + 03 2.83 E + 03 4.13 E + 03 2.87 E + 03 2.82 E + 03 2.77 E + 03
F 24 2.96 E + 03 3.10 E + 03 2.99 E + 03 4.07 E + 03 3.01 E + 03 2.98 E + 03 2.93 E + 03
F 25 3.01 E + 03 2.96 E + 03 3.03 E + 03 6.24 E + 03 3.08 E + 03 3.01 E + 03 2.92 E + 03
F 26 5.42 E + 03 6.11 E + 03 5.22 E + 03 1.23 E + 04 6.02 E + 03 5.74 E + 03 4.28 E + 03
F 27 3.27 E + 03 3.44 E + 03 3.25 E + 03 7.32 E + 03 3.25 E + 03 3.25 E + 03 3.23 E + 03
F 28 3.42 E + 03 3.91 E + 03 3.45 E + 03 8.34 E + 03 3.68 E + 03 3.43 E + 03 3.29 E + 03
F 29 4.16 E + 03 4.66 E + 03 4.16 E + 03 2.32 E + 04 4.34 E + 03 4.16 E + 03 3.94 E + 03
F 30 1.51 E + 05 2.75 E + 04 2.26 E + 05 5.26 E + 09 9.43 E + 05 1.59 E + 05 1.54 E + 04
Table A6. Standard deviation values of each algorithm.
Function IBMO [32] IBMO [33] IBMO [34] IBMO [35] IBMO [36] BMO PEBMO
F 1 1.25 E + 09 2.07 E + 09 1.57 E + 09 5.62 E + 09 2.95 E + 09 1.51 E + 09 1.49 E + 06
F 2 3.07 E + 28 1.07 E + 44 1.22 E + 32 3.36 E + 55 3.28 E + 35 1.54 E + 30 1.83 E + 22
F 3 1.12 E + 04 2.02 E + 05 7.63 E + 03 2.95 E + 03 1.70 E + 04 8.26 E + 03 8.73 E + 03
F 4 3.32 E + 02 2.55 E + 02 1.57 E + 02 2.65 E + 03 4.91 E + 02 6.89 E + 01 2.93 E + 01
F 5 4.52 E + 01 5.18 E + 01 3.76 E + 01 2.54 E + 01 4.70 E + 01 4.22 E + 01 4.39 E + 01
F 6 1.23 E + 01 1.63 E + 01 1.11 E + 01 5.19 E + 00 1.32 E + 01 1.16 E + 01 1.95 E + 01
F 7 8.96 E + 01 9.95 E + 01 1.00 E + 02 5.19 E + 01 8.67 E + 01 9.42 E + 01 1.12 E + 02
F 8 2.54 E + 01 6.43 E + 01 2.59 E + 01 2.81 E + 01 2.80 E + 01 2.45 E + 01 2.61 E + 01
F 9 6.37 E + 02 2.86 E + 03 7.79 E + 02 1.46 E + 03 8.91 E + 02 7.12 E + 02 1.24 E + 03
F 10 5.37 E + 02 9.78 E + 02 4.21 E + 02 5.28 E + 02 3.39 E + 02 4.96 E + 02 4.33 E + 02
F 11 9.97 E + 02 1.69 E + 04 1.35 E + 03 1.64 E + 03 2.90 E + 03 6.11 E + 02 1.75 E + 02
F 12 8.51 E + 06 6.74 E + 07 7.97 E + 07 2.44 E + 09 3.37 E + 08 1.49 E + 07 3.32 E + 06
F 13 1.54 E + 06 1.07 E + 07 5.33 E + 04 4.32 E + 09 5.05 E + 05 1.04 E + 06 2.00 E + 04
F 14 9.64 E + 05 1.79 E + 07 1.15 E + 06 7.15 E + 07 7.95 E + 05 9.51 E + 05 8.20 E + 05
F 15 1.14 E + 04 7.91 E + 03 1.30 E + 04 1.27 E + 09 8.89 E + 04 6.40 E + 03 3.37 E + 03
F 16 3.41 E + 02 5.21 E + 02 3.67 E + 02 1.34 E + 03 4.01 E + 02 2.98 E + 02 3.38 E + 02
F 17 2.62 E + 02 3.08 E + 02 2.42 E + 02 3.65 E + 04 2.58 E + 02 2.95 E + 02 2.44 E + 02
F 18 2.61 E + 06 3.87 E + 07 2.47 E + 06 5.38 E + 08 7.45 E + 06 3.17 E + 06 1.50 E + 06
F 19 5.73 E + 03 1.58 E + 07 8.50 E + 03 9.63 E + 08 2.15 E + 05 2.50 E + 04 4.63 E + 03
F 20 2.06 E + 02 3.13 E + 02 2.17 E + 02 9.51 E + 01 2.91 E + 02 2.28 E + 02 2.68 E + 02
F 21 4.44 E + 01 4.79 E + 01 3.70 E + 01 6.96 E + 01 6.17 E + 01 4.86 E + 01 4.78 E + 01
F 22 3.39 E + 02 1.77 E + 03 1.25 E + 03 4.67 E + 02 1.04 E + 03 8.71 E + 02 1.34 E + 03
F 23 4.97 E + 01 6.26 E + 01 4.88 E + 01 1.38 E + 02 5.59 E + 01 4.27 E + 01 3.48 E + 01
F 24 5.25 E + 01 7.67 E + 01 3.91 E + 01 1.41 E + 02 3.47 E + 01 4.68 E + 01 3.31 E + 01
F 25 4.09 E + 01 3.61 E + 01 4.54 E + 01 4.83 E + 02 9.54 E + 01 4.88 E + 01 2.15 E + 01
F 26 1.49 E + 03 7.79 E + 02 1.53 E + 03 6.68 E + 02 1.55 E + 03 1.43 E + 03 1.70 E + 03
F 27 4.39 E + 01 2.68 E + 02 2.14 E + 01 5.29 E + 02 2.30 E + 01 2.56 E + 01 1.17 E + 01
F 28 8.71 E + 01 1.26 E + 03 7.81 E + 01 5.65 E + 02 3.21 E + 02 5.89 E + 01 2.70 E + 01
F 29 3.70 E + 02 5.81 E + 02 2.83 E + 02 1.42 E + 04 3.96 E + 02 2.88 E + 02 3.10 E + 02
F 30 1.93 E + 05 2.59 E + 04 3.55 E + 05 1.01 E + 09 1.07 E + 06 2.45 E + 05 7.79 E + 03
Table A7. p-values of the Wilcoxon rank-sum test over 30 runs.
Function PEBMO vs. IBMO [32] PEBMO vs. IBMO [33] PEBMO vs. IBMO [34] PEBMO vs. IBMO [35] PEBMO vs. IBMO [36] PEBMO vs. BMO
F 1 3.0199 E 11 3.4742 E 10 3.0199 E 11 3.0199 E 11 3.0199 E 11 3.0199 E 11
F 2 3.0199 E 11 1.6132 E 10 5.4941 E 11 3.0199 E 11 3.3384 E 11 3.6897 E 11
F 3 3.0811 E 08 3.0199 E 11 4.5726 E 09 3.3384 E 11 5.5727 E 10 3.4742 E 10
F 4 6.0658 E 11 2.6784 E 06 3.0199 E 11 3.0199 E 11 3.0199 E 11 3.6897 E 11
F 5 5.8282 E 03 0.010763 6.5486 E 04 3.0199 E 11 4.4205 E 06 2.2658 E 03
F 6 5.8282 E 03 4.2175 E 04 8.6600 E 05 3.0199 E 11 1.3703 E 03 1.4067 E 04
F 7 1.5846 E 04 4.4272 E 03 1.4932 E 04 3.0199 E 11 1.4423 E 03 2.4913 E 06
F 8 6.9724 E 03 7.0430 E 07 1.6351 E 05 3.0199 E 11 3.0103 E 07 0.025101
F 9 0.037782 5.8282 E 03 7.1988 E 05 3.0199 E 11 1.3853 E 06 4.0330 E 03
F 10 4.2259 E − 03 3.0199 E − 11 2.1265 E − 04 3.0199 E − 11 4.7445 E − 06 8.6844 E − 03
F 11 2.6695 E 09 4.9752 E 11 3.3520 E 08 3.0199 E 11 4.1997 E 10 7.5991 E 07
F 12 6.0459 E 07 4.6390 E 05 3.1967 E 09 3.0199 E 11 4.9752 E 11 4.6856 E 08
F 13 2.4913 E 06 0.039167 4.9818 E 04 3.0199 E 11 1.9527 E 03 7.1988 E 05
F 14 0.027086 5.8737 E 04 9.4683 E 03 3.0199 E 11 0.040595 5.8282 E 03
F 15 8.2919 E − 06 4.6371 E − 03 1.1058 E − 04 3.0199 E − 11 0.043584 0.032651
F 16 1.1143 E 03 3.9648 E 08 0.043584 3.0199 E 11 1.3250 E 04 0.046756
F 17 0.045146 9.0688 E 03 3.0339 E 03 3.0199 E 11 9.0688 E 03 3.5638 E 04
F 18 3.7704 E 04 1.1567 E 07 0.048413 3.0199 E 11 0.023243 2.6243 E 03
F 19 0.0405950.0292050.039167 3.0199 E 11 1.4918 E 06 0.030317
F 20 0.026077 3.0103 E 07 0.032651 3.6897 E 11 0.0215060.040595
F 21 9.5207 E 04 2.1327 E 05 4.2259 E 03 3.0199 E 11 1.4733 E 07 0.042067
F 22 5.5727 E 10 4.9752 E 11 3.0199 E 11 3.0199 E 11 8.4848E-09 3.0199 E 11
F 23 3.7704 E 04 1.7294 E 07 3.0059 E 04 3.0199 E 11 3.3242 E 06 4.3531 E 05
F 24 1.6813 E 04 7.3803 E 10 3.0103 E 07 3.0199 E 11 5.5329 E 08 8.2919 E 06
F 25 5.5727 E 10 1.7836 E 04 4.5043 E 11 3.0199 E 11 7.3891 E 11 1.0937 E 10
F 26 2.5721 E 07 1.1567 E 07 1.5846 E 04 3.0199 E 11 2.4913 E 06 2.2658 E 03
F 27 4.0840 E 05 4.3106 E 08 0.013272 3.0199 E 11 3.0939 E 06 3.7704 E 04
F 28 8.8910 E 10 4.5726 E 09 3.0199 E 11 3.0199 E 11 3.0199 E 11 7.3891 E 11
F 29 2.6077 E 02 5.9706 E 05 2.8389 E 04 3.0199 E 11 4.3531 E 05 0.013832
F 30 8.1975 E 07 5.8282 E 03 2.1959 E 07 3.0199 E 11 5.9673 E 09 2.1544 E 10
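The p-values in Table A7 come from two-sided Wilcoxon rank-sum tests over 30 independent runs per algorithm; entries below 0.05 indicate a statistically significant difference between PEBMO and the competitor. A minimal sketch of how such a p-value can be computed with the large-sample normal approximation (the function name is illustrative, not taken from the paper's code):

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation.

    Ties receive average ranks; no tie-variance correction is applied.
    """
    n1, n2 = len(a), len(b)
    combined = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    vals = [v for v, _ in combined]
    ranks = [0.0] * len(vals)
    i = 0
    while i < len(vals):                       # assign average ranks to ties
        j = i
        while j < len(vals) and vals[j] == vals[i]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + 1 + j) / 2.0       # mean of ranks i+1 .. j
        i = j
    w = sum(r for r, (_, g) in zip(ranks, combined) if g == 0)
    mu = n1 * (n1 + n2 + 1) / 2.0              # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2.0))  # two-sided tail probability
```

With 30 runs per sample the normal approximation is generally adequate; for exact small-sample p-values a library routine such as `scipy.stats.ranksums` would be used instead.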
Table A8. Optimal fitness values obtained by each algorithm.
Image Dim ABC ChOA CS FPA MA IMA BMO PEBMO
LS1448.008647.966147.704947.556848.014548.015948.015948.0159
879.281578.425377.288477.175879.827479.8275779.801179.8336
12104.7496102.9617101.5882102.2865106.2248106.2445105.2567106.2499
16126.2755123.3417120.5088121.867129.0688129.1405129.0721129.3424
LS2448.31848.26448.113747.790348.321148.325248.320448.3238
879.858779.29977.817476.872180.452280.449480.441580.4532
12104.5855102.9147100.4812100.7449106.867107.0246105.1117107.0272
16125.9141124.7758121.9272120.6541129.3555129.8391129.5242129.9012
LS3447.08947.063546.865846.716547.097447.097447.014347.0974
877.713977.347675.94774.776378.327578.270378.312478.339
12102.1062100.832897.334998.2392103.7293103.72103.4798103.6359
16123.2746122.903120.0128120.8741128.0451127.6188125.9931127.9746
LB1446.209946.142245.922845.789646.216846.217246.216846.2172
876.282475.666273.762974.269176.995577.016976.1377.0028
12101.104599.565697.671898.2685102.7173102.9983102.8093103.0667
16122.0963121.4659120.6648117.8679127.4983128.2044126.5449127.6949
LB2446.039145.964845.630145.510146.049646.049646.048346.0496
876.916276.229475.084174.625777.527477.594377.49177.596
12101.6975101.346197.415798.2788103.5888103.7153103.006103.9161
16123.1056121.6342119.1038120.5048127.4749127.1375127.182127.2849
LB3449.08149.039748.909448.63149.091849.091849.091549.0918
880.490479.821779.094377.185981.031481.033781.01681.047
12105.2572104.1442101.7131100.3685107.6818107.5712107.3766107.7648
16125.4144125.4449121.534122.3018130.4694130.7899129.6344130.7112
L1448.501248.437348.329247.962248.5148.5148.5148.51
880.067579.737478.606677.461480.670480.694580.659780.6946
12105.7694105.1358102.1743102.7151108.0408107.7356107.7697108.0403
16127.7735125.6091125.4105126.1313131.6222131.5283131.8409132.1093
L2448.422148.340847.958548.135548.432248.432248.432248.4322
880.440279.678578.000378.610180.959880.967280.925580.9789
12105.8806105.632103.7626103.8049108.3114108.3499108.3412108.5285
16127.7982126.5144124.2342125.7799132.4709132.5962131.1821132.6609
L3447.778547.721847.53147.240947.787447.787447.787447.7874
879.524479.118277.57577.008879.942179.961279.956779.9768
12105.1771103.9931103.0118103.7187107.4628107.4741107.2843107.4685
16126.706126.0217126.1506124.2492131.3893131.5671130.75131.5785
L4449.187549.158549.031548.672149.192749.192749.192749.1927
881.467681.284879.828279.31781.977481.97981.969581.9818
12107.5178107.0374105.0405105.057109.6753109.6923109.3656109.7259
16129.5379127.1497124.2868123.9073133.6504133.6855133.2264133.6386
S1449.33849.300148.918849.077549.342149.342149.342149.3421
881.401380.990379.872479.994381.883981.884281.873181.8842
12107.2292105.4543104.5251102.8939108.842109.1382108.8739109.1467
16128.408127.7292125.9984123.7531132.8784133132.465132.8416
S2439.120938.997238.596438.337439.181839.178239.148838.9676
862.011561.355259.500159.936962.910263.069362.581562.3881
1278.768176.763173.339573.24481.368981.557780.254378.3173
1689.879986.981684.780382.044296.229796.272391.872691.7885
S3449.46849.419249.135549.240349.477949.477949.476349.4779
881.585681.471780.272879.102682.308982.311382.293182.3125
12107.6504106.7669105.3293103.3981109.5594109.8243109.6236109.7859
16129.7214126.7394127.524125.7382133.478133.679132.2654133.3721
S4447.776647.740647.534747.372547.783547.783647.783547.7836
879.583579.401477.364678.05780.026280.021980.026480.0651
12106.0834105.1352103.3057103.4634107.7611107.8908107.7554107.8515
16127.2673126.7021124.6762125.0253131.3905132.014130.9904132.0132
N1447.865547.835647.661847.581947.874747.874747.874347.8747
879.79979.477877.596378.372980.298980.307880.28780.2688
12105.7812104.1394101.5682104.6089107.8763107.8225107.8454107.9298
16127.3402126.3404124.1978124.4306131.7924132.0332131.1746131.8597
N2447.849547.820147.52647.418747.85647.85647.852947.856
8105.8145104.6116103.3116102.5263107.5417107.5732107.5117107.5868
12104.872105.0269101.9968103.167106.9648107.307106.6086107.301
16127.37126.7299123.601124.8646131.0551131.3882129.7893131.2023
N3447.823347.751347.566447.590347.828847.828847.828647.8288
879.81779.443778.36778.43780.267480.278880.249380.276
12105.8145104.6116103.3116102.5263107.5417107.5732107.5117107.5868
16127.2122125.8929125.1016123.9052131.2522131.4418130.6177131.3901
N4447.763947.744147.601347.450947.775247.775247.77547.774
879.811279.209178.24177.942180.276680.281880.267780.2831
12105.4597104.7805102.5258103.3431107.6039107.622107.2717107.6099
16127.0776125.3874122.243123.5945131.158131.3819130.6546131.1936
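The fitness values in Table A8 are Kapur entropies evaluated at the threshold set each optimizer selects, where Dim is the number of thresholds. A minimal sketch of the objective, assuming a normalized 256-bin gray-level histogram (names are illustrative, not the paper's implementation):

```python
import math

def kapur_entropy(hist, thresholds):
    """Kapur's entropy objective for multilevel image thresholding.

    hist: normalized gray-level histogram (256 bins summing to 1).
    thresholds: sorted bin indices splitting the histogram into classes.
    """
    bounds = [0] + sorted(thresholds) + [len(hist)]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(hist[lo:hi])                   # class probability mass
        if w <= 0.0:
            continue                           # empty class adds nothing
        # entropy of the class, with probabilities renormalized by w
        total -= sum(p / w * math.log(p / w) for p in hist[lo:hi] if p > 0)
    return total
```

Each optimizer in Table A8 searches for the threshold vector that maximizes this quantity, so larger fitness values indicate better thresholds.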
Table A9. Standard deviation values obtained by each algorithm.
Image Dim ABC ChOA CS FPA MA IMA BMO PEBMO
LS149.5913E-048.1216E-026.6639E-024.3741E-023.3326E-022.2291E-021.2497E-031.0414E-02
86.0017E-022.0291E-026.2622E-023.6112E-025.8858E-024.3839E-023.1330E-022.5475E-02
123.1345E-026.7002E-028.1548E-024.3356E-029.4256E-025.3314E-023.5781E-023.4839E-03
162.1172E-014.3651E-027.8594E-028.5891E-026.5116E-026.0465E-025.5038E-024.6511E-03
LS249.9558E-024.1407E-034.0976E-025.6743E-036.6753E-032.0703E-032.9489E-031.0351E-03
84.7812E-025.6159E-025.7359E-027.7959E-023.9974E-021.0355E-023.0926E-021.7269E-02
124.0536E-025.7343E-025.6543E-027.7624E-028.8753E-025.5329E-025.0172E-022.7418E-02
164.5392E-025.6343E-025.8725E-026.73624E-028.3723E-028.0915E-027.5231E-022.6332E-02
LS347.5542E-036.7764E-036.7653E-026.5531E-025.6642E-013.8864E-022.7784E-022.3354E-03
84.7753E-027.6534E-028.8253E-026.8843E-026.8263E-023.8862E-042.0032E-021.6603E-02
123.7735E-025.7363E-026.8826E-027.9272E-026.6253E-026.8262E-023.7724E-022.8028E-02
164.7363E-016.9273E-026.5615E-027.8223E-028.6635E-024.7363E-025.3836E-023.9038E-02
LB141.2346E-026.0606E-025.8373E-014.7364E-034.8834E-036.7736E-046.6363E-036.4921E-03
85.7754E-026.7734E-027.3635E-028.6637E-025.7733E-024.8836E-023.6625E-021.8181E-02
122.9734E-024.8836E-021.0050E-025.8836E-027.8853E-025.7733E-024.9835E-021.7339E-02
162.8875E-014.9986E-026.9958E-026.8874E-025.7765E-025.9987E-024.9973E-024.4096E-02
LB241.7464E-028.7464E-018.7363E-031.3032E-032.8836E-022.9983E-021.9383E-011.8336E-03
85.7736E-024.7764E-025.9974E-027.9986E-025.9982E-023.6684E-023.8862E-021.6774E-02
127.6648E-025.7753E-024.9972E-026.8874E-025.8845E-024.9935E-023.0093E-022.5025E-02
161.9927E-015.9942E-026.8845E-027.9974E-024.0083E-025.9973E-024.9984E-023.9933E-02
LB341.1266E-025.0096E-014.8971E-033.8875E-023.9985E-024.0097E-025.8876E-021.2222E-02
87.8874E-027.9945E-029.8875E-025.9984E-026.7789E-028.9975E-023.9984E-021.7262E-02
126.9986E-026.9975E-029.9983E-025.9985E-026.0096E-022.8865E-024.0086E-023.3675E-02
165.8875E-027.8874E-028.0096E-027.0065E-025.9985E-024.9975E-023.7008E-022.5764E-02
L141.0515E-028.6635E-027.8836E-035.8863E-034.8836E-033.8836E-022.0036E-011.0309E-03
87.8837E-026.9973E-026.0083E-021.2394E-025.7639E-024.0082E-023.9927E-021.7553E-02
124.8837E-025.8837E-028.0083E-024.9937E-025.9937E-021.9973E-023.6645E-022.3773E-02
164.9947E-016.0937E-025.0093E-026.8836E-029.8736E-027.9937E-024.9937E-022.1343E-02
L241.2636E-021.0526E-028.8837E-016.8837E-015.9937E-016.0038E-018.9936E-012.8888E-03
85.9954E-027.7732E-024.9974E-025.0085E-026.0085E-024.0999E-052.9776E-021.4833E-02
124.0096E-026.9985E-026.9999E-025.7732E-022.5806E-023.9975E-023.1432E-022.9094E-02
162.0997E-014.8875E-024.9874E-026.9958E-026.8864E-023.8874E-025.8874E-023.1204E-02
L341.5297E-028.9951E-027.8875E-024.9986E-014.8753E-012.2986E-012.0097E-011.8831E-03
84.7764E-029.9975E-027.0085E-028.0084E-025.7753E-024.8853E-021.3764E-029.8874E-02
123.8846E-025.8474E-026.8846E-025.8456E-028.0048E-026.0048E-024.0484E-022.3032E-02
163.9634E-025.8846E-026.9937E-026.8836E-028.9937E-024.0938E-023.3037E-022.9874E-02
L449.5567E-036.8474E-027.9938E-025.0847E-024.9937E-013.9374E-012.4746E-011.0162E-03
84.9975E-025.9974E-028.8874E-027.8863E-026.9975E-027.2312E-038.9984E-037.7937E-03
124.9977E-026.9937E-028.9937E-026.0083E-024.0037E-028.0037E-024.9373E-022.2557E-02
164.0085E-028.8852E-024.0964E-027.8853E-026.9964E-024.8863E-023.0874E-022.4199E-02
S149.3306E-038.8751E-036.0085E-025.9974E-014.9986E-022.9987E-013.0096E-021.0131E-03
86.0096E-035.0086E-038.9985E-037.0096E-034.0096E-033.7799E-034.0986E-033.9973E-03
125.8863E-029.8866E-027.8864E-024.9974E-026.9984E-027.0085E-023.0654E-022.4505E-02
164.9975E-025.0974E-027.9974E-029.8864E-025.8874E-026.9985E-035.0094E-023.3372E-02
S246.9986E-027.9974E-025.8864E-026.9962E-021.2251E-028.9984E-027.6543E-025.3598E-02
84.6547E-027.9734E-025.0936E-024.9464E-028.8464E-025.0494E-024.0383E-022.0789E-02
124.0986E-025.0986E-026.8754E-025.3096E-026.0974E-025.0964E-034.2132E-023.0986E-02
165.0843E-027.0886E-027.3124E-026.0864E-025.9753E-024.0986E-023.2643E-022.7643E-02
S348.6991E-037.7647E-037.0837E-034.7474E-032.9384E-012.9336E-022.2393E-021.6162E-03
85.8764E-027.9642E-026.2345E-025.2364E-024.3425E-023.6742E-021.0974E-011.0935E-02
124.0875E-025.9865E-024.8657E-026.8464E-026.0383E-024.0293E-022.7397E-023.0854E-02
163.0384E-015.0484E-026.0383E-024.9373E-027.4041E-026.0827E-023.0283E-022.4372E-02
S449.2204E-038.3636E-036.2467E-036.3029E-023.2038E-012.4038E-011.4735E-011.4647E-03
84.8753E-027.9764E-027.8975E-025.0975E-024.9543E-021.7478E-029.0985E-022.4906E-01
127.6753E-026.9854E-028.5753E-029.6425E-026.9876E-025.0861E-034.6874E-022.8757E-02
163.9865E-025.1352E-026.3548E-026.1253E-028.0913E-027.2414E-026.0164E-022.7668E-02
N148.1573E-037.2837E-037.3928E-033.2283E-013.0393E-022.9303E-022.5039E-021.6701E-03
84.8765E-028.9753E-026.8754E-026.7542E-025.8654E-021.6864E-021.8444E-011.8973E-02
125.0764E-027.8753E-028.9753E-027.0864E-025.1531E-025.3642E-024.2742E-022.9642E-02
162.7839E-015.2093E-025.5837E-027.3836E-027.1037E-021.8363E-024.3974E-023.3247E-02
N241.3807E-021.0438E-027.8364E-035.2028E-024.9374E-022.0383E-012.5052E-032.4937E-02
84.0384E-025.0384E-026.4863E-024.8645E-024.2018E-024.3927E-024.5937E-021.3767E-02
124.2028E-027.0283E-027.3746E-028.8363E-027.5927E-025.8374E-023.2027E-021.9626E-02
165.0373E-026.4303E-026.5927E-028.0274E-028.3484E-026.9273E-035.0273E-023.4351E-02
N341.0879E-027.7373E-036.4746E-023.3648E-023.0393E-012.7393E-022.4938E-021.0453E-03
85.3038E-027.3746E-028.2937E-026.0375E-024.6937E-025.9474E-023.8354E-021.2453E-02
124.8645E-025.2937E-026.4373E-027.4528E-024.0384E-021.9745E-023.3046E-022.6977E-02
164.0384E-025.8464E-025.2947E-026.0484E-028.3746E-024.0173E-023.6037E-022.6190E-02
N441.2159E-026.0682E-025.8262E-034.8373E-021.8832E-033.5838E-023.9453E-022.3846E-02
86.9374E-027.1835E-027.7253E-026.9363E-025.6383E-024.9337E-023.6937E-021.2635E-02
123.8363E-025.6383E-027.3725E-028.0373E-027.9373E-026.8332E-033.2984E-022.8832E-02
164.7545E-025.9334E-026.3835E-027.3854E-024.7663E-024.0936E-023.6303E-022.3904E-02
Table A10. SSIM values obtained by each algorithm.
Image Dim ABC ChOA CS FPA MA IMA BMO PEBMO
LS140.72090.720410.706790.713160.71920.723650.724510.72366
80.813940.812540.81590.80320.820090.826080.818120.82698
120.860210.870640.892630.913090.895850.883530.903160.89082
160.925340.937190.90520.943320.905120.925460.916680.95507
LS240.779990.776930.759270.782850.776170.778330.780990.7759
80.857290.861130.865770.83810.856150.859050.854710.85818
120.906170.872170.907740.912280.903540.898290.900260.93189
160.926070.900260.927240.954480.934010.948140.940430.95487
LS340.708440.703380.684670.70680.708760.708750.700960.71061
80.813850.810510.808190.807460.816860.82720.817950.84149
120.900990.91520.915670.908290.914890.920390.920650.93225
160.909830.898260.923730.94450.929060.946260.941480.94616
LB140.674130.676470.634710.666040.67550.676360.676550.67646
80.807180.781720.817870.788770.801610.804120.821230.81574
120.902590.894230.872230.915540.923260.912540.903750.92138
160.935830.911940.928910.932860.945530.949410.942950.9513
LB240.713740.728970.687970.698430.711620.711470.71250.71154
80.860590.827130.856470.831070.842570.844240.848830.84406
120.920040.90990.894760.894940.917490.920290.912910.92354
160.943080.94980.94150.935110.951080.954760.946850.961
LB340.72370.717150.733430.724540.720740.720690.721310.72078
80.817630.801840.812180.821870.822090.822120.821280.82868
120.875060.8160.863230.875790.864940.880070.881030.89334
160.911780.907820.913960.904240.900560.908490.916020.92955
L140.799440.787470.793090.801020.799110.799160.799150.7993
80.884840.875130.889750.87190.887120.893440.895390.88715
120.936720.917760.910260.927890.942230.934880.942290.94893
160.929650.930760.932720.947740.950190.960240.955760.96085
L240.783680.780690.763580.761480.781930.781710.781710.78185
80.890370.872120.878990.871340.881280.8830.883870.89283
120.892470.907190.912420.910670.922420.93020.926070.93647
160.941570.922020.920020.931170.947620.950130.941790.94609
L340.775180.764380.784550.772970.774180.774120.77420.77438
80.858740.849410.858050.865470.869070.873670.863280.89335
120.914410.920220.923220.913320.917760.931020.919090.93653
160.933720.913680.920280.945850.95620.95550.930650.95704
L440.821960.819890.809290.822110.823270.823070.823120.8233
80.896170.89930.880360.887610.904490.90380.903830.90422
120.926580.920370.912770.908740.929580.931580.929020.93037
160.921180.922590.929760.933770.946420.949430.946290.94519
S140.790440.789160.782780.775010.790260.790070.790190.7902
80.864640.861990.843220.866640.875370.875620.875440.87566
120.909220.88490.915880.921380.922670.920230.925590.91965
160.939860.941550.91650.942610.936250.943490.942360.94215
S240.745870.745630.763590.736450.75550.756830.757230.75167
80.854430.822640.807460.846230.848350.860390.856810.85417
120.90960.870750.881710.84450.911620.895090.90750.91285
160.905280.888030.923280.911750.93020.938780.933320.93576
S340.831770.827010.817090.831110.831330.831190.831290.83128
80.895520.890540.874390.900220.906320.906610.904910.90588
120.918780.910210.905110.910170.931890.933980.934840.93149
160.932030.926560.922550.934130.94320.943290.944340.94727
S440.83260.832710.814370.825810.832990.833050.832930.83328
80.914450.911450.878310.904920.916520.916580.914280.91647
120.939880.934240.927560.925490.947120.948980.943690.94795
160.956060.941560.942570.948120.960090.959850.945110.94685
N140.824760.826380.820130.810480.824040.823720.823680.82384
80.907850.908360.908640.901450.909090.909660.902210.90609
120.931650.931620.942240.929440.947640.948460.947660.94679
160.942010.94680.94110.938280.961640.964780.958750.96024
N240.843030.83880.841210.830650.841450.842620.841960.84162
80.950530.934470.934410.953030.958810.955190.95740.95759
120.938230.94430.931180.93820.950490.953170.958810.9509
160.956280.949140.940130.95950.965960.972140.95840.96962
N340.857480.846390.85710.85060.85720.857020.856910.8598
80.922710.914040.905030.921030.926910.928320.928190.92725
120.950530.934470.934410.953030.955190.958810.95740.95759
160.956660.943290.947220.961940.969350.969420.966930.96697
N440.840670.841310.829680.827620.83670.836680.837530.83606
80.912930.903120.904260.914020.913990.915160.911060.9127
120.938760.934780.934710.942670.944390.946290.946180.95286
160.944710.934880.951450.955830.961630.973380.95650.96398
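The SSIM values in Table A10 measure structural similarity between each segmented image and the original. SSIM is normally averaged over sliding windows; the simplified single-window sketch below only illustrates the luminance, contrast, and structure terms, and is not the windowed implementation behind Table A10:

```python
import numpy as np

def global_ssim(x, y, peak=255.0):
    """Simplified SSIM from global image statistics (single window)."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2   # stability constants
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()                       # luminance terms
    vx, vy = x.var(), y.var()                         # contrast terms
    cov = ((x - mx) * (y - my)).mean()                # structure term
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For the windowed version used in practice, a library routine such as `skimage.metrics.structural_similarity` is the usual choice.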
Table A11. PSNR values obtained by each algorithm.
Image Dim ABC ChOA CS FPA MA IMA BMO PEBMO
LS1417.623317.531416.746217.653317.623317.654617.622717.6314
822.660622.266621.393920.713722.122622.324123.27521.2806
1224.793425.398225.157224.72124.875424.358225.497725.5312
1628.434727.265826.332728.093126.537527.118226.589328.9935
LS2417.310717.565615.585717.613817.498517.132116.750317.138
823.507723.451420.786521.32821.513721.528721.506821.5758
1226.324423.230525.591224.980324.54424.554126.478824.7528
1627.011926.285325.86726.522726.010526.794726.383828.2071
LS3416.89916.588616.835616.968216.900816.906616.403716.936
822.63822.731921.771921.766523.161322.712723.213323.6906
1223.962826.708625.854924.014626.973925.349927.564727.1342
1627.868627.662727.090727.97528.695330.918530.088628.9575
LB1416.34716.310614.549415.834316.33816.367916.362816.3735
822.338920.777723.024620.39622.086822.237422.941622.8365
1227.049225.760724.411925.486728.716727.891228.05827.5096
1627.96127.669327.214928.334730.377130.783530.37330.7994
LB2417.036517.64616.082916.336717.030817.022917.040417.0288
824.344322.284823.456221.928822.819522.841122.901522.8438
1227.161326.104324.02925.669826.232526.238526.760427.1785
1627.874628.59327.217926.785428.125928.373829.395629.9283
LB3416.52117.022916.361216.570916.564117.190416.518416.5772
822.780322.504521.590821.487123.551722.513523.5322.5442
1224.384923.229822.240322.770824.001824.292225.982826.7508
1626.9727.543726.783526.249426.502626.667325.909527.9759
L1419.081818.872416.650319.205619.097119.117719.117419.1248
823.984523.754720.795322.721425.153524.997125.187224.934
1226.755726.824424.094326.399328.47428.749227.89728.4827
1629.34427.801226.516129.627530.246630.604530.270930.9262
L2419.21418.93418.482218.293419.2819.266119.231419.2611
824.682123.974524.375223.294725.141725.100125.189825.2418
1226.999726.893525.693326.668628.64828.675328.386328.7201
1628.425229.005926.851828.310330.919731.262229.838531.3071
L3419.085518.621619.86318.566419.169119.144819.157119.1681
824.460324.338322.189122.07924.603824.89424.771324.9318
1227.685125.830126.950927.898328.469128.644228.419528.4659
1629.104428.671626.320528.724230.599430.919430.05231.1223
L4418.801118.827519.101518.438918.867218.8418.898318.8256
824.440224.540621.744723.650525.04725.094624.997325.1384
1224.828726.937525.68327.128528.502728.599827.789228.6067
1627.381228.521725.733726.474730.705730.814529.638930.9529
S1418.821518.897718.772918.769818.907818.891518.895518.9285
824.959124.639323.268623.836525.151725.153625.122525.0951
1227.080526.938423.894426.677228.263328.553828.294428.7182
1628.242528.25528.088829.720530.956631.179730.516731.1172
S2420.515920.282420.458220.260320.752520.822620.605920.5969
824.945324.309623.676625.307125.497125.931326.01926.2151
1228.079726.454126.114925.139529.076228.629729.150428.7945
1629.154828.146528.337729.863630.696131.322629.961131.1449
S3418.508217.845717.40318.380818.257718.132818.122818.241
823.031521.735121.358923.443223.256523.510322.976823.7055
1224.874627.290423.847224.5727.14628.244827.318227.9499
1626.770828.361527.230729.557929.783230.455427.618229.7469
S4418.215119.033717.709618.041318.383418.467118.393218.4992
823.981124.448723.2822.447225.03625.079224.288524.8859
1227.141526.924.304627.601628.46628.335228.13528.5991
1628.177729.409326.512429.418931.054531.196730.566231.1365
N1419.542219.223418.925918.831819.435719.382119.350719.3989
824.460423.652424.404124.851225.339425.360925.230825.5503
1226.801327.350426.469725.972228.477228.092628.345728.4018
1628.522628.443427.789128.919830.383530.695230.412130.9756
N2418.617718.409418.45317.895118.673318.681918.501218.7055
827.379827.041125.721226.931528.507328.697728.603928.6595
1226.512426.350724.495626.685128.623928.066127.797928.3443
1628.825327.949125.247427.565130.630530.857229.433930.6594
N3419.413919.344519.678718.728219.470719.403419.429619.4436
824.707724.441224.667124.594525.103125.398425.243925.139
1227.379827.041125.721226.931528.507328.697728.603928.6595
1629.55527.866826.153429.276530.667931.280529.796831.2565
N4419.183819.510719.431919.236519.323719.29319.356619.4714
824.19724.988723.014323.142725.465425.485625.284325.503
1228.14527.777727.133327.484128.796328.893128.41828.7206
1627.639928.261227.065829.375830.523531.084630.523231.3229
Table A12. FSIM values obtained by each algorithm.
Image Dim ABC ChOA CS FPA MA IMA BMO PEBMO
LS140.902240.901740.895460.899260.901960.904050.904090.9041
80.936720.93360.935020.931760.939540.942190.941720.93919
120.954260.954140.963940.973280.966790.961670.969770.96414
160.975320.97960.96740.976510.966040.968660.972760.98245
LS240.901120.89850.888960.90120.899620.900130.901210.89825
80.938860.94120.940920.929240.940880.942450.940060.94209
120.960720.944950.958620.963850.964120.960550.965920.97683
160.971390.957640.968850.982380.980780.984550.98440.98856
LS340.894010.889670.8840.894460.893870.894080.886610.89708
80.926760.926370.913530.931260.928030.933210.928220.9248
120.959190.966490.985630.962230.966690.969240.966650.97563
160.968980.964590.974360.985050.97610.985910.980520.98892
LB140.863940.863530.84010.856880.86430.864820.864730.86883
80.912750.903850.921810.900430.909850.911040.924720.92137
120.966260.967920.952960.973340.972120.971170.971080.96716
160.985030.969310.978390.984620.980550.988580.98540.98844
LB240.86420.872950.847870.856710.86380.86380.86440.86379
80.939620.921450.93610.925630.928470.931760.933090.93379
120.970830.962870.958030.95880.968130.96950.963120.9669
160.978170.98540.9810.9750.984440.988950.969750.98943
LB340.907160.906970.903150.908580.906190.906240.906470.90634
80.940510.934470.930270.94230.942440.942310.942190.94239
120.95830.93750.949540.957930.961120.955370.963460.96355
160.97240.970790.974890.969880.969310.971980.973280.98367
L140.96270.957190.948780.962880.96280.962790.962820.96282
80.985940.983020.973510.979480.968360.986780.986810.98875
120.994310.991010.986940.991510.996210.994940.995690.99622
160.993060.992560.990760.995360.996810.997510.99720.99871
L240.957490.955960.948370.943850.956690.956590.956640.95664
80.985840.982810.984330.981870.986040.986480.985450.98578
120.987250.988020.988490.989720.992870.993960.994720.99477
160.992150.991980.989920.993450.996550.996670.995230.99819
L340.958230.952370.963120.956040.958260.95820.958230.95832
80.981850.980150.983770.979170.987410.986430.983290.98641
120.991420.992830.993030.992120.994040.995990.996340.9942
160.994840.991290.991370.995470.997760.998740.994940.99897
L440.963170.962460.959860.962720.954020.963940.964070.96389
80.985560.9870.978430.981890.989290.989070.989170.98941
120.988610.990260.987340.986880.993910.994540.993260.99366
160.987870.991310.985130.989740.995990.996560.995540.99879
S140.951150.950880.945890.941190.951110.952120.951150.95106
80.978160.977310.967180.979040.982650.982680.982570.98274
120.988580.982620.985260.991140.9930.994720.993410.99458
160.994090.993140.989360.994810.994940.998840.995810.9978
S240.93950.93850.941410.934360.944670.945550.944270.94312
80.980670.971740.964680.977050.978420.980270.981990.98426
120.989580.980340.981610.969610.990830.989190.990220.9914
160.989410.987250.985770.990980.9940.99460.992150.99348
S340.945150.940720.931780.944040.944910.944930.945870.945
80.975190.969430.957080.976720.982960.983320.981930.98288
120.984260.983240.973350.980330.991520.990320.991770.98958
160.987590.987460.982050.988420.992290.992370.992120.99383
S440.962780.966390.949110.959280.963790.964080.963730.96412
80.989090.988570.977220.983960.990940.995960.990530.99111
120.994290.992890.987420.989540.995490.995810.99520.99578
160.994540.99340.992440.99440.996890.998190.996230.9974
N140.964120.96460.960370.955010.963720.963660.963450.96371
80.989290.988110.987970.987820.989940.990150.987970.98866
120.992940.992820.993270.990050.996070.996150.996140.99579
160.992360.993380.994360.99230.997080.997360.996580.99707
N240.961610.959990.9610.954890.961250.961130.960520.96124
80.99360.99140.98970.994180.996050.995470.995840.99589
120.990910.992160.985440.989510.993930.994880.994470.99547
160.994680.99160.986010.993950.996470.997550.994770.99723
N340.971520.966570.971370.968180.971440.97130.971260.97139
80.988810.986120.984940.988130.99070.991220.990460.99037
120.99360.99140.98970.994180.996050.995470.995840.99589
160.995280.990810.991050.995010.997250.998290.996780.99691
N440.971450.972790.967580.965680.970260.970310.970680.98052
80.990510.987640.986090.99010.990420.991150.990690.99311
120.994860.993020.993230.994180.996020.996240.986810.98617
160.993160.993210.994960.996020.997660.9980.978160.99788
Table A13. Segmentation time of each algorithm.
Image Dim ABC ChOA CS FPA MA IMA BMO PEBMO
LS1437.2292817.69524919.1384617.74074755.37653537.16668234.23701534.936568
837.89637720.09182819.80087718.33000556.80297052.27704535.09092835.700193
1238.77135720.08165519.76727418.78370057.79368753.02060336.08669636.662281
1639.28793821.16763820.02461819.36588659.27863253.22083637.40372037.588600
LS2424.47291611.56804112.35166811.66444036.31371833.27522021.85561622.503947
825.43441812.83129612.70470512.11407938.18364934.81785422.96017823.464068
1226.21516714.07215812.96251112.60581940.18342735.80198623.90269624.607119
1626.99318715.1297213.22807613.16961041.00411037.27539824.75028625.635050
LS3443.73254520.76119322.89906823.09163566.73456668.18473540.63967037.083445
846.3611222.78285722.349123.20705867.86597871.92215043.10273437.073659
1246.3429623.85660223.6997824.24026569.50248375.50328144.03094337.744468
1646.73135825.16671723.9588224.37614771.30743387.02581745.53406037.811615
LB1444.43429321.44703825.79654222.61686466.80475360.20372240.83808537.412987
844.60449722.56673327.12567223.79553867.81136861.62758342.63947438.952232
1247.06229624.28237926.52147624.38712769.78067562.86394543.84985640.571279
1645.91637825.71748225.96395124.92485572.83644464.85572444.29028641.891504
LB2438.67442218.61266919.85739518.61567357.03046152.89508736.15071636.255754
839.2666119.72036419.79595919.04710458.61978553.40959736.61078337.624395
1240.00293420.96417720.11767619.55733460.17919254.74704837.51839437.946456
1641.20766821.84300320.29545920.08654262.12360657.44306238.54920138.342609
LB3436.15441617.43784119.21201417.37758953.62348949.44240734.73302234.122744
837.03567518.78732119.22096619.61722156.14680750.99185435.22451037.488786
1237.5530119.7179719.51740220.14997256.62902951.96176736.18074038.641067
1638.42447420.77278719.75081020.79451958.21468053.06428037.12287039.799497
L1441.05162519.60123919.88702618.59991156.09804162.25892638.97385137.723193
838.86079820.19773621.37291919.73348158.14538762.89636940.35432340.440959
1241.09141522.71424522.09205119.94929259.76291263.87507841.09160240.153868
1642.30090322.74579122.79990920.92492463.85758165.78182741.16339341.477621
L2440.40046519.26707821.77902720.13181956.72896163.02944540.70143539.047658
839.62005220.22991621.26326820.38786458.99582964.58718741.69636441.619870
1242.13651122.24303021.03917320.58786462.83546665.73798642.37693642.198586
1643.45319922.71203322.65182121.45648263.48773673.03681541.93868142.578852
L3441.65137320.56247320.24557119.17926160.48876263.11918138.64166938.350863
841.77500920.87150821.18560719.96335060.54804665.17735841.19616340.692544
1241.63589722.49322120.94946620.47082861.02484865.46506342.77232341.217254
1642.89111922.30305921.36823121.07780762.60492467.67114442.07642741.969717
L4441.66403517.97732319.44881418.68950459.34805960.48355738.59586638.164451
839.89877019.25158219.91004919.29781257.29891062.56975538.97919840.953199
1240.41195720.74457820.53886219.81036360.96526763.28202839.36784541.859678
1641.42759321.83033520.55576420.29084860.62608764.62531440.89048439.671965
S1441.37639119.66646820.82011719.49307555.03053656.17667938.25412537.845036
841.77697421.23147621.53346820.50974556.22122456.88992938.55251539.669227
1242.39779622.34848221.91891920.86906358.46270160.23291039.51720739.984327
1644.26157523.80929722.06176421.47551158.86287761.26111840.84130140.398811
S2438.13575118.18834519.46642318.67154651.98203153.56431435.18918735.627038
839.81359519.57410119.90744518.92724053.08071855.30499736.16055138.292573
1240.06086820.76827920.04721920.30474754.68072055.80237236.84509837.520523
1641.23516522.52615321.42336919.94323355.96860556.72419537.78019438.476276
S3440.56939019.87545420.60069219.22814855.62298254.83069136.85212437.245393
841.97875820.71191521.57226520.44673956.01012156.02852037.98387137.818678
1241.66842222.04246922.09215120.64507456.58989356.67856338.86615338.710808
1643.17600323.29421522.11831221.10521558.33485158.18991439.94169840.404805
S4430.81150114.91364915.52755214.51095540.97129249.01294829.14394330.413819
831.98614416.06069016.76542115.29370842.55851249.98554830.13222030.983137
1234.07356017.45948916.35563216.09927343.98788751.37794929.35037229.838231
1634.21568418.65422317.34409316.62019345.72329953.43051230.82677430.601450
N1448.08068922.92233120.64043619.48680860.06358077.26250540.26147444.658027
850.05716223.68571720.82578919.96793060.13869265.40676941.57759945.393651
1250.87819725.55834521.54773820.51340160.75176665.99840441.08350746.317470
1652.32976226.99395121.76702420.92033657.90114567.89640243.23683347.695611
N2449.29998521.33616119.97525318.77084757.88111262.01722239.51940742.890841
849.63148924.30566720.50461219.31414362.92868663.50044740.61469543.604405
1249.41968926.08244120.60929320.03828962.33093964.72278741.16616644.732965
1651.12917728.06836521.13869320.77246862.39975166.09933841.68269446.304206
N3449.15979323.00811820.18139618.69967257.42658462.67778537.62288342.788288
848.90568724.72227220.61459619.65125658.58416564.19930740.48015643.730899
1251.45264124.92542821.32463519.91111761.13051664.89745941.62543544.652749
1653.60956427.24017821.53561920.89827160.45222366.19693741.55914745.723206
N4438.72463918.77934219.87551419.18845656.63200062.16429239.80552243.034793
842.41403220.07817320.15949119.49880759.72009963.05511840.66150143.496024
1241.78596121.71379720.85467820.37097060.21889764.53606841.86213944.750324
1644.30930222.73338521.24848120.53358461.05566265.62081241.17476445.087065
Table A14. Comparison between PEBMO algorithm and deep learning.
Image Dim Fitness Value Standard Deviation SSIM PSNR FSIM Computation Time
PEBMO Deep PEBMO Deep PEBMO Deep PEBMO Deep PEBMO Deep PEBMO Deep
LS1448.01590.10751.0414E-020.03250.723660.990817.631457.81830.90410.268834.93656822.5598
879.83360.12042.5475E-020.04850.826980.990121.280657.32320.939190.283835.70019332.9795
12106.24990.13093.4839E-030.04360.890820.989825.531256.96080.964140.294536.66228157.7556
16129.34240.13194.6511E-030.05780.955070.989728.993556.92860.982450.296637.588600165.7821
LS2448.32380.10711.0351E-030.01850.77590.991417.13857.83460.898250.267822.50394722.3029
880.45320.11011.7269E-020.06540.858180.991321.575857.71310.942090.270823.46406832.9847
12107.02720.12232.7418E-020.04350.931890.990824.752857.25550.976830.284524.60711956.5832
16129.90120.12092.6332E-020.03560.954870.990828.207157.30720.988560.283525.635050165.9812
LS3447.09740.07872.3354E-030.03890.710610.995616.93659.17080.897080.221837.08344523.3651
878.3390.13061.6603E-020.02560.841490.987923.690656.97230.92480.296937.07365932.9382
12103.63590.14402.8028E-020.03640.932250.987527.134256.54670.975630.311437.74446858.5183
16127.97460.14383.9038E-020.04780.946160.987528.957556.55460.988920.310637.811615165.8707
LB1446.21720.07656.4921E-030.05610.676460.995516.373559.29620.868830.218137.41298723.2946
877.00280.12131.8181E-020.02890.815740.989122.836557.29370.921370.284238.95223233.1972
12103.06670.13381.7339E-020.03280.921380.988427.509656.86730.967160.298040.57127957.4462
16127.69490.13994.4096E-020.02980.95130.988330.799456.67350.988440.304741.891504156.0487
LB2446.04960.06301.8336E-030.04730.711540.996717.028860.13430.863790.198336.25575422.2942
877.5960.11821.6774E-020.05340.844060.989522.843857.40540.933790.280137.62439533.1487
12103.91610.12622.5025E-020.05980.923540.989327.178557.12120.96690.288837.94645656.5224
16127.28490.12973.9933E-020.03260.9610.989229.928357.00290.989430.292538.342609156.5286
LB3449.09180.10971.2222E-020.04980.720780.989916.577257.72820.906340.273134.12274423.3701
881.0470.11991.7262E-020.05640.828680.989622.544257.34400.942390.284337.48878633.067
12107.76480.13053.3675E-020.04250.893340.989226.750856.97570.963550.295638.64106755.7509
16130.71120.13142.5764E-020.03650.929550.989227.975956.94550.983670.297039.799497166.0908
L1448.510.10041.0309E-030.04170.79930.994319.124858.11480.962820.257137.72319322.3921
880.69460.14521.7553E-020.05130.887150.986924.93456.51100.988750.317240.44095933.1111
12108.04030.16752.3773E-020.0560.948930.984828.482755.88940.996220.342640.15386857.3989
16132.10930.16422.1343E-020.04350.960850.986130.926255.97730.998710.335941.47762115.2046
L2448.43220.09432.8888E-030.04290.781850.994419.261158.38350.956640.246139.04765823.3232
880.97890.13621.4833E-020.04760.892830.989625.241856.79000.985780.303041.61987033.2126
12108.52850.13222.9094E-020.04360.936470.991428.720156.91860.994770.291342.19858658.4160
16132.66090.18353.1204E-020.04520.946090.982731.307155.49370.998190.361542.578852165.8598
L3447.78740.08561.8831E-030.04710.774380.995419.168158.80590.958320.233938.35086323.2801
879.97680.13599.8874E-020.03860.893350.989124.931856.79800.986410.303340.69254433.1594
12107.46850.15022.3032E-020.05210.936530.987028.465956.36320.99420.320441.21725457.3497
16131.57850.16902.9874E-020.02980.957040.985231.122355.85270.998970.341641.969717166.1911
L4449.19270.09801.0162E-030.03640.82330.994018.825658.21940.963890.254738.16445123.2850
881.98180.14867.7937E-030.06120.904220.987925.138456.41150.989410.312240.95319933.2179
12109.72590.17582.2557E-020.04020.930370.984728.606755.68160.993660.351241.85967856.3911
16133.63860.19252.4199E-020.05280.945190.981930.952955.28540.998790.370939.671965159.9169
S1449.34210.14161.0131E-030.04190.79020.986918.928556.61880.951060.315837.84503622.2733
881.88420.13543.9973E-030.03620.875660.989225.095156.81460.982740.302239.66922733.0524
12109.14670.17422.4505E-020.04080.919650.984428.718255.72130.994580.349439.98432755.3965
16132.84160.17273.3372E-020.05070.942150.984531.117255.75710.99780.348640.398811166.1598
S2438.96760.08645.3598E-020.03170.751670.995420.596958.76800.943120.234435.62703823.2570
862.38810.16552.0789E-020.04330.854170.984026.215155.94290.984260.342438.29257332.9243
1278.31730.18303.0986E-020.03660.912850.982628.794555.50540.99140.361237.52052358.4439
1691.78850.18762.7643E-020.02710.935760.982531.144955.39950.993480.364838.476276156.9967
S3449.47790.09111.6162E-030.03650.831280.994518.24158.53640.9450.234837.24539322.2431
882.31250.20651.0935E-020.04510.905880.979323.705554.98110.982880.357937.81867831.0139
12109.78590.19043.0854E-020.04320.931490.982027.949955.33510.989580.359438.71080857.7750
16133.37210.19262.4372E-020.05130.947270.981629.746955.28370.993830.358140.404805155.7512
S4447.78360.10261.4647E-030.06140.833280.992718.499258.02120.964120.255930.41381923.3486
880.06510.10471.4906E-010.03210.916470.994124.885957.93080.991110.258330.98313732.983
12107.85150.18142.8757E-020.04520.947950.983228.599155.54510.995780.359829.83823158.7869
16132.01320.18502.7668E-020.03550.946850.982831.136555.45870.99740.362930.601450155.8123
N1447.87470.06711.6701E-030.04720.823840.997319.398959.86430.963710.204344.65802722.3048
880.26880.12591.8973E-020.05330.906090.990625.550357.12940.988660.289445.39365132.9232
12107.92980.15672.9642E-020.04660.946790.986728.401856.18030.995790.327146.31747058.4291
16131.85970.17863.3247E-020.04250.960240.983930.975655.61240.997070.353947.69561116.1105
N2447.8560.09812.4937E-020.05120.841620.994418.705558.21330.961240.253842.89084122.3614
8107.58680.12001.3767E-020.04360.957590.992528.659557.33980.995890.280743.60440532.9322
12107.3010.16641.9626E-020.03770.95090.986228.344355.91970.995470.336944.73296557.4437
16131.20230.16793.4351-020.02860.969620.985630.659455.87930.997230.341446.304206155.9574
N3447.82880.07021.0453E-030.03880.85980.997019.443659.66490.971390.211042.78828823.3718
880.2760.11291.2453E-020.04530.927250.993025.13957.60300.990370.268843.73089933.0735
12107.58680.16292.6977E-020.01980.957590.985828.659556.01200.995890.336644.65274956.5866
16131.39010.17292.6190E-020.04210.966970.984631.256555.75380.996910.348345.72320616.0879
N4447.7740.11582.3846E-020.05330.836060.990919.471457.49230.980520.276943.03479322.3124
880.28310.14561.2635E-020.04370.91270.987025.50356.49890.993110.316443.49602433.0007
12107.60990.14972.8832E-020.04530.952860.988128.720656.37780.986170.316344.75032457.4719
16131.19360.16182.3904E-020.04760.963980.986431.322956.04010.997880.332845.087065156.0582

References

  1. Wang, W.; Ye, C.; Pan, Z.; Tian, J. Multilevel threshold segmentation of rice plant images utilizing tuna swarm optimization algorithm incorporating quadratic interpolation and elite swarm genetic operators. Expert Syst. Appl. 2025, 263, 125673. [Google Scholar] [CrossRef]
  2. Abualigah, L.; Al-Okbi, N.K.; Awwad, E.M.; Sharaf, M.; Sh, M. Correction: Boosted Aquila Arithmetic Optimization Algorithm for multi-level thresholding image segmentation. Evol. Syst. 2024, 15, 1427. [Google Scholar] [CrossRef]
  3. Abdel-Basset, M.; Mohamed, R.; Hezam, I.M.; Sallam, K.M.; Hameed, I.A. An enhanced spider wasp optimization algorithm for multilevel thresholding-based medical image segmentation. Evol. Syst. 2024, 15, 2249–2271. [Google Scholar] [CrossRef]
  4. Sowmiya, R.; Sathya, P.D. Hybrid leader corona virus herd optimizer with multilevel thresholding techniques for foreground and background image segmentation. Comput. Electr. Eng. 2024, 120, 109569. [Google Scholar] [CrossRef]
  5. Houssein, E.H.; Emam, M.M.; Singh, N.; Samee, N.M.A.; Alabdulhafith, M.; Çelik, E. An improved honey badger algorithm for global optimization and multilevel thresholding segmentation: Real case with brain tumor images. Clust. Comput. 2024, 27, 14315–14364. [Google Scholar] [CrossRef]
  6. Lee, H.S.; Cho, S.I. Spatial color histogram-based image segmentation using texture-aware region merging. Multimed. Tools Appl. 2022, 81, 24573–24600. [Google Scholar] [CrossRef]
  7. Kavitha, K.J.; Shan, P.B. Medical image watermarking based on novel encoding for EHR and fusion based morphological watershed segmentation algorithm for medical images. Multimed. Tools Appl. 2024, 83, 25163–25190. [Google Scholar] [CrossRef]
  8. Kwenda, C.; Gwetu, M.V.; Dombeu, J.V.F. Hybridizing Deep Neural Networks and Machine Learning Models for Aerial Satellite Forest Image Segmentation. J. Imaging 2024, 10, 132. [Google Scholar] [CrossRef]
  9. Capua, F.R.; Schandin, J.; Cristóforis, P.D. Training Point-Based Deep Learning Networks for Forest Segmentation with Synthetic Data. In Proceedings of the International Conference on Pattern Recognition, Kolkata, India, 1–5 December 2024; Springer Nature: Cham, Switzerland, 2024; Volume 4, pp. 64–80. [Google Scholar] [CrossRef]
  10. Shi, L.; Wang, G.; Mo, L.; Yi, X.; Wu, X.; Wu, P. Automatic Segmentation of Standing Trees from Forest Images Based on Deep Learning. Sensors 2022, 22, 6663. [Google Scholar] [CrossRef] [PubMed]
  11. Wang, M.; Wang, W.; Feng, S.; Li, L. Adaptive Multi-class Segmentation Model of Aggregate Image Based on Improved Sparrow Search Algorithm. KSII Trans. Internet Inf. Syst. 2023, 17, 391–411. [Google Scholar] [CrossRef]
  12. Wu, Y.; Li, Q. The Algorithm of Watershed Color Image Segmentation Based on Morphological Gradient. Sensors 2022, 22, 8202. [Google Scholar] [CrossRef] [PubMed]
  13. Bhandari, A.K. A novel beta differential evolution algorithm-based fast multilevel thresholding for color image segmentation. Neural Comput. Appl. 2020, 32, 4583–4613. [Google Scholar] [CrossRef]
  14. Jia, H.; Su, Y.; Rao, H.; Liang, M.; Abualigah, L.; Liu, C.; Chen, X. Improved artificial rabbits algorithm for global optimization and multi-level thresholding color image segmentation. Artif. Intell. Rev. 2025, 58, 55. [Google Scholar] [CrossRef]
  15. Olmez, Y.; Koca, G.O.; Tanyildizi, E.; Sengür, A. Multilevel image thresholding based on Renyi’s entropy and golden sinus algorithm II. Neural Comput. Appl. 2023, 35, 17837–17850. [Google Scholar] [CrossRef]
  16. Wu, B.; Zhu, L.; Li, X. Giza pyramids construction algorithm with gradient contour approach for multilevel thresholding color image segmentation. Appl. Intell. 2023, 53, 21248–21267. [Google Scholar] [CrossRef]
  17. Wu, B.; Zhu, L.; Cao, J.; Wang, J. A Hybrid Preaching Optimization Algorithm Based on Kapur Entropy for Multilevel Thresholding Color Image Segmentation. Entropy 2021, 23, 1599. [Google Scholar] [CrossRef]
  18. Zhao, S.; Wang, P.; Heidari, A.A.; Chen, H.; Turabieh, H.; Mafarja, M.M.; Li, C. Multilevel threshold image segmentation with diffusion association slime mould algorithm and Renyi’s entropy for chronic obstructive pulmonary disease. Comput. Biol. Med. 2021, 134, 104427. [Google Scholar] [CrossRef] [PubMed]
  19. Zhao, D.; Liu, L.; Yu, F.; Heidari, A.A.; Wang, M.; Oliva, D.; Muhammad, K.; Chen, H. Ant colony optimization with horizontal and vertical crossover search: Fundamental visions for multilevel thresholding image segmentation. Expert Syst. Appl. 2021, 167, 114122. [Google Scholar] [CrossRef]
  20. Zhang, J.; Zhang, T.; Wang, D.; Zhang, G.; Kong, M.; Li, Z.; Chen, R.; Xu, Y. A complex-valued encoding golden jackal optimization for multilevel thresholding image segmentation. Appl. Soft Comput. 2024, 165, 112108. [Google Scholar] [CrossRef]
  21. Chakraborty, F.; Roy, P.K. An efficient multilevel thresholding image segmentation through improved elephant herding optimization. Evol. Intell. 2025, 18, 17. [Google Scholar] [CrossRef]
  22. Hu, K.; Mo, Y. An efficient multi-threshold image segmentation method for COVID-19 images using reinforcement learning-based enhanced sand cat algorithm. J. Supercomput. 2025, 81, 113. [Google Scholar] [CrossRef]
  23. Xia, H.; Ke, Y.; Liao, R.; Sun, Y. Fractional order calculus enhanced dung beetle optimizer for function global optimization and multilevel threshold medical image segmentation. J. Supercomput. 2025, 81, 90. [Google Scholar] [CrossRef]
  24. Nama, S. A novel improved SMA with quasi reflection operator: Performance analysis, application to the image segmentation problem of COVID-19 chest X-ray images. Appl. Soft Comput. 2022, 118, 108483. [Google Scholar] [CrossRef]
  25. Wang, J.; Bei, J.; Song, H.; Zhang, H.; Zhang, P. A whale optimization algorithm with combined mutation and removing similarity for global optimization and multilevel thresholding image segmentation. Appl. Soft Comput. 2023, 137, 110130. [Google Scholar] [CrossRef]
  26. Houssein, E.H.; Marwa, M.; Emam, M.M.; Ali, A.A. An efficient multilevel thresholding segmentation method for thermography breast cancer imaging based on improved chimp optimization algorithm. Expert Syst. Appl. 2021, 185, 115651. [Google Scholar] [CrossRef]
  27. Abualigah, L.; Al-Okbi, N.K.; Elaziz, M.A.; Houssein, E.H. Boosting Marine Predators Algorithm by Salp Swarm Algorithm for Multilevel Thresholding Image Segmentation. Multimed. Tools Appl. 2021, 81, 16707–16742. [Google Scholar] [CrossRef] [PubMed]
  28. Wu, B.; Zhu, L.; Wang, J. Forest Canopy Image Segmentation Based on Differential Evolution Whale Optimization Algorithm. J. Northwest For. Univ. 2022, 37, 67–73. [Google Scholar] [CrossRef]
  29. Tong, K.; Zhu, L.; Wang, J.; Fu, X. Forest Canopy Image Segmentation Based on Improved Northern Goshawk Algorithm by Using Multi-Strategy Fusion. For. Eng. 2024, 40, 124–133. [Google Scholar] [CrossRef]
  30. Wang, J.; Zhu, L.; Wu, B.; Ryspayev, A. Forestry Canopy Image Segmentation Based on Improved Tuna Swarm Optimization. Forests 2022, 13, 1746. [Google Scholar] [CrossRef]
  31. Sulaiman, M.H.; Mustaffa, Z.; Saari, M.M.; Daniyal, H.; Daud, M.R.; Razali, S.; Mohamed, A.I. Barnacles Mating Optimizer: A Bio-Inspired Algorithm for Solving Optimization Problems. In Proceedings of the 19th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), Busan, Republic of Korea, 27–29 June 2018; pp. 265–270. [Google Scholar] [CrossRef]
  32. Mohammadnejad, Z.; Al-Khafaji, H.M.R.; Mohammed, A.S.; Alatba, S.R. Energy optimization for optimal location in 5G networks using improved Barnacles Mating Optimizer. Phys. Commun. 2023, 59, 102068. [Google Scholar] [CrossRef]
  33. Yang, Z.; Liu, Q.; Zhang, L.; Dai, J.; Razmjooy, N. Model parameter estimation of the PEMFCs using improved Barnacles Mating Optimization algorithm. Energy 2020, 212, 118738. [Google Scholar] [CrossRef]
  34. Li, H.; Zheng, G.; Sun, K.; Jiang, Z.; Li, Y.; Jia, H. A Logistic Chaotic Barnacles Mating Optimizer with Masi Entropy for Color Image Multilevel Thresholding Segmentation. IEEE Access 2020, 8, 213130–213153. [Google Scholar] [CrossRef]
  35. Liu, B.; Wang, H.; Tseng, M.L.; Li, Z. State of charge estimation for lithium-ion batteries based on improved barnacle mating optimizer and support vector machine. J. Energy Storage 2022, 55, 105830. [Google Scholar] [CrossRef]
  36. Zhao, J.; Zhang, T.; Ma, S.; Wang, M. Improved barnacles mating optimizer to solve high-dimensional continuous optimization problems. CAAI Trans. Intell. Syst. 2023, 18, 823–832. [Google Scholar] [CrossRef]
  37. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  38. Khishe, M.; Mosavi, M.R. Chimp optimization algorithm. Expert Syst. Appl. 2020, 149, 113338. [Google Scholar] [CrossRef]
  39. Yang, X.S.; Deb, S. Cuckoo search via levy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar] [CrossRef]
  40. Yang, X.S. Flower pollination algorithm for global optimization. In Proceedings of the International Conference on Unconventional Computing and Natural Computation (UCNC), Berlin/Heidelberg, Germany, 3–7 September 2012; Volume 7445, pp. 240–249. [Google Scholar] [CrossRef]
  41. Zervoudakis, K.; Tsafarakis, S. A mayfly optimization algorithm. Comput. Ind. Eng. 2020, 145, 106559. [Google Scholar] [CrossRef]
  42. Zhao, X.; Zhu, L.; Wu, B. An improved mayfly algorithm based on Kapur entropy for multilevel thresholding color image segmentation. J. Intell. Fuzzy Syst. 2023, 44, 365–380. [Google Scholar] [CrossRef]
  43. Naruei, I.; Keynia, F. A new optimization method based on COOT bird natural life model. Expert Syst. Appl. 2021, 183, 115352. [Google Scholar] [CrossRef]
  44. He, L.; Huang, S. An efficient krill herd algorithm for color image multilevel thresholding segmentation problem. Appl. Soft Comput. 2020, 89, 106063. [Google Scholar] [CrossRef]
  45. Zhao, X.; Zhu, L.; Wang, J.; Mohamed, A.M.E. Enhancement Method Based on Multi-Strategy Improved Pelican Optimization Algorithm and Application to Low-Illumination Forest Canopy Images. Forests 2024, 15, 1783. [Google Scholar] [CrossRef]
  46. Abualigah, L.; Almotairi, K.H.; Elaziz, M.A. Multilevel thresholding image segmentation using meta-heuristic optimization algorithms: Comparative analysis, open challenges and new trends. Appl. Intell. 2023, 53, 11654–11704. [Google Scholar] [CrossRef]
  47. Qiao, L.; Liu, K.; Xue, Y.; Tang, W.; Salehnia, T. A multi-level thresholding image segmentation method using hybrid Arithmetic Optimization and Harris Hawks Optimizer algorithms. Expert Syst. Appl. 2024, 241, 122316. [Google Scholar] [CrossRef]
  48. Yan, Z.; Zhang, J.; Yang, Z.; Tang, J. Kapur’s Entropy for Underwater Multilevel Thresholding Image Segmentation Based on Whale Optimization Algorithm. IEEE Access 2021, 9, 41294–41319. [Google Scholar] [CrossRef]
  49. Das, G.; Panda, R.; Samantaray, L.; Agrawal, S. A Novel Segmentation Error Minimization-Based Method for Multilevel Optimal Threshold Selection Using Opposition Equilibrium Optimizer. Int. J. Image Graph. 2023, 23, 2350021:1–2350021:22. [Google Scholar] [CrossRef]
Figure 1. p l n e w optimization mechanism.
Figure 2. Change curve of R cos 2 R 1 π and p _ n e w , (a) is the change curve of R cos 2 R 1 π , (b) is the change curve of p _ n e w .
Figure 3. Common 10 chaotic mappings.
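Among the chaotic mappings shown in Figure 3, the Chebyshev map is the one PEBMO applies to perturb the parent barnacle particles. A minimal sketch of its iteration x_{k+1} = cos(a · arccos(x_k)) follows; the order parameter a = 4 and the sequence length are illustrative assumptions, not the paper's exact settings:

```python
import math

def chebyshev_map(x0: float, a: float = 4.0, n: int = 10) -> list[float]:
    """Iterate the Chebyshev chaotic map x_{k+1} = cos(a * arccos(x_k)) on [-1, 1]."""
    seq, x = [], x0
    for _ in range(n):
        x = math.cos(a * math.acos(x))  # each iterate stays inside [-1, 1]
        seq.append(x)
    return seq
```

The resulting sequence is bounded but non-repeating for typical seeds, which is what makes it useful as a perturbation source to keep the population from stagnating.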
Figure 4. Flowchart of PEBMO algorithm.
Figure 5. Multi-level threshold segmentation framework based on PEBMO algorithm.
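In the framework of Figure 5, each candidate threshold vector is scored by Kapur's entropy: the normalized gray-level histogram is cut into classes at the thresholds, the entropy of each class is computed, and the optimizer searches for the threshold vector that maximizes the sum. A minimal NumPy sketch of this fitness function (illustrative, not the authors' code):

```python
import numpy as np

def kapur_entropy(hist: np.ndarray, thresholds: list[int]) -> float:
    """Kapur's entropy of a normalized 256-bin histogram cut at the given thresholds."""
    bounds = [0] + sorted(thresholds) + [256]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        p = hist[lo:hi]
        w = p.sum()                # probability mass of this class
        if w <= 0:
            continue               # empty class contributes nothing
        q = p[p > 0] / w           # within-class distribution
        total -= float((q * np.log(q)).sum())
    return total
```

The PEBMO search then amounts to maximizing `kapur_entropy(hist, t)` over integer vectors `t` of length equal to the threshold number (4, 8, 12, or 16 in the experiments).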
Figure 6. Histogram of the distribution of optimal values in the parameter sensitivity test. (a) Histogram of the distribution of the optimal average fitness values and standard deviation values for different values of N; (b) is a histogram of the distribution of optimal average fitness values and standard deviation values for different values of T.
Figure 7. Histogram of the distribution of the optimal average fitness values obtained by each BMO variant algorithm on the cec2017 test function. Where yellow is literature [32], green is literature [33], light blue is literature [34], blue is literature [35], purple is literature [36], black is BMO, and red is PEBMO. (a) is a histogram of the single-peak function; (b) is a histogram of the multi-peak function; (c) is a histogram of the hybrid function; (d) is a histogram of the composite function.
Figure 8. Histogram of the distribution of the optimal standard deviation values obtained by each BMO variant algorithm on the cec2017 test function. Where yellow is literature [32], green is literature [33], light blue is literature [34], blue is literature [35], purple is literature [36], black is BMO, and red is PEBMO. (a) is a histogram of the single-peak function; (b) is a histogram of the multi-peak function; (c) is a histogram of the hybrid function; (d) is a histogram of the composite function.
Figure 9. Wilcoxon rank-sum test values for each algorithm comparison. PEBMO vs. IBMO [32] is yellow, PEBMO vs. IBMO [33] is green, PEBMO vs. IBMO [34] is blue, PEBMO vs. IBMO [35] is purple, PEBMO vs. IBMO [36] is black, and PEBMO vs. BMO is red.
Figure 10. Convergence curves of the BMO algorithm and the BMO variant algorithms (dark blue: ABMO [32]; black: IBMO [33]; yellow: IBMO [34]; green: IBMO [35]; light blue: IBMO [36]; violet: BMO; red: PEBMO). The target values are 100 for F 1 , 200 for F 2 , 600 for F 6 , 1000 for F 10 , 1200 for F 12 , 1800 for F 18 , 2600 for F 26 , and 3000 for F 30 .
Figure 11. Non-uniform canopy images processed by reference [45]. (LS1, LS2 and LS3 are small-area exposure images; LB1, LB2, and LB3 are large-area exposure images).
Figure 12. Low-light canopy images processed by reference [45]. (L1 and L4 are local high-light canopy images; L2 is a canopy image with uneven canopy distribution; L3 is a canopy image with uniform canopy distribution).
Figure 13. Strongly illuminated canopy images processed by reference [45]. (S1 has slight color distortion; S2 has low contrast; S3 has color degradation; S4 has low saturation).
Figure 14. Normal illumination canopy images processed by reference [45]. (N1–N4 were obtained at different times of day: N1, 08:00; N2, 10:00; N3, 12:00; N4, 14:00).
Figure 15. Line graph of optimal fitness values obtained by each algorithm (where (a) is the line graph of fitness values for a threshold number of 4, (b) is the line graph of fitness values for a threshold number of 8, (c) is the line graph of fitness values for a threshold number of 12, and (d) is the line graph of fitness values for a threshold number of 16).
Figure 16. Line graphs of standard deviation values obtained by each algorithm ((a) non-uniform canopy images; (b) low-illumination canopy images; (c) strongly illuminated canopy images; (d) normally illuminated canopy images).
Figure 17. Line graphs of SSIM values obtained by each algorithm ((a) non-uniform canopy images; (b) low-illumination canopy images; (c) strongly illuminated canopy images; (d) normally illuminated canopy images).
Figure 18. Line graphs of PSNR values obtained by each algorithm ((a) non-uniform canopy images; (b) low-illumination canopy images; (c) strongly illuminated canopy images; (d) normally illuminated canopy images).
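For reference, the PSNR reported in these comparisons is computed from the mean-squared error between the original image and its segmented version on the 8-bit gray scale. A minimal sketch assuming the standard definition with peak value 255 (not tied to the authors' exact implementation):

```python
import numpy as np

def psnr(original: np.ndarray, segmented: np.ndarray) -> float:
    """Peak signal-to-noise ratio (dB) between two 8-bit images."""
    diff = original.astype(np.float64) - segmented.astype(np.float64)
    mse = float(np.mean(diff ** 2))
    if mse == 0.0:
        return float("inf")            # identical images
    return float(10.0 * np.log10(255.0 ** 2 / mse))
```

Higher PSNR means the thresholded image deviates less from the source, which is why it is read together with SSIM and FSIM rather than alone.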
Figure 19. Line graphs of FSIM values obtained by each algorithm ((a) non-uniform canopy images; (b) low-illumination canopy images; (c) strongly illuminated canopy images; (d) normally illuminated canopy images).
Figure 20. Histogram of segmentation time for each algorithm on non-uniformly illuminated canopy images.
Figure 21. Histogram of segmentation time for each algorithm on low illumination canopy image.
Figure 22. Histogram of segmentation time for each algorithm on strongly illuminated canopy images.
Figure 23. Histogram of segmentation time for each algorithm on normal illumination canopy image.
Figure 24. The distribution of each evaluation metric for different l r and num_layers. (where (a): the distribution of Loss values for different l r and num_layers, (b): the distribution of PSNR values for different l r and num_layers, (c): the distribution of SSIM values for different l r and num_layers, (d): the distribution of FSIM values for different l r and num_layers, (e): the distribution of average computation time for different l r and num_layers, (f): the distribution of standard deviation values for different l r and num_layers).
Figure 25. Schematic comparison between the PEBMO algorithm and deep learning algorithm in terms of fitness value.
Figure 26. Schematic comparison between the PEBMO algorithm and deep learning algorithm in terms of SSIM value.
Figure 27. Schematic comparison between the PEBMO algorithm and deep learning algorithm in terms of PSNR value.
Figure 28. Schematic comparison between the PEBMO algorithm and deep learning algorithm in terms of FSIM value.
Figure 29. Schematic comparison of the PEBMO algorithm and deep learning algorithm in terms of computation time.
Figure 30. Results of the segmentation of each algorithm for a threshold number of 4.
Figure 31. Results of the segmentation of each algorithm for a threshold number of 8.
Figure 32. Results of the segmentation of each algorithm for a threshold number of 12.
Figure 33. Results of the segmentation of each algorithm for a threshold number of 16.
Table 1. Basic information about the CEC2017 test functions. * F 2 shows unstable behavior, especially in higher dimensions, with significant performance variations for the same algorithm implementation, so it is usually excluded from high-dimensional tests. Since this study does not involve high-dimensional testing, F 2 is retained.
No.Functions F i * = F i x *
Unimodal
Functions
F 1 Shifted and Rotated Bent Cigar Function100
F 2 Shifted and Rotated Sum of Different Power Function *200
F 3 Shifted and Rotated Zakharov Function300
Simple
Multimodal
Functions
F 4 Shifted and Rotated Rosenbrock’s Function400
F 5 Shifted and Rotated Rastrigin’s Function500
F 6 Shifted and Rotated Expanded Scaffer’s Function600
F 7 Shifted and Rotated Lunacek Bi_Rastrigin Function700
F 8 Shifted and Rotated Non-Continuous Rastrigin’s Function8000
F 9 Shifted and Rotated Levy Function900
F 10 Shifted and Rotated Schwefel’s Function1000
F 11 Hybrid Function 1 ( N = 3 ) 1100
F 12 Hybrid Function 2 ( N = 3 ) 1200
F 13 Hybrid Function 3 ( N = 3 )1300
Hybrid
Functions
F 14 Hybrid Function 4 ( N = 4 )1400
F 15 Hybrid Function 5 ( N = 4 )1500
F 16 Hybrid Function 6 ( N = 4 )1600
F 17 Hybrid Function 6 ( N = 5 )1700
F 18 Hybrid Function 6 ( N = 5 )1800
F 19 Hybrid Function 6 ( N = 5 )1900
F 20 Hybrid Function 6 ( N = 6 )2000
Composition
Functions
F 21 Composition Function 1 ( N = 3 ) 2100
F 22 Composition Function 2 ( N = 3 )2200
F 23 Composition Function 3 ( N = 4 )2300
F 24 Composition Function 4 ( N = 4 )2400
F 25 Composition Function 5 ( N = 5 )2500
F 26 Composition Function 6 ( N = 5 )2600
F 27 Composition Function 7 ( N = 6 )2700
F 28 Composition Function 8 ( N = 6 ) 2800
F 29 Composition Function 9 ( N = 3 )2900
F 30 Composition Function 103000
Search Range: [ 100 , 100 ] D
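The F_i* = F_i(x*) column means each benchmark returns its bias value at the shifted optimum x*. As a minimal illustration of this convention (assuming a made-up shift vector and omitting the rotation matrix that the official CEC2017 suite loads from its data files), the shifted Zakharov function F3 can be sketched as:

```python
# Illustrative sketch of a shifted CEC2017-style benchmark.
# The real suite also applies a rotation matrix from its data files.

def zakharov(z):
    """Plain Zakharov function: sum(z_i^2) + s^2 + s^4, with s = sum(0.5*i*z_i)."""
    s1 = sum(zi ** 2 for zi in z)
    s2 = sum(0.5 * (i + 1) * zi for i, zi in enumerate(z))
    return s1 + s2 ** 2 + s2 ** 4

def f3(x, shift, bias=300.0):
    """Shifted Zakharov: the function value equals the bias at x = shift."""
    z = [xi - si for xi, si in zip(x, shift)]
    return zakharov(z) + bias

shift = [12.0, -7.5, 3.0]      # hypothetical shift vector x*
print(f3(shift, shift))        # at the optimum the value is the bias: 300.0
```

Any candidate away from the shifted optimum scores strictly above the bias, which is why the table lists 100, 200, ..., 3000 as the known minima the optimizers are chasing.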
Table 2. Parameter settings for each algorithm.

Algorithm | Parameter | Value
ABMO [32] | Penis length pl | 7
IBMO [33] | Penis length pl | 7
          | Lévy flight (LF) index τ | 1.5
IBMO [34] | Maximum penis length P_max | 7
          | Minimum penis length P_min | 1
          | Initial decay rate λ | 3
IBMO [35] | Penis length pl | 7
          | Gaussian distribution parameter λ | 0.7
IBMO [36] | Penis length pl | 7
          | Parameter u | 0.005
          | Parameter v | 0.5
          | Parameter θ | [−2π, 2π]
          | Linear decline factor δ | [0, 1]
BMO       | Penis length pl | 7
PEBMO     | Penis length pl | 1–27
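The penis length pl listed above governs BMO's mate selection: on a fitness-sorted population, two parents are chosen at random; if their index distance is within pl, the offspring is a Hardy-Weinberg-style convex combination of both parents (weights p and q = 1 − p), otherwise sperm casting generates the offspring from the mum alone. A minimal sketch of one reproduction step (simplified to a single offspring; the variable names are ours, not the paper's):

```python
import random

def bmo_offspring(population, pl, rng=random):
    """One BMO reproduction step on a fitness-sorted population of vectors."""
    n = len(population)
    dad, mum = rng.sample(range(n), 2)   # two distinct parent indices
    if abs(dad - mum) <= pl:
        # Mating within reach: Hardy-Weinberg-style convex combination
        p = rng.random()
        q = 1.0 - p
        return [p * d + q * m for d, m in zip(population[dad], population[mum])]
    # Out of reach: sperm casting, offspring drawn from the mum alone
    return [rng.random() * m for m in population[mum]]

random.seed(0)
pop = [[float(i), float(-i)] for i in range(10)]   # toy fitness-sorted population
child = bmo_offspring(pop, pl=7)
```

A fixed pl = 7 (the baseline setting in the table) keeps this balance constant; PEBMO instead grows pl nonlinearly over the range 1–27 during the run, shifting from exploitation-heavy mating early on to wider exploration later.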
Table 3. Parameter settings for deep learning.

Parameter | Value
Image size (img_size) | 256 × 256, 264 × 264
Patch size (patch_size) | 4, 8, 12, 16
Embedding dimension (embed_dim) | 256, 264
Transformer layers (num_layers) | 4
Number of heads in the multi-head self-attention mechanism (num_heads) | 8
Loss function | Cross-Entropy Loss
Learning rate (learning_rate) | Adam
Momentum parameters (beta1 and beta2) | default values
Weight decay (weight_decay) | 1 × 10⁻⁴
Batch size (batch_size) | 16
Training epochs (epochs) | 100
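The img_size and patch_size rows jointly fix how many tokens the transformer baseline processes: a 256 × 256 image split into 4 × 4 patches yields (256 / 4)² = 4096 tokens. A quick arithmetic check (the dict below is our own illustration; only the field names mirror Table 3):

```python
# Illustrative check of the token count implied by Table 3's settings.
config = {
    "img_size": 256,      # 264 × 264 is also used in the experiments
    "patch_size": 4,      # tested values: 4, 8, 12, 16
    "embed_dim": 256,
    "num_layers": 4,
    "num_heads": 8,
}

patches_per_side = config["img_size"] // config["patch_size"]
num_patches = patches_per_side ** 2
print(num_patches)   # 64 * 64 = 4096 tokens for the smallest patch size
```

The largest patch size, 16, reduces the sequence to (256 / 16)² = 256 tokens, which is the main lever on the deep learning baseline's memory footprint and computation time compared in Figure 29.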
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Zhao, X.; Zhu, L.; Xu, W.; Mohamed, A.M.E. Forest Canopy Image Segmentation Based on the Parametric Evolutionary Barnacle Optimization Algorithm. Forests 2025, 16, 419. https://doi.org/10.3390/f16030419
