Enhanced Slime Mould Algorithm for Multilevel Thresholding Image Segmentation Using Entropy Measures

Image segmentation is a fundamental and essential step in image processing because it strongly influences subsequent image analysis. Multilevel thresholding is one of the most popular image segmentation techniques, and many researchers have used meta-heuristic optimization algorithms (MAs) to determine the threshold values. However, MAs have some defects; for example, they are prone to stagnating in local optima and to slow convergence. This paper proposes an enhanced slime mould algorithm, named ESMA, for global optimization and multilevel thresholding image segmentation. First, the Levy flight method is used to improve the exploration ability of SMA. Second, quasi opposition-based learning is introduced to enhance the exploitation ability and balance exploration and exploitation. The superiority of the proposed ESMA is then confirmed on 23 benchmark functions. Afterward, ESMA is applied to multilevel thresholding image segmentation using minimum cross-entropy as the fitness function. We select eight greyscale images as benchmarks and compare ESMA with other classical and state-of-the-art algorithms. The experimental metrics include the average fitness (mean), standard deviation (Std), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), feature similarity index (FSIM), and the Wilcoxon rank-sum test, which are utilized to evaluate the quality of segmentation. Experimental results demonstrate that ESMA is superior to the other algorithms and provides higher segmentation accuracy.


Introduction
Image segmentation is fundamental and challenging work in computer vision, pattern recognition, and image processing. It is widely used in various fields, such as ship target segmentation and medical image processing [1]. The main goal of segmentation is to divide the image into homogeneous classes. The elements of each class share common attributes such as grayscale, feature, color, intensity, or texture [2][3][4][5]. In the literature, there are four standard image segmentation methods, which can be divided into (1) clustering-based methods, (2) region-based methods, (3) graph-based methods, and (4) thresholding-based methods. Among the existing methods, one of the most widespread techniques is multilevel thresholding, which is widely used owing to its ease of implementation, high performance, and robustness compared with other methods [6]. Image thresholding techniques can be classified into two categories: bilevel and multilevel. In the former category, the image is separated into two homogeneous foreground and background areas using a single threshold value. The latter divides an image into more than two classes using multiple threshold values. The main contributions of this paper are as follows:

• ESMA, based on Levy flight and quasi opposition-based learning, is proposed for solving global optimization problems and multilevel thresholding image segmentation.

• The optimization performance of ESMA is evaluated on 23 benchmark functions, including unimodal and multimodal functions.

• ESMA is applied to thresholding segmentation using the minimum cross-entropy measure.

• The segmentation quality is verified according to PSNR, SSIM, FSIM, and a statistical test.

• The performance of ESMA is compared with several classical and state-of-the-art optimization algorithms.
The remainder of this paper is organized as follows: Section 2 gives a brief overview of SMA, Levy flight, quasi opposition-based learning, and the minimum cross-entropy measure. Section 3 provides the details of the proposed algorithm. The experimental results are discussed and analyzed in detail in Sections 4 and 5. Finally, the conclusion and future work are presented in Section 6.

Preliminaries
This section presents the main inspiration and mathematical model of the slime mould algorithm (SMA). Next, the improvement strategies, including Levy flight and quasi opposition-based learning, are described. Finally, we describe the minimum cross-entropy measure.

Slime Mould Algorithm
The slime mould algorithm (SMA) is a meta-heuristic optimization algorithm proposed recently by Li et al. [35], which is inspired by the oscillation behavior of slime mould in foraging. Slime mould achieves positive and negative feedback according to the quality of the food source. If the quality of the food source is high, the slime mould will use a region-limited search strategy. Meanwhile, if the food source is of low quality, the slime mould will leave this area and move to another food source in the search space. Furthermore, SMA also has a small probability z of reinitializing the population in the search space.
Based on the above description, the updating process of slime mould can be expressed as:

$$X(t+1)=\begin{cases} r_3\cdot(UB-LB)+LB, & r_1<z \\ X_b(t)+\vec{vb}\cdot\left(\vec{W}\cdot X_A(t)-X_B(t)\right), & r_2<p \\ \vec{vc}\cdot X(t), & r_2\ge p \end{cases}$$

where z denotes the probability of slime mould reinitializing, which is 0.03; r_1, r_2, and r_3 denote random values in [0,1]; LB and UB represent the lower and upper bounds of the search space, respectively; t is the current iteration; X_b(t) is the best individual found so far; and X_A(t) and X_B(t) are two individuals randomly selected from the population. The p can be calculated as follows:

$$p=\tanh\left|S(i)-DF\right|$$

where i ∈ 1, 2, . . . , N; S(i) is the fitness of the i-th search agent; and DF indicates the best fitness obtained by the slime mould.
The parameter vb can be calculated as follows:

$$\vec{vb}\in[-a,a],\qquad a=\operatorname{arctanh}\!\left(-\frac{t}{T}+1\right)$$

where T represents the maximum iteration; vc decreases linearly from 1 to 0 over the iterations.
Note that the coefficient W is an essential parameter, which simulates the oscillation frequency of slime mould under different food sources. The W can be calculated as follows:

$$\vec{W}(SmellIndex(i))=\begin{cases} 1+r_4\cdot\log\!\left(\frac{bF-S(i)}{bF-wF}+1\right), & condition \\ 1-r_4\cdot\log\!\left(\frac{bF-S(i)}{bF-wF}+1\right), & others \end{cases}$$

$$SmellIndex=\operatorname{sort}(S)\quad(6)$$

where r_4 is a random value in [0,1]; bF and wF represent the best and worst fitness obtained currently, respectively; and condition indicates that S(i) ranks in the first half of the population. The pseudo-code of SMA is shown in Algorithm 1.
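To make the update rule concrete, the following Python sketch implements one SMA position update for a minimization problem. It is an illustrative reconstruction under stated assumptions (the function name, the scalar draws for vb and vc, and the log10 weighting are choices of this sketch), not the authors' reference implementation.

```python
import math
import random

def sma_update(X, fitness, best_x, best_f, t, T, lb, ub, z=0.03):
    """One illustrative SMA position update (minimization).

    X: list of positions (lists of floats); fitness: fitness of each position;
    best_x, best_f: best solution and fitness found so far; t, T: current and
    maximum iteration; lb, ub: scalar bounds; z: reinitialization probability.
    """
    N, D = len(X), len(X[0])
    # vb is drawn from [-a, a]; a shrinks as iterations progress
    a = math.atanh(min(max(1 - t / T, 1e-12), 1 - 1e-12))
    b = 1 - t / T                      # vc range shrinks linearly to 0
    order = sorted(range(N), key=lambda i: fitness[i])   # SmellIndex = sort(S)
    bF, wF = fitness[order[0]], fitness[order[-1]]
    new_X = [None] * N
    for rank, i in enumerate(order):
        r1, r2, r4 = random.random(), random.random(), random.random()
        # weight W: amplified for the better half, damped for the worse half
        ratio = math.log10((fitness[i] - bF) / (wF - bF + 1e-12) + 1)
        W = 1 + r4 * ratio if rank < N // 2 else 1 - r4 * ratio
        p = math.tanh(abs(fitness[i] - best_f))
        if r1 < z:                     # reinitialize in the search space
            pos = [random.uniform(lb, ub) for _ in range(D)]
        elif r2 < p:                   # move around the best via two random agents
            A, B = random.choice(X), random.choice(X)
            vb = random.uniform(-a, a)
            pos = [best_x[d] + vb * (W * A[d] - B[d]) for d in range(D)]
        else:                          # oscillate around the current position
            vc = random.uniform(-b, b)
            pos = [vc * v for v in X[i]]
        new_X[i] = [min(max(v, lb), ub) for v in pos]
    return new_X
```

Each agent takes one of the three branches of the update equation, and the result is clamped back into [LB, UB].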

Levy Flight
Numerous studies reveal that the flight trajectories of many flying animals are consistent with characteristics typical of Levy flight. Levy flight is a class of non-Gaussian random walk that follows the Levy distribution [46,47]. It performs occasional long-distance moves among frequent short-distance steps, as shown in Figure 1. The mathematical formula for Levy flight is as follows:

$$Levy=0.01\times\frac{r_4\times\sigma}{|r_5|^{1/\beta}},\qquad \sigma=\left[\frac{\Gamma(1+\beta)\,\sin(\pi\beta/2)}{\Gamma\!\left(\frac{1+\beta}{2}\right)\beta\,2^{(\beta-1)/2}}\right]^{1/\beta}$$

where r_4 and r_5 are random values in [0,1], and β is a constant equal to 1.5.
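A common way to draw such heavy-tailed steps is Mantegna's algorithm; the Python sketch below follows that formulation, where u and v are Gaussian draws and the 0.01 scale keeps the steps moderate (these constants are assumptions of the sketch, not taken from the paper).

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Levy-distributed step via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)   # numerator draw, scaled by sigma
    v = random.gauss(0, 1)       # denominator draw
    return 0.01 * u / abs(v) ** (1 / beta)
```

Most draws are small, but the distribution's heavy tail occasionally yields a very large step, which is precisely what improves exploration.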


Opposition-Based Learning
Opposition-based learning (OBL) is an efficient search approach to avoid premature convergence, which was proposed by Tizhoosh in 2005 [48]. The main idea of OBL is to generate the opposite solution in the search space and then evaluate the original solution and its opposite solution by the objective function, respectively. The better solution is retained and goes into the next iteration. Typically, the OBL strategy has a high chance of providing solutions closer to the optimum than random ones. We assume x to be a real number in one dimension within [LB, UB]. Its opposite number x^obl can be calculated by:

$$x^{obl}=LB+UB-x$$

Quasi Opposition-Based Learning
Based on the above description, a variant of OBL called quasi opposition-based learning (QOBL) was proposed by Rahnamayan et al. [49]. Unlike OBL, the QOBL strategy applies a quasi-opposite solution rather than the opposite solution, which makes it more effective in finding globally optimal solutions. Based on the opposite solution, the quasi-opposite solution can be calculated by:

$$x^{qobl}=\operatorname{rand}\!\left(\frac{LB+UB}{2},\;x^{obl}\right)$$

where rand(a, b) denotes a uniformly random number between a and b. To understand the above theory more clearly, Figure 2 illustrates the original solution x, its opposite solution x^obl, and its quasi-opposite solution x^qobl.

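Under the definitions above, OBL and QOBL candidates can be generated per dimension as in this short Python sketch (the helper names are illustrative):

```python
import random

def opposite(x, lb, ub):
    """Opposite solution: x_obl = LB + UB - x, applied per dimension."""
    return [l + u - v for v, l, u in zip(x, lb, ub)]

def quasi_opposite(x, lb, ub):
    """Quasi-opposite solution: uniform between the interval centre and the
    opposite point, applied per dimension."""
    q = []
    for v, l, u in zip(x, lb, ub):
        centre = (l + u) / 2
        v_obl = l + u - v
        lo, hi = min(centre, v_obl), max(centre, v_obl)
        q.append(random.uniform(lo, hi))
    return q
```

In practice, the original solution and its (quasi-)opposite are both evaluated by the objective function, and the better of the two is carried into the next iteration.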


Minimum Cross-Entropy
In 1968, cross-entropy was proposed by Kullback [50]. Cross-entropy measures the difference in information between two probability distributions P = {p 1 , p 2 , . . . , p N } and Q = {q 1 , q 2 , . . . , q N }, defined by:

$$D(P,Q)=\sum_{i=1}^{N}p_i\log\frac{p_i}{q_i}$$

In this work, we utilize minimum cross-entropy as the fitness function to find the optimal threshold values. A lower cross-entropy value means less uncertainty and greater homogeneity. Let I be the original grey image with L grey levels and h(i) be its histogram. Then, the thresholded image I th can be calculated as follows:

$$I_{th}(x,y)=\begin{cases}\mu(1,th), & I(x,y)<th\\ \mu(th,L+1), & I(x,y)\ge th\end{cases}$$

where th denotes the threshold dividing the image into two different regions (foreground and background), and µ(a, b) can be calculated by:

$$\mu(a,b)=\sum_{i=a}^{b-1} i\,h(i)\Big/\sum_{i=a}^{b-1}h(i)$$

The cross-entropy can be computed by:

$$D(th)=\sum_{i=1}^{L} i\,h(i)\log(i)-\sum_{i=1}^{th-1} i\,h(i)\log\bigl(\mu(1,th)\bigr)-\sum_{i=th}^{L} i\,h(i)\log\bigl(\mu(th,L+1)\bigr)$$

The above objective function calculates the threshold value for bilevel thresholding, and it can be extended to a multilevel strategy. Yin [51] proposed a faster technique to obtain the threshold values for a digital image. For thresholds th = [th 1 , th 2 , . . . , th nt ] containing nt different threshold values (with th 0 = 1 and th nt+1 = L + 1), the objective becomes:

$$f(th)=-\sum_{i=1}^{nt+1}H_i$$

where nt represents the total number of thresholds and H i can be defined as follows:

$$H_i=\left(\sum_{j=th_{i-1}}^{th_i-1} j\,h(j)\right)\log\bigl(\mu(th_{i-1},th_i)\bigr)$$
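The multilevel objective above can be evaluated directly from a histogram; the following Python sketch (indexing grey levels from 0 and skipping empty classes, both implementation choices of this sketch) computes −Σ H_i for a candidate threshold vector:

```python
import math

def mcet_objective(hist, thresholds):
    """Minimum cross-entropy objective for multilevel thresholding (to minimize).

    hist: grey-level histogram h(0..L-1); thresholds: list of nt threshold
    values. Returns -sum_i H_i, where H_i = (sum of j*h(j)) * log(mu) over
    each class, following Yin's fast formulation.
    """
    L = len(hist)
    bounds = [0] + sorted(thresholds) + [L]
    total = 0.0
    for a, b in zip(bounds[:-1], bounds[1:]):
        s1 = sum(j * hist[j] for j in range(a, b))   # first moment of the class
        s0 = sum(hist[j] for j in range(a, b))       # class mass
        if s1 > 0 and s0 > 0:                        # skip empty classes
            mu = s1 / s0                             # class mean mu(a, b)
            total += s1 * math.log(mu)
    return -total
```

A threshold that separates the histogram's modes yields a lower objective value than one that lumps them into a single class, which is what the optimizer exploits.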

Details of ESMA
The standard slime mould algorithm is a simple and efficient approach to solving specific optimization problems. However, based on the NFL theorem, no single optimization algorithm can solve all optimization problems. Furthermore, SMA may be trapped in local optima and show an imperfect convergence speed on specific problems such as multilevel thresholding image segmentation. In order to improve the search ability and balance exploration and exploitation, this paper proposes an enhanced slime mould algorithm (ESMA). The improvement involves two major methods. Firstly, Levy flight is incorporated into the position update to enhance the exploration ability of SMA. Secondly, quasi opposition-based learning is used to enhance the exploitation ability and balance the exploration and exploitation capability. The pseudo-code of ESMA is shown in Algorithm 2, and Figure 3 illustrates the flowchart of the proposed algorithm.
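Putting the pieces together, the loop below sketches one plausible ESMA in Python. The exact placement of the Levy perturbation and the greedy QOBL acceptance are assumptions made for illustration; the authors' Algorithm 2 is the authoritative scheme.

```python
import math
import random

def levy(beta=1.5):
    # Mantegna's algorithm for a Levy-distributed step
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return 0.01 * random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)

def esma(f, D, lb, ub, N=30, T=200, z=0.03):
    """Illustrative ESMA sketch for minimization (not the authors' code)."""
    X = [[random.uniform(lb, ub) for _ in range(D)] for _ in range(N)]
    fit = [f(x) for x in X]
    bi = min(range(N), key=lambda i: fit[i])
    best_x, best_f = X[bi][:], fit[bi]
    centre = (lb + ub) / 2
    for t in range(1, T + 1):
        a = math.atanh(min(max(1 - t / T, 1e-12), 1 - 1e-12))
        order = sorted(range(N), key=lambda i: fit[i])
        bF, wF = fit[order[0]], fit[order[-1]]
        for rank, i in enumerate(order):
            r1, r2, r4 = random.random(), random.random(), random.random()
            ratio = math.log10((fit[i] - bF) / (wF - bF + 1e-12) + 1)
            W = 1 + r4 * ratio if rank < N // 2 else 1 - r4 * ratio
            p = math.tanh(abs(fit[i] - best_f))
            if r1 < z:                        # random reinitialization
                X[i] = [random.uniform(lb, ub) for _ in range(D)]
            elif r2 < p:                      # best-guided move with a Levy step
                A, B = random.choice(X), random.choice(X)
                vb = random.uniform(-a, a)
                X[i] = [best_x[d] + (vb + levy()) * (W * A[d] - B[d])
                        for d in range(D)]
            else:                             # shrinking oscillation
                vc = random.uniform(-(1 - t / T), 1 - t / T)
                X[i] = [vc * v for v in X[i]]
            X[i] = [min(max(v, lb), ub) for v in X[i]]
            fit[i] = f(X[i])
            # QOBL: keep the better of the agent and its quasi-opposite
            q = [random.uniform(min(centre, lb + ub - v), max(centre, lb + ub - v))
                 for v in X[i]]
            fq = f(q)
            if fq < fit[i]:
                X[i], fit[i] = q, fq
            if fit[i] < best_f:
                best_x, best_f = X[i][:], fit[i]
    return best_x, best_f
```

On a simple test function, the combination of the Levy step (exploration) and the QOBL acceptance (exploitation) drives the best fitness down quickly.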

Computational Complexity Analysis
As can be seen, ESMA mainly contains three components: the initialization phase, fitness evaluation, and the position update procedure. In the initialization phase, the complexity is O(N×D), where N represents the population size and D denotes the dimension of the problem. In addition, the proposed algorithm evaluates the fitness of all slime mould with a complexity of O(N). The position update phase in ESMA requires O(N×D). During the position update phase, we utilize QOBL to improve the exploitation ability and balance exploration and exploitation; the QOBL strategy also requires O(N×D). In summary, the total computational complexity of ESMA is O(N×D×T) for T iterations. It can therefore be concluded that SMA and ESMA have the same computational complexity.


Definition of 23 Benchmark Functions
To evaluate the exploration ability, exploitation ability, and ability to escape local optima of ESMA, twenty-three benchmark functions, including unimodal (F1-F7), multimodal (F8-F13), and fixed-dimension multimodal (F14-F23) functions, are introduced [52]. The description of these functions is shown in Tables 1-3. As can be seen, the unimodal benchmark functions have only one global optimum, which makes them suitable for evaluating the algorithms' exploitation capability. Unlike unimodal functions, the multimodal and fixed-dimension benchmark functions have multiple local optima and only one global optimum, which makes them suitable for evaluating exploration ability and escape from local minima.
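For reference, two representative functions from this suite can be written in a few lines of Python: the unimodal sphere function (F1) and the multimodal Rastrigin function (F9), both with a global minimum of 0 at the origin.

```python
import math

def sphere(x):
    """F1, sphere: unimodal, global minimum 0 at x = 0."""
    return sum(v * v for v in x)

def rastrigin(x):
    """F9, Rastrigin: multimodal with many local minima, global minimum 0 at x = 0."""
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)
```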
To verify the performance of the proposed ESMA, we compared it with seven other algorithms: the slime mould algorithm (SMA) [35], remora optimization algorithm (ROA) [36], arithmetic optimization algorithm (AOA) [32], aquila optimizer (AO) [33], salp swarm algorithm (SSA) [30], whale optimization algorithm (WOA) [29], and sine cosine algorithm (SCA) [31]. These classical and state-of-the-art algorithms have demonstrated excellent performance on various optimization problems. Table 4 illustrates the parameter settings of each algorithm. For all the algorithms included in the comparison, we set the population size N = 30, dimension size D = 30, and maximum iteration T = 500; all the tests had 30 independent runs. Furthermore, we report the average results, standard deviations, and statistical tests to evaluate the performance; the best results are listed in bold font.
Table 4. Parameter settings for the comparative algorithms.

Statistical Results on 23 Benchmark Functions
The statistical results on the 23 benchmark functions can be seen in Table 5. From this table, it can be clearly seen that ESMA is superior to the other algorithms on most benchmark functions. For the unimodal benchmark functions (F1-F7), ESMA obtains the theoretical optimum for F1 and F3, while the other algorithms cannot find the optimal solution. Although ESMA cannot find the theoretical optimum for F4, F5, and F7, its convergence accuracy and robustness are better than those of the other algorithms. In general, the exploitation ability of SMA is enhanced by applying the QOBL strategy. For the multimodal and fixed-dimension multimodal benchmark functions, ESMA also provides more competitive results than the others. ESMA obtains the theoretical optimum for F8, F9, F11, F14, F16, F17, F19, and F21-F23. For F10, F12, F13, and F15, ESMA achieves the best solutions among the compared algorithms. Consequently, it can be concluded that ESMA maintains high convergence accuracy and high robustness compared with the other algorithms on these benchmark functions.

Wilcoxon Rank-Sum Test
In order to verify that the experimental results are not coincidental, this paper carried out the Wilcoxon rank-sum test (WRS). WRS is a nonparametric statistical test used to assess the statistical difference between the proposed algorithm and the comparison group on different benchmark functions [53]. WRS is applied here at a 5% significance level: if the obtained p-value is less than 0.05, there is a significant difference between the two algorithms; otherwise, the difference is not significant. The p-values obtained by the algorithms are listed in Table 6. From this table, we can see that ESMA provides statistically significant results compared with the other algorithms.
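In environments without a statistics package, the rank-sum test can be computed with the normal approximation, which is adequate at the 30-run sample sizes used here. The Python sketch below (average ranks for ties, no further tie correction) returns a two-sided p-value:

```python
import math

def rank_sum_p(xs, ys):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation."""
    n1, n2 = len(xs), len(ys)
    combined = sorted([(v, 0) for v in xs] + [(v, 1) for v in ys])
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg = (i + 1 + j) / 2            # average rank of the tie group i+1..j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    R1 = sum(r for r, (v, g) in zip(ranks, combined) if g == 0)
    mu = n1 * (n1 + n2 + 1) / 2          # mean rank sum under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (R1 - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
```

Two well-separated samples yield p < 0.05 (a significant difference), while two interleaved samples do not.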

Convergence Behavior Analysis
The convergence behavior on some benchmark functions is shown in Figure 4. On the unimodal benchmark functions, ESMA achieves the highest accuracy and the fastest convergence speed. In particular, for F1 and F3, although SMA can find the optimal solution, its convergence speed is slower than that of ESMA. For F2 and F4, ESMA finally converges to the optimal solution, while the other algorithms either converge slowly or cannot converge to the optimal solution. For F5 and F7, although ESMA does not find the theoretical optimum, it still converges closer to the global optimum than the others. On the multimodal benchmark functions, ESMA still shows the fastest convergence speed on most functions. Although the global optimum is not found on some functions, ESMA still performs well compared with the other algorithms. On the fixed-dimension multimodal functions, ESMA shows a faster convergence speed in the initial stage than the others and maintains good convergence throughout.
Generally, ESMA obtains competitive results compared with the other algorithms, including the fastest convergence speed and the highest convergence accuracy.

Qualitative Metrics Analysis
To evaluate the optimization performance of ESMA, Figure 5 illustrates the qualitative metrics, which include the 2D shape of the benchmark functions (first column), the search history of individuals (second column), the trajectory (third column), the average fitness (fourth column), and the convergence curve (fifth column). The first column describes the 2D view of the benchmark functions and shows the complexity of the different functions. The second column illustrates the search history of the search agents from the first to the last iteration; it can be seen that the proposed ESMA is able to find the areas where the fitness values are lowest. The trajectory of the first agent in the first dimension is described in the third column. We can see that the search agent oscillates continuously in the search space, which shows that it thoroughly explores the most promising regions for better solutions. The fourth column denotes the average fitness history; the decreasing fitness curve indicates that the quality of the population improves at each iteration. The last column is the convergence curve, which shows how the population approaches the best solution over the iterations.



Experimental Results on Multilevel Thresholding
This section introduces the experimental details of the proposed algorithm ESMA applied to the multilevel thresholding image segmentation. First, the benchmark images and the experimental setup are presented in Section 5.1. Furthermore, the results of the algorithms in fitness, PSNR, SSIM, and FSIM are also analyzed. This section also shows the statistical analysis used to compare the proposed algorithm with other competitive algorithms.

Experiment Setup
In this paper, benchmark greyscale images, including Lena, Baboon, Butterfly, etc., are used to evaluate the image segmentation performance of the proposed ESMA [54]. All the benchmark images and their histograms are presented in Figure 6.


Evaluation Measurements
In this paper, three common evaluation methods are used to illustrate the performance of the algorithm and the quality of image segmentation, namely PSNR, FSIM, and SSIM, which are defined as follows:

PSNR
Peak signal-to-noise ratio (PSNR) is an image quality evaluation metric used to evaluate the similarity between the original image and the segmented image [55]. The PSNR is calculated as:

$$PSNR=20\log_{10}\!\left(\frac{255}{RMSE}\right),\qquad RMSE=\sqrt{\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(I(i,j)-Seg(i,j)\bigr)^2}$$

where I and Seg denote the original image and segmented image of size M × N, respectively, and RMSE is the root mean square error.
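A direct Python implementation of this metric for 8-bit greyscale images (stored here as nested lists; identical images are mapped to infinity) could look like:

```python
import math

def psnr(original, segmented):
    """PSNR between two equal-sized 8-bit greyscale images (nested lists)."""
    mse = 0.0
    n = 0
    for row_o, row_s in zip(original, segmented):
        for a, b in zip(row_o, row_s):
            mse += (a - b) ** 2
            n += 1
    rmse = math.sqrt(mse / n)
    return float('inf') if rmse == 0 else 20 * math.log10(255 / rmse)
```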

SSIM
Structural similarity (SSIM) is a common metric used to measure the structural similarity between the original image and the segmented image [3], and is defined as:

$$SSIM(I,Seg)=\frac{(2\mu_I\mu_{Seg}+c_1)(2\sigma_{I,Seg}+c_2)}{(\mu_I^2+\mu_{Seg}^2+c_1)(\sigma_I^2+\sigma_{Seg}^2+c_2)}$$

where µ I and µ Seg indicate the mean intensities of the original image and its segmented image; σ I and σ Seg denote the standard deviations of the original image and its segmented image; σ I,Seg is the covariance of the original image and the segmented image; and c 1 and c 2 are constants.
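As a sketch, the formula can be evaluated globally over the whole image (production SSIM averages the statistic over local windows, which is omitted here); c1 = (0.01·255)² and c2 = (0.03·255)² are the customary constants:

```python
import math

def ssim_global(img1, img2, c1=6.5025, c2=58.5225):
    """Single-window (global) SSIM between two greyscale images (nested lists)."""
    xs = [v for row in img1 for v in row]
    ys = [v for row in img2 for v in row]
    n = len(xs)
    mu_x = sum(xs) / n
    mu_y = sum(ys) / n
    var_x = sum((v - mu_x) ** 2 for v in xs) / n
    var_y = sum((v - mu_y) ** 2 for v in ys) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(xs, ys)) / n
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

Identical images score exactly 1, and the score decreases as the structural agreement degrades.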

FSIM
Feature similarity (FSIM) is used to estimate the feature similarity between the original image and the segmented image [56], and is defined as:

$$FSIM=\frac{\sum_{x\in\Omega}S_L(x)\,PC_m(x)}{\sum_{x\in\Omega}PC_m(x)},\qquad S_L(x)=\frac{2PC_1(x)PC_2(x)+T_1}{PC_1^2(x)+PC_2^2(x)+T_1}\cdot\frac{2G_1(x)G_2(x)+T_2}{G_1^2(x)+G_2^2(x)+T_2}$$

where Ω indicates the entire image domain; PC 1 and PC 2 represent the phase congruency of the original image and its segmented image, respectively; PC m (x) = max(PC 1 (x), PC 2 (x)); G 1 and G 2 represent the gradient magnitudes of the original image and the segmented image, respectively; and T 1 and T 2 are both constants.

Experimental Result Analysis
This section mainly compares ESMA with seven optimization algorithms: SMA, ROA, AOA, AO, SSA, WOA, and SCA. All the algorithms are run independently 30 times, and the average value (mean) and standard deviation (Std) are selected as evaluation indexes, with the best values marked in bold. Table A1 illustrates the optimal threshold values obtained by the different algorithms on the benchmark images. It can be seen that when the number of thresholds equals 4 and 6, the thresholds obtained by most algorithms are roughly the same. However, the results differ considerably when the thresholds are extended to 8 and 10, especially for SCA and AOA. Table A2 represents the average fitness values and their Std obtained by all algorithms on the benchmark images. In general, a lower average fitness denotes better segmentation quality. It can be seen that the fitness value of ESMA is better than that of most algorithms. For example, when the Tank image is segmented with ten threshold levels, the fitness value obtained by ESMA ranks first, a considerable improvement over SMA. These results show that ESMA performs better and has strong applicability in multilevel threshold image segmentation. Table A3 shows the PSNR results obtained by all algorithms. As mentioned above, PSNR evaluates the similarity between the segmented image and the original image, where a higher average value indicates better segmentation quality. From the attained results, there are only small differences between ESMA and the compared algorithms at threshold values 4 and 6; however, the PSNR values increase significantly as the number of thresholds grows.
It can be observed that, for most benchmark images, the proposed ESMA produces significantly more favorable and reliable results than the original SMA and the other compared algorithms, providing better PSNR results on most benchmark images, for example, when the images Lena, Baboon, Tank, Cameraman, and Pirate are tackled with 10 threshold levels. In these cases, the PSNR values of ESMA are the highest, with AO and WOA ranked second and third, respectively. When segmenting the Lena and Baboon images, ESMA shows the best PSNR value at all threshold levels. Generally, ESMA presents the best performance on the images Lena, Baboon, Peppers, Tank, and House. Table A4 illustrates the SSIM values obtained by the different algorithms. As can be observed, when the threshold number equals 4, the SSIM results of the algorithms are roughly the same. Then, as the number of thresholds increases, the SSIM values continue to increase, and ESMA retains more information from the original image than the other algorithms. For example, when the threshold number equals 4, the SSIM value obtained by ESMA for Baboon is 0.8041; when the number of thresholds increases to 10, the SSIM is 0.9395. Furthermore, when the threshold number equals 6, 8, and 10, the segmentation quality of ESMA is better than that of most compared algorithms, especially when segmenting Baboon, Butterfly, and House. In the case of Cameraman, the best SSIM results are obtained by ROA at threshold values 4, 6, and 8. Overall, ESMA ranks first in segmentation quality. Table A5 shows the FSIM values obtained by the different algorithms, where a higher value represents better segmentation quality. We can see that SMA and ROA show significant performance on Baboon, Butterfly, and Cameraman, while neither AOA nor SCA shows significant performance on any of the images. The proposed ESMA achieves good results in segmenting most images.
For example, when the House image is processed with eight threshold levels, the FSIM value is significant. Therefore, in most cases, the proposed algorithm can extract the target of interest from the image more accurately. Table A6 represents the p-values obtained by the Wilcoxon rank-sum test at the 5% significance level. It can be seen from the results that ESMA is significantly different from ROA, AOA, SSA, and SCA, which means that the proposed ESMA has improved considerably; however, there is no significant difference on Lena at level 4. When comparing ESMA and WOA, there are significant differences on all images except Butterfly, House, and Peppers. Table 7 shows the image segmentation results of the proposed ESMA for different thresholds, in which the obtained optimal thresholds are marked with red vertical lines. This table shows how the thresholds divide an image into several different classes and how the objects are segmented from the background. Figure 7 summarizes the segmentation results for fitness, PSNR, SSIM, and FSIM based on the objective function. From this figure, we can see that the segmentation performance of ESMA is significantly improved compared with the original SMA, with ROA and WOA ranked second and third, respectively. According to the above evaluation metrics and statistical tests, the proposed ESMA achieves better segmentation quality than the other compared algorithms and can be effectively applied to the field of image segmentation.

Conclusions and Future Work
In this paper, an enhanced slime mould algorithm (ESMA) is proposed for global optimization and multilevel thresholding image segmentation. In order to improve the performance of SMA, we use two strategies. First, the Levy flight strategy is used to enhance the exploration ability. Second, quasi opposition-based learning is used to enhance the exploitation ability and balance exploration and exploitation. To evaluate the performance of ESMA, it was tested together with several state-of-the-art algorithms on the 23 benchmark functions, and the results indicate that ESMA is superior to the others. This shows that the two strategies can effectively help SMA avoid falling into local optima and improve the global search ability of the population. In addition, we applied ESMA to multilevel thresholding image segmentation, with minimum cross-entropy selected as the fitness function. The experimental evaluation metrics included the mean fitness, standard deviation, PSNR, SSIM, FSIM, and the Wilcoxon rank-sum test. Experimental results show that ESMA is superior to other image segmentation methods in terms of PSNR, FSIM, SSIM, and the statistical tests.
While the proposed work is valuable in the image segmentation field, it is necessary to extend the benchmark images and increase the number of thresholds to obtain more reliable results. In addition, we will also seek to hybridize the ESMA with other MAs to improve the segmentation results when solving real-world applications, such as ship target segmentation and medical image segmentation. Meanwhile, other objective functions can be selected to realize multilevel thresholding image segmentation.

Data Availability Statement:
The data presented in this study are available on request from the corresponding author.