Article

A Grouping Differential Evolution Algorithm Boosted by Attraction and Repulsion Strategies for Masi Entropy-Based Multi-Level Image Segmentation

1
Computer Engineering Department, Hakim Sabzevari University, Sabzevar 96179-76487, Iran
2
Departamento de Innovación Basada en la Información y el Conocimiento, Universidad de Guadalajara, CUCEI, Guadalajara 44430, Mexico
3
Department of Computer Science, Loughborough University, Loughborough LE11 3TT, UK
*
Authors to whom correspondence should be addressed.
Entropy 2022, 24(1), 8; https://doi.org/10.3390/e24010008
Submission received: 21 November 2021 / Revised: 14 December 2021 / Accepted: 15 December 2021 / Published: 21 December 2021
(This article belongs to the Special Issue Entropy in Soft Computing and Machine Learning Algorithms)

Abstract

Masi entropy is a popular criterion employed for identifying appropriate threshold values in image thresholding. However, with an increasing number of thresholds, the efficiency of Masi entropy-based multi-level thresholding algorithms becomes problematic. To overcome this, we propose a novel differential evolution (DE) algorithm as an effective population-based metaheuristic for Masi entropy-based multi-level image thresholding. Our ME-GDEAR algorithm benefits from a grouping strategy to enhance the efficacy of the algorithm for which a clustering algorithm is used to partition the current population. Then, an updating strategy is introduced to include the obtained clusters in the current population. We further improve the algorithm using attraction (towards the best individual) and repulsion (from random individuals) strategies. Extensive experiments on a set of benchmark images convincingly show ME-GDEAR to give excellent image thresholding performance, outperforming other metaheuristics in 37 out of 48 cases based on cost function evaluation, 26 of 48 cases based on feature similarity index, and 20 of 32 cases based on Dice similarity. The obtained results demonstrate that population-based metaheuristics can be successfully applied to entropy-based image thresholding and that strengthening both exploitation and exploration strategies, as performed in ME-GDEAR, is crucial for designing such an algorithm.

Graphical Abstract

1. Introduction

Image segmentation is a challenging task in machine vision. It is the process of dividing an image into several non-overlapping areas based on features such as colour or texture. Image segmentation is used in a broad spectrum of applications including medicine [1,2], the modelling of microstructures [3] and food quality [4]. While a variety of image segmentation approaches have been proposed [5] and deep learning methods have shown impressive performance for image segmentation tasks [6], techniques based on image thresholding remain popular due to their simplicity and robustness [7,8], while not requiring a training process. Image thresholding aims to find the threshold value(s) for an image using information from its histogram. While bi-level image thresholding (BLIT) methods try to find a single threshold to discriminate between background and foreground, multi-level image thresholding (MLIT) approaches determine multiple threshold values to partition an image into several regions. MLIT is a challenging task and has thus attracted significant research attention [9,10,11,12].
In recent years, entropy-based MLIT algorithms have been extensively employed for image segmentation [13,14,15]. Entropy is a measure of randomness or disorder, so that homogeneous regions are characterised by low unpredictability [16]. A higher entropy value thus indicates higher separability between background and foreground. Different types of entropy, such as Kapur entropy [17], Rényi entropy [18], Shannon entropy [19] and Tsallis entropy [20], can be employed, and the information they exploit in entropy-based image thresholding is either additive or non-additive [21]. Rényi entropy can address additivity [18], while Tsallis entropy can take non-additivity into consideration; however, neither can simultaneously employ both additive and non-additive information.
Masi entropy [22] combines the additivity feature of Rényi entropy and the non-extensivity feature of Tsallis entropy. Masi entropy has shown remarkable performance for BLIT, but its efficiency drastically decreases with an increasing number of thresholds due to the resulting time complexity. To address this issue, population-based metaheuristic algorithms (PBMHs) such as differential evolution (DE) and particle swarm optimisation (PSO), in which a population of candidate solutions is iteratively and co-operatively improved, offer a powerful alternative. While PBMHs have been extensively used for image segmentation [23], there are only few works on PBMHs for Masi-based MLIT problems. Khairuzzaman et al. [24] employ PSO with Masi entropy for image segmentation and show that PSO can outperform the dragonfly algorithm (DA) on six benchmark images. Fractional-order Darwinian PSO was used in [25] for image segmentation based on Masi entropy, with a post-processing step introduced to remove small segmented regions and merge them into bigger ones. In [26], the water cycle algorithm (WCA) was employed for image thresholding using Masi entropy as the objective function; the obtained results indicate that WCA can achieve better performance than 5 other algorithms on 10 benchmark images. Ref. [27] employs the moth swarm algorithm (MSA) for image thresholding based on context-sensitive energy and Masi entropy and shows that it can outperform several PBMHs. Other PBMHs, including the multi-verse optimiser (MVO) [28,29], Harris hawks optimisation (HHO) [21,30], the cuttlefish algorithm (CA) [31], and the barnacles mating optimiser (BMO) [32], have also been employed for Masi entropy-based MLIT problems.
Differential evolution [33] is a well-established PBMH with three main operators: mutation, crossover, and selection. Similar to other PBMHs, during initialisation, a starting population of individuals is (randomly) generated. The mutation operator generates a mutant vector based on the differences among individuals, while crossover combines the mutant vector and its parent. Finally, the selection operator chooses the individual to include in the next iteration. In recent years, much research has focussed on improving DE [34,35,36], while DE has been shown to yield notable performance in solving complex problems [37,38,39].
In this paper, we propose a novel multi-level image thresholding algorithm named Masi entropy-based grouping differential evolution boosted by attraction and repulsion strategies (ME-GDEAR). Our proposed algorithm employs a grouping strategy using a clustering algorithm to partition the current population into groups. ME-GDEAR then uses the cluster information to update the current population. In addition, we apply attraction and repulsion strategies to further improve the efficacy of the algorithm. Extensive experiments on a set of benchmark images convincingly show the excellent image thresholding performance of ME-GDEAR in comparison to other approaches.
The remainder of the paper is organised as follows. Section 2 reviews some background about differential evolution, clustering, and image thresholding. Section 3 explains our proposed algorithm in detail, while Section 4 evaluates and discusses the obtained experimental results. Section 5 concludes the paper.

2. Background

2.1. Differential Evolution

Differential evolution (DE) [33] is a well-established population-based metaheuristic algorithm that has shown good performance in solving complex optimisation problems from a broad spectrum of domains [37,40,41]. The canonical DE algorithm includes four main steps: initialisation, mutation, crossover, and selection. The pseudo-code of DE is given in Algorithm 1, while the main operators are described below.
Algorithm 1: Pseudo-code of the DE algorithm. (The pseudo-code appears as a figure in the original article and is not reproduced here.)

2.1.1. Initialisation

Similarly to other PBMHs, DE begins with a population of $N_P$ randomly generated individuals, where for a $D$-dimensional problem, an individual is defined as $x_i = (x_{i,1}, x_{i,2}, \ldots, x_{i,D}) \in \mathbb{R}^D$.

2.1.2. Mutation

Mutation creates a mutant vector based on differences among individuals. While there is a wide range of mutation operators, DE/rand/1 is popular and defined as
$$ v_i = x_{r_1} + F \, (x_{r_2} - x_{r_3}), \qquad (1) $$
where $x_{r_1}$ (called the base vector), $x_{r_2}$, and $x_{r_3}$ are three mutually distinct individuals randomly selected from the current population, and $F$ is a scaling factor.

2.1.3. Crossover

Crossover combines the mutant and parent vectors, with the aim of enhancing the exploration of the population. Among the different crossover operators, binomial crossover is often chosen and is formulated as
$$ u_{i,j} = \begin{cases} v_{i,j} & \text{if } rand(0,1) \le CR \text{ or } j = j_{rand} \\ x_{i,j} & \text{otherwise,} \end{cases} \qquad (2) $$
where $i = 1, \ldots, N_P$, $j = 1, \ldots, D$, $u_i$ is called the trial vector, $CR$ is the crossover rate, and $j_{rand}$ is a random integer in $[1, D]$ ensuring that at least one component is inherited from the mutant vector.

2.1.4. Selection

The selection operator aims to select the better individual from the trial vector and the parent vector for inclusion in the next population.
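The four operators above can be combined into a minimal DE/rand/1/bin loop. The following is an illustrative sketch, not the paper's implementation; the parameter values and the sphere test function are our own choices:

```python
import random

def de_rand_1_bin(fobj, bounds, NP=20, F=0.5, CR=0.9, max_iter=100, seed=0):
    """Minimise fobj with canonical DE/rand/1/bin (illustrative sketch)."""
    rng = random.Random(seed)
    D = len(bounds)
    # initialisation: NP random individuals within the given bounds
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
    fit = [fobj(x) for x in pop]
    for _ in range(max_iter):
        for i in range(NP):
            # mutation (DE/rand/1): three mutually distinct individuals, all != i
            r1, r2, r3 = rng.sample([j for j in range(NP) if j != i], 3)
            v = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(D)]
            # binomial crossover with guaranteed inheritance at j_rand
            j_rand = rng.randrange(D)
            u = [v[d] if (rng.random() <= CR or d == j_rand) else pop[i][d]
                 for d in range(D)]
            u = [min(max(c, lo), hi) for c, (lo, hi) in zip(u, bounds)]
            # selection: keep the better of trial and parent
            fu = fobj(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = min(range(NP), key=lambda i: fit[i])
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)   # toy objective for illustration
x, f = de_rand_1_bin(sphere, [(-5, 5)] * 3)
```

Note that this sketch replaces a winning trial vector immediately rather than at the end of the generation, which is a common minor variant.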

2.2. Clustering

Clustering is an unsupervised pattern recognition technique to divide a set of samples into a number of groups so that samples located in the same cluster are more similar compared to those in different clusters. The main characteristics of a clustering algorithm are:
  • Each cluster should have at least one sample: $C_i \neq \emptyset$, $i = 1, \ldots, K$;
  • Together, the clusters must cover all samples: $\bigcup_{i=1}^{K} C_i = O$; and
  • Distinct clusters should not have a mutual sample: $C_i \cap C_j = \emptyset$, $i, j = 1, \ldots, K$, $i \neq j$.
Among the different clustering algorithms, k-means [42] is a simple yet effective approach that is widely employed. k-means proceeds in the following steps:
  1. Randomly select k samples as cluster centres;
  2. Allocate each sample to its closest cluster centre based on a distance metric (often the Euclidean distance);
  3. Recalculate the cluster centres as the mean value of the samples located in each cluster;
  4. If the stopping condition is satisfied, terminate; otherwise, go to Step 2.
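The steps above can be sketched in a few lines of pure Python; the sample data and seed below are our own illustrative choices:

```python
import random

def kmeans(samples, k, max_iter=100, seed=0):
    """Plain k-means on lists of floats (illustrative sketch)."""
    rng = random.Random(seed)
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    centres = rng.sample(samples, k)            # step 1: random initial centres
    for _ in range(max_iter):
        # step 2: assign each sample to its nearest centre
        clusters = [[] for _ in range(k)]
        for s in samples:
            clusters[min(range(k), key=lambda c: dist(s, centres[c]))].append(s)
        # step 3: recompute centres as cluster means (keep old centre if empty)
        new = [[sum(col) / len(cl) for col in zip(*cl)] if cl else centres[c]
               for c, cl in enumerate(clusters)]
        if new == centres:                      # step 4: stop on convergence
            break
        centres = new
    return centres, clusters

pts = [[0.0], [0.2], [0.1], [5.0], [5.2], [4.9]]
centres, clusters = kmeans(pts, 2)
```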

2.3. Multi-Level Image Thresholding

Multi-level image thresholding is a popular approach for image segmentation. MLIT aims to find $D$ threshold values that partition the image as
$$
\begin{aligned}
M_0 &= \{ f(x,y) \in I \mid 0 \le f(x,y) \le th_1 - 1 \}\\
M_1 &= \{ f(x,y) \in I \mid th_1 \le f(x,y) \le th_2 - 1 \}\\
&\vdots\\
M_i &= \{ f(x,y) \in I \mid th_i \le f(x,y) \le th_{i+1} - 1 \}\\
&\vdots\\
M_D &= \{ f(x,y) \in I \mid th_D \le f(x,y) \le L - 1 \}
\end{aligned} \qquad (3)
$$
where $f(x,y)$ indicates the image pixel at location $(x,y)$ and $L$ is the number of intensity levels in the image. Each $M_i$ thus gives an image segment based on the threshold values, and it is the selection of these $D$ thresholds $th_1, \ldots, th_D$ that is at the core of this paper.
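The class membership defined above amounts to locating a pixel value among the sorted thresholds. A minimal sketch (the function name and the test values are illustrative, not from the paper):

```python
def segment_of(pixel, thresholds, L=256):
    """Return the index i of the class M_i containing the pixel value,
    given thresholds th_1 < ... < th_D (illustrative sketch)."""
    i = 0
    for th in sorted(thresholds):
        if pixel < th:      # class M_i covers [th_i, th_{i+1} - 1]
            return i
        i += 1
    return i                # last class M_D covers [th_D, L - 1]

# three thresholds partition the grey levels into four classes M_0..M_3
ths = [60, 120, 200]
labels = [segment_of(p, ths) for p in (0, 59, 60, 150, 255)]
```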

3. Proposed ME-GDEAR Algorithm

In this paper, we propose Masi entropy-based grouping differential evolution boosted by attraction and repulsion strategies (ME-GDEAR), as an improved DE algorithm for multi-level image thresholding. The general structure of our proposed algorithm is shown in Figure 1. In the following, we first explain the main components of ME-GDEAR, and then detail how the algorithm proceeds.

3.1. Grouping Strategy

We propose a grouping strategy, inspired by [43], for dividing the current population into groups. Our grouping strategy has two main operators: region creation and population update.

3.1.1. Region Creation

Our grouping strategy first creates some regions based on the k-means algorithm. Here, each cluster indicates a region and the number of clusters is set as a random number between 2 and N P . Cluster centres are the means of individuals in the same cluster, meaning that each cluster centre holds information about the individuals in the cluster. The cluster centres thus support a sort of multi-parent crossover. Figure 2 indicates the process of region creation for a toy example.

3.1.2. Population Update

The cluster centres created above should be included in the current population. To this end, we employed a generic population-based algorithm (GPBA) proposed in [43,44] to boost the performance of the algorithm. GPBA uses four operators to tackle optimisation problems, namely:
  • Selection: randomly choose some individuals from the current population. This relates to choosing the initial samples in the k-means algorithm;
  • Generation: create $m$ individuals as set $A$. For this, ME-GDEAR selects the cluster centres as the new individuals, that is, the new individuals are generated using k-means clustering;
  • Substitution: choose $m$ individuals (set $B$) from the population for substitution. There are various ways to select individuals from the population; ME-GDEAR uses random selection as a simple strategy;
  • Update: from the union set $A \cup B$, the $m$ best individuals are selected as $\bar{B}$. The new population is then obtained as $(P \setminus B) \cup \bar{B}$.
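The four GPBA operators can be sketched as a single update function. This is an illustrative sketch assuming a maximised objective; the toy population and fitness function are our own:

```python
import random

def gpba_update(population, fitness, cluster_centres, seed=0):
    """One GPBA-style population update (illustrative sketch): the generated
    set A holds the cluster centres, B holds m randomly chosen population
    members, and the m best of the union A + B replace B in the population."""
    rng = random.Random(seed)
    m = len(cluster_centres)
    # substitution: pick m random population members as set B
    B_idx = set(rng.sample(range(len(population)), m))
    B = [population[i] for i in B_idx]
    # update: keep the m best of the union (fitness is maximised here)
    best_m = sorted(cluster_centres + B, key=fitness, reverse=True)[:m]
    survivors = [x for i, x in enumerate(population) if i not in B_idx]
    return survivors + best_m

pop = [[1.0], [2.0], [3.0], [4.0]]
fit = lambda x: x[0]              # toy objective: larger is better
new_pop = gpba_update(pop, fit, [[10.0], [0.5]])
```

A strong cluster centre (here `[10.0]`) survives into the new population, while a weak one (`[0.5]`) is discarded in favour of the random individuals it competed against.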

3.1.3. Clustering Period

In ME-GDEAR, clustering is not performed in every iteration. Instead, clustering is periodically performed [43,45], where parameter C P defines the clustering period. Selecting an effective clustering period is essential so that DE can create stable clusters.

3.2. Attraction and Repulsion Strategies

We introduce attraction and repulsion strategies into ME-GDEAR, inspired by the whale optimisation algorithm (WOA) [46], in order to explore the search space more effectively. These strategies are applied with a probability $P_r$. Three operators are employed, which we explain below; switching between them is performed based on some probabilities.

3.2.1. Repulsion from Random Individuals

This operator causes individuals to move away from some randomly selected individuals as
$$ x_i = x_r - A \cdot M, \qquad (4) $$
with
$$ M = | C \cdot x_r - x_i |, \qquad (5) $$
where $x_r$ is a random individual selected from the current population, $A$ is a number whose magnitude is greater than 1, and $C$ is a random number between 0 and 2.

3.2.2. Attraction towards the Best Individual

Here, each individual tries to converge towards the best individual as
$$ x_i = x_{best} - A \cdot M, \qquad (6) $$
with
$$ M = | C \cdot x_{best} - x_i |, \qquad (7) $$
where $x_{best}$ is the best individual in the current population, $A$ is a number whose magnitude is less than 1, and $C$ is a random number between 0 and 2.

3.2.3. Attraction towards the Best Individual (Spirally)

This operator updates an individual in a spiral way as
$$ x_i = x_{best} + e^{bl} \cos(2 \pi l) \cdot M, \qquad (8) $$
with
$$ M = | x_{best} - x_i |, \qquad (9) $$
where $x_{best}$ is the position of the best individual, $b$ is a constant, and $l$ is a random number in $[-1, 1]$.
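The three update rules above can be sketched per dimension as follows. Treating A and C as scalars shared across dimensions is a simplifying assumption of this sketch, and the variable names are illustrative:

```python
import math
import random

def repulsion(x, x_rand, A, C):
    """Move away from a random individual: x' = x_r - A*M, M = |C*x_r - x|
    (used with |A| > 1)."""
    return [xr - A * abs(C * xr - xi) for xi, xr in zip(x, x_rand)]

def attraction(x, x_best, A, C):
    """Converge towards the best individual: x' = x_best - A*M (|A| < 1)."""
    return [xb - A * abs(C * xb - xi) for xi, xb in zip(x, x_best)]

def spiral_attraction(x, x_best, b=1.0, l=None, seed=0):
    """Spiral update: x' = x_best + exp(b*l) * cos(2*pi*l) * |x_best - x|."""
    if l is None:
        l = random.Random(seed).uniform(-1, 1)
    return [xb + math.exp(b * l) * math.cos(2 * math.pi * l) * abs(xb - xi)
            for xi, xb in zip(x, x_best)]

x, best = [1.0, 2.0], [3.0, 4.0]
```

With A = 0 the attraction operator lands exactly on the best individual, while l = 0 reduces the spiral to a step of size |x_best - x| past the best individual.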

3.3. Encoding Strategy

The encoding strategy determines the structure of each individual in the population. In ME-GDEAR, we employ, as illustrated in Figure 3, a one-dimensional vector to encode the threshold values as
$$ x = [th_1, th_2, \ldots, th_D], \qquad (10) $$
where $D$ is the number of threshold values and $th_i$ is the $i$-th threshold value.

3.4. Objective Function

The probability of occurrence of pixel intensity $i$ is
$$ h_i = \frac{n_i}{M \cdot N}, \qquad h_i \ge 0, \qquad \sum_{i=0}^{L-1} h_i = 1, \qquad (11) $$
where $M$ and $N$ are the dimensions of the image, $L$ is the number of image intensities, and $n_i$ is the number of pixels of intensity $i$.
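The normalised histogram can be computed directly; a minimal sketch on a toy grey-level image (the image values are our own):

```python
def grey_histogram(image, L=256):
    """Normalised grey-level histogram h_i = n_i / (M*N) (illustrative sketch)."""
    n = [0] * L
    for row in image:
        for pixel in row:
            n[pixel] += 1
    total = sum(n)                       # equals M*N for a rectangular image
    return [count / total for count in n]

img = [[0, 0, 255], [255, 255, 128]]     # a toy 2x3 "image"
h = grey_histogram(img)
```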
For our MLIT algorithm, the class likelihoods are computed as
$$ w_1 = \sum_{i=0}^{th_1} h_i, \quad w_2 = \sum_{i=th_1+1}^{th_2} h_i, \quad \ldots, \quad w_D = \sum_{i=th_D+1}^{L-1} h_i, \qquad (12) $$
and the multi-level Masi entropy (MME) of each class is calculated as
$$
\begin{aligned}
H_1 &= \frac{1}{1-r} \log \Big[ 1 - (1-r) \sum_{i=0}^{th_1} \frac{h_i}{w_1} \log \frac{h_i}{w_1} \Big]\\
H_2 &= \frac{1}{1-r} \log \Big[ 1 - (1-r) \sum_{i=th_1+1}^{th_2} \frac{h_i}{w_2} \log \frac{h_i}{w_2} \Big]\\
&\vdots\\
H_D &= \frac{1}{1-r} \log \Big[ 1 - (1-r) \sum_{i=th_D+1}^{L-1} \frac{h_i}{w_D} \log \frac{h_i}{w_D} \Big],
\end{aligned} \qquad (13)
$$
where $r$ is the value of the entropic parameter.
Finally, we define the objective function, which is to be maximised, as
$$ f(th_1, th_2, \ldots, th_D) = H_1 + H_2 + \ldots + H_D. \qquad (14) $$
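Putting the class likelihoods, per-class Masi entropies, and the summed objective together, a sketch of the cost function might look as follows. The handling of empty classes and zero-probability bins is our own choice, as the text does not specify it, and the toy histogram is illustrative:

```python
import math

def masi_objective(h, thresholds, r=1.2):
    """Sum of per-class Masi entropies (illustrative sketch). Classes cover
    [0, th_1], [th_1+1, th_2], ..., [th_D+1, L-1] of the histogram h."""
    L = len(h)
    ths = sorted(thresholds)
    bounds = [0] + [t + 1 for t in ths] + [L]  # class j covers bounds[j]..bounds[j+1]-1
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(h[lo:hi])                       # class likelihood
        if w == 0:
            continue                            # empty class contributes nothing
        s = sum((p / w) * math.log(p / w) for p in h[lo:hi] if p > 0)
        total += math.log(1 - (1 - r) * s) / (1 - r)
    return total

h = [0.25, 0.25, 0.0, 0.0, 0.25, 0.25]   # toy histogram with L = 6 grey levels
best_split = masi_objective(h, [1])       # splits the two histogram modes
```

For this bimodal toy histogram, splitting between the two modes (threshold 1) scores higher than a threshold inside a mode (threshold 0), as one would expect from an entropy criterion.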

3.5. Proposed Algorithm

Our ME-GDEAR algorithm, which performs clustering-based DE boosted by attraction and repulsion strategies for Masi-entropy multi-level image segmentation, proceeds in the following steps:
  1. Initialise the parameters, including population size $N_P$, maximum number of function evaluations $NFE_{max}$, clustering period $CP$, probability of the attraction and repulsion strategies $P_r$, and entropic parameter $r$. Set the current number of function evaluations $NFE = 0$ and the current iteration $iter = 1$.
  2. Generate the initial population of size $N_P$ using uniformly distributed random numbers.
  3. Calculate the objective function value of each individual in the population using Equation (14).
  4. Set $NFE = NFE + N_P$.
  5. For each individual, perform Steps 5a–5d:
    (a) Apply the mutation operator;
    (b) Apply the crossover operator;
    (c) Calculate the objective function using Equation (14);
    (d) Apply the selection operator.
  6. Set $NFE = NFE + N_P$.
  7. If $iter \bmod CP = 0$, go to Step 7a; otherwise, go to Step 8:
    (a) Randomly generate $k$ as an integer between 2 and $N_P$;
    (b) Perform k-means clustering and select the $k$ cluster centres as set $A$;
    (c) Select $k$ individuals randomly from the current population as set $B$;
    (d) From $A \cup B$, select the best $k$ individuals as $\bar{B}$;
    (e) Select the new population as $(P \setminus B) \cup \bar{B}$.
  8. If $rand < P_r$, go to Step 8a; otherwise, go to Step 9:
    (a) Generate two random numbers, $r_1$ and $r_2$, between 0 and 1, and one random number, $C$, between 0 and 2;
    (b) Set $a = 2 - NFE \cdot (2 / NFE_{max})$ and $A = 2 a r_1 - a$;
    (c) If $rand < 0.5$, go to Step 8d; otherwise, go to Step 8g;
    (d) If $|A| \ge 1$, go to Step 8e; otherwise, go to Step 8f;
    (e) Apply the repulsion operator using Equation (4) and go to Step 9;
    (f) Apply the attraction operator using Equation (6) and go to Step 9;
    (g) Apply the spiral attraction operator using Equation (8).
  9. Set $iter = iter + 1$.
  10. If $NFE > NFE_{max}$, go to Step 11; otherwise, go to Step 5.
  11. Select the best individual as the set of optimal threshold values.
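The operator-switching logic of Steps 8a–8g can be sketched as a small helper. Because $a$ decreases linearly from 2 to 0 over the run, $|A|$ shrinks and attraction gradually takes over from repulsion; the deterministic seed below is for illustration only:

```python
import random

def choose_operator(NFE, NFE_max, seed=0):
    """Pick one of the three operators, Steps 8a-8g style (illustrative sketch)."""
    rng = random.Random(seed)
    r1, r2 = rng.random(), rng.random()   # Step 8a (r2 generated as in the text)
    C = rng.uniform(0, 2)
    a = 2 - NFE * (2 / NFE_max)           # Step 8b: a goes linearly from 2 to 0
    A = 2 * a * r1 - a                    # so A is drawn from [-a, a]
    if rng.random() < 0.5:                # Step 8c: coin flip
        op = "repulsion" if abs(A) >= 1 else "attraction"   # Steps 8d-8f
    else:
        op = "spiral"                     # Step 8g
    return op, A, C

op, A, C = choose_operator(NFE=0, NFE_max=10_000)
```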

3.6. Monte-Carlo Simulations

In our approach, clustering acts similarly to a multi-parent crossover. To analyse the effect of clustering on the algorithm’s performance, we designed some Monte-Carlo simulations. For this, we selected three representative images from the Berkeley image segmentation database [47], namely 147091, 101087, and 253027.
The golden region was defined as a hyper-sphere whose diameter is the middle 60% interval of the shrunken search space and whose centre is the centre of the shrunken search space [48]. The lower and higher bounds of the shrunken search space are the minimum and maximum of the current population, respectively. An individual is located in the golden region if the distance to the centre point is less than the radius of the hyper-sphere. Points in the golden region are more likely to be close to an unknown optimum solution [48].
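A membership test for the golden region might be sketched as follows. Normalising each dimension of the shrunken space to [0, 1], so that the middle 60% interval becomes a hyper-sphere of radius 0.3 around the centre, is our reading of the description; the reference does not spell this detail out:

```python
def in_golden_region(x, population):
    """Golden-region membership check (illustrative sketch, see assumptions above)."""
    D = len(x)
    # bounds of the shrunken search space: per-dimension min/max of the population
    lows = [min(p[d] for p in population) for d in range(D)]
    highs = [max(p[d] for p in population) for d in range(D)]
    # normalise so the shrunken space becomes the unit hyper-cube
    z = [(xi - lo) / (hi - lo) if hi > lo else 0.5
         for xi, lo, hi in zip(x, lows, highs)]
    # inside if within radius 0.3 of the centre (0.5, ..., 0.5)
    return sum((zi - 0.5) ** 2 for zi in z) <= 0.3 ** 2

pop = [[0.0, 0.0], [10.0, 10.0], [2.0, 7.0]]
```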
In the first simulation, the percentages of cluster centres and random individuals located in the golden region were computed. In each iteration, a number of random individuals (matching the population size) were generated and their locations determined (inside the golden region or not), and the locations of the cluster centres were obtained likewise. Figure 4 gives the results (all simulations were repeated 10,000,000 times) and shows that the probability of a cluster centre falling in the golden region is much higher than that of a random individual, indicating that cluster centres are biased towards the centre of the golden region.
In the next experiment, we calculated the distances from the centre of the golden region to the cluster centres and to random individuals. From Figure 5, which shows the results, we can observe that the distance between the cluster centres and the centre of the golden region is smaller than that between random individuals and the centre, indicating that the cluster centres lie closer to the centre of the golden region.
Finally, we evaluate the mean objective function value with and without our proposed grouping strategy to assess its effectiveness. Figure 6 shows that for all images, the mean objective function values are improved, confirming that the grouping stage leads to improved thresholding performance.

4. Results and Discussion

In order to evaluate the performance of our proposed ME-GDEAR algorithm, we performed several experiments on a set of benchmark images which are widely used to test thresholding algorithms, namely Boats, Peppers, Goldhill, Lenna, and House, as well as seven images from the Berkeley image segmentation database [47], 12003, 181079, 175043, 101085, 147091, 101087, and 253027. Figure 7 shows the images and their histograms. As we can see, the image histograms show different characteristics; some images such as Lenna and Peppers have different peaks and valleys, while others such as 175043 have only one peak and images such as Goldhill have abrupt changes in the histogram.
We compared ME-GDEAR with a number of population-based image thresholding algorithms, including Masi entropy-based differential evolution (ME-DE), the Masi entropy-based firefly algorithm (ME-FA), Masi entropy-based bat algorithm (ME-BA), Masi entropy-based moth flame optimisation (ME-MFO), Masi entropy-based dragonfly algorithm (ME-DA), and Masi entropy-based whale optimisation algorithm (ME-WOA).
The population size and the number of function evaluations for all algorithms were 50 and 10,000, respectively. For ME-GDEAR, $CP$ and $P_r$ were set to 5 and 0.2, respectively. For the other algorithms, we used the default values for the various parameters, which are listed in Table 1. For all algorithms, the entropic parameter was set to 1.2. Each algorithm was run 25 times and we report the average and standard deviation over these 25 runs.

4.1. Objective Function Results

We first compared the algorithms in terms of objective function values. Table 2 gives the results of all algorithms and all images for D = 3 . For each image and algorithm, we give the average, standard deviation, and resulting rank (based on the average) of each algorithm. In addition, the average ranks and overall ranks are reported.
As we can see, ME-GDEAR is ranked first or second for 8 of the 12 images, leading to the first overall rank. ME-DE is ranked top for three images, while ME-FA gives the best results for two images and these two algorithms give the second-best results overall.
Table 3 reports the results for D = 4 . ME-GDEAR is again clearly ranked first overall. By comparing Table 2 and Table 3, we can observe that ME-DE drops from an average rank of 3.25 to 4.50, leading to an overall rank of 5 for D = 4 . In contrast, ME-MFO is ranked second overall for D = 4 , improving from its fourth rank for D = 3 .
For D = 5 , similar results can be seen in Table 4. ME-GDEAR yields the first overall rank, while ME-MFO is ranked second. There is a clear difference between the average rank of ME-GDEAR (1.83) and that of ME-DE (4.17) which shows that our approach clearly outperforms differential evolution.
The curse of dimensionality is a challenging problem in solving an optimisation problem, since increasing the number of dimensions results in exponentially expanding the search space. To assess our proposed algorithm in higher dimensions, we compared ME-GDEAR for D = 10 against the other algorithms in Table 5. It is obvious that our algorithm again yields the best results, being ranked first or second for 9 of the 12 images, while ME-BA is ranked second overall.
Overall, ME-GDEAR thus outperforms all other algorithms for all tested dimensionalities, indicating its impressive multi-level image thresholding performance.

4.2. Feature Similarity Index Results

The feature similarity index measure (FSIM) [53] is a popular measure for evaluating image quality based on two low-level features: phase congruency, which measures the significance of local structures, and gradient magnitude, which incorporates contrast information.
Table 6 lists the FSIM results for D = 3 . From there, we can see that our proposed algorithm is again ranked top overall. The same holds for D = 4 whose results are in Table 7 and for D = 5 with results in Table 8.
The results for the higher-dimensional problem with D = 10 are given in Table 9. From there, we can see that ME-GDEAR maintains its efficacy and outperforms all other algorithms.
Overall, ME-GDEAR also outperforms all other algorithms in terms of FSIM and does so for all dimensionalities, confirming the efficacy of our proposed algorithm.

4.3. Dice Measure

We further performed an evaluation based on Dice similarity [54], which measures the overlap between two segmented images. Since the Dice measure requires a ground truth, we can only apply it on the images of the Berkeley segmentation dataset. As there are multiple manual segmentations for each image, we take the maximum obtained Dice score as our measure for comparison.
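The Dice measure can be computed per class as below; the label maps and class label are toy values of our own:

```python
def dice(seg_a, seg_b, label=1):
    """Dice similarity for one class between two label maps:
    2*|A intersect B| / (|A| + |B|) (illustrative sketch)."""
    a = [p == label for p in seg_a]
    b = [p == label for p in seg_b]
    inter = sum(p and q for p, q in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

pred = [1, 1, 0, 0]   # toy predicted segmentation (flattened)
gt = [1, 0, 0, 0]     # toy manual segmentation
score = dice(pred, gt)
```

With multiple manual segmentations per image, the reported score would be the maximum of `dice(pred, gt_k)` over the available ground truths.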
Table 10 gives the results for D = 3 and shows ME-GDEAR to give the best Dice score for 5 of the 7 images, and, consequently, the best average rank.
Similar results are obtained for D = 4 , D = 5 , and D = 10 , as can be observed from Table 11, Table 12 and Table 13, respectively.

4.4. Statistical Tests

Owing to the stochastic nature of PBMHs, we also performed statistical tests, based on objective function performance, to further assess the algorithms. In particular, we conducted two non-parametric statistical tests, the Wilcoxon signed rank test and the Friedman test [55]. The Wilcoxon signed rank test is a pair-wise test to compare two algorithms, while the Friedman test allows more than two algorithms to be evaluated. The null hypothesis ($H_0$) states that there is no significant difference between the algorithms, while the alternative hypothesis ($H_1$) states that there is a difference. The level of statistical significance $\alpha$ gives the threshold for rejecting the null hypothesis: if the calculated p-value is lower than $\alpha$, $H_0$ is rejected.
The results of the Wilcoxon signed rank test between ME-GDEAR and the other algorithms are given in Table 14. From there, we can see that in all cases, the obtained p-value is much smaller than α = 0.05 , confirming that ME-GDEAR statistically outperforms the other algorithms.
The results of the Friedman test are given in Table 15. It is apparent that ME-GDEAR yields the lowest rank (1.96) and with a wide margin over the second ranked algorithm (ME-BA). The obtained p-value is negligible, confirming the fact that there is a significant difference between the algorithms. The critical value for (8 − 1) = 7 degrees of freedom with a 0.05 significance level is 14.067 (from chi-squared distribution table). The obtained chi-squared value of 87.6 is much higher than the critical value; in other words, H 0 is rejected.

4.5. Visual Evaluation

In this section, we visually compare the results of the algorithms. For this, we select (due to length restrictions) image 147091 for D = 5 and image 101087 for D = 10 as representative examples. Since the images are from the Berkeley segmentation dataset, there are several ground truth segmentations available for each, although these are often quite different.
Figure 8 shows the manual segmentations together with the images thresholded by all algorithms for image 147091 for D = 5 . We can notice that our proposed algorithm can segment the image with less noise, particularly the parts of the sky that are cloudless. In contrast, some algorithms such as ME-BA and ME-WOA are unable to distinguish between the left vertical margin and its adjacent parts.
Figure 9 shows the results for image 101087 and D = 10. Here, we can observe that some algorithms such as ME-WOA and ME-BA do not perform well, most noticeably in the sky region, while ME-GDEAR works significantly better and with less noise. Some algorithms such as ME-FA, ME-BA, and ME-WOA cannot properly segment the shadow part of the lake; they split it into three different regions of almost equal proportions, while our proposed algorithm segments this part more reasonably into two partitions. It is worth noting that in our proposed method the distribution of the classes in the shadow part is not uniform and most of the shadow belongs to a single class, which is more in line with reality.

4.6. Effect of Parameters

In ME-GDEAR, we introduce two new parameters, namely C P and P r . To see their effect, we select three representative images, 147091, 101087, and 253027 with D = 10 . As shown in Figure 10, the performance highly depends on C P . Therefore, finding a good value for C P is beneficial to achieve better thresholding. The best value was obtained for C P = 5 .
Figure 11 shows results for different values of P r . As we can see, 0.2 is an appropriate value for this parameter.

5. Conclusions

Multi-level image thresholding remains a popular image segmentation approach. Its aim is to find optimal thresholds based on information available in the image histogram. In this paper, we proposed an improved differential evolution algorithm for MLIT based on Masi entropy. Our ME-GDEAR algorithm introduces (1) a grouping strategy into DE to cluster the population and use cluster information to update the population; and (2) attraction and repulsion strategies to more effectively update individuals. Experiments on a benchmark image set with different characteristics clearly demonstrate that ME-GDEAR outperforms other MLIT approaches.
One challenge of image thresholding algorithms is that they may not be widely used on their own, particularly for higher dimensions. However, they can be effectively employed as a pre-processing technique. For example, ref. [56] uses image thresholding as a pre-processing step for a subsequent graph cut segmentation algorithm. Therefore, in future work, we intend to integrate our approach with other image segmentation algorithms. Another limitation is that only the image histogram is used, thus ignoring 2-dimensional image information such as texture.
Furthermore, some of the drawbacks of ME-GDEAR can be addressed in future work. For instance, it uses k-means to cluster the population which can be time-consuming. Using methods with lower computational demand can thus be considered. Furthermore, as is common with other population-based metaheuristic algorithms, parameter tuning is a demanding task and investigating mechanisms for automatic parameter-tuning will be beneficial. Other planned future work includes the application of alternative objective functions to improve segmentation and a multi-objective variant of the algorithm.

Author Contributions

Conceptualization, S.J.M. and D.O.; Formal analysis, S.J.M. and D.Z.; Investigation, S.J.M.; Methodology, S.J.M., D.O., M.P.-C. and G.S.; Software, S.J.M. and D.Z.; Supervision, S.J.M., D.O.; Validation, S.J.M., D.Z. and G.S.; Visualization, S.J.M.; Writing—original draft, S.J.M.; Writing—review & editing, D.O., M.P.-C. and G.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rundo, L.; Tangherloni, A.; Cazzaniga, P.; Nobile, M.S.; Russo, G.; Gilardi, M.C.; Vitabile, S.; Mauri, G.; Besozzi, D.; Militello, C. A novel framework for MR image segmentation and quantification by using MedGA. Comput. Methods Programs Biomed. 2019, 176, 159–172. [Google Scholar] [CrossRef] [PubMed]
  2. Li, Y.; Jiao, L.; Shang, R.; Stolkin, R. Dynamic-context cooperative quantum-behaved particle swarm optimization based on multilevel thresholding applied to medical image segmentation. Inf. Sci. 2015, 294, 408–422. [Google Scholar] [CrossRef] [Green Version]
  3. Sanei, S.H.R.; Barsotti, E.J.; Leonhardt, D.; Fertig, R.S., III. Characterization, synthetic generation, and statistical equivalence of composite microstructures. J. Compos. Mater. 2017, 51, 1817–1829. [Google Scholar] [CrossRef]
  4. Mousavirad, S.; Akhlaghian, F.; Mollazade, K. Classification of rice varieties using optimal color and texture features and BP neural networks. In Proceedings of the 7th Iranian Conference on Machine Vision and Image Processing, Tehran, Iran, 16–17 November 2011; pp. 1–5. [Google Scholar]
  5. Pal, N.R.; Pal, S.K. A review on image segmentation techniques. Pattern Recognit. 1993, 26, 1277–1294. [Google Scholar] [CrossRef]
  6. Minaee, S.; Boykov, Y.Y.; Porikli, F.; Plaza, A.J.; Kehtarnavaz, N.; Terzopoulos, D. Image segmentation using deep learning: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, in press. Available online: https://ieeexplore.ieee.org/abstract/document/9356353 (accessed on 14 December 2021). [CrossRef] [PubMed]
  7. Mousavirad, S.J.; Schaefer, G.; Oliva, D.; Hinojosa, S. HCS-BBD: An effective population-based approach for multi-level thresholding. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, Lille, France, 10–14 July 2021; pp. 1923–1930. [Google Scholar]
  8. Farshi, T.R. A multilevel image thresholding using the animal migration optimization algorithm. Iran J. Comput. Sci. 2019, 2, 9–22. [Google Scholar] [CrossRef]
  9. Farshi, T.R.; Demirci, R. Multilevel image thresholding with multimodal optimization. Multimed. Tools Appl. 2021, 80, 15273–15289. [Google Scholar] [CrossRef]
  10. Abdel-Basset, M.; Chang, V.; Mohamed, R. A novel equilibrium optimization algorithm for multi-thresholding image segmentation problems. Neural Comput. Appl. 2021, 33, 10685–10718. [Google Scholar] [CrossRef]
  11. Esmaeili, L.; Mousavirad, S.J.; Shahidinejad, A. An efficient method to minimize cross-entropy for selecting multi-level threshold values using an improved human mental search algorithm. Expert Syst. Appl. 2021, 182, 115106. [Google Scholar] [CrossRef]
  12. Mousavirad, S.J.; Schaefer, G.; Ebrahimpour-Komleh, H. A benchmark of population-based metaheuristic algorithms for high-dimensional multi-level image thresholding. In Proceedings of the IEEE Congress on Evolutionary Computation, Wellington, New Zealand, 10–13 June 2019; pp. 2394–2401. [Google Scholar]
  13. Mousavirad, S.J.; Ebrahimpour-Komleh, H. Entropy based optimal multilevel thresholding using cuckoo optimization algorithm. In Proceedings of the 11th International Conference on Innovations in Information Technology, Dubai, United Arab Emirates, 1–3 November 2015; pp. 302–307. [Google Scholar]
  14. Mousavirad, S.J.; Schaefer, G.; Korovin, I. High-dimensional multi-level image thresholding using self-organizing migrating algorithm. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, Cancun, Mexico, 8–12 July 2020; ACM: New York, NY, USA; pp. 1454–1459. [Google Scholar]
  15. Farshi, T.R.; Ardabili, A.K. A hybrid firefly and particle swarm optimization algorithm applied to multilevel image thresholding. Multimed. Syst. 2021, 27, 125–142. [Google Scholar] [CrossRef]
  16. Shubham, S.; Bhandari, A.K. A generalized Masi entropy based efficient multilevel thresholding method for color image segmentation. Multimed. Tools Appl. 2019, 78, 17197–17238. [Google Scholar] [CrossRef]
  17. Kapur, J.N.; Sahoo, P.K.; Wong, A.K. A new method for gray-level picture thresholding using the entropy of the histogram. Comput. Vision, Graph. Image Process. 1985, 29, 273–285. [Google Scholar] [CrossRef]
  18. Sahoo, P.; Wilkins, C.; Yeager, J. Threshold selection using Renyi’s entropy. Pattern Recognit. 1997, 30, 71–84. [Google Scholar] [CrossRef]
  19. Lin, J. Divergence measures based on the Shannon entropy. IEEE Trans. Inf. Theory 1991, 37, 145–151. [Google Scholar] [CrossRef] [Green Version]
  20. De Albuquerque, M.P.; Esquef, I.A.; Mello, A.G. Image thresholding using Tsallis entropy. Pattern Recognit. Lett. 2004, 25, 1059–1065. [Google Scholar] [CrossRef]
  21. Naik, M.K.; Panda, R.; Wunnava, A.; Jena, B.; Abraham, A. A leader Harris hawks optimization for 2-D Masi entropy-based multilevel image thresholding. Multimed. Tools Appl. 2021, 80, 35543–35583. [Google Scholar] [CrossRef]
  22. Masi, M. A step beyond Tsallis and Rényi entropies. Phys. Lett. A 2005, 338, 217–224. [Google Scholar] [CrossRef] [Green Version]
  23. Rundo, L.; Militello, C.; Vitabile, S.; Russo, G.; Sala, E.; Gilardi, M.C. A survey on nature-inspired medical image analysis: A step further in biomedical data integration. Fundam. Inform. 2020, 171, 345–365. [Google Scholar] [CrossRef]
  24. Khairuzzaman, A.K.M.; Chaudhury, S. Masi entropy based multilevel thresholding for image segmentation. Multimed. Tools Appl. 2019, 78, 33573–33591. [Google Scholar] [CrossRef]
  25. Chakraborty, R.; Verma, G.; Namasudra, S. IFODPSO-based multi-level image segmentation scheme aided with Masi entropy. J. Ambient. Intell. Humaniz. Comput. 2020, 12, 7793–7811. [Google Scholar] [CrossRef]
  26. Kandhway, P.; Bhandari, A.K. A water cycle algorithm-based multilevel thresholding system for color image segmentation using Masi entropy. Circuits Syst. Signal Process. 2019, 38, 3058–3106. [Google Scholar] [CrossRef]
  27. Bhandari, A.K.; Rahul, K. A context sensitive Masi entropy for multilevel image segmentation using moth swarm algorithm. Infrared Phys. Technol. 2019, 98, 132–154. [Google Scholar] [CrossRef]
  28. Jia, H.; Peng, X.; Song, W.; Oliva, D.; Lang, C.; Li, Y. Masi entropy for satellite color image segmentation using tournament-based Lévy multiverse optimization algorithm. Remote Sens. 2019, 11, 942. [Google Scholar] [CrossRef] [Green Version]
  29. Kandhway, P.; Bhandari, A.K. Spatial context cross entropy function based multilevel image segmentation using multi-verse optimizer. Multimed. Tools Appl. 2019, 78, 22613–22641. [Google Scholar] [CrossRef]
  30. Wunnava, A.; Naik, M.K.; Panda, R.; Jena, B.; Abraham, A. A differential evolutionary adaptive Harris hawks optimization for two dimensional practical Masi entropy-based multilevel image thresholding. J. King Saud Univ. Comput. Inf. Sci. 2020, in press. [Google Scholar] [CrossRef]
  31. Bhandari, A.K.; Rahul, K.; Shahnawazuddin, S. A fused contextual color image thresholding using cuttlefish algorithm. Neural Comput. Appl. 2021, 33, 271–299. [Google Scholar] [CrossRef]
  32. Li, H.; Zheng, G.; Sun, K.; Jiang, Z.; Li, Y.; Jia, H. A logistic chaotic barnacles mating optimizer with Masi entropy for color image multilevel thresholding segmentation. IEEE Access 2020, 8, 213130–213153. [Google Scholar] [CrossRef]
  33. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  34. Li, W.; Meng, X.; Huang, Y. Fitness distance correlation and mixed search strategy for differential evolution. Neurocomputing 2021, 458, 514–525. [Google Scholar] [CrossRef]
  35. Mousavirad, S.J.; Rahnamayan, S. Differential Evolution Algorithm Based on a Competition Scheme. In Proceedings of the 14th International Conference on Computer Science and Education, Toronto, ON, Canada, 19–21 August 2019; pp. 929–934. [Google Scholar]
  36. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Faris, H. MTDE: An effective multi-trial vector-based differential evolution algorithm and its applications for engineering design problems. Appl. Soft Comput. 2020, 97, 106761. [Google Scholar] [CrossRef]
  37. Fister, I.; Fister, D.; Deb, S.; Mlakar, U.; Brest, J. Post hoc analysis of sport performance with differential evolution. Neural Comput. Appl. 2018, 32, 10799–10808. [Google Scholar] [CrossRef]
  38. Tang, Y.; Ji, J.; Zhu, Y.; Gao, S.; Tang, Z.; Todo, Y. A differential evolution-oriented pruning neural network model for bankruptcy prediction. Complexity 2019, 2019, 8682124. [Google Scholar] [CrossRef] [Green Version]
  39. Hu, H.; Wang, L.; Tao, R. Wind speed forecasting based on variational mode decomposition and improved echo state network. Renew. Energy 2021, 164, 729–751. [Google Scholar] [CrossRef]
  40. Ara, A.; Khan, N.A.; Razzaq, O.A.; Hameed, T.; Raja, M.A.Z. Wavelets optimization method for evaluation of fractional partial differential equations: An application to financial modelling. Adv. Differ. Equ. 2018, 2018, 8. [Google Scholar] [CrossRef]
  41. Mousavirad, S.J.; Rahnamayan, S.; Schaefer, G. Many-level image thresholding using a center-based differential evolution algorithm. In Proceedings of the Congress on Evolutionary Computation, Glasgow, UK, 19–24 July 2020; pp. 1–7. [Google Scholar]
  42. MacQueen, J. Some methods for classification and analysis of multivariate observations. In Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 7 January 1967; pp. 281–297. [Google Scholar]
  43. Cai, Z.; Gong, W.; Ling, C.X.; Zhang, H. A clustering-based differential evolution for global optimization. Appl. Soft Comput. 2011, 11, 1363–1379. [Google Scholar] [CrossRef]
  44. Deb, K. A population-based algorithm-generator for real-parameter optimization. Soft Comput. 2005, 9, 236–253. [Google Scholar] [CrossRef]
  45. Damavandi, N.; Safavi-Naeini, S. A hybrid evolutionary programming method for circuit optimization. IEEE Trans. Circuits Syst. I Regul. Pap. 2005, 52, 902–910. [Google Scholar] [CrossRef]
  46. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  47. Martin, D.; Fowlkes, C.; Tal, D.; Malik, J. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of the 8th International Conference on Computer Vision, Vancouver, BC, Canada, 7–14 July 2001; Volume 2, pp. 416–423. [Google Scholar]
  48. Rahnamayan, S.; Wang, G.G. Center-based sampling for population-based algorithms. In Proceedings of the IEEE Congress on Evolutionary Computation, Trondheim, Norway, 18–21 May 2009; pp. 933–938. [Google Scholar]
  49. Yang, X.S. Firefly algorithm, stochastic test functions and design optimisation. arXiv 2010, arXiv:1003.1409. [Google Scholar] [CrossRef]
  50. Yang, X.S. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization; Springer: Berlin/Heidelberg, Germany, 2010; pp. 65–74. [Google Scholar]
  51. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  52. Mirjalili, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2016, 27, 1053–1073. [Google Scholar] [CrossRef]
  53. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  54. Csurka, G.; Larlus, D.; Perronnin, F.; Meylan, F. What is a good evaluation measure for semantic segmentation? In Proceedings of the British Machine Vision Conference, Bristol, UK, 9–13 September 2013; Volume 27, pp. 10–5244. [Google Scholar]
  55. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  56. Touria, B.; Amine, C.M. Interactive image segmentation based on graph cuts and automatic multilevel thresholding for brain images. J. Med. Imaging Health Inform. 2014, 4, 36–42. [Google Scholar] [CrossRef]
Figure 1. General structure of the ME-GDEAR algorithm.
Figure 2. Population clustering: red points represent individuals and black points indicate cluster centres. The population is divided into 3 clusters. A is the set of cluster centres while B contains some random individuals.
Figure 3. Encoding strategy in ME-GDEAR.
Figure 4. Fractions of cluster centres and random individuals located in the golden region.
Figure 5. Distance between the centre of the golden region and the cluster centres/random individuals.
Figure 6. Mean objective function results with/without grouping strategy.
Figure 7. Test images and their histograms.
Figure 8. Thresholding results for image 147091 for D = 5 . (a) Original image, (b–f) different manual ground truth segmentations, (g) segmented image for ME-DE, (h) segmented image for ME-FA, (i) segmented image for ME-BA, (j) segmented image for ME-MFO, (k) segmented image for ME-DA, (l) segmented image for ME-WOA, and (m) segmented image for ME-GDEAR.
Figure 9. Thresholding results for image 101087 for D = 10 . (a) Original image, (b–f) different manual segmentations, (g) segmented image for ME-DE, (h) segmented image for ME-FA, (i) segmented image for ME-BA, (j) segmented image for ME-MFO, (k) segmented image for ME-DA, (l) segmented image for ME-WOA, and (m) segmented image for ME-GDEAR.
Figure 10. Effect of C P on the mean objective function value for images (a) 147091, (b) 101087, and (c) 253027 for D = 10 .
Figure 11. Effect of P r on the mean objective function value for images (a) 147091, (b) 101087, and (c) 253027 for D = 10 .
Table 1. Parameter settings for the experiments.
Algorithm  Parameter  Value
ME-DE [33]  scaling factor  0.5
ME-DE [33]  crossover probability  0.9
ME-FA [49]  light absorption coefficient (γ)  1
ME-FA [49]  attractiveness at r = 0 (β0)  1
ME-FA [49]  scaling factor  0.25
ME-BA [50]  loudness  0.5
ME-BA [50]  pulse rate  0.5
ME-MFO [51]  a  −1
ME-MFO [51]  b  1
ME-DA [52]  no parameters
ME-WOA [46]  constant defining shape of logarithmic spiral  1
ME-GDEAR  scaling factor  0.5
ME-GDEAR  crossover probability  0.9
ME-GDEAR  clustering period  0.5
ME-GDEAR  Pr  0.2
Table 2. Objective function results for D = 3 .
Image  Measure  ME-DE  ME-FA  ME-BA  ME-MFO  ME-DA  ME-WOA  ME-GDEAR
Boats  mean  35.23  34.25  33.98  34.85  34.73  34.45  34.71
Boats  std.dev.  0.02  0.97  0.96  0.49  0.54  0.41  0.64
Boats  rank  1  6  7  2  3  5  4
Peppers  mean  66.20  63.92  58.27  64.23  66.42  61.71  66.33
Peppers  std.dev.  8.75  9.26  7.95  6.94  7.98  10.34  6.51
Peppers  rank  3  5  7  4  1  6  2
Goldhill  mean  15.56  15.77  15.28  16.03  15.76  15.42  16.05
Goldhill  std.dev.  0.21  0.57  0.93  0.12  0.31  0.89  0.03
Goldhill  rank  5  3  7  2  4  6  1
Lenna  mean  70.74  64.87  62.04  65.91  61.05  63.54  67.10
Lenna  std.dev.  2.17  5.39  5.83  5.67  4.92  6.56  5.04
Lenna  rank  1  4  6  3  7  5  2
House  mean  64.75  66.38  64.67  64.43  64.69  65.16  66.64
House  std.dev.  2.92  7.13  3.44  1.57  4.10  7.84  4.36
House  rank  4  2  6  7  5  3  1
12003  mean  66.30  62.38  58.57  62.06  64.88  64.44  64.29
12003  std.dev.  6.61  6.47  5.77  7.10  5.17  6.29  7.17
12003  rank  1  5  7  6  2  3  4
181079  mean  66.24  63.42  60.68  67.24  61.74  61.07  63.29
181079  std.dev.  3.44  7.56  5.97  3.94  5.25  7.62  6.61
181079  rank  2  3  7  1  5  6  4
175043  mean  63.16  65.59  59.16  63.62  62.32  61.72  64.77
175043  std.dev.  3.50  6.04  4.16  4.75  5.49  6.50  6.30
175043  rank  4  1  7  3  5  6  2
101085  mean  63.96  62.49  61.59  64.08  66.85  61.21  66.20
101085  std.dev.  4.86  5.71  5.09  5.41  3.05  5.94  5.69
101085  rank  4  5  6  3  1  7  2
147091  mean  67.88  67.97  65.16  67.62  66.95  65.15  68.05
147091  std.dev.  1.56  2.70  3.82  1.61  1.20  4.40  2.22
147091  rank  3  2  6  4  5  7  1
101087  mean  59.46  65.73  60.56  64.92  63.64  64.99  71.12
101087  std.dev.  7.20  9.02  7.60  7.09  7.42  8.55  3.91
101087  rank  7  2  6  4  5  3  1
253027  mean  29.99  30.07  29.87  30.03  29.92  29.97  30.03
253027  std.dev.  0.07  0.07  0.23  0.13  0.17  0.16  0.13
253027  rank  4  1  7  2  6  5  3
average rank  3.25  3.25  6.58  3.42  4.08  5.17  2.25
overall rank  2.5  2.5  7  4  5  6  1
Table 3. Objective function results for D = 4 .
Image  Measure  ME-DE  ME-FA  ME-BA  ME-MFO  ME-DA  ME-WOA  ME-GDEAR
Boats  mean  35.39  35.84  35.22  35.87  35.79  35.80  35.84
Boats  std.dev.  0.20  0.02  0.48  0.11  0.19  0.21  0.02
Boats  rank  6  3  7  1  5  4  2
Peppers  mean  66.45  65.67  62.46  68.23  66.39  64.90  71.57
Peppers  std.dev.  6.49  10.13  8.86  7.65  7.90  9.98  7.94
Peppers  rank  3  5  7  2  4  6  1
Goldhill  mean  16.80  17.32  17.24  18.14  17.21  16.87  17.61
Goldhill  std.dev.  0.47  0.29  0.56  0.42  0.31  0.75  0.35
Goldhill  rank  7  3  4  1  5  6  2
Lenna  mean  71.57  66.49  63.34  68.87  63.74  65.17  70.22
Lenna  std.dev.  2.06  5.90  5.44  5.58  5.09  6.61  5.22
Lenna  rank  1  4  7  3  6  5  2
House  mean  64.92  67.11  64.48  67.14  65.55  64.82  68.62
House  std.dev.  2.41  8.91  2.61  4.27  3.78  7.53  4.19
House  rank  5  3  7  2  4  6  1
12003  mean  65.84  63.35  63.79  66.29  69.58  64.16  68.69
12003  std.dev.  7.14  7.49  6.69  6.40  3.66  7.12  5.71
12003  rank  4  7  6  3  1  5  2
181079  mean  68.29  68.97  60.33  68.33  65.18  62.18  65.78
181079  std.dev.  2.79  5.29  5.14  4.08  5.40  7.61  6.79
181079  rank  3  1  7  2  5  6  4
175043  mean  62.84  67.97  61.56  63.57  62.61  59.77  66.16
175043  std.dev.  3.63  5.65  4.35  3.52  5.77  6.05  6.07
175043  rank  4  1  6  3  5  7  2
101085  mean  64.24  64.36  62.44  66.87  69.09  65.96  68.04
101085  std.dev.  4.22  5.80  4.35  5.69  1.54  5.86  4.98
101085  rank  6  5  7  3  1  4  2
147091  mean  70.11  70.11  68.22  69.73  69.41  66.69  70.68
147091  std.dev.  2.28  3.40  4.16  1.88  1.64  5.46  2.28
147091  rank  2  3  6  4  5  7  1
101087  mean  62.25  69.13  62.67  68.57  69.01  66.49  70.57
101087  std.dev.  5.94  10.06  6.96  7.21  8.01  9.04  7.82
101087  rank  7  2  6  4  3  5  1
253027  mean  32.99  33.22  32.89  33.26  33.18  33.06  33.21
253027  std.dev.  0.09  0.11  0.27  0.02  0.11  0.21  0.15
253027  rank  6  2  7  1  4  5  3
average rank  4.50  3.25  6.42  2.42  4.00  5.50  1.92
overall rank  5  3  7  2  4  6  1
Table 4. Objective function results for D = 5 .
Image  Measure  ME-DE  ME-FA  ME-BA  ME-MFO  ME-DA  ME-WOA  ME-GDEAR
Boats  mean  37.76  38.39  37.94  38.40  38.18  38.15  38.35
Boats  std.dev.  0.22  0.12  0.36  0.18  0.20  0.27  0.20
Boats  rank  7  2  6  1  4  5  3
Peppers  mean  68.26  68.96  65.50  69.32  64.66  66.02  70.44
Peppers  std.dev.  7.76  7.89  8.12  6.85  7.07  10.10  8.67
Peppers  rank  4  3  6  2  7  5  1
Goldhill  mean  17.48  18.82  18.28  19.86  18.61  18.49  19.01
Goldhill  std.dev.  0.64  0.64  0.71  0.30  0.41  0.46  0.28
Goldhill  rank  7  3  6  1  4  5  2
Lenna  mean  73.16  68.20  68.15  70.58  64.39  67.49  71.51
Lenna  std.dev.  1.51  5.73  5.63  5.42  5.67  7.35  5.43
Lenna  rank  1  4  5  3  7  6  2
House  mean  67.70  42.00  64.87  65.34  61.41  58.10  68.78
House  std.dev.  3.36  12.44  3.87  9.24  7.85  8.57  4.30
House  rank  2  7  4  3  5  6  1
12003  mean  68.55  69.43  65.12  68.96  67.90  64.66  71.27
12003  std.dev.  4.65  7.55  8.30  7.07  4.65  6.83  5.62
12003  rank  4  2  6  3  5  7  1
181079  mean  70.16  53.32  62.10  69.29  63.59  61.53  66.81
181079  std.dev.  3.02  16.02  5.33  8.51  5.27  9.09  5.89
181079  rank  1  7  6  2  4  5  3
175043  mean  63.96  54.01  61.11  64.55  59.43  60.84  68.18
175043  std.dev.  4.80  13.72  3.48  4.80  4.70  6.85  6.20
175043  rank  3  7  4  2  6  5  1
101085  mean  67.46  65.38  67.37  69.57  69.85  66.43  69.95
101085  std.dev.  4.39  5.28  5.68  4.71  3.45  6.23  4.67
101085  rank  4  7  5  3  2  6  1
147091  mean  70.68  71.75  67.48  70.72  70.01  69.16  70.73
147091  std.dev.  1.67  4.51  3.65  4.44  2.49  5.00  1.49
147091  rank  4  1  7  3  5  6  2
101087  mean  65.94  72.41  64.94  71.27  67.22  68.94  74.58
101087  std.dev.  7.50  8.27  5.43  8.13  7.40  9.07  5.51
101087  rank  6  2  7  3  5  4  1
253027  mean  35.87  36.29  36.02  36.28  36.16  36.25  36.22
253027  std.dev.  0.15  0.05  0.34  0.10  0.16  0.17  0.17
253027  rank  7  1  6  2  5  3  4
average rank  4.17  3.83  5.67  2.33  4.92  5.25  1.83
overall rank  4  3  7  2  5  6  1
Table 5. Objective function results for D = 10 .
Image  Measure  ME-DE  ME-FA  ME-BA  ME-MFO  ME-DA  ME-WOA  ME-GDEAR
Boats  mean  50.61  51.20  51.02  51.38  50.54  51.14  51.10
Boats  std.dev.  0.16  0.15  0.22  0.11  0.27  0.16  0.24
Boats  rank  6  2  5  1  7  3  4
Peppers  mean  51.27  49.84  60.95  54.62  50.92  58.53  73.47
Peppers  std.dev.  3.30  0.26  9.31  10.81  6.36  6.73  5.87
Peppers  rank  5  7  2  4  6  3  1
Goldhill  mean  24.21  24.45  24.22  26.85  23.16  23.72  24.13
Goldhill  std.dev.  0.45  1.12  1.55  1.06  0.55  0.87  0.92
Goldhill  rank  4  2  3  1  7  6  5
Lenna  mean  58.84  49.93  70.03  54.82  50.07  60.99  74.63
Lenna  std.dev.  7.21  0.23  5.47  10.61  2.08  5.09  6.31
Lenna  rank  4  7  2  5  6  3  1
House  mean  49.51  50.06  63.28  50.29  49.22  51.92  64.51
House  std.dev.  0.21  0.17  6.97  0.05  0.32  5.59  8.08
House  rank  6  5  2  4  7  3  1
12003  mean  61.17  51.75  68.35  63.73  52.08  60.81  74.15
12003  std.dev.  5.80  0.12  8.73  12.25  1.82  3.68  6.18
12003  rank  4  7  2  3  6  5  1
181079  mean  50.30  50.74  64.51  51.17  49.90  58.92  63.68
181079  std.dev.  0.36  0.39  6.95  0.03  0.52  4.47  5.65
181079  rank  6  5  1  4  7  3  2
175043  mean  50.84  51.12  61.65  51.72  50.49  56.54  62.62
175043  std.dev.  0.20  0.32  4.57  0.15  0.41  4.30  6.13
175043  rank  6  5  2  4  7  3  1
101085  mean  60.09  52.61  68.84  58.63  55.00  66.96  74.33
101085  std.dev.  7.01  0.14  7.57  8.43  5.81  4.63  4.24
101085  rank  4  7  2  5  6  3  1
147091  mean  56.56  52.43  69.47  53.44  52.45  67.77  76.56
147091  std.dev.  6.64  0.13  5.98  4.03  2.52  4.69  3.45
147091  rank  4  7  2  5  6  3  1
101087  mean  55.69  50.39  62.74  56.56  49.60  56.76  76.40
101087  std.dev.  6.80  0.14  9.65  11.74  0.33  11.03  8.46
101087  rank  5  6  2  4  7  3  1
253027  mean  48.80  49.35  49.37  49.51  48.40  49.39  49.38
253027  std.dev.  0.22  0.16  0.23  0.06  0.35  0.11  0.12
253027  rank  6  5  4  1  7  2  3
average rank  5.00  5.42  2.42  3.42  6.58  3.33  1.83
overall rank  5  6  2  4  7  3  1
Table 6. FSIM results for D = 3 .
Image  Measure  ME-DE  ME-FA  ME-BA  ME-MFO  ME-DA  ME-WOA  ME-GDEAR
Boats  mean  0.4784  0.5276  0.5365  0.4737  0.4713  0.4662  0.4855
Boats  std.dev.  0.0006  0.1130  0.1262  0.0083  0.0085  0.0077  0.0554
Boats  rank  4  2  1  5  6  7  3
Peppers  mean  0.6064  0.5988  0.6048  0.6034  0.5947  0.6089  0.6120
Peppers  std.dev.  0.0196  0.0179  0.0175  0.0184  0.0175  0.0189  0.0164
Peppers  rank  3  6  4  5  7  2  1
Goldhill  mean  0.6152  0.6237  0.6326  0.5951  0.6206  0.6089  0.6258
Goldhill  std.dev.  0.0565  0.0513  0.0418  0.0521  0.0562  0.0352  0.0036
Goldhill  rank  5  3  1  7  4  6  2
Lenna  mean  0.6381  0.6203  0.6129  0.6092  0.6112  0.6219  0.6237
Lenna  std.dev.  0.0109  0.0271  0.0271  0.0262  0.0271  0.0263  0.0260
Lenna  rank  1  4  5  7  6  3  2
House  mean  0.4519  0.4575  0.4512  0.4484  0.4524  0.4563  0.4537
House  std.dev.  0.0137  0.0141  0.0118  0.0105  0.0133  0.0146  0.0138
House  rank  5  1  6  7  4  2  3
12003  mean  0.5288  0.5267  0.5343  0.5329  0.5118  0.5182  0.5327
12003  std.dev.  0.0214  0.0273  0.0276  0.0232  0.0206  0.0239  0.0309
12003  rank  4  5  1  2  7  6  3
181079  mean  0.5123  0.5169  0.5152  0.5140  0.5120  0.5141  0.5138
181079  std.dev.  0.0029  0.0048  0.0050  0.0028  0.0016  0.0039  0.0028
181079  rank  6  1  2  4  7  3  5
175043  mean  0.2920  0.2918  0.2911  0.2917  0.2918  0.2923  0.2948
175043  std.dev.  0.0033  0.0020  0.0045  0.0033  0.0028  0.0027  0.0023
175043  rank  3  4  7  6  5  2  1
101085  mean  0.5475  0.5748  0.5862  0.5853  0.5607  0.5590  0.5631
101085  std.dev.  0.0294  0.0462  0.0485  0.0477  0.0380  0.0445  0.0434
101085  rank  7  3  1  2  5  6  4
147091  mean  0.5974  0.6270  0.6138  0.6018  0.5940  0.6541  0.6022
147091  std.dev.  0.0126  0.0591  0.0546  0.0341  0.0016  0.0795  0.0207
147091  rank  6  2  3  5  7  1  4
101087  mean  0.6353  0.6323  0.6282  0.6338  0.6349  0.6297  0.6384
101087  std.dev.  0.0076  0.0134  0.0146  0.0111  0.0098  0.0156  0.0025
101087  rank  2  5  7  4  3  6  1
253027  mean  0.6052  0.6169  0.6348  0.6173  0.6137  0.6154  0.6171
253027  std.dev.  0.0113  0.0007  0.0462  0.0012  0.0060  0.0062  0.0015
253027  rank  7  4  1  2  6  5  3
average rank  4.41  3.33  3.25  4.58  5.50  4.08  2.66
overall rank  5  3  2  6  7  4  1
Table 7. FSIM results for D = 4 .
Image  Measure  ME-DE  ME-FA  ME-BA  ME-MFO  ME-DA  ME-WOA  ME-GDEAR
Boats  mean  0.7608  0.7674  0.7993  0.7549  0.7362  0.7465  0.7661
Boats  std.dev.  0.0870  0.0031  0.0272  0.0580  0.0982  0.0836  0.0023
Boats  rank  4  2  1  5  7  6  3
Peppers  mean  0.6094  0.6099  0.6057  0.6040  0.5991  0.6016  0.6098
Peppers  std.dev.  0.0161  0.0201  0.0197  0.0202  0.0151  0.0205  0.0212
Peppers  rank  3  1  4  5  7  6  2
Goldhill  mean  0.6856  0.6961  0.6928  0.6126  0.6783  0.6698  0.6937
Goldhill  std.dev.  0.0779  0.0745  0.0526  0.0488  0.0808  0.0787  0.0803
Goldhill  rank  4  1  3  7  5  6  2
Lenna  mean  0.6329  0.6080  0.6028  0.6141  0.6199  0.6207  0.6288
Lenna  std.dev.  0.0193  0.0267  0.0237  0.0266  0.0270  0.0266  0.0234
Lenna  rank  1  6  7  5  4  3  2
House  mean  0.4461  0.4617  0.4487  0.4564  0.4518  0.4531  0.4573
House  std.dev.  0.0076  0.0166  0.0105  0.0148  0.0136  0.0140  0.0146
House  rank  7  1  6  3  5  4  2
12003  mean  0.5347  0.5500  0.5372  0.5353  0.5089  0.5391  0.5324
12003  std.dev.  0.0221  0.0208  0.0267  0.0247  0.0194  0.0268  0.0236
12003  rank  5  1  3  4  7  2  6
181079  mean  0.5124  0.5153  0.5163  0.5148  0.5142  0.5160  0.5178
181079  std.dev.  0.0022  0.0040  0.0061  0.0044  0.0034  0.0044  0.0023
181079  rank  7  4  2  5  6  3  1
175043  mean  0.2925  0.2904  0.2924  0.2926  0.2913  0.2924  0.2924
175043  std.dev.  0.0028  0.0033  0.0034  0.0028  0.0034  0.0034  0.0028
175043  rank  2  7  4  1  6  5  3
101085  mean  0.5573  0.5858  0.6029  0.6112  0.5750  0.5793  0.5761
101085  std.dev.  0.0354  0.0574  0.0577  0.0511  0.0397  0.0474  0.0457
101085  rank  7  3  2  1  6  4  5
147091  mean  0.6045  0.6406  0.6226  0.6095  0.6034  0.6438  0.6204
147091  std.dev.  0.0210  0.0570  0.0540  0.0398  0.0197  0.0701  0.0449
147091  rank  6  2  3  5  7  1  4
101087  mean  0.6398  0.6292  0.6334  0.6383  0.6392  0.6289  0.6366
101087  std.dev.  0.0082  0.0171  0.0116  0.0094  0.0077  0.0162  0.0116
101087  rank  1  6  5  3  2  7  4
253027  mean  0.6512  0.6439  0.7278  0.6341  0.6456  0.7124  0.6538
253027  std.dev.  0.0233  0.0366  0.0807  0.0070  0.0371  0.0856  0.0592
253027  rank  4  6  1  7  5  2  3
average rank  4.25  3.33  3.41  4.25  5.58  4.08  3.08
overall rank  5.5  2  3  5.5  7  4  1
Table 8. FSIM results for D = 5 .
Image  Measure  ME-DE  ME-FA  ME-BA  ME-MFO  ME-DA  ME-WOA  ME-GDEAR
Boats  mean  0.8391  0.8105  0.8440  0.8158  0.8282  0.8374  0.8440
Boats  std.dev.  0.0302  0.0188  0.0407  0.0272  0.0279  0.0368  0.0278
Boats  rank  3  7  1  6  5  4  2
Peppers  mean  0.6098  0.6067  0.6107  0.6141  0.6020  0.6010  0.6140
Peppers  std.dev.  0.0169  0.0183  0.0157  0.0131  0.0173  0.0201  0.0208
Peppers  rank  4  5  3  1  6  7  2
Goldhill  mean  0.7417  0.7613  0.7859  0.6458  0.7097  0.7432  0.7860
Goldhill  std.dev.  0.0879  0.0581  0.0625  0.0770  0.0768  0.0761  0.0505
Goldhill  rank  5  3  2  7  6  4  1
Lenna  mean  0.6400  0.6028  0.6086  0.6084  0.6082  0.6249  0.6358
Lenna  std.dev.  0.0105  0.0242  0.0251  0.0259  0.0267  0.0258  0.0187
Lenna  rank  1  7  4  5  6  3  2
House  mean  0.4565  0.4524  0.4524  0.4877  0.4678  0.4465  0.4565
House  std.dev.  0.0149  0.1245  0.0108  0.1083  0.0807  0.0083  0.0149
House  rank  4  6  5  1  2  7  3
12003  mean  0.5221  0.5483  0.5273  0.5494  0.5140  0.5422  0.5428
12003  std.dev.  0.0200  0.0219  0.0302  0.0208  0.0192  0.0257  0.0296
12003  rank  6  2  5  1  7  4  3
181079  mean  0.5129  0.6074  0.5151  0.5242  0.5141  0.5175  0.5260
181079  std.dev.  0.0021  0.1071  0.0051  0.0456  0.0042  0.0049  0.0050
181079  rank  7  1  5  3  6  4  2
175043  mean  0.2919  0.2917  0.2934  0.2922  0.2916  0.2908  0.2911
175043  std.dev.  0.0033  0.2409  0.0021  0.0028  0.0039  0.0047  0.0024
175043  rank  3  4  1  2  5  7  6
101085  mean  0.5916  0.5814  0.5914  0.6079  0.5845  0.5987  0.5864
101085  std.dev.  0.0549  0.0594  0.0597  0.0538  0.0455  0.0527  0.0395
101085  rank  3  7  4  1  6  2  5
147091  mean  0.5981  0.6270  0.6295  0.6353  0.6149  0.6626  0.6382
147091  std.dev.  0.0016  0.0573  0.0603  0.0607  0.0396  0.0733  0.0016
147091  rank  7  5  4  3  6  1  2
101087  mean  0.6419  0.6341  0.6356  0.6381  0.6360  0.6310  0.6405
101087  std.dev.  0.0051  0.0145  0.0091  0.0142  0.0100  0.0163  0.0085
101087  rank  1  6  5  3  4  7  2
253027  mean  0.7938  0.8103  0.8062  0.8055  0.7971  0.7917  0.8064
253027  std.dev.  0.0358  0.0315  0.0459  0.0377  0.0448  0.0552  0.0391
253027  rank  6  1  3  4  5  7  2
average rank  4.17  4.50  3.50  3.08  5.33  4.75  2.67
overall rank  4  5  3  2  7  6  1
Table 9. FSIM results for D = 10 .
Image  Measure  ME-DE  ME-FA  ME-BA  ME-MFO  ME-DA  ME-WOA  ME-GDEAR
Boats  mean  0.9521  0.9646  0.9585  0.9613  0.9548  0.9575  0.9664
Boats  std.dev.  0.0157  0.0053  0.0111  0.0076  0.0119  0.0094  0.0063
Boats  rank  7  2  4  3  6  5  1
Peppers  mean  0.7943  0.8648  0.6669  0.8619  0.8301  0.8250  0.8716
Peppers  std.dev.  0.1394  0.0064  0.1258  0.1083  0.1247  0.0164  0.0137
Peppers  rank  6  2  7  3  4  5  1
Goldhill  mean  0.8447  0.8778  0.8766  0.8211  0.8567  0.8258  0.8708
Goldhill  std.dev.  0.0481  0.0328  0.0551  0.0468  0.0260  0.0615  0.0466
Goldhill  rank  5  1  2  7  4  6  3
Lenna  mean  0.6295  0.6345  0.6253  0.6403  0.6409  0.6397  0.6465
Lenna  std.dev.  0.0902  0.0070  0.0235  0.1175  0.1346  0.0150  0.0281
Lenna  rank  6  5  7  3  2  4  1
House  mean  0.9525  0.9645  0.9994  0.9637  0.9484  0.8953  0.9600
House  std.dev.  0.0131  0.0044  0.1913  0.0046  0.0110  0.1675  0.2078
House  rank  5  2  1  3  6  7  4
12003  mean  0.5950  0.5990  0.5357  0.5919  0.5921  0.5820  0.5980
12003  std.dev.  0.1161  0.0112  0.0280  0.1845  0.1593  0.0163  0.0279
12003  rank  3  1  7  5  4  6  2
181079  mean  0.8247  0.8753  0.8441  0.8613  0.8572  0.8205  0.8788
181079  std.dev.  0.0950  0.0106  0.0953  0.0069  0.0192  0.0945  0.0053
181079  rank  6  2  5  3  4  7  1
175043  mean  0.9327  0.9551  0.9369  0.9486  0.9252  0.9322  0.9488
175043  std.dev.  0.0168  0.0070  0.1852  0.0045  0.0266  0.2206  0.0054
175043  rank  5  1  4  3  7  6  2
101085  mean  0.8335  0.8441  0.8773  0.8331  0.8345  0.8270  0.8381
101085  std.dev.  0.1426  0.0070  0.0455  0.1606  0.1413  0.0620  0.0501
101085  rank  5  2  1  6  4  7  3
147091  mean  0.8307  0.8958  0.8308  0.8848  0.8706  0.8642  0.8769
147091  std.dev.  0.1192  0.0067  0.0587  0.0541  0.0602  0.0833  0.0591
147091  rank  7  1  6  2  4  5  3
101087  mean  0.8122  0.8123  0.8261  0.8417  0.8915  0.8194  0.8376
101087  std.dev.  0.1178  0.0074  0.0082  0.1172  0.0180  0.1316  0.0149
101087  rank  7  6  4  2  1  5  3
253027  mean  0.9057  0.9173  0.9069  0.9125  0.8978  0.9079  0.9197
253027  std.dev.  0.0168  0.0106  0.0116  0.0069  0.0199  0.0114  0.0131
253027  rank  6  2  5  3  7  4  1
average rank  5.67  2.25  4.42  3.58  4.42  5.58  2.08
overall rank  7  2  4.5  3  4.5  6  1
Table 10. Dice score results for D = 3 .
Image  Measure  ME-DE  ME-BA  ME-ALO  ME-DA  ME-MVO  ME-WOA  ME-GDEAR
12003  mean  0.7775  0.7537  0.9412  0.7589  0.8207  0.8203  0.8128
12003  std.dev.  0.0634  0.0706  0.0000  0.0676  0.0000  0.0000  0.0662
12003  rank  5  7  1  6  2  3  4
181079  mean  0.3865  0.6533  0.7601  0.6533  0.7848  0.7847  0.6533
181079  std.dev.  0.0651  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000
181079  rank  7  4  3  6  1  2  5
175043  mean  0.8148  0.8421  0.8355  0.8416  0.8314  0.8537  0.9438
175043  std.dev.  0.0053  0.0521  0.0484  0.0523  0.0426  0.0578  0.0557
175043  rank  7  3  5  4  6  2  1
101085  mean  0.6533  0.9412  0.6533  0.8207  0.8203  0.6533  0.9412
101085  std.dev.  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000
101085  rank  5  1.5  6.5  3  4  6.5  1.5
147091  mean  0.7967  0.9412  0.7875  0.8224  0.8271  0.7615  0.9412
147091  std.dev.  0.0268  0.0000  0.0563  0.0058  0.0102  0.0579  0.0000
147091  rank  5  1.5  6  4  3  7  1.5
101087  mean  0.6533  0.9412  0.6533  0.8207  0.8203  0.6533  0.9412
101087  std.dev.  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000
101087  rank  5  1.5  6.5  3  4  6.5  1.5
253027  mean  0.8228  0.9412  0.7889  0.8225  0.8218  0.7966  0.9412
253027  std.dev.  0.0004  0.0000  0.0387  0.0008  0.0013  0.0389  0.0000
253027  rank  3  1.5  7  4  5  6  1.5
average rank  5.29  2.86  5.00  4.29  3.57  4.71  2.29
overall rank  7  2  6  4  3  5  1
Table 11. Dice score results for D = 4 .
Image  Measure  ME-DE  ME-BA  ME-ALO  ME-DA  ME-MVO  ME-WOA  ME-GDEAR
12003  mean  0.7749  0.7608  0.9394  0.7622  0.7934  0.8192  0.7031
12003  std.dev.  0.0539  0.0617  0.0000  0.0611  0.0197  0.0000  0.0893
12003  rank  4  6  1  5  3  2  7
181079  mean  0.4388  0.4603  0.7601  0.5376  0.6531  0.7847  0.5424
181079  std.dev.  0.0500  0.0326  0.0000  0.0024  0.0000  0.0000  0.0140
181079  rank  7  6  2  5  3  1  4
175043  mean  0.8226  0.8430  0.8329  0.8287  0.8442  0.8787  0.8383
175043  std.dev.  0.0157  0.0513  0.0421  0.0353  0.0524  0.0637  0.0471
175043  rank  7  3  5  6  2  1  4
101085  mean  0.5367  0.8196  0.5406  0.6531  0.8192  0.4894  0.9394
101085  std.dev.  0.0000  0.0000  0.0080  0.0000  0.0000  0.0403  0.0000
101085  rank  6  2  5  4  3  7  1
147091  mean  0.7885  0.8221  0.7607  0.7862  0.8251  0.7467  0.9394
147091  std.dev.  0.0311  0.0056  0.0635  0.0450  0.0081  0.0676  0.0000
147091  rank  4  3  6  5  2  7  1
101087  mean  0.5367  0.8196  0.5367  0.6531  0.8192  0.4496  0.9394
101087  std.dev.  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000
101087  rank  6  2  5  4  3  7  1
253027  mean  0.8133  0.8196  0.7710  0.8155  0.8192  0.7820  0.9394
253027  std.dev.  0.0029  0.0000  0.0351  0.0008  0.0000  0.0407  0.0000
253027  rank  5  2  7  4  3  6  1
average rank  5.57  3.43  4.43  4.71  2.71  4.43  2.71
overall rank  7  3  4.5  6  1.5  4.5  1.5
Table 12. Dice score results for D = 5 .
Image  Measure  ME-DE  ME-BA  ME-ALO  ME-DA  ME-MVO  ME-WOA  ME-GDEAR
12003  mean  0.7644  0.8196  0.9360  0.9360  0.7414  0.7268  0.9381
12003  std.dev.  0.0405  0.0000  0.0000  0.0000  0.0567  0.0615  0.0000
12003  rank  5  4  2.5  2.5  6  7  1
181079  mean  0.4826  0.7847  0.7597  0.7597  0.5485  0.6527  0.7589
181079  std.dev.  0.0401  0.0000  0.0000  0.0000  0.0247  0.0000  0.0000
181079  rank  7  1  2.5  2.5  6  5  4
175043  mean  0.8285  0.8266  0.8265  0.8398  0.8331  0.8672  0.8436
175043  std.dev.  0.0164  0.0491  0.0347  0.0459  0.0422  0.0665  0.0502
175043  rank  5  6  7  3  4  1  2
101085  mean  0.8196  0.9360  0.9360  0.5455  0.6527  0.8199  0.9381
101085  std.dev.  0.0000  0.0000  0.0000  0.0224  0.0000  0.0000  0.0000
101085  rank  5  2.5  2.5  7  6  4  1
147091  mean  0.8233  0.9360  0.9360  0.7557  0.7748  0.8215  0.9381
147091  std.dev.  0.0049  0.0000  0.0000  0.0711  0.0455  0.0025  0.0000
147091  rank  4  2.5  2.5  7  6  5  1
101087  mean  0.8196  0.9360  0.9360  0.5368  0.6527  0.8199  0.9381
101087  std.dev.  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000
101087  rank  5  2.5  2.5  7  6  4  1
253027  mean  0.8196  0.9360  0.9360  0.7290  0.7342  0.8199  0.9381
253027  std.dev.  0.0000  0.0000  0.0000  0.0249  0.0285  0.0000  0.0000
253027  rank  5  2.5  2.5  7  6  4  1
average rank  5.14  3.00  3.14  5.14  5.71  4.29  1.57
overall rank  5.5  2  3  5.5  7  4  1
Table 13. Dice score results for D = 10 .
Image  Measure  ME-DE  ME-BA  ME-ALO  ME-DA  ME-MVO  ME-WOA  ME-GDEAR
12003  mean  0.6094  0.5869  0.5886  0.6294  0.5824  0.5020  0.6506
12003  std.dev.  0.0840  0.0210  0.1445  0.0711  0.0472  0.0769  0.0786
12003  rank  3  5  4  2  6  7  1
181079  mean  0.6346  0.6297  0.5256  0.6322  0.6273  0.7311  0.6383
181079  std.dev.  0.0147  0.0176  0.2312  0.0100  0.0147  0.0666  0.0274
181079  rank  3  5  7  4  6  1  2
175043  mean  0.8067  0.8171  0.7149  0.8165  0.8105  0.6004  0.7849
175043  std.dev.  0.0248  0.0173  0.1576  0.0136  0.0237  0.0864  0.0419
175043  rank  4  1  6  2  3  7  5
101085  mean  0.6779  0.5834  0.5608  0.6248  0.6573  0.6934  0.7201
101085  std.dev.  0.1409  0.0375  0.2307  0.1022  0.1205  0.2814  0.0711
101085  rank  3  6  7  5  4  2  1
147091  mean  0.5510  0.4469  0.6957  0.4718  0.4815  0.5239  0.6560
147091  std.dev.  0.1039  0.0235  0.0927  0.0452  0.0333  0.1151  0.0479
147091  rank  3  7  1  6  5  4  2
101087  mean  0.4310  0.4334  0.4041  0.4234  0.4646  0.3783  0.4421
101087  std.dev.  0.0333  0.0246  0.1592  0.0247  0.0062  0.0971  0.0677
101087  rank  4  3  6  5  1  7  2
253027  mean  0.5256  0.5191  0.5210  0.5157  0.5136  0.5233  0.5205
253027  std.dev.  0.0228  0.0268  0.0216  0.0215  0.0406  0.0238  0.0303
253027  rank  1  5  3  6  7  2  4
average rank  3.00  4.57  4.86  4.29  4.57  4.29  2.43
overall rank  2  4.5  7  3.5  4.5  3.5  1
Table 14. Results of Wilcoxon signed rank test.
Comparison  p-value
ME-GDEAR vs. ME-DE  5.8052 × 10^−5
ME-GDEAR vs. ME-BA  2.9061 × 10^−6
ME-GDEAR vs. ME-GWO  4.4433 × 10^−9
ME-GDEAR vs. ME-DA  9.4286 × 10^−5
ME-GDEAR vs. ME-MVO  1.1412 × 10^−7
ME-GDEAR vs. ME-WOA  3.6885 × 10^−9
Table 15. Results of Friedman test.
Algorithm  Rank
ME-DE  4.24
ME-BA  3.92
ME-GWO  5.27
ME-DA  2.91
ME-MVO  4.90
ME-WOA  4.81
ME-GDEAR  1.96
p-value  9.5625 × 10^−17
chi-squared  87.6

Mousavirad, S.J.; Zabihzadeh, D.; Oliva, D.; Perez-Cisneros, M.; Schaefer, G. A Grouping Differential Evolution Algorithm Boosted by Attraction and Repulsion Strategies for Masi Entropy-Based Multi-Level Image Segmentation. Entropy 2022, 24, 8. https://doi.org/10.3390/e24010008

