Article

Efficient Approach to Color Image Segmentation Based on Multilevel Thresholding Using EMO Algorithm by Considering Spatial Contextual Information

by
Srikanth Rangu
1,
Rajagopal Veramalla
1,
Surender Reddy Salkuti
2,* and
Bikshalu Kalagadda
3
1
Department of ECE, Kakatiya Institute of Technology and Science, Warangal 506015, India
2
Department of Railroad and Electrical Engineering, Woosong University, Daejeon 34606, Republic of Korea
3
Department of ECE, Kakatiya University, Warangal 506009, India
*
Author to whom correspondence should be addressed.
J. Imaging 2023, 9(4), 74; https://doi.org/10.3390/jimaging9040074
Submission received: 3 February 2023 / Revised: 15 March 2023 / Accepted: 17 March 2023 / Published: 23 March 2023
(This article belongs to the Special Issue Advances in Color Imaging, Volume II)

Abstract:
Image segmentation partitions an image into its constituent parts and is a significant approach for extracting features of interest from images. Over the past couple of decades, many efficient image segmentation approaches have been formulated for various applications, yet segmentation remains a challenging and complex problem, especially for color images. To moderate this difficulty, a novel multilevel thresholding approach is proposed in this paper based on the electromagnetism optimization (EMO) technique with an energy curve, named multilevel thresholding based on EMO and energy curve (MTEMOE). To compute the optimized threshold values, Otsu's variance and Kapur's entropy are deployed as fitness functions; both must be maximized to locate the optimal threshold values. In both Kapur's and Otsu's methods, the pixels of an image are classified into different classes based on the threshold levels selected on the histogram. Optimal threshold levels yield higher segmentation efficiency; the EMO technique is used to find optimal thresholds in this research. Methods based on an image's histogram do not possess spatial contextual information for finding the optimal threshold levels. To overcome this deficiency, an energy curve is used instead of the histogram; this curve establishes the spatial relationship of each pixel with its neighboring pixels. To study the experimental results of the proposed scheme, several color benchmark images are considered at various threshold levels and compared with other meta-heuristic algorithms: multi-verse optimization, the whale optimization algorithm, and others. The experimental results are reported in terms of mean square error, peak signal-to-noise ratio, the mean fitness value, feature similarity, structural similarity, variation of information, and probability Rand index. The results reveal that the proposed MTEMOE approach outperforms other state-of-the-art algorithms in solving engineering problems in various fields.

1. Introduction

Digital image segmentation is a technique of partitioning an image into regions with homogeneous features in terms of intensity level, texture structure, color information, etc., in order to extract information about the image's features. Among the segmentation schemes available in the literature, multi-level thresholding [1] on the histogram of a grayscale image is a highly established method used in applications ranging from satellite image segmentation [2,3,4] to medical images. The important multilevel thresholding-based segmentation techniques are Kapur's and Otsu's methods [5,6]. Segmentation is often used as a preprocessing step in object recognition, computer vision, and image analysis, in applications such as medical imaging [7], agriculture, industry, fault detection, weather forecasting, etc. In general, the majority of segmentation techniques are based on discontinuity and similarity; among the abundant methods available, thresholding is the most important technique for both grayscale and color images.
Image segmentation is a significant step in image processing. Major advances in image segmentation are in the area of biomedical imaging to investigate the function, structure, and pathology of the human body, and in other industrial applications from robotics to satellite image segmentation.
In the multilevel thresholding method of segmentation, the pixels are grouped into two or more classes based on the gray levels and multiple threshold values. The quality of segmentation is affected by the technique used to compute the threshold values. Classical or traditional threshold selection is computationally expensive, as it must search a huge sample space to identify the optimized levels using the objective function; at this stage optimization techniques become applicable, and there is scope for research in computing the optimized threshold levels.
The most significant multilevel thresholding approaches are based on image histograms. Histogram-based techniques have two major disadvantages: (i) spatial contextual information (the relationships among the pixels of an image) is not considered when forming the histogram, which reduces the efficiency of computing the optimized threshold levels; and (ii) histogram-based methods are ineffective for segmentation applications with more than two threshold levels (MT).
Techniques based on histogram plots cannot capture the spatial contextual information needed to compute optimized thresholds. To overcome this drawback of the image histogram, a novel methodology is proposed: multilevel thresholding based on EMO and energy curve (MTEMOE). A curve that has characteristics similar to the histogram while capturing the spatial contextual information of image pixels, named the "energy curve", is used in place of the histogram, and the electromagnetism optimization algorithm is used to select optimized gray levels. For each gray value in the grayscale range of an image, an energy value is computed; the threshold levels can then be computed based on the valleys and peaks of the energy curve.
In general, there are two types of computational techniques for finding optimized threshold values, called parametric and nonparametric [8]. Parametric techniques use statistical parameters that depend on initial conditions and are hence inflexible to apply. Nonparametric techniques compute thresholds based on criteria such as Otsu's inter-class variance and Kapur's entropy functions [9,10,11]. The thresholding method holds properties such as simplicity [12,13], accuracy, and robustness, and can be classified into two major categories, bi-level and multilevel [11]; the pixels of an image are classified into different classes based on the threshold levels selected on the histogram. In bi-level thresholding, all pixels are grouped into two classes based on a single threshold level. In multilevel thresholding, pixels are categorized into more than two classes. Nevertheless, the primary constraints in multilevel thresholding are accuracy, stability, execution time, and so on.
In the case of color images [14], each pixel consists of three components (red, green, and blue) [15]; due to this heavy load, the segmentation of color images might be more exigent and intricate. Accordingly, it is essential to find the optimal thresholds by using optimization algorithms by maximizing the inter-class variance in Otsu’s method and the histogram entropy in the case of Kapur’s method on a histogram of an image. As per the no-free-lunch (NFL) principle [16], no algorithm can solve all types of optimization problems [17]; one optimization algorithm may be very useful in one type of application and not succeed in solving other kinds of applications; thus, it is indispensable to devise and transform new algorithms.
In the literature, numerous optimization techniques are available, along with their efficiencies and applications in particular fields; to mention a few: PSO [18], ACO [19], BFO [20], ABC [21], GWO [22], MFO [23], SSA [24], FA [25], WOA [26], SCA [27], KHO [28], BA [29], FPA [30], and MVO [31]. Moreover, several modified algorithms have been used in the multilevel thresholding field. For example, Chen et al. [32] proposed an improved algorithm (IFA) for segmentation and compared it with PSO [33] and other methods [15,34].
The techniques discussed above mainly target grayscale images and extend to color images only to some degree. Additionally, color satellite images have complex backgrounds and poor resolution [35], which makes such color images very difficult to segment. In this article, a new approach is proposed for color image segmentation [36,37], aimed in particular at satellite images. The proposed method is based on Kapur's and Otsu's methods with EMO on the energy curve to find optimal threshold levels; that is, the proposed model uses the energy curve instead of the histogram of an image. Multilevel thresholding [38,39] with EMO on the energy curve, named MTEMOE, improves color image segmentation performance in many aspects. The proposed segmentation approach is evaluated on color images, including satellite images and natural images, and compared with competitive algorithms: MFO, WOA, FPA, MVO, SCA, ACO, ABC, and PSO. The segmented images are evaluated with respect to seven metrics, which validate the dominance of MTEMOE.

2. Multilevel Thresholding

2.1. Otsu Method

This technique [5,9] is used for multi-level thresholding (MT), in which the gray levels are partitioned into different regions or classes by selecting thresholding ($th$) levels; the rule for bi-level thresholding is
$$p \in C_1 \ \text{if}\ 0 \le p < th, \qquad p \in C_2 \ \text{if}\ th \le p \le L-1$$
where $C_1$ and $C_2$ are the two classes, $p$ denotes the pixel value over the gray levels $1, 2, 3, \ldots, L-1$ of the image, and $L-1$ is the maximum gray level. If the gray level is below the threshold $th$, the pixel is grouped into class $C_1$; otherwise it is grouped into class $C_2$. The rules for multi-level thresholding (MT) are
$$p \in C_1 \ \text{if}\ 0 \le p < th_1, \quad p \in C_2 \ \text{if}\ th_1 \le p < th_2, \quad \ldots, \quad p \in C_i \ \text{if}\ th_i \le p < th_{i+1}, \quad \ldots, \quad p \in C_n \ \text{if}\ th_n \le p < th_{n+1}$$
From Equation (2), $C_1, C_2, \ldots, C_n$ denote the different classes, and $th_1, th_2, \ldots, th_i, th_{i+1}, \ldots, th_n$ are the threshold levels used to separate objects; these thresholds can be computed from either a histogram or an energy curve. Using these threshold levels, all pixels are classified into different, mutually exclusive regions. The significant threshold-based segmentation methods are Otsu's and Kapur's, and in both cases the threshold levels are computed by maximizing a cost function: the inter-class variance for Otsu's method and the entropy for Kapur's method. In this work, optimized threshold levels are computed by Otsu's method [23], in which the inter-class variance is taken as the objective (cost) function. For experimentation, grayscale images are considered. The expression below gives the probability distribution for each gray level:
$$Ph_i^c = \frac{h_i^c}{NP}, \qquad \sum_{i=1}^{NP} Ph_i^c = 1$$
From Equation (3), the pixel value is denoted by $i$, with the grayscale range $0 \le i \le L-1$; $c = 1, 2, 3$ for RGB images and $c = 1$ for grayscale images; the total number of image pixels is $NP$; and $h_i^c$ is the histogram of the considered image. In bi-level thresholding, the pixels of the image are grouped into two classes:
$$C_1 = \left\{ \frac{Ph_1^c}{w_0^c(th)}, \ldots, \frac{Ph_{th}^c}{w_0^c(th)} \right\}, \qquad C_2 = \left\{ \frac{Ph_{th+1}^c}{w_1^c(th)}, \ldots, \frac{Ph_L^c}{w_1^c(th)} \right\}$$
where $w_0^c(th)$ and $w_1^c(th)$ are the probability distributions of $C_1$ and $C_2$, as shown below:
$$w_0^c(th) = \sum_{i=1}^{th} Ph_i^c, \qquad w_1^c(th) = \sum_{i=th+1}^{L} Ph_i^c$$
The means of the two classes, $\mu_0^c$ and $\mu_1^c$, are computed by Equation (6), and the between-class variance $\sigma^{2c}$ is given by Equation (7).
$$\mu_0^c = \sum_{i=1}^{th} \frac{i\,Ph_i^c}{w_0^c(th)}, \qquad \mu_1^c = \sum_{i=th+1}^{L} \frac{i\,Ph_i^c}{w_1^c(th)}$$
$$\sigma^{2c} = \sigma_1^c + \sigma_2^c$$
Notice that, for both Equations (6) and (7), $c$ is determined by the type of image; $\sigma_1^c$ and $\sigma_2^c$ in Equation (7) are the variances of classes $C_1$ and $C_2$, which are given in Equation (8).
$$\sigma_1^c = w_0^c\left(\mu_0^c - \mu_T^c\right)^2, \qquad \sigma_2^c = w_1^c\left(\mu_1^c - \mu_T^c\right)^2$$
where $\mu_T^c = w_0^c \mu_0^c + w_1^c \mu_1^c$ and $w_0^c + w_1^c = 1$. Based on the values of $\sigma_1^c$ and $\sigma_2^c$, Equation (9) presents the objective function:
$$J(th) = \max\left\{ \sigma^{2c}(th) \right\}, \qquad 0 \le th \le L-1$$
From Equation (9), $\sigma^{2c}(th)$ is the total variance between the two regions produced by Otsu's scheme [40,41] for a given $th$; an optimization technique is required to find the threshold level $th$ that maximizes the fitness function in Equation (9). Similarly, for multi-level thresholding (MT), the objective (or fitness) function $J(TH)$, shown in Equation (11), segments an image into $k$ classes and requires $k$ variances.
$$J(TH) = \max\left\{ \sigma^{2c}(th_i) \right\}, \qquad 0 \le th_i \le L-1, \; i = 1, 2, \ldots, k$$
where $TH = [th_1, th_2, th_3, \ldots, th_{k-1}]$ is the vector of thresholds for multi-level thresholding, and the between-class variances can be computed from Equation (12).
$$\sigma^{2c} = \sum_{i=1}^{k} \sigma_i^c = \sum_{i=1}^{k} w_i^c \left(\mu_i^c - \mu_T^c\right)^2$$
where $i$ denotes the $i$th class, $w_i^c$ indicates the probability of the $i$th class, and $\mu_i^c$ is the mean of the $i$th class. For MT segmentation, these parameters are computed as below:
$$w_0^c(th) = \sum_{i=1}^{th_1} Ph_i^c, \qquad w_1^c(th) = \sum_{i=th_1+1}^{th_2} Ph_i^c, \qquad \ldots, \qquad w_{k-1}^c(th) = \sum_{i=th_k+1}^{L} Ph_i^c$$
Furthermore, the averages of each class can be computed as
$$\mu_0^c = \sum_{i=1}^{th_1} \frac{i\,Ph_i^c}{w_0^c(th_1)}, \qquad \mu_1^c = \sum_{i=th_1+1}^{th_2} \frac{i\,Ph_i^c}{w_1^c(th_2)}, \qquad \ldots, \qquad \mu_{k-1}^c = \sum_{i=th_k+1}^{L} \frac{i\,Ph_i^c}{w_{k-1}^c(th_k)}$$
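As a concrete illustration, Otsu's multilevel objective (the between-class variance of Equation (12)) can be evaluated for a candidate threshold vector directly from a normalized histogram. The sketch below uses NumPy; the function name and the handling of class boundaries are illustrative, not taken from the paper.

```python
import numpy as np

def otsu_objective(hist, thresholds, L=256):
    """Between-class variance sum_i w_i * (mu_i - mu_T)^2 for the classes
    induced by `thresholds` on histogram `hist` (an illustrative sketch)."""
    p = hist / hist.sum()                       # probability distribution Ph_i
    levels = np.arange(L)
    mu_T = (levels * p).sum()                   # global mean
    edges = [0] + list(thresholds) + [L]        # class boundaries
    sigma2 = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                      # class probability w_i
        if w > 0:
            mu = (levels[lo:hi] * p[lo:hi]).sum() / w   # class mean mu_i
            sigma2 += w * (mu - mu_T) ** 2
    return sigma2
```

For a bimodal histogram, a threshold that separates the two modes yields a larger variance than one that does not, which is exactly what the optimizer exploits.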

2.2. Multilevel Thresholding with Kapur’s Method

One more important nonparametric technique used to compute the optimal threshold values is Kapur's method, which uses entropy as its objective function. This method finds the optimal thresholds by maximizing the overall entropy. The entropy measures the compactness and separability between classes. For the multilevel case, the objective function of Kapur's method is defined as
$$J(TH) = \max\left\{ \sum_{i=1}^{k} H_i^c \right\}, \qquad 0 \le th_i \le L-1, \; i = 1, 2, \ldots, k$$
where $TH = [th_1, th_2, th_3, \ldots, th_{k-1}]$ is the threshold vector. Each entropy is calculated separately with its own threshold value, giving $k$ entropies:
$$H_1^c = -\sum_{i=1}^{th_1} \frac{Ph_i^c}{w_0^c} \ln\!\left(\frac{Ph_i^c}{w_0^c}\right), \qquad H_2^c = -\sum_{i=th_1+1}^{th_2} \frac{Ph_i^c}{w_1^c} \ln\!\left(\frac{Ph_i^c}{w_1^c}\right), \qquad \ldots, \qquad H_k^c = -\sum_{i=th_k+1}^{L} \frac{Ph_i^c}{w_{k-1}^c} \ln\!\left(\frac{Ph_i^c}{w_{k-1}^c}\right)$$
$Ph_i^c$ is the probability distribution of the particular intensity level, obtained using Equation (5). The probabilities of occurrence ($w_0^c, w_1^c, w_2^c, \ldots, w_{k-1}^c$) of the $k$ classes are obtained using Equation (12). In the end, Equation (2) is used to classify the pixels into the various classes.
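Kapur's multilevel objective (the sum of class entropies) can likewise be evaluated from a normalized histogram. The sketch below is a minimal NumPy illustration; the function name and boundary handling are assumptions, not the paper's code.

```python
import numpy as np

def kapur_objective(hist, thresholds, L=256):
    """Sum of class entropies H_i = -sum (Ph/w) ln(Ph/w) over the classes
    induced by `thresholds` (an illustrative sketch)."""
    p = hist / hist.sum()                       # probability distribution Ph_i
    edges = [0] + list(thresholds) + [L]        # class boundaries
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                      # class probability w_i
        if w > 0:
            q = p[lo:hi][p[lo:hi] > 0] / w      # conditional distribution in class
            total += -(q * np.log(q)).sum()     # class entropy H_i
    return total
```

For a uniform histogram, the entropy sum is maximized by thresholds that split the gray range into equally probable classes, as the optimizer is meant to discover.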

2.3. Electro-Magnetism Optimization (EMO) Algorithm

The EMO [12] can be used to discover solutions to global optimization problems that are nonlinear in nature, for both minimization and maximization. Consider maximizing $f(x)$ with $x = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^n$, where $X = \{x \in \mathbb{R}^n \mid l_i \le x_i \le u_i,\; i = 1, 2, \ldots, n\}$ is the feasible set bounded by the lower and upper limits $l_i$ and $u_i$, respectively. EMO uses a population of $N$ $n$-dimensional points $x_{i,t}$, where $t$ denotes the generation (iteration) index. Like other evolutionary optimization techniques, EMO takes an initial population $S_t = \{x_{1,t}, x_{2,t}, \ldots, x_{N,t}\}$ (with $t = 1$), selected from uniformly distributed random samples of the search region $X$, where $S_t$ is the solution set at the $t$th iteration. At the first iteration, $S_t$ is initialized with random values; the EMO algorithm then executes until the stopping criterion is satisfied.
In every iteration of EMO, two essential operations take place. First, the solutions in $S_t$ are moved to different locations by means of the attraction-repulsion mechanism of electromagnetism theory [11]; second, the positions moved by the electromagnetism mechanism are further refined locally by a local search, becoming members of $S_{t+1}$ in the $(t+1)$th iteration. These two operations move the solution set toward the global optimum.
In EMO, analogously to electromagnetism theory, each solution $x_{i,t} \in S_t$ is treated as a charged particle, and the magnitude of a particle's charge is determined by the objective function; solutions with better (higher/lower) objective values carry higher charges than the others and exert a stronger attraction-repulsion effect. In the evolution process of EMO, points with higher charges attract other points in the search space $S_t$, and points with lower charges repel them.
The total force $F_i^t$ exerted on each point $x_{i,t}$ is calculated as a combination of attraction-repulsion forces, and each $x_{i,t} \in S_t$ is moved along its total force to the location $y_{i,t}$. After this step, a local search algorithm explores the vicinity of each $y_{i,t}$, refining $y_{i,t}$ to $z_{i,t}$. The solution $x_{i,t+1} \in S_{t+1}$ at the $(t+1)$th iteration is subsequently computed as:
$$x_{i,t+1} = \begin{cases} y_{i,t} & \text{if } f(y_{i,t}) \ge f(z_{i,t}) \\ z_{i,t} & \text{otherwise} \end{cases}$$
A detailed description of each step in EMO is given in Algorithm 1 below.
Algorithm 1: A summary of the EMO algorithm is given below
i. Input parameters: maximum number of iterations $Iter_{max}$, local search parameters $Iter_{local}$ and $\delta$, and population size $N$
ii. Initialize: set the iteration counter $t = 1$, initialize the points of $S_t$ uniformly in $X$, and identify the best point in $S_t$
iii. while $t < Iter_{max}$ do
iv.&nbsp;&nbsp;&nbsp;$F_i^t \leftarrow \mathrm{CalcF}(S_t)$
v.&nbsp;&nbsp;&nbsp;$y_{i,t} \leftarrow \mathrm{Move}(x_{i,t}, F_i^t)$
vi.&nbsp;&nbsp;&nbsp;$z_{i,t} \leftarrow \mathrm{Local}(Iter_{local}, \delta, y_{i,t})$
vii.&nbsp;&nbsp;&nbsp;$x_{i,t+1} \leftarrow \mathrm{Select}(S_{t+1}, y_{i,t}, z_{i,t})$
viii. end while
Step 1: The algorithm runs for $Iter_{max}$ iterations (generations); $n \times Iter_{local}$ is the maximum number of local search trials used to generate the points $z_{i,t}$.
Step 2: The points $x_{i,t}$, $t = 1$, are selected uniformly in $X$, i.e., $x_{i,t} \sim Unif(X)$, $i = 1, 2, \ldots, N$, where $Unif$ denotes the uniform distribution. The cost function $f(x_{i,t})$ is computed at each iteration, and the best point is identified as follows:
$$x_t^B = \arg\max\{ f(x_{i,t}) \mid x_{i,t} \in S_t \}$$
From Equation (17), $x_t^B$ is the element of $S_t$ that yields the maximum value of the fitness (objective) function $f$.
Step 3: while $t < Iter_{max}$ do
Step 4: At this step, a charge $q_{i,t}$ is assigned to each point $x_{i,t}$. The charge of $x_{i,t}$ depends on the value $f(x_{i,t})$, and the points with the best cost-function values carry more charge than the other points. The charges are computed by Equation (18):
$$q_{i,t} = \exp\left(-n\,\frac{f(x_{i,t}) - f(x_t^B)}{\sum_{j=1}^{N}\left(f(x_{j,t}) - f(x_t^B)\right)}\right)$$
Then, the force $F_{i,j}^t$ between two points $x_{i,t}$ and $x_{j,t}$ is found by using Equation (19):
$$F_{i,j}^t = \begin{cases} \left(x_{j,t}-x_{i,t}\right)\dfrac{q_{i,t}\,q_{j,t}}{\left\lVert x_{j,t}-x_{i,t}\right\rVert^{2}} & \text{if } f(x_{j,t}) > f(x_{i,t}) \;\text{(attraction)} \\[4pt] \left(x_{i,t}-x_{j,t}\right)\dfrac{q_{i,t}\,q_{j,t}}{\left\lVert x_{j,t}-x_{i,t}\right\rVert^{2}} & \text{otherwise} \;\text{(repulsion)} \end{cases}$$
In the end, the total force $F_i^t$ computed at each $x_{i,t}$ is
$$F_i^t = \sum_{j=1,\, j \ne i}^{N} F_{i,j}^t$$
Step 5: Each point $x_{i,t}$, except for the best point $x_t^B$, is moved along the total force $F_i^t$ using
$$x_{i,t} = x_{i,t} + \lambda\,\frac{F_i^t}{\lVert F_i^t \rVert}\,(RNG), \qquad i = 1, 2, \ldots, N,\; i \ne B$$
where $\lambda \sim Unif(0,1)$ is drawn for each coordinate of $x_{i,t}$, and $RNG$ denotes the allowed range of movement toward the upper or lower bound.
Step 6: For each $y_{i,t}$, a maximum of $Iter_{local}$ points are generated in each coordinate direction within the $\delta$ neighborhood of $y_{i,t}$. The process of generating local points continues for each $y_{i,t}$ until either a better $z_{i,t}$ is found or the $n \times Iter_{local}$ trial limit is reached.
Step 7: $x_{i,t+1} \in S_{t+1}$ is chosen from $y_{i,t}$ and $z_{i,t}$ by using Equation (20), and the best solution is identified by using Equation (21).
The significant steps of the EMO algorithm are given in [8] and the EMO algorithm needs a smaller number of iterations to generate solutions for complex nonlinear optimization problems.
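The charge, force, movement, and local search steps described above can be sketched as a compact, simplified EMO for box-constrained maximization. This is an illustrative implementation under stated assumptions (the charge and force formulas of Equations (18) and (19), a Gaussian coordinate perturbation standing in for the coordinate-wise local search), not the authors' exact code; all names and parameter defaults are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def emo_maximize(f, lower, upper, n_pop=20, iters=60, delta=0.1, local_iter=5):
    """Simplified EMO sketch: charges, attraction-repulsion moves, and a
    local search, for maximizing f over the box [lower, upper]."""
    dim = len(lower)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    S = lower + rng.random((n_pop, dim)) * (upper - lower)  # initial population
    for _ in range(iters):
        fit = np.array([f(x) for x in S])
        best = int(fit.argmax())
        # Charges: better points receive larger charge (cf. Eq. (18))
        denom = float((fit[best] - fit).sum()) or 1.0
        q = np.exp(-dim * (fit[best] - fit) / denom)
        # Total force on each point and movement (cf. Eqs. (19)-(20))
        for i in range(n_pop):
            if i == best:
                continue                         # best point is not moved
            F = np.zeros(dim)
            for j in range(n_pop):
                if j == i:
                    continue
                diff = S[j] - S[i]
                dist2 = float((diff ** 2).sum()) + 1e-12
                if fit[j] > fit[i]:
                    F += diff * q[i] * q[j] / dist2   # attraction toward better
                else:
                    F -= diff * q[i] * q[j] / dist2   # repulsion from worse
            norm = float(np.linalg.norm(F)) or 1.0
            step = rng.random() * F / norm * (upper - lower)
            S[i] = np.clip(S[i] + step, lower, upper)
        # Local search: accept random perturbations only if they improve f
        for i in range(n_pop):
            for _ in range(local_iter):
                z = np.clip(S[i] + delta * rng.standard_normal(dim) * (upper - lower),
                            lower, upper)
                if f(z) > f(S[i]):
                    S[i] = z
    fit = np.array([f(x) for x in S])
    return S[int(fit.argmax())]
```

Running it on a simple concave function, e.g. `emo_maximize(lambda x: -(x[0] - 3.0) ** 2, [0.0], [10.0])`, drives the population toward the maximizer at 3.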
Table 1 depicts the comparative parameters and expressions used for evaluating the proposed method. The main reason for selecting EMO is that it gives much better results, as shown in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14, Table 15, Table 16 and Table 17. The EMO has been used for solving various optimization problems, including image-processing tasks such as multilevel thresholding, and is known for its efficiency on complex optimization problems. In the context of multilevel thresholding, EMO can efficiently search for the optimal set of thresholds that maximizes segmentation quality. As a population-based algorithm, EMO can search the entire solution space and avoid getting stuck in local optima; this is important for multilevel thresholding because the optimal set of thresholds may lie in a complex and highly nonlinear search space. EMO can be easily adapted to handle different types of objective functions and constraints. EMO is robust to noise, so in the context of multilevel thresholding it can handle images with different levels of noise and variability. Finally, EMO requires only a few parameters to be tuned, which makes it easy to use.

3. Energy Curve

To find effective optimized threshold levels, the energy curve [3] will be used instead of the histogram of an image for various applications.

3.1. Equation of Energy Curve

Consider an image $I = \{x(i,j)\}$, where $i$ and $j$ are spatial coordinates, $i = 1, 2, \ldots, N$ and $j = 1, 2, \ldots, M$, and the size of the image is $X = M \times N$. The spatial correlation among neighboring pixels can be captured by defining a neighborhood system $N^d$ of order $d$; for a pixel at $(i,j)$, $N_{ij}^d = \{(i+u, j+v) : (u,v) \in N^d\}$. Various neighborhood configurations are described in [30]. A second-order neighborhood system is used for generating the energy curve, i.e., $(u,v) \in \{(\pm 1, 0), (0, \pm 1), (1, \pm 1), (-1, \pm 1)\}$.
The foremost step is to compute the energy of the image at each gray value $x$ of the image's grayscale range. Generate a binary matrix $B_x = \{b_{ij},\ 1 \le i \le M,\ 1 \le j \le N\}$, where $b_{ij} = 1$ if $x_{ij} > x$ and $b_{ij} = -1$ otherwise. Let $C = \{c_{ij},\ 1 \le i \le M,\ 1 \le j \le N\}$ be another matrix with $c_{ij} = 1$ for all $(i,j)$. At each gray value $x$, the energy $E_x$ of the image $I$ can be computed with the expression below.
$$E_x = -\sum_{i=1}^{M}\sum_{j=1}^{N}\ \sum_{(p,q)\in N_{ij}^{2}} b_{ij}\,b_{pq} \;+\; \sum_{i=1}^{M}\sum_{j=1}^{N}\ \sum_{(p,q)\in N_{ij}^{2}} c_{ij}\,c_{pq}$$
In Equation (22), the second term is a constant; consequently, the energy associated with each gray value satisfies $E_x \ge 0$. From the above equation, the energy for a particular gray level $x$ is zero if all elements of $B_x$ are either 1 or −1, that is, if every pixel of the image $I(i,j)$ has a gray level either greater than $x$ or not greater than $x$; otherwise, the energy at gray value $x$ is positive, as shown in Figure 1.
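Equation (22) can be computed directly by sliding the binary matrix over the eight second-order neighbor offsets. The sketch below is a deliberately unoptimized NumPy illustration (function name and border handling are assumptions):

```python
import numpy as np

def energy_curve(img):
    """Energy E_x for every gray level x (cf. Eq. (22)), using the
    8-connected second-order neighborhood; borders are excluded."""
    M, N = img.shape
    shifts = [(u, v) for u in (-1, 0, 1) for v in (-1, 0, 1) if (u, v) != (0, 0)]
    E = np.zeros(256)
    for x in range(256):
        B = np.where(img > x, 1, -1)            # binary matrix B_x
        s = 0
        for u, v in shifts:
            nb = np.roll(np.roll(B, u, axis=0), v, axis=1)
            # mask out positions whose neighbor wrapped around the border
            valid = np.ones((M, N), dtype=bool)
            if u == 1:
                valid[0, :] = False
            elif u == -1:
                valid[-1, :] = False
            if v == 1:
                valid[:, 0] = False
            elif v == -1:
                valid[:, -1] = False
            # second term of Eq. (22) counts pairs (c_ij = c_pq = 1),
            # the first term subtracts sum of b_ij * b_pq
            s += int(valid.sum()) - int((B * nb)[valid].sum())
        E[x] = s
    return E
```

For a constant image the curve is identically zero (every $B_x$ is uniform), while an image containing an object yields positive energy at intermediate gray values, matching the properties stated above.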

3.2. Characteristics of Energy Plot

The energy plot generated per Equation (22) has some interesting characteristics. Each object in an image is represented by a range of gray levels; for instance, suppose the pixel range $[t_1, t_2]$ represents an object in a given image. At $x = t_1$, the elements of $B_x$ are 1 for the pixels belonging to the object. As $x$ increases, some elements of $B_x$ become −1, and at $x = t_2$ all elements of $B_x$ corresponding to pixels of the object have become −1. The energy curve produced over the gray-level range $[t_1, t_2]$ is therefore bell-shaped. Figure 1 depicts the image histogram and energy curve for eight images. The valley and peak points on the energy curve are useful for identifying objects in an image.

4. Proposed Method

The variety of multilevel thresholding techniques for image segmentation was surveyed in the introduction, together with the limitations of histogram-based techniques. The proposed method uses an energy curve instead of the histogram, and EMO is used to find optimized threshold levels on the energy curve by maximizing the inter-class variance for Otsu's method and the entropy for Kapur's method, as given in Equation (11) for Otsu's method; the flow chart of the new approach is given in Figure 2.
From the flow chart: take an image $x(i,j)$ for multilevel thresholding-based segmentation and plot the energy curve of the considered color image using Equation (22). Assign the design parameters of EMO and fill the solution matrix with arbitrary (random) values, initially denoted as $x_i$ (a set of threshold levels). Divide all the pixels of the image into different classes or regions per the selected threshold levels, following Otsu's and Kapur's methods, and find the inter-class variance and entropy of the segmented image, as given in Equation (11). Afterward, generate a new set of threshold levels, evaluate the fitness and compare it with the previous fitness value, and repeat this procedure until there is no improvement in the objective function or the specified number of iterations is reached. Lastly, obtain the optimized threshold values ($x_{new}$) and classify the gray levels as per Equation (2) for the final segmentation, performed separately on the R, G, and B components of color images. The results of this method are compared with histogram-based techniques for evaluation.
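The final classification and per-channel application to a color image can be sketched as follows. This is an illustrative fragment, not the authors' implementation; replacing each class with its mean gray level is just a hypothetical way to visualize the result, and all names are assumptions.

```python
import numpy as np

def segment_channel(channel, thresholds):
    """Classify pixels of one color component by the multilevel rule
    (class i if th_i <= p < th_{i+1}) and paint each class with its mean."""
    labels = np.digitize(channel, np.asarray(thresholds))  # class index per pixel
    out = np.zeros_like(channel, dtype=float)
    for k in range(len(thresholds) + 1):
        mask = labels == k
        if mask.any():
            out[mask] = channel[mask].mean()               # class mean gray level
    return out.astype(channel.dtype)

def segment_color(image, thresholds_rgb):
    """Apply per-channel thresholds (e.g., found by EMO on each channel's
    energy curve) to an H x W x 3 image."""
    return np.dstack([segment_channel(image[..., c], thresholds_rgb[c])
                      for c in range(3)])
```

With $k$ thresholds, each output channel contains at most $k+1$ distinct gray values, one per class.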
Steps in the implementation of the proposed method for color image segmentation are given in Table 18 below.
The Algorithm for EMO initialization is given below as Algorithm 2.
Algorithm 2: EMO initialization
1. for i = 1 to m do
2.&nbsp;&nbsp;&nbsp;for k = 1 to d do
3.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$\lambda \leftarrow \mathrm{rand}(0,1)$
4.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$x_k^i \leftarrow l_k + \lambda\,(u_k - l_k)$
5.&nbsp;&nbsp;&nbsp;end for
6.&nbsp;&nbsp;&nbsp;Compute $f(X^i)$
7. end for
The Algorithm to find optimized or best threshold values is given below as Algorithm 3.
Algorithm 3: Find optimized threshold values
1. $count \leftarrow 1$
2. $Length \leftarrow \delta\,(\max(u_k - l_k))$
3. for i = 1 to m do
4.&nbsp;&nbsp;&nbsp;for k = 1 to d do
5.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$\lambda_1 \leftarrow \mathrm{rand}(0,1)$
6.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;while $count <$ LSITER do
7.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$y \leftarrow x^i$
8.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$\lambda_2 \leftarrow \mathrm{rand}(0,1)$
9.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if $\lambda_1 > 0.5$ then
10.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$y_d \leftarrow y_d + \lambda_2 \cdot Length$
11.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;else
12.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$y_d \leftarrow y_d - \lambda_2 \cdot Length$
13.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;end if
14.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if $f(y) < f(X^i)$ then
15.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$x^p \leftarrow y$
16.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$count \leftarrow \mathrm{LSITER} - 1$
17.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;end if
18.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$count \leftarrow count + 1$
19.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;end while
20.&nbsp;&nbsp;&nbsp;end for
21. end for
22. $X^{best} \leftarrow \arg\min\{f(x^i) : x^i \in X\}$
In the above pseudo-code, LSITER is the number of local search iterations. The steps given in Algorithms 2 and 3 can also be treated as pseudo-code.
The proposed "multilevel thresholding based on EMO and energy curve (MTEMOE)" has many advantages over other methods for natural color images, as illustrated in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14, Table 15, Table 16 and Table 17 and Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12. Despite its merits, the MTEMOE method also has some limitations. Because it is based on an energy curve, it takes more time than computing the histogram of an image: built-in functions for computing a histogram are available in Matlab and other scientific languages, but the code required to generate the energy curve must be developed by researchers based on Equation (22). For color image segmentation the computation time is greater still, since the energy curve must be computed for the three color components of the image. While EMO has been successfully applied to a wide range of optimization problems, it also has limitations. Multilevel thresholding of images often involves optimizing over high-dimensional search spaces, which can make it difficult for EMO to converge to an optimal solution in a reasonable amount of time. Images may contain noise that affects the performance of EMO; EMO may not be able to handle the noise and may converge to suboptimal solutions. EMO may also not adapt well to different types of images, such as images with varying contrast or illumination.
The advantage of context-sensitive multilevel thresholding with an energy curve is that it can be combined with upcoming new optimization techniques to further improve the effectiveness of segmentation. The method proposed here with electromagnetism optimization can be extended to color images with different sorts of artifacts and tested for its efficiency. EMO with an energy curve can also be applied to other image-processing tasks, such as image denoising, image compression, and image restoration. Hybrid optimization algorithms can be developed that combine EMO with other optimization techniques to further improve the performance of multilevel thresholding. The robustness of EMO for multilevel thresholding can be studied by testing it on a variety of images with different characteristics, such as size, complexity, and noise levels. Future research in this area has the potential to contribute to the development of more efficient and effective algorithms for image-processing tasks.

5. Results and Discussions

This section describes the experimental results of the proposed method, compares it with existing state-of-the-art techniques, and explains the source of the test images and the metrics used for the evaluation of the segmentation techniques.
The proposed algorithm and the existing techniques are tested on color images fetched from the USC-SIPI and Berkeley segmentation (BSDS500) data sets. A total of nine images are considered for the test, six natural images and three satellite images, as shown in Figure 1; the same figure also illustrates the histograms and energy curves, indicated as Images 1-9. All of the images considered for experimentation have distinct features. In this study, a mainly objective analysis is adopted, depending on numerical values instead of quality measures based on visual perception [40].
A comparative analysis between the proposed algorithm and other optimization algorithms such as SAMFO-TH [9], MVO [29], WOA [24], FPA [28], SCA [25], ACO [16], PSO [13], ABC [19], and MFO [21] is necessary. The results and experimental setups are taken from published articles [8] to compare with the proposed method; all the algorithms were executed until there was no change in the fitness function, and the MEAN fitness values of all the algorithms [8] are illustrated in Table 17. All the images are tested with the number of threshold levels N = 4, 6, 8, 10, 16, 20, and 24.
The selection of comparative metrics [8] is an important task; it should be done in such a way as to test all aspects of segmentation. The parameters used in this study [4] are described in this section. (i) The mean value of fitness (MEAN), with Kapur’s and Otsu’s methods, is a significant metric for testing the performance of optimization schemes. This index is computed using Equation (9) for Otsu’s method or Equation (3) for Kapur’s entropy, and it demonstrates the robustness of the optimization algorithm in selecting the optimized threshold vector. (ii) Peak signal-to-noise ratio (PSNR) estimates the deviation of a segmented image from the original image and thus indicates the quality of the reconstructed image; a high PSNR value indicates better segmentation. (iii) Mean square error (MSE) computes the average of the squared error; a lower MSE value indicates better segmentation. (iv) Structural similarity (SSIM) gives the level of similarity between the segmented image and the input image under test; a greater SSIM value [39] indicates a better segmentation effect, and it ranges from −1 to +1. (v) Feature similarity (FSIM) is similar to SSIM and indicates degradation of image quality; it ranges over [0, 1], and a high FSIM value means better segmentation of the color image. (vi) Probability Rand index (PRI), or simply Rand index (RI), computes the agreement between the ground truth and the segmented image; better performance [9,42,43] is indicated by a higher PRI value. (vii) Variation of information (VOI) gives the randomness of a segmented image; a low VOI value indicates better segmentation performance. All comparative parameters are described along with the required equations in Table 1.
The segmented images produced with the various optimization techniques are obtained from published articles, and this study shows that the proposed approach provides better performance [44,45] than the techniques considered in this research work. Figures 2 and 3 illustrate the segmented results using the proposed (MTEMOE) approach to color image segmentation based on Otsu’s and Kapur’s methods [43,46,47]. In the end, a statistical analysis is used to demonstrate the dominance of the proposed approach. The segmented images are depicted in Figures 2–7 for threshold levels N = 4, 6, 8, 10, 16, 20, and 24 using Otsu’s variance and Kapur’s entropy. Figure 2 illustrates the segmented images resulting from the proposed method with Kapur’s entropy; segmented results of the proposed technique are also given in Figures 3–5, with a focus on results with SAMFO-TH, MVO, WOA, ABC, MFO, and ACO based on Kapur’s entropy as the fitness function. Figures 6 and 7 present results with Otsu’s method for the above-mentioned optimization techniques. The comparative metrics of segmentation performance are presented in Tables 2–17; the performance parameters used are MEAN, PSNR, MSE, SSIM, FSIM, PRI, and VOI.
The required expressions for the comparative parameters are given in Table 1. Tables 2–17 present the values of the comparative metrics for the proposed method and the other existing methods. Table 17 gives the average MEAN fitness values with Kapur’s and Otsu’s methods for the optimization techniques MVO, WOA, FPA, SCA, ACO, PSO, ABC, MFO, and SAMFO-TH, and for the proposed approach, on the nine images considered with threshold levels N = 4, 6, 8, and 10. It clearly shows that the proposed method yields higher average MEAN values with both Kapur’s and Otsu’s techniques. The average MEAN fitness values are computed separately for the three color components (R, G, and B) of each image. In particular, the values obtained with the proposed method and Otsu’s technique are much higher than those of the other optimization techniques. Table 2 presents PSNR values for SAMFO-TH, MVO, WOA, FPA, SCA, ACO, PSO, ABC, and MFO, and for the proposed model, using Kapur’s method with threshold levels N = 4, 6, 8, and 10; the results clearly show that the PSNR values of the proposed method are much better than those of the other techniques, especially at N = 10. Table 9 gives PSNR values with Otsu’s method; for all the images, the PSNR values of the proposed method are superior to those of every other method considered. The average PSNR with the proposed method is 26.2278, which is higher than that of the other techniques; after the proposed method, SAMFO-TH gives the best PSNR values. Tables 3 and 10 give the mean square error (MSE) values with Kapur’s and Otsu’s techniques for the proposed method and the other techniques; the required expression to compute MSE is given in Table 1.
The MSE value should be low for good segmentation; the MSE values are much lower for the proposed method than for the other techniques, especially at the higher thresholding levels (8 and 10). From Table 10, the average MSE over all nine images is 229.5213 with the proposed method, whereas it is 1449.4559 with SCA-based segmentation. After the proposed method, SAMFO-TH provides the best MSE values with both Kapur’s and Otsu’s techniques. Tables 4 and 11 give the structural similarity index (SSIM) for Kapur’s and Otsu’s techniques; its value should be high for good segmentation. The SSIM value of the proposed method is slightly higher than that of SAMFO-TH but much higher than those of the multilevel thresholding techniques with the other optimization methods considered for comparison.
Tables 5 and 12 give the feature similarity index (FSIM) for Kapur’s and Otsu’s techniques; its value should be high for good segmentation. The FSIM value of the proposed technique is higher than that of all other techniques. The average FSIM computed over the nine images with the proposed technique and Otsu’s method is 0.8818, whereas with SCA it is only 0.8011. From Tables 6 and 13, the PRI should be high for good image segmentation. The PRI values of SAMFO-TH are slightly better than those of the proposed method, whereas the proposed method’s values are much better than those of the other techniques.
Tables 15 and 16 compare the MEAN computed by SAMFO-TH, MVO, and WOA using Otsu’s and Kapur’s methods with N = 4, 6, 8, and 10 for the red, green, and blue components separately; the values are much higher with Otsu’s method than with Kapur’s method. After analyzing the information in Table 17, it can be concluded that the proposed method gives a much better average MEAN fitness with both Kapur’s and Otsu’s methods than all the other techniques considered.
In this discussion of results, the proposed approach is compared with the other algorithms using the mean of the fitness function (MEAN); Table 8 gives the MEAN values computed by the proposed method for both Kapur’s and Otsu’s methods. Higher MEAN values indicate higher accuracy. These values are significantly higher than those obtained with the other methods, including SAMFO-TH; as the threshold level increases, the MEAN values increase for both the Kapur and Otsu methods, and the MEAN values are much greater with Otsu’s method than with Kapur’s method. Table 15 depicts the MEAN values of SAMFO-TH, MVO, and WOA with Otsu’s method; these values are much lower than those of the proposed approach, and MEAN values for the other optimization techniques can be fetched from published articles [8] for comparison. Table 16 compares the MEAN computed by SAMFO-TH, MVO, and WOA using Kapur’s method with N = 4, 6, 8, and 10 for the red, green, and blue components. Most importantly, Table 17 presents the averages of the MEAN values for the various optimization techniques with both Otsu’s and Kapur’s methods; the results show that the proposed method is highly superior to all the techniques considered in this research for the red, green, and blue color components. From Table 17, we can conclude that the mean of the MEAN values over all the images is higher with the proposed approach for both Otsu’s and Kapur’s methods; at the same time, the performance of SCA, PSO, and MFO is not up to the mark, and after the proposed approach, SAMFO-TH is the best. From this discussion, we can conclude that the proposed approach for segmentation performs with better stability.
Table 2 presents the PSNR values with Kapur’s method for all the optimization techniques under test, and Table 9 gives the PSNR values with Otsu’s method. From these two tables, we can conclude that the proposed approach produces better PSNR than the other methods; PSNR performance is much higher with Kapur’s method than with Otsu’s, and as the thresholding level increases, PSNR also increases tremendously. The mean PSNR over the nine images with the proposed method is 25.2768 (from Table 2) and 24.6188 with SAMFO-TH; the lowest value, 21.678, is with FPA. At the same time, the mean PSNR with Otsu’s criterion is 26.222 for the proposed method and 21.2768 with SAMFO-TH, with the lowest value of 19.5712 again with FPA. PSNR values are higher for the satellite images (Images 7 and 8) than for the rest of the images, while for Image 3 the PSNR performance is very low; from the above discussion, the proposed method provides better PSNR than the other methods considered. A lower MSE implies better segmentation performance; from Tables 3 and 10, the MSE with the proposed approach is much lower than with the other methods for both Kapur’s and Otsu’s techniques. The average MSE value with the proposed method is 294.4714, whereas it is 707.477 with FPA for Kapur’s method.
Other highly significant quality metrics for color image segmentation are SSIM and FSIM; higher values of FSIM and SSIM indicate accurate image segmentation. Table 4 presents the SSIM values computed by SAMFO-TH, MVO, WOA, FPA, SCA, ACO, PSO, ABC, and MFO, and by the proposed model, using Kapur’s method with N = 4, 6, 8, and 10. Table 11 gives a comparison of SSIM with Otsu’s method; the mean SSIM values for each technique are given, indicating the overall SSIM performance over the nine images. For instance, from Table 4, the SSIM is 0.9867 with the proposed method and 0.98539 with SAMFO-TH, with only slight variation among the other methods. Tables 5 and 12 present the FSIM with Kapur’s and Otsu’s methods, respectively. From Table 5, the FSIM is 0.8923 for the proposed method, 0.7898 with SAMFO-TH, and 0.8377 with PSO. Both the SSIM and FSIM values improve as the threshold level increases from 4 to 10.
The VOI and PRI are important and distinguishing comparative metrics in the field of segmentation: high-quality segmentation is indicated by a higher PRI and a lower VOI. The PRI values for the various techniques, including the proposed one (MTEMOE), are illustrated in Tables 6 and 13 for Kapur’s and Otsu’s methods, respectively. From Table 6, with Kapur’s method the PRI value of the proposed method is better than those of WOA, FPA, SCA, and ACO but lower than those of the other methods. With Otsu’s criterion, MTEMOE performs well in terms of PRI compared to all the techniques other than SAMFO-TH and WOA, as illustrated in Table 13; we also note that higher PRI values are generated with Otsu’s method than with Kapur’s method. However, at the higher threshold levels (N = 16, 20, and 24), the proposed method gives higher PRI values than all the other methods considered in this study. From Tables 7 and 14, the VOI of the proposed method is better than that of the other techniques for both Kapur’s and Otsu’s methods; only WOA and SAMFO-TH give a minute improvement in the case of Otsu’s method, and at higher thresholding levels the proposed method gives much lower (better) values than all the methods in this study, including SAMFO-TH. The overall impression is that MTEMOE, which uses an energy curve instead of a histogram, is a better approach to color image segmentation than the other state-of-the-art techniques.

6. Conclusions

In this article, many schemes for color image segmentation are discussed. From that pool of methods, multilevel thresholding (MT) is a powerful technique, generally based on the histogram of an image. To overcome the shortfalls of the histogram, a similar curve called the energy curve is used in its place to compute optimized thresholds efficiently. The proposed segmentation model is based on Otsu’s and Kapur’s methods for MT on an energy curve, with EMO finding the optimized threshold levels by maximizing the inter-class variance and the entropy. The results for a group of color benchmark images clearly show that MT on the energy curve is more efficient than the histogram-based techniques. The energy curve incorporates spatial contextual information into the energy level of each pixel; consequently, this otherwise hidden information is used to compute the optimized levels. The efficiency of the proposed approach is evaluated with the mean of fitness (MEAN), PSNR, MSE, PRI, VOI, SSIM, and FSIM. The proposed approach (MTEMOE) is tested on nine color images using both Otsu’s and Kapur’s methods at different threshold levels (N = 4, 6, 8, 10, 16, 20, and 24) and is compared with other state-of-the-art methods for color image segmentation: SAMFO-TH, MVO, WOA, FPA, SCA, ACO, PSO, ABC, and MFO. From the results, we can conclude that the PSNR is greater with the energy curve than with the histogram-based methods, and the MEAN of the objective function for the proposed method is very high compared with histogram-based methods using the same optimization techniques. The higher PRI and lower VOI values indicate better inter-class separation with the proposed method.
Based on the values of the comparative metrics, such as PSNR, MSE, VOI, PRI, and the average MEAN value of the fitness function, among other parameters, the methods for color image segmentation can be ranked from best to worst as the proposed method, SAMFO-TH, ACO, SCA, PSO, WOA, MFO, ABC, FPA, and MVO. Finally, we can conclude that the proposed approach gives overall better performance for color image segmentation than the methods considered, across various applications. The energy curve can be used with the latest upcoming optimization algorithms for still better results.

Author Contributions

Conceptualization, S.R., B.K. and R.V.; methodology, S.R., R.V. and S.R.S.; software, S.R., B.K. and S.R.S.; validation, R.V. and S.R.S.; formal analysis, S.R. and S.R.S.; investigation, S.R. and B.K.; resources, S.R.; data curation, S.R. and S.R.S.; writing—original draft preparation, S.R.; writing—review and editing, S.R., R.V. and S.R.S.; visualization, S.R. and B.K.; supervision, R.V., B.K. and S.R.S.; project administration, S.R.S.; funding acquisition, S.R. and S.R.S. All authors have read and agreed to the published version of the manuscript.

Funding

Woosong University’s Academic Research Funding—2023.

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not Applicable.

Data Availability Statement

The data set used for experimentation in this study is taken from BSDS500 and USC-SIPI, and this data is available in the public domain.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zaitoun, N.M.; Aqel, M.J. Survey on Image Segmentation Techniques. Procedia Comput. Sci. 2015, 65, 797–806. [Google Scholar] [CrossRef]
  2. Chavan, M.; Varadarajan, V.; Gite, S.; Kotecha, K. Deep Neural Network for Lung Image Segmentation on Chest X-ray. Technologies 2022, 10, 105. [Google Scholar] [CrossRef]
  3. Teixeira, L.O.; Pereira, R.M.; Bertolini, D.; Oliveira, L.S.; Nanni, L.; Cavalcanti, G.D.C.; Costa, Y.M.G. Impact of Lung Segmentation on the Diagnosis and Explanation of COVID-19 in Chest X-ray Images. Sensors 2021, 21, 7116. [Google Scholar] [CrossRef] [PubMed]
  4. Shi, L.; Wang, G.; Mo, L.; Yi, X.; Wu, X.; Wu, P. Automatic Segmentation of Standing Trees from Forest Images Based on Deep Learning. Sensors 2022, 22, 6663. [Google Scholar] [CrossRef]
  5. Oliva, D.; Cuevas, E.; Pajares, G.; Zaldivar, D.; Perez-Cisneros, M. Multilevel thresholding segmentation based on harmony search optimization. J. Appl. Math. 2013, 2013, 575414. [Google Scholar] [CrossRef]
  6. Patra, S.; Gautam, R.; Singla, A. A novel context-sensitive multilevel thresholding for image segmentation. Appl. Soft. Comput. J. 2014, 23, 122–127. [Google Scholar] [CrossRef]
  7. Moorthy, J.; Gandhi, U.D. A Survey on Medical Image Segmentation Based on Deep Learning Techniques. Big Data Cogn. Comput. 2022, 6, 117. [Google Scholar] [CrossRef]
  8. Jia, H.; Ma, J.; Song, W. Multilevel Thresholding Segmentation for Color Image Using Modified Moth-Flame Optimization. IEEE Access 2019, 7, 44097–44134. [Google Scholar] [CrossRef]
  9. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybernet. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  10. Liu, L.; Huo, J. Apple Image Recognition Multi-Objective Method Based on the Adaptive Harmony Search Algorithm with Simulation and Creation. Information 2018, 9, 180. [Google Scholar] [CrossRef]
  11. Baby Resma, K.P.; Nair, M.S. Multilevel thresholding for image segmentation using Krill Herd Optimization algorithm. J. King Saud Univ.—Comput. Inf. Sci. 2021, 33, 528–541. [Google Scholar] [CrossRef]
  12. Oliva, D.; Cuevas, E.; Pajares, G.; Zaldivar, D.; Osuna, V. A Multilevel Thresholding algorithm using electromagnetism optimization. Neurocomputing 2014, 139, 357–381. [Google Scholar] [CrossRef]
  13. Hammouche, K.; Diaf, M.; Siarry, P. A comparative study of various meta-heuristic techniques applied to the multilevel thresholding problem. Eng. Appl. Artif. Intell. 2010, 23, 676–688. [Google Scholar] [CrossRef]
  14. Ferreira, F.; Pires, I.M.; Costa, M.; Ponciano, V.; Garcia, N.M.; Zdravevski, E.; Chorbev, I.; Mihajlov, M. A Systematic Investigation of Models for Color Image Processing in Wound Size Estimation. Computers 2021, 10, 43. [Google Scholar] [CrossRef]
  15. Bhandari, A.K.; Kumar, A.; Chaudhary, S.; Singh, G.K. A novel color image multilevel thresholding based segmentation using nature inspired optimization algorithms. Expert Syst. Appl. 2016, 63, 112–133. [Google Scholar] [CrossRef]
  16. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  17. Bernardes, J., Jr.; Santos, M.; Abreu, T.; Prado, L., Jr.; Miranda, D.; Julio, R.; Viana, P.; Fonseca, M.; Bortoni, E.; Bastos, G.S. Hydropower Operation Optimization Using Machine Learning: A Systematic Review. AI 2022, 3, 78–99. [Google Scholar] [CrossRef]
  18. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  19. Socha, K.; Dorigo, M. Ant colony optimization for continuous domains. Eur. J. Oper. Res. 2008, 185, 1155–1173. [Google Scholar] [CrossRef]
  20. Passino, K.M. Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control Syst. Mag. 2002, 22, 52–67. [Google Scholar]
  21. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  22. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  23. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  24. Yu, J.J.Q.; Li, V.O.K. A social spider algorithm for global optimization. Appl. Soft Comput. 2015, 30, 614–627. [Google Scholar] [CrossRef]
  25. Yang, X.-S. Multiobjective firefly algorithm for continuous optimization. Eng. Comput. 2013, 29, 175–184. [Google Scholar] [CrossRef]
  26. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  27. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  28. Gandomi, A.H.; Alavi, A.H. Krill herd: A new bio-inspired optimization algorithm. Commun. Nonlinear Sci. Numer. Simul. 2012, 17, 4831–4845. [Google Scholar] [CrossRef]
  29. Yang, X. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Studies in Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2010; pp. 65–74. [Google Scholar]
  30. Yang, X.S. Flower pollination algorithm for global optimization. In Unconventional Computation and Natural Computation, Proceedings of the 11th International Conference, UCNC 2012, Orléan, France, 3–7 September 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 240–249. [Google Scholar]
  31. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  32. Chen, K.; Zhou, Y.; Zhang, Z.; Dai, M.; Chao, Y.; Shi, J. Multilevel image segmentation based on an improved firefly algorithm. Math. Probl. Eng. 2016, 2016, 1578056. [Google Scholar] [CrossRef]
  33. Akay, B. A study on particle swarm optimization and artificial bee colony algorithms for multilevel thresholding. Appl. Soft Comput. 2013, 13, 3066–3091. [Google Scholar] [CrossRef]
  34. Agarwal, P.; Singh, R.; Kumar, S.; Bhattacharya, M. Social spider algorithm employed multi-level thresholding segmentation approach. In Proceedings of First International Conference on Information and Communication Technology for Intelligent Systems: Volume 2. Smart Innovation, Systems and Technologies; Springer: Cham, Switzerland, 2016; Volume 2, pp. 249–259. [Google Scholar]
  35. Pare, S.; Bhandari, A.K.; Kumar, A.; Singh, G.K. An optimal color image multilevel thresholding technique using grey-level co-occurrence matrix. Expert Syst. Appl. 2017, 87, 335–362. [Google Scholar] [CrossRef]
  36. Deshpande, A.; Cambria, T.; Barnes, C.; Kerwick, A.; Livanos, G.; Zervakis, M.; Beninati, A.; Douard, N.; Nowak, M.; Basilion, J.; et al. Fluorescent Imaging and Multifusion Segmentation for Enhanced Visualization and Delineation of Glioblastomas Margins. Signals 2021, 2, 304–335. [Google Scholar] [CrossRef]
  37. Jardim, S.; António, J.; Mora, C. Graphical Image Region Extraction with K-Means Clustering and Watershed. J. Imaging 2022, 8, 163. [Google Scholar] [CrossRef]
  38. Jumiawi, W.A.H.; El-Zaart, A. A Boosted Minimum Cross Entropy Thresholding for Medical Images Segmentation Based on Heterogeneous Mean Filters Approaches. J. Imaging 2022, 8, 43. [Google Scholar] [CrossRef]
  39. Ortega-Ruiz, M.A.; Karabağ, C.; Garduño, V.G.; Reyes-Aldasoro, C.C. Morphological Estimation of Cellularity on Neo-Adjuvant Treated Breast Cancer Histological Images. J. Imaging 2020, 6, 101. [Google Scholar] [CrossRef]
  40. Shahid, K.T.; Schizas, I. Unsupervised Mitral Valve Tracking for Disease Detection in Echocardiogram Videos. J. Imaging 2020, 6, 93. [Google Scholar] [CrossRef] [PubMed]
  41. Almeida, M.; Lins, R.D.; Bernardino, R.; Jesus, D.; Lima, B. A New Binarization Algorithm for Historical Documents. J. Imaging 2018, 4, 27. [Google Scholar] [CrossRef]
  42. Fedor, B.; Straub, J. A Particle Swarm Optimization Backtracking Technique Inspired by Science-Fiction Time Travel. AI 2022, 3, 390–415. [Google Scholar] [CrossRef]
  43. Kubicek, J.; Varysova, A.; Cerny, M.; Hancarova, K.; Oczka, D.; Augustynek, M.; Penhaker, M.; Prokop, O.; Scurek, R. Performance and Robustness of Regional Image Segmentation Driven by Selected Evolutionary and Genetic Algorithms: Study on MR Articular Cartilage Images. Sensors 2022, 22, 6335. [Google Scholar] [CrossRef]
  44. Khairuzzaman, A.K.M.; Chaudhury, S. Multilevel thresholding using grey wolf optimizer for image segmentation. Expert Syst. Appl. 2017, 85, 64–76. [Google Scholar] [CrossRef]
  45. Fu, Z.; Wang, L. Color Image Segmentation Using Gaussian Mixture Model and EM Algorithm. In Multimedia and Signal Processing: Second International Conference, CMSP 2012, Shanghai, China, 7–9 December 2012; Wang, F.L., Lei, J., Lau, R.W.H., Zhang, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; Volume 346. [Google Scholar] [CrossRef]
  46. Abdel-Basset, M.; Mohamed, R.; Abouhawwash, M. A new fusion of whale optimizer algorithm with Kapur’s entropy for multi-threshold image segmentation: Analysis and validations. Artif. Intell. Rev. 2022, 55, 6389–6459. [Google Scholar] [CrossRef] [PubMed]
  47. Abdel-Basset, M.; Mohamed, R.; Abouhawwash, M. Hybrid marine predators algorithm for image segmentation: Analysis and validations. Artif. Intell. Rev. 2022, 55, 3315–3367. [Google Scholar] [CrossRef]
Figure 1. (A) Images considered for experimentation along with histogram and energy curves of images. Red, green, and blue color plots indicate histograms and energy curves of red, green, and blue components of input images. (B) Images considered for experimentation along with histogram and energy curves of images. Red, green, and blue color plots indicate the histograms and energy curves of red, green, and blue components of input images.
Figure 2. Flow chart of the proposed approach to color image segmentation.
Figure 3. Segmented images of Image 1 to Image 9 for N = 4, 6, 8, 10, 16, 20, and 24 using the proposed method based on Otsu’s method.
Figure 4. Segmented images of Image 1 to Image 9 for N = 4, 6, 8, 10, 16, 20, and 24 using the proposed method based on Kapur’s method.
Figure 5. Segmented images of Image 1 at N = 4, 6, 8, 10, 16, 20, and 24, using SAMFO-TH, MVO, WOA, FPA, SCA, ACO, PSO, ABC, and MFO, and with the proposed model based on Kapur’s method.
Figure 6. Segmented images of Image 6 at N = 4, 6, 8, 10, 16, 20, and 24, using SAMFO-TH, MVO, WOA, FPA, SCA, ACO, PSO, ABC, and MFO, and with the proposed model based on Otsu’s method.
Figure 7. Segmented images of Image 7 at N = 4, 6, 8, 10, 16, 20, and 24, using SAMFO-TH, MVO, WOA, FPA, SCA, ACO, PSO, ABC, and MFO, and with the proposed model based on Otsu’s method.
Figure 8. Comparison of PSNR for the proposed method using Otsu’s and Kapur’s methods.
Figure 9. Comparison of PSNR based on Otsu’s method.
Figure 10. Comparison of PSNR based on Kapur’s method.
Figure 11. Comparison of SSIM with the proposed method based on Otsu’s and Kapur’s criteria.
Figure 12. (A) Comparison of PSNR based on Kapur’s method. (B) Comparison of FSIM with the proposed method based on Otsu’s and Kapur’s criteria.
Table 1. Different metrics to test the efficiency of the algorithms.
1. Mean value of fitness (MEAN). It is calculated as the average of the objective-function values obtained at each iteration of the algorithm. Inter-class variance and entropy are the objective functions for Otsu’s and Kapur’s methods, respectively.
2. Peak signal-to-noise ratio (PSNR). PSNR = 20 log10(MAX/RMSE), where MAX is the maximum gray value, taken as 255.
3. Mean square error (MSE). MSE = (RMSE)^2, with RMSE = sqrt( (1/(R0 × C0)) Σ_{i=1..R0} Σ_{j=1..C0} [I(i, j) − Is(i, j)]^2 ), where I(i, j) is the input image, Is(i, j) is the segmented image, and C0 × R0 is the size of the image.
4. Structural similarity (SSIM). SSIM = (2 μx μy + c1)(2 σxy + c2) / [(μx^2 + μy^2 + c1)(σx^2 + σy^2 + c2)], where μx and μy are the mean intensities of the input and segmented images, σxy is their covariance, and σx^2 and σy^2 are their variances.
5. Feature similarity index (FSIM). FSIM = Σ_{x∈Ω} S_L(x) PC_m(x) / Σ_{x∈Ω} PC_m(x), where S_L(x) is the similarity between the images and PC_m(x) is the maximum phase congruency of the two images.
6. Probability Rand index (PRI). The internal validation measure PRI indicates the resemblance between two partitions of an image (or two clusterings); it is expressed as PRI = (a + b)/(a + b + c + d). For a dataset X partitioned into two clusterings C1 and C2: a is the number of pairs of elements that fall in the same subset in both C1 and C2; b is the number of pairs that fall in different subsets in both C1 and C2; c is the number of pairs in the same subset in C1 but different subsets in C2; and d is the number of pairs in different subsets in C1 but the same subset in C2.
7. Variation of information (VOI). VOI = Ent(Is) + Ent(IR) − 2 MI(Is, IR), where MI(Is, IR) = Ent(Is) + Ent(IR) − Ent(Is, IR); Is is the segmented image, IR is the reference image, Ent denotes entropy, and Ent(Is, IR) is the joint entropy.
Table 2. Comparison of PSNR computed by SAMFO-TH, MVO, WOA, FPA, SCA, ACO, PSO, ABC, and MFO with the proposed model using Kapur’s method with N = 4, 6, 8, and 10.
Image | N | SAMFO-TH | MVO | WOA | FPA | SCA | ACO | PSO | ABC | MFO | Proposed
Image 1414.286714.285014.121816.676713.937316.638215.304014.131914.285017.411
60.762317.447420.014418.025122.007719.541819.941021.232617.447423.3624
824.580623.634922.713019.252324.179523.406819.564823.205722.676226.4105
1026.950226.785826.470124.209125.869822.926421.811024.928226.886629.3038
Image 2413.578913.379913.211114.075013.132216.956912.605813.211613.379916.9332
621.248619.195819.171716.770518.018820.622820.366318.908219.201123.2135
825.172624.916124.902919.340921.848718.198023.746024.322724.112627.0255
1027.655626.010525.744123.419021.345725.149625.149923.744627.467729.7932
Image 3414.973014.751614.751614.660114.641614.373414.343614.783914.801719.6832
619.631515.358315.356320.805718.084219.566115.418015.124915.461623.8255
821.811020.283220.276316.166721.783621.414120.327820.896319.996726.1728
1024.092220.840720.788016.948421.067322.966915.851823.568522.037928.1644
Image 4422.665022.618722.618720.385022.561021.984322.126022.290622.552221.0083
623.356124.843522.832322.169821.118123.169420.068122.592523.074926.5867
825.871725.951525.661524.800725.102220.675520.938424.589123.112828.2479
1028.370927.201228.339024.241422.989226.391725.857123.594627.580129.3581
Image 5422.665022.618722.618720.385022.561021.984322.126022.290622.552219.2267
623.356124.843522.832322.169821.118123.169420.068122.592523.074923.5926
825.871725.951525.661524.800725.102220.675520.938424.589123.112825.8969
1028.370927.201228.339024.241422.989226.391725.857123.594627.580128.7066
Image 6421.529021.529021.529020.856921.086420.199219.968222.276521.506819.3631
624.362224.360524.390523.627222.401521.027721.529024.314223.074124.6747
827.863326.400626.206126.737424.933024.270724.864024.583726.294026.9835
1029.179027.752729.119422.783024.280027.060025.017027.225929.073028.3946
Image 7425.547025.457025.457024.176619.123524.524019.453619.088125.457025.195
629.736227.643428.975426.108925.746819.799928.192827.187528.955728.6755
833.630030.736630.680229.533528.665622.092130.993827.197433.321733.1943
1036.451929.976933.785730.399524.972327.874131.974229.191434.167836.1217
Image 8422.639222.524422.524418.437719.006620.864620.829122.527922.524420.4749
626.165525.853625.835022.423919.013620.156724.561721.384225.813426.2102
828.657527.727327.573224.956126.093825.754125.713124.507128.642028.6113
1030.591528.40130.280023.303722.482021.979426.611226.863630.200931.1743
Image 9419.819319.742119.742117.076019.490517.298819.277619.498419.808721.3238
622.629122.559622.615721.741520.770618.797722.037121.362822.606024.1806
825.170224.952325.068021.714321.533021.405923.950423.958525.018627.439
1027.030926.718926.688622.802423.891323.010922.626224.381726.650130.0399
Average 24.618823.632523.802621.672821.748221.731621.944422.498423.708625.7213
Table 3. Comparison of MSE computed by SAMFO-TH, MVO, WOA, FPA, SCA, ACO, PSO, ABC, and MFO with the proposed model using Kapur’s method with N = 4, 6, 8, and 10.
Image | N | SAMFO-TH | MVO | WOA | FPA | SCA | ACO | PSO | ABC | MFO | Proposed
Image 142423.322424.222517.141397.722626.341410.171917.332511.232424.281183.26
6545.5641170.47648.1021024.60409.55722.602659.140489.5751170.47299.806
8197.301281.571348.158772.419226.473296.753718.782310.824351.125148.604
10131.237136.302146.579246.702168.305331.467428.533209.054133.17576.3309
Image 242956.022986.043104.452544.333161.271310.453568.643104.052986.031317.53
6355.465782.530786.8851367.801026.15487.773597.653836.101781.568310.2633
8197.617209.639210.277756.819424.825984.645274.458240.330252.243128.9824
10111.562162.940173.249295.925476.991198.665198.650274.547116.49568.19623
Image 342152.352177.332177.332223.712233.282375.451899.972151.262152.30699.4559
6707.8251893.421894.34540.1461010.88718.5661867.601998.011848.93269.4824
8428.524609.199610.1691571.85431.236469.541602.970528.995650.742156.9641
10253.433535.808542.3471312.95508.565328.3931690.17285.908406.71699.2294
Image 44352.032355.804355.804595.089336.500411.762316.575383.729361.295515.5257
6300.242213.170338.727394.552502.651313.429640.131357.955320.322142.6955
8168.231165.168176.575215.281159.537556.585523.894226.032317.54397.33978
1094.6216123.86995.3183244.874326.710149.250168.800284.196113.51875.38247
Image 541525.071529.681529.632356.501356.781662.811220.391689.701558.81776.9809
6623.770650.887636.915634.497469.5791258.92702.469441.362666.638284.3285
8325.161343.619335.542425.048553.578861.665568.193357.963326.081167.2593
10195.209237.657205.215407.361330.530264.978529.601203.436236.27787.58311
Image 64457.273457.273457.273533.814506.337621.104655.032384.975459.617752.9573
6236.608238.250320.382282.072374.051513.229457.273240.802238.152221.6207
8106.353148.943155.764137.829208.824243.228212.170226.312152.644130.2358
1078.5562109.09779.6421342.597242.706127.961204.821123.16480.496394.10666
Image 74185.090185.090185.090248.551795.662229.447737.426802.177185.090392.2657
669.0973111.87582.3258159.289173.142680.91598.5827124.25882.7000176.0072
828.189254.881155.598372.398888.4136401.66851.7245123.97530.263878.27992
1014.719065.371227.196459.3091206.943106.08941.272378.332262.561463.22813
Image 84354.124363.613363.613931.783817.366532.874537.243363.320363.613582.895
6157.227168.935169.662372.125816.056627.207227.463472.784170.506155.6182
825.754288.5788113.698207.717159.844109.735174.491230.34188.894589.52624
1062.085993.965460.9653303.882367.180412.227141.894133.88256.745549.61927
Image 94677.871690.033690.0331274.95731.1821211.21767.929729.868679.526479.4026
6354.950360.679356.051435.445544.529857.653406.786475.113356.846248.325
8197.722207.897202.433438.178456.855470.421261.845261.355204.750117.2682
10128.822138.417139.384341.065265.432325.078355.185237.088140.62664.43033
Average 477.194568.672563.662707.477652.618627.331678.474608.11170.219294.4718
Table 4. Comparison of SSIM computed by SAMFO-TH, MVO, WOA, FPA, SCA, ACO, PSO, ABC, and MFO with the proposed model using Kapur’s method with N = 4, 6, 8, and 10.
Image | N | SAMFO-TH | MVO | WOA | FPA | SCA | ACO | PSO | ABC | MFO | Proposed
Image 140.92750.91900.92090.92060.90160.87880.91930.91460.92130.9568
60.97680.96000.96520.96240.97580.94190.97230.96810.96530.9713
80.98820.98580.98300.93890.98600.98770.98370.98610.98520.9636
100.99580.99470.99520.98790.98850.99270.99140.99490.99080.9932
Image 240.93860.93530.93390.94750.92950.93530.93050.93570.93570.9619
60.97850.97350.97310.96340.96360.96620.97620.97380.97410.9815
80.99350.98740.98610.97940.98780.97900.98960.99240.99200.9967
100.99530.99510.99440.98500.98940.99070.98760.99340.99520.9971
Image 340.94170.93930.93950.93820.94040.93320.93970.93900.93970.961
60.96110.94600.94640.97330.95490.96490.95590.94320.94900.9724
80.97850.96730.96620.95720.97130.96150.97260.97660.96750.9881
100.98900.97720.97250.96940.98530.98090.97220.98820.97700.9888
Image 440.98860.98810.98810.98730.98760.98780.98790.98850.98790.9988
60.99420.99320.99340.98970.99050.99220.98730.99040.99390.9971
80.99590.99510.99590.99380.99190.98770.98830.99340.99550.9921
100.99730.99710.99720.99240.99100.98940.99510.99430.99700.999
Image 540.96990.96950.96950.96320.96380.96060.96950.96580.96910.9964
60.98690.98580.98650.98170.98360.97890.98550.98940.98630.9921
80.99160.99150.99140.99100.98720.98590.98330.99180.99110.9986
100.99520.99490.99450.99360.99320.99200.99170.99530.99430.9934
Image 640.98870.98870.98870.98700.98370.98260.97660.98740.98880.9885
60.99350.99270.99350.98990.99060.98850.99030.99300.99310.98871
80.99650.99600.99570.99500.99180.99280.99340.99450.99590.9984
100.99780.99730.99750.99350.99210.99320.99290.99680.99750.9986
Image 740.99150.98110.98110.98100.97680.97680.97300.97590.98090.9939
60.99500.99460.99460.98380.98830.99030.99130.99310.99480.9252
80.99710.99540.99590.99320.99520.99560.98880.99470.99580.9994
100.99800.99770.99640.99720.99560.99480.99270.99730.99760.9908
Image 840.98550.98530.98530.97710.97970.97960.97990.98510.98530.9918
60.99320.99300.99290.99000.98400.98200.98500.98990.99310.9899
80.99610.99560.99540.99120.99390.99050.99360.99180.99540.9829
100.99740.99690.99710.99340.99200.99090.99490.99460.99710.9995
Image 940.97990.97950.97950.97350.97740.97080.97440.97420.97960.993
60.99020.98980.98950.97670.98050.98160.98570.98810.98920.9892
80.99350.99340.99330.98430.99030.98950.99220.99330.99320.9976
100.99630.99620.99610.99180.98550.99260.99200.99380.99620.9948
Average 0.985390.9824720.982370.978110.979450.977200.979890.982170.982810.9867
Table 5. Comparison of FSIM computed by SAMFO-TH, MVO, WOA, FPA, SCA, ACO, PSO, ABC, and MFO with the proposed model using Kapur’s method with N = 4, 6, 8, and 10.
Image | N | SAMFO-TH | MVO | WOA | FPA | SCA | ACO | PSO | ABC | MFO | Proposed
Image 140.68750.67160.67490.67830.65580.61830.67540.67250.67540.74
60.77560.76260.76070.74050.77280.71820.73920.75320.76790.8089
80.83770.83650.82170.72940.81170.78070.79070.82180.83190.8787
100.89880.88900.89030.82680.83150.84440.83130.87270.87330.9108
Image40.76130.76110.75970.75780.74520.73290.74960.75900.75630.7461
60.82340.81140.81090.78290.78810.80380.81110.80770.81500.8696
80.89820.87690.87040.81840.84090.81270.84990.88350.89100.8978
100.92330.92060.91530.85410.85610.87820.85400.90070.92160.9421
Image40.76740.76640.76620.74410.76650.75370.74690.76810.76700.8265
60.81180.80010.79970.79270.79770.78780.77650.79950.80050.8310
80.83440.83260.83240.79060.81880.81490.82300.83550.83300.9001
100.86630.85610.85170.78830.84280.84170.83390.86910.86290.9168
Image 440.82450.82410.82420.81960.81610.81140.81490.81830.82050.7890
60.88520.87560.87460.83340.83300.86500.83000.83910.87260.8751
80.91220.90940.89720.87180.85830.82940.81760.87750.90030.9129
100.93490.92460.93410.87660.83670.81710.89060.87660.92830.9446
Image 540.78560.78540.78540.76410.78420.72220.75850.78090.78420.7800
60.86030.85760.85510.83700.83970.80690.83180.86780.85960.9032
80.90100.89830.90070.87930.85680.84190.85500.90380.89930.9350
100.93080.92870.93050.89530.90080.89950.88110.93110.93060.9834
Image 640.74560.74540.74540.73110.72090.70690.73270.73590.74530.7363
60.80790.79530.80770.77170.77100.77510.76840.80320.79630.8682
80.86820.85440.86620.83260.78710.80440.82820.82360.86590.8621
100.89650.88620.89460.80660.78530.83160.81540.87310.88280.8851
Image 740.90680.88240.87920.86870.85460.82980.83840.83840.87740.9252
60.94180.93770.93760.89280.89400.88570.89800.89210.94000.9198
80.96230.95000.95370.91610.93930.92790.88300.92980.95690.9647
100.97320.97070.96330.95530.91820.91460.91860.94420.96570.9746
Image 840.91600.91390.91390.85620.89020.85720.81790.90990.91390.9111
60.95930.95910.95920.91700.91000.88880.91580.90040.95890.9599
80.97160.97720.97400.94720.95550.96050.93780.95370.97360.9759
100.98050.98490.98460.94790.94730.92990.95950.96200.98120.9827
Image 940.92900.92890.92860.88540.91170.85620.86930.92030.92820.8489
60.95390.95770.95740.90500.93610.91060.91350.94720.95660.9541
80.96220.96820.96720.94680.94370.94950.94890.96580.96760.9756
100.97920.97970.97900.95580.94790.96130.95230.96300.97830.9897
Average 0.87980.87440.874090.839360.843500.832510.837740.861130.874430.89237
Table 6. Comparison of PRI computed by SAMFO-TH, MVO, WOA, FPA, SCA, ACO, PSO, ABC, and MFO with the proposed model using Kapur’s method with N = 4, 6, 8, and 10.
Image | N | SAMFO-TH | MVO | WOA | FPA | SCA | ACO | PSO | ABC | MFO | Proposed
Image 140.64700.63520.64180.58330.59860.40050.64230.63500.64220.5264
60.74670.74270.71820.70700.71980.61570.65520.68280.72470.6681
80.79480.79450.76930.66840.74890.68200.75190.79340.78390.7899
100.82820.828270.82010.79900.76110.76930.77770.80330.81700.8188
Image 240.62100.60080.59310.63720.53610.55230.59070.57310.60100.5698
60.73320.72140.72340.70450.70640.70200.71470.71210.69690.6712
80.79190.77970.77900.70060.70500.73930.70590.78020.77290.7458
100.82650.80780.80130.72670.70530.80690.75110.76780.82500.8225
Image 340.52900.50920.51090.50440.52260.46110.51220.51030.51250.4985
60.65970.54660.54960.62190.60390.67810.63350.52370.57000.63255
80.71250.67710.66470.62870.66020.63780.70670.71900.68070.6043
100.79030.75260.71650.69490.78750.78940.67290.77810.74960.7192
Image 440.61520.61930.61930.62140.60460.64120.60490.60040.61170.4589
60.71800.70140.69030.64010.60350.65770.61160.64210.69350.6546
80.74760.75580.76050.73890.67950.70270.66580.74120.72630.721
100.79660.79270.78020.75210.62310.72730.73630.71320.78400.7524
Image 540.79420.79140.79250.77040.76770.73110.75760.79020.79390.7542
60.84870.83640.84410.83480.82010.79110.80870.84660.84330.8525
80.88380.88150.88300.85950.84630.82340.85230.87500.88220.8778
100.90480.90310.90570.88310.87060.87350.86730.89510.90030.8952
Image 640.66420.66390.66420.62860.63290.57970.66210.66380.66340.61258
60.75660.74140.74530.74650.67730.76940.68090.75800.72310.7299
80.81410.79820.79060.79860.73160.78320.77630.77620.79480.7788
100.84550.82760.84310.78470.74020.76640.79380.82810.83200.8436
Image 740.43760.32530.32420.31800.34030.33400.24120.27720.32800.3658
60.46150.45020.45400.41480.39370.36540.39250.45660.45810.4589
80.52270.51540.49700.51170.45900.48370.49010.42650.51330.6384
100.62220.59890.54420.55790.54170.61130.57900.54400.53200.6683
Image 840.75850.73370.73370.73370.71430.70490.70050.71830.73410.6867
60.81290.80960.80940.77170.78090.74900.74760.76630.80170.8423
80.85440.84640.83600.79860.82830.79160.83890.78780.83950.8507
100.87380.86120.87170.83970.77800.79370.86210.83780.86840.865
Image 940.75310.75200.75140.71400.73800.69810.74120.75180.75240.6142
60.82140.82010.80410.76130.76190.77490.73810.78310.82000.6983
80.85630.84790.84940.81910.80030.80170.80630.83500.83370.775
100.88250.87650.87600.84900.82100.84110.84250.84370.87260.8384
Average 0.74240.72620.72100.69790.68360.68410.69200.70650.72160.7027
Table 7. Comparison of VOI computed by SAMFO-TH, MVO, WOA, FPA, SCA, ACO, PSO, ABC, and MFO with the proposed model using Kapur’s method with N = 4, 6, 8, and 10.
Image | N | SAMFO-TH | MVO | WOA | FPA | SCA | ACO | PSO | ABC | MFO | Proposed
Image 145.33305.35765.33365.43715.47645.96135.29455.34705.33245.4929
64.82274.84974.96174.95344.94105.22205.17225.05494.91534.7986
84.50994.51704.65695.03364.76454.93134.77574.53554.57314.4826
104.24234.31214.30654.46404.64104.51764.52884.37744.32154.2165
Image 245.22845.30035.31765.20225.48365.46415.30485.38135.29585.1645
64.73334.84884.84594.90184.96394.81434.92374.85484.79644.8883
84.44194.50264.52224.89484.92124.92524.79194.54474.54864.3051
104.21674.28744.32514.52194.76204.34814.67974.44384.21724.1258
Image 345.14865.18945.18765.24045.18375.31845.16675.18455.18145.1224
64.63924.96284.95744.64764.82424.75064.72765.01354.90104.6039
84.41264.52104.56074.73444.61104.60304.44754.39784.51234.3412
104.10304.18004.31024.42594.11704.11684.47454.16074.15494.1243
Image 445.17875.16175.16145.14025.20155.06375.16675.14575.18515.1776
64.70214.76504.80774.97955.04934.71234.91764.98284.82114.6573
84.44054.47534.44384.57954.76484.68954.88424.53444.55064.3403
104.20494.20944.28834.42424.89114.69644.44744.56264.27794.2658
Image 545.08415.09335.08415.16155.20425.31475.23775.16765.08485.0065
64.66104.73384.69994.75724.89164.96114.89814.69374.70254.6558
84.32124.33854.32784.49094.65914.63474.50104.38484.33614.2331
104.04694.06384.10394.25304.42204.32794.33994.16304.05834.3297
Image 645.26505.26765.26505.36325.39585.55195.28725.27885.26745.1710
64.79544.90824.89704.82785.11774.91705.05894.79934.97514.4071
84.44884.54504.60604.50464.90174.55624.61084.68144.56394.0305
104.17734.31554.20424.41494.82854.51174.53944.28924.28144.3887
Image 744.43714.67154.69344.69164.64624.75134.92614.83034.68384.7709
64.31054.30174.32224.39264.51404.55574.49914.33344.33034.4257
84.07664.06294.13893.87374.23784.17504.21344.34804.12534.0134
103.73023.80343.95913.87143.95763.79683.92793.95943.94413.5823
Image 845.23535.23965.23965.32745.30965.25845.35555.28645.23965.1201
64.73034.74254.74134.87064.89875.07545.00034.95684.79324.7156
84.38514.45304.52174.71004.61064.69584.47054.77024.51394.7666
104.17014.28394.17974.39474.72084.60884.22424.43254.22044.2265
Image 945.25275.26855.27585.35645.31785.44395.27395.35245.27055.1827
64.80434.80974.89805.05135.16844.99995.14575.00074.81054.7817
84.49594.56164.55434.70004.89734.87504.80794.62744.65734.4210
104.22814.28064.28734.41284.71774.47564.48714.49284.31624.1244
Average 4.58374.64404.66624.750164.86144.822814.791894.732484.659974.5683
Table 8. Comparison of MEAN computed by SAMFO-TH, MVO, WOA, FPA, SCA, ACO, PSO, ABC, and MFO with the proposed model using Kapur's method with N = 4, 6, 8, and 10.
Proposed Method
Image | N | EMO_Kapur (R) | EMO_Kapur (G) | EMO_Kapur (B) | EMO_Otsu (R) | EMO_Otsu (G) | EMO_Otsu (B)
Image 1422.724520.661721.91511.8715 × 10102.7645 × 10102.3448 × 1010
631.383831.173130.21431.8715 × 10102.7645 × 10102.3448 × 1010
838.924137.938637.66821.8715 × 10102.7645 × 10102.3448 × 1010
1045.695744.257144.32221.8715 × 10102.7645 × 10102.3448 × 1010
Image 2423.133521.369521.36951.2788 × 10112.2774 × 10112.2464 × 1011
631.735030.245430.63682.2774 × 10111.2788 × 10112.2464 × 1011
838.706937.154337.76882.2774 × 10112.2464 × 10111.2788 × 1011
1045.758645.423544.41922.2774 × 10112.2464 × 10111.2788 × 1011
Image 3423.381320.715123.11001.3372 × 10101.0266 × 10108.4907 × 109
631.605030.383731.56431.3372 × 10101.0266 × 10108.4907 × 109
839.275438.515139.11511.3372 × 10101.0266 × 10108.4907 × 109
1046.092145.620944.89671.3372 × 10101.0266 × 10108.4907 × 109
Image 4422.050620.348022.49267.1205 × 10101.1270 × 10116.8970 × 1010
630.109628.747829.74877.1205 × 10101.1270 × 10116.8970 × 1010
837.510536.951738.40947.1205 × 10101.1270 × 10116.8970 × 1010
1044.147143.731644.83597.1205 × 10101.1270 × 10116.8970 × 1010
Image 5423.014721.680322.54292.8927 × 10113.1468 × 10112.4490 × 1011
631.164631.038030.47362.8927 × 10113.1468 × 10112.4490 × 1011
838.398537.744838.58512.8927 × 10113.1468 × 10112.4490 × 1011
1044.958045.685145.28062.8927 × 10113.1468 × 10112.4490 × 1011
Image 6422.466921.532821.52432.8820 × 10102.3478 × 10108.8752 × 109
631.389229.507729.65742.8820 × 10102.3478 × 10108.8752 × 109
839.085637.152937.63482.8820 × 10102.3478 × 10108.8752 × 109
1045.910242.963244.08752.8820 × 10102.3478 × 10108.8752 × 109
Image 7422.968219.479521.32551.8752 × 10101.6944 × 10103.6049 × 109
631.252729.012130.43131.8752 × 10101.6944 × 10103.6049 × 109
838.234435.317737.66001.8752 × 10101.6944 × 10103.6049 × 109
1045.578642.021943.98851.8752 × 10101.6944 × 10103.6049 × 109
Image 8422.580422.663221.89421.1475 × 10111.0766 × 10115.5300 × 1010
635.565633.895534.65981.1475 × 10111.0766 × 10115.5300 × 1010
837.986537.904138.17151.1475 × 10111.0766 × 10115.5300 × 1010
1045.705544.277045.00631.1475 × 10111.0766 × 10115.5300 × 1010
Image 9479.793279.244079.24722.1651 × 10113.3482 × 10113.7903 × 1011
622.325822.053221.73752.1651 × 10113.3482 × 10113.7903 × 1011
831.495429.673430.13722.1651 × 10113.3482 × 10113.7903 × 1011
1039.066937.702337.64572.1651 × 10113.3482 × 10113.7903 × 1011
Average 42.7288841.80941.99091.83× 10112.59× 10112.71× 1011
Table 9. Comparison of PSNR computed by SAMFO-TH, MVO, WOA, FPA, SCA, ACO, PSO, ABC, and MFO with the proposed model using Otsu's method with N = 4, 6, 8, and 10.
Image | N | SAMFO-TH | MVO | WOA | FPA | SCA | ACO | PSO | ABC | MFO | Proposed
Image 1416.386016.440616.430816.119815.951816.518815.947115.750416.350919.6804
618.950318.646418.645317.051418.887116.286317.677918.840518.660022.3901
822.106221.626123.148419.191320.158224.353718.376222.101121.619223.9987
1023.770523.327323.323323.542622.820722.557722.187922.620222.889128.0707
Image 2416.623716.891816.750215.517716.821615.293417.357616.636716.623718.8124
619.929719.912220.188018.986721.296618.314920.838718.946119.902824.6275
822.380822.234122.132224.799523.155721.809820.550422.437322.237924.8692
1025.544822.234124.660326.089825.369625.832120.553323.607824.638626.0680
Image 3416.418416.106016.410113.953917.004618.152116.739016.303216.137020.5265
619.982819.577019.619519.100915.452321.333419.619819.544119.553024.4963
822.739522.005021.937319.514014.959321.974820.740816.194722.166528.0946
1024.210923.694223.670820.815923.359521.994526.262319.074823.453427.7046
Image 4421.714421.714021.751721.271721.711123.548617.444821.689221.714026.2798
624.936524.949924.971123.855823.248722.791923.257124.516924.923429.6004
828.278127.841227.817225.097726.499426.463123.071726.319427.468232.1619
1029.747929.716328.879226.581919.585726.481828.191127.084329.354634.1414
Image 5414.836314.831114.839312.327514.902413.550113.371614.350714.678219.5873
617.731717.695517.578316.383316.403319.707317.958420.142117.726922.7009
820.270820.032820.043219.231321.879920.389321.862420.574919.795124.5320
1021.983322.349822.953220.259021.693918.939722.899519.101221.203026.2652
Image 6417.319617.401917.384516.66888.159412.702413.848314.623217.319623.3272
622.030321.742721.305521.33268.382017.878023.820221.186921.415223.5649
824.629823.559224.215123.971715.832724.425524.354010.352624.238827.4039
1027.615530.028226.733828.180115.392926.211826.783915.146026.962030.1257
Image 7417.040617.044717.040915.623718.926826.605126.973116.755517.010426.8075
619.39852.77062.770617.774616.351923.111222.432813.535819.365828.9907
822.72312.770620.758327.012023.544729.194221.368222.177821.932232.2855
1027.85912.770622.66942.770628.754630.013812.354325.48572.817528.8837
Image 8412.355312.412612.355712.477114.319610.952615.763019.330512.089321.3351
617.828617.818017.860518.348417.932116.574721.336718.199617.756026.3956
821.22913.471420.467620.474918.450320.735421.314021.645721.126828.9553
1024.09333.471422.672123.473622.711416.438624.322016.614223.856231.0910
Image 9415.856115.693815.834014.912915.765014.749415.113515.759715.655021.0972
619.406019.069418.971219.274016.086016.583318.637821.294118.862426.3328
822.908922.409321.934420.469019.473520.635822.664223.202721.829230.3863
1025.225724.145724.680222.109422.332221.051822.753125.718523.743632.6120
Average 21.279518.28920.372319.571218.988220.671020.520719.635120.196526.2278
Table 10. Comparison of MSE computed by SAMFO-TH, MVO, WOA, FPA, SCA, ACO, PSO, ABC, and MFO with the proposed model using Otsu’s method with N = 4, 6, 8, and 10.
Image | N | SAMFO-TH | MVO | WOA | FPA | SCA | ACO | PSO | ABC | MFO | Proposed
Image 141494.511475.821479.161588.941651.681449.461653.481730.051506.67699.907
6828.028888.056888.2751282.26840.1691529.141109.97874.585885.276375.0338
8400.368447.168314.952783.336622.564238.619945.053400.843447.880258.9467
10272.918302.235302.520287.620339.636360.835392.904355.687334.323101.3936
Image 241414.801330.121374.241825.261351.811922.021194.931410.621414.80854.7525
6660.860663.536622.700821.135482.416958.506536.061828.859664.974224.0425
8375.835388.752397.975215.342314.418428.643572.847354.681388.413211.9143
10181.383388.752222.356159.992188.852169.775572.463283.335223.471160.7979
Image 341483.351594.031486.212616.301296.09995.0991377.871523.201582.63576.0104
6652.823716.768709.797799.8111825.94478.348709.746722.213720.747230.9141
8346.045409.811416.244727.2402075.66412.666548.2781561.85394.849100.8371
10246.600277.754279.254538.876300.003410.796153.761804.646293.589110.3114
Image 44438.168438.205434.423485.192438.496287.2211171.15440.713438.205153.1441
6208.656208.012206.998267.610307.761341.890307.164229.82209.28671.29193
896.6658106.895107.489201.050145.593146.814320.563151.750116.48339.52668
1068.911969.414784.1704142.853715.340146.18598.6201127.24175.442425.05765
Image 542135.332137.822133.853840.842103.042871.262991.752387.922214.41715.073
61096.231105.401135.721495.411488.58695.5771040.57629.3241097.59349.1326
8610.943645.363643.810776.151421.786594.500423.484569.622681.670229.0237
10411.860378.529329.429612.604440.239830.066333.524799.767492.920153.6598
Image 641205.401182.731187.551400.219934.323490.102680.72242.711205.47302.2459
6407.424435.319481.429478.4279438.041059.96269.818494.758469.419286.1478
8223.925286.523246.359260.5611697.58234.709238.6065995.41245.019118.2198
10112.59864.6047137.94298.87101878.42155.562136.3611988.32130.88263.16992
Image 741285.301284.141285.351781.25832.535142.093130.5481372.621291.32135.6221
6746.8423435.823435.821085.561506.27317.659371.3672880.74753.74282.03711
8350.5203435.82546.068129.383287.47978.2817474.529380.25621.932238.41761
10105.1633435.82351.6713435.8286.620364.81873781.45183.8683398.8184.08342
Image 843780.543731.03780.283676.022405.195221.881725.08758.6333786.56478.1568
61072.161074.71064.25951.1361046.801430.92477.979984.2881080.15149.1147
8489.9742923.71583.880582.888929.082548.963480.488445.158489.97482.70854
10253.3662923.71351.453292.227348.2901476.44240.3721418.05253.36650.58018
Image 941688.441752.751697.012097.921724.252178.421591.211762.361768.52505.0802
6745.555805.642824.069769.1971601.331428.13889.810482.687844.971151.2866
8332.804373.378416.522583.683734.065561.690352.099311.036426.74159.49088
10195.214250.330221.338400.074380.069510.392344.961174.271274.61335.6353
Average 733.87521149.1232838.34891041.41801449.4559949.0955851.09901057.2747850.6977229.5213
Table 11. Comparison of SSIM computed by SAMFO-TH, MVO, WOA, FPA, SCA, ACO, PSO, ABC, and MFO with the proposed model using Otsu’s method with N = 4, 6, 8, and 10.
Image | N | SAMFO-TH | MVO | WOA | FPA | SCA | ACO | PSO | ABC | MFO | Proposed
Image 140.93940.93950.93950.92390.93610.92840.93650.93300.93930.8898
60.96960.96690.96610.94860.95680.96240.96630.96720.96880.9755
80.98470.98350.98430.98360.98220.96680.97840.97280.98170.9857
100.99280.98830.98780.98980.98590.98820.98650.98770.98740.9954
Image 240.94720.94620.94690.93220.93110.93730.94650.91050.94660.9358
60.97380.97370.97330.96690.97070.96770.98180.95690.97440.9748
80.98440.98370.98360.98010.97790.98230.98250.98790.98370.9844
100.99120.99010.98930.98730.98490.98980.98650.99100.99100.9955
Image 340.95590.94610.94690.92890.94630.94300.94580.94740.94810.9599
60.97810.97660.97670.97000.95500.97280.97710.97440.97640.9689
80.98710.98560.98520.98430.96420.98650.97760.97070.98650.9879
100.99070.98930.98900.98320.98970.97440.98900.97480.98900.9911
Image 440.98570.98570.98580.98390.98600.97740.97800.98590.98580.9861
60.99380.99370.99350.99060.99190.99070.99250.99370.99380.9942
80.99680.99670.99670.99480.99450.99390.99360.99580.99670.9971
100.99780.99760.99740.99500.98380.99580.99660.99670.99770.9979
Image 540.94160.94080.94140.93290.94530.93140.93900.92830.94100.95187
60.97830.97000.96940.97020.96020.97230.97500.97090.96970.98109
80.98650.98250.98280.97920.98260.97980.98370.98540.98260.9877
100.98920.98870.98890.98140.98390.97990.98900.98320.98770.9900
Image 640.95400.95390.95420.92580.86340.95040.92230.95530.95380.9489
60.98420.97950.97920.97840.90620.94270.98170.97860.98210.9852
80.99130.99110.99100.98000.96460.98740.99080.92640.99140.9925
100.99750.99680.99580.99460.96960.99280.99350.97600.99590.9988
Image 740.98870.98050.98050.96110.98660.98020.97960.98650.98020.98898
60.98900.79630.79640.99100.98110.99390.98270.97240.98900.9945
80.99530.79870.99230.99280.99460.99360.99260.98100.99360.9968
100.99690.64640.99490.79930.98110.99580.97940.99500.80290.9978
Image 840.93710.93710.93720.96560.95600.93050.93510.92260.93690.9487
60.98420.98040.98040.98020.97870.96920.98080.98550.98060.9845
80.99120.81350.98990.95970.98750.99100.98860.99030.99060.9925
100.99520.81560.99390.98600.99210.98740.99350.98590.99490.9958
Image 940.95170.95110.95110.94770.95080.92920.94720.95060.95140.9625
60.98100.97930.97890.96740.96390.97400.96720.98070.97910.9811
80.98940.98690.98870.98550.97760.98600.98180.98700.98700.9985
100.99520.99470.99410.99040.98630.99050.99010.99580.99390.8898
Average 0.98010.94790.97280.96700.96800.9720.97520.97170.97300.9803
Table 12. Comparison of FSIM computed by SAMFO-TH, MVO, WOA, FPA, SCA, ACO, PSO, ABC, and MFO with the proposed model using Otsu’s method with N = 4, 6, 8, and 10.
Image | N | SAMFO-TH | MVO | WOA | FPA | SCA | ACO | PSO | ABC | MFO | Proposed
Image 140.69640.69880.69910.68180.69450.68820.67890.64680.69600.7303
60.78980.78000.77720.73720.72390.73160.77790.78550.77820.8091
80.83570.83280.83320.82450.79110.76700.80180.71810.82990.8446
100.87050.87040.86880.84310.77970.81320.85120.79510.86670.8934
Image 240.75640.74570.75440.73590.66610.73260.74400.70390.75250.8099
60.82010.81650.81980.79290.77010.80250.80820.75140.81740.8706
80.86550.86330.86090.87850.81180.82410.85050.84140.85920.8579
100.90220.89770.89500.87920.90140.87880.86150.87580.90030.8443
Image 340.73090.73030.73680.74510.74120.71120.72450.73590.73120.8224
60.79730.79150.79510.77130.76220.77560.79120.78950.79220.8786
80.84940.84220.84270.82360.77050.80790.80350.80340.83370.9181
100.87620.86920.86900.83110.84090.80210.85620.80700.87280.9189
Image 440.82140.82120.82120.81120.82050.78170.78410.82150.82120.8430
60.89100.89140.89410.87000.86750.84620.87120.88520.89260.9124
80.93180.93050.93150.90190.90310.89090.87750.91700.92860.9318
100.95000.94830.94640.90350.81120.91840.92710.92690.94790.9446
Image 540.74870.74500.74740.71230.69310.70530.72600.65470.74710.8122
60.83290.82980.83270.81140.73230.79350.81480.83400.83250.8663
80.87880.87730.87340.86700.81560.84380.86760.86770.87480.8961
100.90270.90170.90610.84910.83790.85480.88700.85690.89920.9160
Image 640.71460.71180.71340.68940.63110.65500.66510.69360.71030.7740
60.80610.80160.79560.79540.71130.69850.76670.75080.80000.7830
80.85670.85290.85400.80850.67700.79530.80090.74030.85600.8376
100.91170.89880.89930.83540.70200.82830.83840.71610.89890.8710
Image 740.84150.84350.84390.75430.83130.86700.81200.86160.83830.8982
60.9093NaNNaN0.88880.83260.89530.8560NaN0.90750.9455
80.9369NaN0.93110.87000.90380.91870.85220.86360.93670.9614
100.9559NaN0.9486NaN0.85310.93070.89820.8894NaN0.9462
Image 840.81250.81730.81380.87770.76300.78390.80350.80730.81110.8636
60.89680.89580.89650.91270.88900.86000.89280.87890.89620.9294
80.9379NaN0.93430.86370.86320.93290.91640.93100.93460.9545
100.9594NaN0.95610.91240.94000.86800.95380.91660.95740.9683
Image 940.86320.85010.85110.84760.84780.81330.84110.85150.85620.8645
60.92750.92190.91790.89920.88400.89150.88270.92380.92630.9125
80.94910.94400.94950.93180.85750.92090.92250.94370.94210.9375
100.96810.96530.96520.95720.91990.95440.94070.97570.96280.9775
Average 0.86090.8440.85640.83180.80110.82170.83180.82170.85450.8818
Table 13. Comparison of PRI computed by SAMFO-TH, MVO, WOA, FPA, SCA, ACO, PSO, ABC, and MFO with the proposed model using Otsu's method with N = 4, 6, 8, and 10.
Image | N | SAMFO-TH | MVO | WOA | FPA | SCA | ACO | PSO | ABC | MFO | Proposed
Image 140.69450.69410.69440.65020.68770.68050.67830.55000.69430.6854
60.80130.79930.79760.75460.71430.68890.75690.80080.79960.8099
80.84830.84710.83850.82530.74700.76600.79710.59640.84510.8486
100.88710.88100.87990.83450.69860.77430.81920.75710.87920.8945
Image 240.66030.65980.65990.62200.36510.56530.59180.50770.65970.6877
60.79050.78980.78940.74730.65060.75300.68180.68180.78750.7819
80.84100.84150.84290.79680.77970.76740.78190.74910.84040.8519
100.87550.87350.87450.81880.82290.81240.77960.81070.87540.8676
Image 340.56910.55930.56050.43600.54580.49900.55310.56610.55980.5488
60.74140.73770.73780.68980.50480.71040.70870.64020.73720.6587
80.82800.81250.80650.76470.65540.78170.72460.70880.81260.6813
100.86630.85830.85070.77720.85740.69770.80030.54420.84770.7494
Image 440.68760.67550.67550.67550.66570.64340.63850.67300.67460.6795
60.78550.78100.78180.76150.72810.71520.72610.76460.77940.782
80.83740.83030.82330.80770.78480.79120.72470.79690.81990.8216
100.86940.85620.86560.82310.69770.83570.82280.84380.86820.8708
Image 540.77960.76950.77420.73700.68910.75140.74990.64130.77350.7612
60.85730.85380.85040.83200.70430.80330.82650.84800.84990.8447
80.89000.88790.89000.85930.82050.85120.87010.87720.88870.8846
100.91640.91340.90760.87990.83230.87800.88630.87530.91120.9084
Image 640.75860.74040.74880.74760.51640.62190.73020.54370.73860.7496
60.82520.82270.81120.81590.56500.75370.73640.74780.82150.8145
80.86430.86280.85770.83510.46080.76920.79230.72730.86450.8353
100.88800.88290.88620.79440.71840.80710.81990.55120.88410.8742
Image 740.61980.61980.61910.51380.28370.40300.38940.38040.61880.6266
60.75190.51130.51150.57820.51910.45730.46520.28840.75130.7895
80.83570.56410.83000.52350.53600.58010.57920.36820.82830.8145
100.85680.31710.84700.46660.53640.59000.52470.53190.57260.8836
Image 840.74940.75590.75020.69770.55010.75730.71080.68690.74940.7436
60.82420.82150.82270.79830.74350.70450.77080.71650.82160.8256
80.86860.58260.86820.82000.70620.78790.82220.81910.86690.8674
100.89700.59950.89300.82400.80850.67960.84160.83970.89180.8919
Image 940.76160.74700.75470.74690.74400.70470.73660.75550.76080.75477
60.83120.82430.83030.80410.79470.78140.79960.81570.83120.8164
80.87390.86270.87360.84520.67070.83260.82610.85450.86760.8612
100.89340.89090.88820.86290.79960.83570.86640.86820.88210.8995
Average 0.80900.7590.79700.74350.66400.71750.73130.68680.79590.7982
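The PRI scores above reward pixel pairs on which two segmentations agree: pairs given the same label in both, plus pairs given different labels in both. As a point of reference, a minimal Rand-index computation over two label maps can be sketched as below; the probabilistic Rand index reported in Table 13 averages this quantity over a set of ground-truth segmentations, so this sketch covers only the single-ground-truth case.

```python
import numpy as np

def rand_index(seg_a, seg_b):
    """Fraction of pixel pairs on which two label maps agree (1.0 = identical partitions)."""
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    n = a.size
    # Contingency table of label co-occurrences
    _, ai = np.unique(a, return_inverse=True)
    _, bi = np.unique(b, return_inverse=True)
    cont = np.zeros((ai.max() + 1, bi.max() + 1))
    np.add.at(cont, (ai, bi), 1)
    comb2 = lambda x: x * (x - 1) / 2.0          # number of pairs within a count
    same_both = comb2(cont).sum()                # pairs co-clustered in both maps
    same_a = comb2(cont.sum(axis=1)).sum()       # pairs co-clustered in A
    same_b = comb2(cont.sum(axis=0)).sum()       # pairs co-clustered in B
    total = comb2(n)
    # agreements = same-same pairs + different-different pairs
    return (total + 2 * same_both - same_a - same_b) / total
```

Identical partitions score 1.0, and lower values indicate weaker agreement, which matches the "higher is better" reading of Table 13.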
Table 14. Comparison of VOI computed by SAMFO-TH, MVO, WOA, FPA, SCA, ACO, PSO, ABC, and MFO with the proposed model using Otsu’s method with N = 4, 6, 8, and 10.

| Image | N | SAMFO-TH | MVO | WOA | FPA | SCA | ACO | PSO | ABC | MFO | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Image 1 | 4 | 5.1597 | 5.1735 | 5.1635 | 5.2914 | 5.1898 | 5.1906 | 5.1953 | 5.5639 | 5.1738 | 5.2185 |
| | 6 | 4.6087 | 4.6091 | 4.6208 | 4.7486 | 4.9652 | 5.0643 | 4.8396 | 4.6954 | 4.6196 | 4.5787 |
| | 8 | 4.2244 | 4.2283 | 4.2972 | 4.3304 | 4.6424 | 4.7098 | 4.5490 | 5.1978 | 4.2383 | 4.2931 |
| | 10 | 3.8522 | 3.8926 | 0.8799 | 4.1881 | 4.9009 | 4.5518 | 4.3579 | 4.5962 | 3.9094 | 4.1352 |
| Image 2 | 4 | 5.2490 | 5.2678 | 5.2622 | 5.4100 | 6.0261 | 5.5241 | 5.4267 | 5.6839 | 5.2694 | 5.1835 |
| | 6 | 4.6131 | 4.6186 | 4.6180 | 4.7996 | 5.2373 | 4.8048 | 4.9939 | 5.1233 | 4.6389 | 4.4032 |
| | 8 | 4.2176 | 4.2241 | 4.2256 | 4.4096 | 4.5629 | 4.6096 | 4.5286 | 4.6637 | 4.2457 | 4.1839 |
| | 10 | 3.9058 | 3.9201 | 3.9192 | 4.2513 | 4.2760 | 4.2297 | 4.4573 | 4.2364 | 3.9104 | 4.1574 |
| Image 3 | 4 | 5.0987 | 5.1648 | 5.1315 | 5.4153 | 5.2198 | 5.3013 | 5.1339 | 5.1087 | 5.1476 | 5.0607 |
| | 6 | 4.4624 | 4.5006 | 4.5033 | 4.6072 | 5.1735 | 4.4920 | 4.5557 | 4.7943 | 4.5236 | 4.355 |
| | 8 | 3.9634 | 4.0487 | 4.0924 | 4.2768 | 4.6220 | 4.1960 | 4.4160 | 4.4206 | 4.0406 | 4.3397 |
| | 10 | 3.6484 | 3.7003 | 3.7474 | 4.1043 | 3.7382 | 4.4316 | 3.9850 | 4.8480 | 3.7688 | 4.0055 |
| Image 4 | 4 | 4.9652 | 5.0588 | 5.0588 | 5.0588 | 5.0924 | 5.1088 | 5.1571 | 5.0646 | 5.0620 | 4.81154 |
| | 6 | 4.4858 | 4.5102 | 4.4942 | 4.5252 | 4.7312 | 4.7436 | 4.7174 | 4.5790 | 4.5166 | 4.2371 |
| | 8 | 4.0900 | 4.1363 | 4.1466 | 4.2513 | 4.4295 | 4.3004 | 4.6372 | 4.2766 | 4.1893 | 4.1069 |
| | 10 | 3.7692 | 3.8670 | 3.8496 | 4.1080 | 4.8438 | 3.9824 | 4.0803 | 3.9572 | 3.7997 | 3.6062 |
| Image 5 | 4 | 5.1647 | 5.2732 | 5.2170 | 5.3375 | 5.6024 | 5.2917 | 5.2291 | 5.5993 | 5.2183 | 5.1761 |
| | 6 | 4.6169 | 4.6627 | 4.6878 | 4.7562 | 5.3031 | 4.9591 | 4.8554 | 4.6863 | 4.7021 | 4.4915 |
| | 8 | 4.2945 | 4.3123 | 4.2890 | 4.5386 | 4.7541 | 4.5276 | 4.4113 | 4.3794 | 4.3081 | 4.2279 |
| | 10 | 3.9536 | 3.9790 | 4.0680 | 4.2660 | 4.6134 | 4.2640 | 4.2168 | 4.2744 | 4.0125 | 3.8988 |
| Image 6 | 4 | 4.9121 | 5.0424 | 4.9890 | 4.9838 | 5.6897 | 5.4570 | 5.0618 | 5.5668 | 5.0672 | 5.1982 |
| | 6 | 4.4774 | 4.5102 | 4.5846 | 4.5189 | 5.3167 | 4.7687 | 4.8922 | 4.8758 | 4.4973 | 5.0782 |
| | 8 | 4.1121 | 4.1467 | 4.1851 | 4.3303 | 5.6751 | 4.7612 | 4.5974 | 4.7867 | 4.1232 | 4.8123 |
| | 10 | 3.8581 | 3.8961 | 3.8785 | 4.4241 | 5.0560 | 4.4288 | 4.3883 | 5.4037 | 3.8777 | 4.0184 |
| Image 7 | 4 | 4.0540 | 4.0541 | 4.0750 | 4.3182 | 4.9103 | 4.5774 | 4.6546 | 4.6716 | 4.0858 | 4.0738 |
| | 6 | 3.5037 | 4.2038 | 4.1898 | 3.9669 | 4.2314 | 4.3535 | 4.3375 | 4.8260 | 3.5100 | 3.3603 |
| | 8 | 3.0216 | 3.9017 | 3.0608 | 4.0790 | 4.0959 | 3.9632 | 4.0252 | 4.5635 | 3.0545 | 3.3305 |
| | 10 | 2.7955 | 4.6133 | 2.8173 | 4.2430 | 4.1140 | 3.9518 | 4.0747 | 4.0944 | 3.7790 | 3.1255 |
| Image 8 | 4 | 5.2231 | 5.1586 | 5.2130 | 5.3258 | 5.7569 | 5.1468 | 5.3772 | 5.4878 | 5.2255 | 4.6588 |
| | 6 | 4.6964 | 4.7371 | 4.7190 | 4.7938 | 5.1083 | 5.2071 | 4.9315 | 5.1915 | 4.7365 | 4.1033 |
| | 8 | 4.3131 | 5.2724 | 4.3166 | 4.5875 | 5.0823 | 4.8286 | 4.5802 | 4.6438 | 4.3328 | 4.2363 |
| | 10 | 3.9643 | 5.0740 | 4.0037 | 4.4949 | 4.6658 | 4.8881 | 4.4133 | 4.4484 | 4.0414 | 4.0374 |
| Image 9 | 4 | 5.2450 | 5.3662 | 5.3044 | 5.3442 | 5.3740 | 5.4614 | 5.3595 | 5.3094 | 5.2614 | 5.1524 |
| | 6 | 4.7590 | 4.8303 | 4.7883 | 4.8910 | 5.0262 | 5.0121 | 4.9621 | 4.8656 | 4.7724 | 4.632 |
| | 8 | 4.3492 | 4.4895 | 4.3699 | 4.5726 | 5.4184 | 4.6716 | 4.7405 | 4.5477 | 4.4298 | 4.3059 |
| | 10 | 4.1460 | 4.1786 | 4.1995 | 4.3524 | 4.8398 | 4.5492 | 4.3874 | 4.3532 | 4.2530 | 4.0671 |
| Average | | 4.32705 | 4.51730 | 4.3046 | 4.608 | 4.9523 | 4.7308 | 4.6813 | 4.8162 | 4.3983 | 4.3516 |
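VOI measures the information lost and gained when moving between two segmentations, VOI(A, B) = H(A) + H(B) − 2I(A; B); identical label maps score 0 and larger values mean greater divergence, so, unlike PRI, lower is better in Table 14. A minimal sketch for two label maps (base-2 logarithms assumed):

```python
import numpy as np

def variation_of_information(seg_a, seg_b):
    """VOI between two label maps: H(A) + H(B) - 2*I(A;B), in bits."""
    a, b = np.asarray(seg_a).ravel(), np.asarray(seg_b).ravel()
    n = a.size
    # Joint distribution over co-occurring label pairs
    pairs, counts = np.unique(np.stack([a, b]), axis=1, return_counts=True)
    p_ab = counts / n
    labels_a, ca = np.unique(a, return_counts=True)
    labels_b, cb = np.unique(b, return_counts=True)
    p_a, p_b = ca / n, cb / n
    h_a = -np.sum(p_a * np.log2(p_a))            # marginal entropies
    h_b = -np.sum(p_b * np.log2(p_b))
    # Mutual information from the joint and marginal distributions
    pa = dict(zip(labels_a, p_a))
    pb = dict(zip(labels_b, p_b))
    mi = sum(p * np.log2(p / (pa[i] * pb[j]))
             for (i, j), p in zip(pairs.T, p_ab))
    return h_a + h_b - 2 * mi
```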
Table 15. Comparison of MEAN computed by SAMFO-TH, MVO, and WOA using Otsu’s method with N = 4, 6, 8, and 10 for red, green, and blue components.

| Image | N | SAMFO-TH (R) | SAMFO-TH (G) | SAMFO-TH (B) | MVO (R) | MVO (G) | MVO (B) | WOA (R) | WOA (G) | WOA (B) |
|---|---|---|---|---|---|---|---|---|---|---|
| Image 1 | 4 | 1.6087 × 10³ | 994.5160 | 845.7528 | 1.6087 × 10³ | 994.5160 | 845.7528 | 1.6087 × 10³ | 994.5160 | 845.7528 |
| | 6 | 1.6950 × 10³ | 1.0597 × 10³ | 902.2855 | 1.6955 × 10³ | 1.0599 × 10³ | 902.4505 | 1.6955 × 10³ | 1.0599 × 10³ | 901.7954 |
| | 8 | 1.7279 × 10³ | 1.0839 × 10³ | 922.7777 | 1.7283 × 10³ | 1.0844 × 10³ | 922.2913 | 1.7284 × 10³ | 1.0838 × 10³ | 921.2086 |
| | 10 | 1.7435 × 10³ | 1.0954 × 10³ | 932.5896 | 1.7443 × 10³ | 1.0961 × 10³ | 933.4105 | 1.7436 × 10³ | 1.0956 × 10³ | 932.3066 |
| Image 2 | 4 | 1.6719 × 10³ | 1.5099 × 10³ | 1.5454 × 10³ | 1.6719 × 10³ | 1.5099 × 10³ | 1.5454 × 10³ | 1.6719 × 10³ | 1.5099 × 10³ | 1.5454 × 10³ |
| | 6 | 1.7701 × 10³ | 1.5920 × 10³ | 1.6191 × 10³ | 1.7702 × 10³ | 1.5921 × 10³ | 1.6192 × 10³ | 1.7702 × 10³ | 1.5921 × 10³ | 1.6192 × 10³ |
| | 8 | 1.8055 × 10³ | 1.6233 × 10³ | 1.6442 × 10³ | 1.8062 × 10³ | 1.6237 × 10³ | 1.6444 × 10³ | 1.8063 × 10³ | 1.6229 × 10³ | 1.6440 × 10³ |
| | 10 | 1.8209 × 10³ | 1.6365 × 10³ | 1.6558 × 10³ | 1.8216 × 10³ | 1.6374 × 10³ | 1.6566 × 10³ | 1.8218 × 10³ | 1.6374 × 10³ | 1.6556 × 10³ |
| Image 3 | 4 | 4.2892 × 10³ | 3.0362 × 10³ | 2.5487 × 10³ | 4.2892 × 10³ | 3.0362 × 10³ | 2.5487 × 10³ | 4.2892 × 10³ | 3.0362 × 10³ | 2.5487 × 10³ |
| | 6 | 4.4047 × 10³ | 3.1388 × 10³ | 2.6482 × 10³ | 4.4049 × 10³ | 3.1390 × 10³ | 2.6483 × 10³ | 4.4049 × 10³ | 3.1390 × 10³ | 2.6483 × 10³ |
| | 8 | 4.4408 × 10³ | 3.1738 × 10³ | 2.6767 × 10³ | 4.4411 × 10³ | 3.1743 × 10³ | 2.6774 × 10³ | 4.4411 × 10³ | 3.1743 × 10³ | 2.6775 × 10³ |
| | 10 | 4.4564 × 10³ | 3.1876 × 10³ | 2.6898 × 10³ | 4.4569 × 10³ | 3.1882 × 10³ | 2.6903 × 10³ | 4.4570 × 10³ | 3.1883 × 10³ | 2.6903 × 10³ |
| Image 4 | 4 | 1.4749 × 10³ | 1.8297 × 10³ | 1.5474 × 10³ | 1.4749 × 10³ | 1.8297 × 10³ | 1.5474 × 10³ | 1.4749 × 10³ | 1.8297 × 10³ | 1.5474 × 10³ |
| | 6 | 1.5234 × 10³ | 1.8925 × 10³ | 1.5981 × 10³ | 1.5223 × 10³ | 1.8927 × 10³ | 1.5982 × 10³ | 1.5235 × 10³ | 1.8927 × 10³ | 1.5982 × 10³ |
| | 8 | 1.5453 × 10³ | 1.9177 × 10³ | 1.6163 × 10³ | 1.5458 × 10³ | 1.9188 × 10³ | 1.6167 × 10³ | 1.5444 × 10³ | 1.9186 × 10³ | 1.6164 × 10³ |
| | 10 | 1.5542 × 10³ | 1.9285 × 10³ | 1.6255 × 10³ | 1.5546 × 10³ | 1.9293 × 10³ | 1.6257 × 10³ | 1.5552 × 10³ | 1.9300 × 10³ | 1.6259 × 10³ |
| Image 5 | 4 | 3.7536 × 10³ | 3.0007 × 10³ | 3.7919 × 10³ | 3.7536 × 10³ | 3.0007 × 10³ | 3.7919 × 10³ | 3.7536 × 10³ | 3.0007 × 10³ | 3.7919 × 10³ |
| | 6 | 3.9152 × 10³ | 3.1253 × 10³ | 3.9104 × 10³ | 3.9155 × 10³ | 3.1256 × 10³ | 3.9110 × 10³ | 3.9155 × 10³ | 3.1256 × 10³ | 3.9110 × 10³ |
| | 8 | 3.9619 × 10³ | 3.1642 × 10³ | 3.9501 × 10³ | 3.9627 × 10³ | 3.1655 × 10³ | 3.9518 × 10³ | 3.9623 × 10³ | 3.1656 × 10³ | 3.9502 × 10³ |
| | 10 | 3.9846 × 10³ | 3.1814 × 10³ | 3.9680 × 10³ | 3.9866 × 10³ | 3.1836 × 10³ | 3.9701 × 10³ | 3.9861 × 10³ | 3.1836 × 10³ | 3.9678 × 10³ |
| Image 6 | 4 | 1.8662 × 10³ | 860.0400 | 601.3654 | 1.8662 × 10³ | 860.0424 | 601.3678 | 1.8662 × 10³ | 860.0424 | 601.3678 |
| | 6 | 1.9412 × 10³ | 913.4510 | 634.9386 | 1.9415 × 10³ | 914.0838 | 635.1728 | 1.9415 × 10³ | 914.1091 | 632.7640 |
| | 8 | 1.9734 × 10³ | 932.0683 | 648.1786 | 1.9741 × 10³ | 933.0491 | 649.1121 | 1.9741 × 10³ | 933.2065 | 648.1710 |
| | 10 | 1.9874 × 10³ | 941.6650 | 654.5853 | 1.9892 × 10³ | 943.0225 | 655.7298 | 1.9899 × 10³ | 943.2505 | 654.2640 |
| Image 7 | 4 | 1.3407 × 10³ | 243.8439 | 40.0194 | 1.3407 × 10³ | 243.8439 | 39.9051 | 1.3407 × 10³ | 243.8395 | 39.9050 |
| | 6 | 1.3917 × 10³ | 262.7468 | 43.7509 | 1.3918 × 10³ | 263.0009 | 43.7351 | 1.3918 × 10³ | 262.5177 | 43.7559 |
| | 8 | 1.4132 × 10³ | 270.2212 | 45.0938 | 1.4139 × 10³ | 270.5621 | −Inf | 1.4136 × 10³ | 270.3070 | −Inf |
| | 10 | 1.4235 × 10³ | 273.0833 | 48.2657 | 1.4242 × 10³ | 274.2223 | −Inf | 1.4244 × 10³ | 274.3484 | 45.8379 |
| Image 8 | 4 | 3.1017 × 10³ | 2.2453 × 10³ | 683.3868 | 3.1017 × 10³ | 2.2453 × 10³ | 683.3868 | 3.1017 × 10³ | 2.2453 × 10³ | 683.3868 |
| | 6 | 3.2166 × 10³ | 2.3568 × 10³ | 717.0620 | 3.2170 × 10³ | 2.3571 × 10³ | 717.3555 | 3.2170 × 10³ | 2.3571 × 10³ | 717.3477 |
| | 8 | 3.2556 × 10³ | 2.3951 × 10³ | 730.8762 | 3.2566 × 10³ | 2.3958 × 10³ | 729.9960 | 3.2567 × 10³ | 2.3959 × 10³ | 730.0061 |
| | 10 | 3.2742 × 10³ | 2.4123 × 10³ | 735.7058 | 3.2761 × 10³ | 2.4129 × 10³ | −Inf | 3.2760 × 10³ | 2.4136 × 10³ | −Inf |
| Image 9 | 4 | 1.3756 × 10³ | 1.9760 × 10³ | 1.7498 × 10³ | 1.3756 × 10³ | 1.9760 × 10³ | 1.7498 × 10³ | 1.3732 × 10³ | 1.9760 × 10³ | 1.7498 × 10³ |
| | 6 | 1.4472 × 10³ | 2.0578 × 10³ | 1.8340 × 10³ | 1.4473 × 10³ | 2.0566 × 10³ | 1.8344 × 10³ | 1.4468 × 10³ | 2.0532 × 10³ | 1.8314 × 10³ |
| | 8 | 1.4808 × 10³ | 2.0862 × 10³ | 1.8657 × 10³ | 1.4813 × 10³ | 2.0870 × 10³ | 1.8652 × 10³ | 1.4799 × 10³ | 2.0874 × 10³ | 1.8642 × 10³ |
| | 10 | 1.4947 × 10³ | 2.1044 × 10³ | 1.8806 × 10³ | 1.4954 × 10³ | 2.1055 × 10³ | 1.8806 × 10³ | 1.4961 × 10³ | 2.1060 × 10³ | 1.8791 × 10³ |
| Average | | 1.71 × 10³ | 1.44 × 10³ | 6.15 × 10² | 1.58 × 10³ | 1.66 × 10³ | 1.06 × 10³ | 1.73 × 10³ | 1.47 × 10³ | 1.06 × 10³ |
Table 16. Comparison of MEAN computed by SAMFO-TH, MVO, and WOA using Kapur’s method with N = 4, 6, 8, and 10 for red, green, and blue components.

| Image | N | SAMFO-TH (R) | SAMFO-TH (G) | SAMFO-TH (B) | MVO (R) | MVO (G) | MVO (B) | WOA (R) | WOA (G) | WOA (B) |
|---|---|---|---|---|---|---|---|---|---|---|
| Image 1 | 4 | 18.6011 | 18.3224 | 18.1192 | 18.6013 | 18.3228 | 18.1249 | 18.6013 | 18.3228 | 18.1259 |
| | 6 | 23.7425 | 23.4112 | 23.3245 | 23.7519 | 23.4374 | 23.3455 | 23.7535 | 23.4374 | 23.3129 |
| | 8 | 28.4075 | 27.9424 | 27.9925 | 28.3981 | 27.9889 | 28.009 | 28.3251 | 27.9161 | 28.0052 |
| | 10 | 32.5752 | 31.9895 | 32.2486 | 32.5213 | 32.0989 | 32.3002 | 32.4842 | 32.0061 | 32.3188 |
| Image 2 | 4 | 17.5802 | 17.5424 | 18.5899 | 17.5805 | 17.5505 | 18.5933 | 17.5805 | 17.5505 | 18.5933 |
| | 6 | 22.5676 | 22.5665 | 23.8485 | 22.5912 | 22.5931 | 23.8607 | 22.5974 | 22.5936 | 23.858 |
| | 8 | 27.1253 | 27.0319 | 28.7393 | 27.0672 | 27.0603 | 28.7825 | 27.1326 | 27.0716 | 28.8099 |
| | 10 | 31.2837 | 31.1974 | 33.0202 | 31.2257 | 31.3003 | 33.0647 | 31.2572 | 31.3245 | 33.1572 |
| Image 3 | 4 | 17.6776 | 17.5996 | 17.3878 | 17.6777 | 17.6 | 17.3886 | 17.6777 | 17.6003 | 17.3885 |
| | 6 | 22.9482 | 22.6386 | 22.5821 | 22.9479 | 22.6474 | 22.6717 | 22.9421 | 22.6488 | 22.6716 |
| | 8 | 27.7786 | 27.2642 | 27.4267 | 27.7695 | 27.3503 | 27.4994 | 27.7393 | 27.3631 | 27.5042 |
| | 10 | 32.2057 | 31.542 | 31.7474 | 32.1717 | 31.6993 | 31.883 | 32.0405 | 31.6441 | 31.9212 |
| Image 4 | 4 | 17.5082 | 17.9799 | 17.4413 | 17.5028 | 17.9983 | 17.4419 | 17.5079 | 17.9982 | 17.4415 |
| | 6 | 22.4998 | 23.0284 | 22.2581 | 22.5099 | 23.035 | 22.282 | 22.4832 | 23.0501 | 22.282 |
| | 8 | 26.9272 | 27.5976 | 26.7533 | 26.9801 | 27.6366 | 26.8219 | 26.8891 | 27.6498 | 26.7813 |
| | 10 | 30.8946 | 31.7548 | 30.7424 | 30.8572 | 31.9019 | 30.6887 | 30.8505 | 31.8949 | 30.8114 |
| Image 5 | 4 | 18.053 | 17.5873 | 17.6037 | 18.0534 | 17.5877 | 17.6043 | 18.0534 | 17.5878 | 17.6043 |
| | 6 | 23.0011 | 22.3708 | 22.7447 | 23.0067 | 22.3848 | 22.7505 | 23.0076 | 22.3856 | 22.7519 |
| | 8 | 27.5962 | 26.6633 | 27.3121 | 27.5879 | 26.726 | 27.3687 | 27.5815 | 26.7322 | 27.3685 |
| | 10 | 31.8751 | 30.5385 | 31.4943 | 31.8596 | 30.5861 | 31.5435 | 31.8028 | 30.5624 | 31.5951 |
| Image 6 | 4 | 18.372 | 17.6817 | 16.5909 | 18.3723 | 17.6815 | 16.591 | 18.3724 | 17.6718 | 16.5849 |
| | 6 | 23.7047 | 22.9559 | 21.4418 | 23.707 | 22.9787 | 21.4523 | 23.6896 | 22.9848 | 21.4489 |
| | 8 | 28.4616 | 27.5295 | 25.6948 | 28.4571 | 27.6226 | 25.5179 | 28.4069 | 27.6628 | 25.6499 |
| | 10 | 32.5313 | 31.6233 | 29.2761 | 32.6353 | 31.6326 | 28.9661 | 32.4703 | 31.6589 | 29.3176 |
| Image 7 | 4 | 17.9999 | 16.0346 | 12.1942 | 17.9919 | 16.0339 | 12.0486 | 17.9972 | 16.0369 | 11.7758 |
| | 6 | 23.235 | 20.7147 | 15.5653 | 23.2065 | 20.7138 | 14.0047 | 23.1991 | 20.487 | 14.4237 |
| | 8 | 27.9628 | 24.821 | 17.944 | 27.9217 | 24.5978 | 16.2308 | 27.9 | 24.6795 | 16.8152 |
| | 10 | 32.0804 | 28.3688 | 19.5 | 31.9972 | 27.5907 | 17.9641 | 32.0429 | 27.9357 | 18.1042 |
| Image 8 | 4 | 18.5996 | 18.6285 | 15.9913 | 18.5996 | 18.6286 | 15.9914 | 18.5996 | 18.6287 | 15.9912 |
| | 6 | 23.9028 | 23.8893 | 20.3696 | 23.9016 | 23.8302 | 20.3211 | 23.8386 | 23.8394 | 20.1171 |
| | 8 | 28.623 | 28.5837 | 24.2063 | 28.6191 | 28.4635 | 23.7059 | 28.5621 | 28.576 | 23.7238 |
| | 10 | 32.9212 | 32.8709 | 27.55 | 32.8936 | 32.727 | 26.2619 | 32.785 | 32.9277 | 26.9726 |
| Image 9 | 4 | 17.8777 | 17.8204 | 18.2427 | 17.8772 | 17.821 | 18.2429 | 17.8778 | 17.8209 | 18.2428 |
| | 6 | 22.7729 | 22.6426 | 23.5334 | 22.6822 | 22.657 | 23.5311 | 22.685 | 22.6579 | 23.5242 |
| | 8 | 27.1134 | 26.9126 | 28.2759 | 27.0729 | 26.803 | 28.2644 | 27.1241 | 26.6663 | 28.1919 |
| | 10 | 31.3765 | 30.6957 | 32.4466 | 31.3039 | 30.5338 | 32.4963 | 31.3781 | 30.5634 | 32.4157 |
| Average | | 24.89989 | 24.12585 | 22.79991 | 24.86062 | 23.99844 | 22.47571 | 24.82708 | 24.03807 | 22.54371 |
Table 17. Average MEAN of fitness with Kapur’s and Otsu’s methods for MVO, WOA, PFA, SCA, ACO, PSO, ABC, MFO, SAMFO-TH, and the proposed approach over the nine images considered, with N = 4, 6, 8, and 10.

| Method | R (Kapur) | G (Kapur) | B (Kapur) | R (Otsu) | G (Otsu) | B (Otsu) |
|---|---|---|---|---|---|---|
| Proposed | 42.7288 | 41.8091 | 41.9909 | 1.83 × 10¹¹ | 2.59 × 10¹¹ | 2.71 × 10¹¹ |
| MVO | 24.8606 | 23.9984 | 22.4757 | 1.58 × 10³ | 1.66 × 10³ | 1.06 × 10³ |
| WOA | 24.8270 | 24.0380 | 22.5437 | 1.73 × 10³ | 1.47 × 10³ | 1.06 × 10³ |
| PFA | 24.6788 | 24.1296 | 22.8015 | 7.10 × 10² | 5.69 × 10² | 4.04 × 10² |
| SCA | 22.6041 | 21.2926 | 19.9449 | 9.95 × 10² | 2.09 × 10³ | 1.89 × 10³ |
| ACO | 28.8877 | 27.5038 | 25.4009 | 1.02 × 10³ | 1.38 × 10³ | 2.31 × 10³ |
| PSO | 23.5873 | 22.5467 | 20.8294 | 8.92 × 10² | 1.12 × 10³ | 1.25 × 10³ |
| ABC | 24.2798 | 23.3019 | 21.5508 | 1.47 × 10³ | 2.04 × 10³ | 1.86 × 10³ |
| MFO | 23.7660 | 22.7604 | 21.2234 | 1.47 × 10³ | 2.04 × 10³ | 1.86 × 10³ |
| SAMFO-TH | 24.8998 | 24.1258 | 22.7999 | 1.71 × 10³ | 1.44 × 10³ | 6.15 × 10² |
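For context on the two fitness columns: Kapur's criterion scores a threshold vector by the sum of the entropies of the classes it induces, while Otsu's scores it by the between-class variance, and both are maximized. A minimal histogram-based version of Kapur's objective is sketched below. Note that the proposed method evaluates its objective on the energy curve rather than the raw histogram, and natural logarithms are assumed here, so this is an illustration rather than the exact objective of Equation (14).

```python
import numpy as np

def kapur_fitness(hist, thresholds):
    """Sum of per-class entropies (Kapur's criterion) for a gray-level histogram."""
    p = np.asarray(hist, dtype=float)
    p = p / p.sum()                              # probability distribution
    total = 0.0
    bounds = [0, *sorted(thresholds), p.size]
    for lo, hi in zip(bounds[:-1], bounds[1:]):  # one slice per class
        w = p[lo:hi].sum()                       # class probability
        if w > 0:
            q = p[lo:hi][p[lo:hi] > 0] / w       # within-class distribution
            total += -(q * np.log(q)).sum()      # class entropy
    return total
```

Maximizing this sum favors thresholds whose classes are each internally as uniform as possible, which is why the optimizer searches for the threshold vector with the largest value.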
Table 18. Steps for implementation of the proposed method on a color image.

| Step | Operation |
|---|---|
| 1 | Read a color image I and separate it into I_R, I_G, and I_B. For an RGB image, c = 1, 2, 3; for a gray image, c = 1. |
| 2 | Obtain the energy curves E_R, E_G, and E_B for the RGB channels. |
| 3 | Calculate the probability distribution using Equation (3) and the histograms. |
| 4 | Initialize the parameters: Iter_max, Iter_local, δ, and N. |
| 5 | Initialize a population S_t^c of N random particles with k dimensions. |
| 6 | Find w_i^c and μ_i^c; evaluate S_t^c in the objective function f_Otsu or f_Kapur, depending on the thresholding method, to find threshold values for segmentation. |
| 7 | Compute the charge of each particle using Equation (18), and compute the total force vector with Equations (19) and (20). |
| 8 | Move the entire population S_t^c along the total force vector using Equation (21). |
| 9 | Apply the local search to the moved population and select the best elements of this search according to their objective-function values. |
| 10 | Increase the index t by 1. If t ≥ Iter_max or the stopping criteria are satisfied, the algorithm ends the iteration process and jumps to step 11; otherwise, jump to step 7. |
| 11 | Select the particle x_i^B with the best objective-function value under f_Otsu or f_Kapur from Equation (9) or Equation (14). |
| 12 | Apply the best threshold values contained in x_i^B to the input image I as per Equation (2). |
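The workflow in Table 18 can be condensed into code. The sketch below keeps the overall structure (split channels, build a distribution, search for thresholds that maximize f_Otsu, then label pixels) but, for brevity, substitutes plain random search for the EMO charge/force updates of steps 7-9 and an ordinary histogram for the energy curve of step 2, so it illustrates the pipeline rather than reproducing the paper's exact procedure.

```python
import numpy as np

def otsu_fitness(p, thresholds):
    """Between-class variance f_Otsu; p is a normalized 256-bin distribution."""
    levels = np.arange(p.size)
    mu_total = (levels * p).sum()
    var_between = 0.0
    for lo, hi in zip([0, *thresholds], [*thresholds, p.size]):
        w = p[lo:hi].sum()                         # class probability w_k
        if w > 0:
            mu = (levels[lo:hi] * p[lo:hi]).sum() / w
            var_between += w * (mu - mu_total) ** 2
    return var_between

def multilevel_threshold(channel, k, iters=200, seed=0):
    """Steps 3, 5-6, 10-12 in miniature for one channel, k thresholds."""
    rng = np.random.default_rng(seed)
    hist = np.bincount(channel.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                          # probability distribution
    best_t, best_f = None, -np.inf
    for _ in range(iters):                         # stand-in for the EMO loop
        t = np.sort(rng.choice(np.arange(1, 256), size=k, replace=False))
        f = otsu_fitness(p, list(t))
        if f > best_f:
            best_t, best_f = t, f
    # Step 12: map each pixel to the class its gray level falls into
    return best_t, np.digitize(channel, best_t)

# Step 1: split an RGB image and threshold each channel independently.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
results = [multilevel_threshold(img[..., c], k=3) for c in range(3)]
```

Replacing the random-search loop with the EMO moves of steps 7-9 (charges, force vectors, and the δ-bounded local search) recovers the full method; the rest of the scaffolding is unchanged.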
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Rangu, S.; Veramalla, R.; Salkuti, S.R.; Kalagadda, B. Efficient Approach to Color Image Segmentation Based on Multilevel Thresholding Using EMO Algorithm by Considering Spatial Contextual Information. J. Imaging 2023, 9, 74. https://doi.org/10.3390/jimaging9040074
