Article

MSWOA: A Mixed-Strategy-Based Improved Whale Optimization Algorithm for Multilevel Thresholding Image Segmentation

1 School of Computer Science, Hubei University of Technology, Wuhan 430068, China
2 CCCC Second Highway Consultants Co., Ltd., Wuhan 430118, China
3 School of Information Management, Hubei University of Economics, Wuhan 430205, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(12), 2698; https://doi.org/10.3390/electronics12122698
Submission received: 26 April 2023 / Revised: 6 June 2023 / Accepted: 11 June 2023 / Published: 16 June 2023
(This article belongs to the Section Artificial Intelligence)

Abstract

Multilevel thresholding image segmentation is one of the most widely used methods in the field of image segmentation. This paper proposes a multilevel thresholding image segmentation technique based on an improved whale optimization algorithm. The WOA has been applied to many complex optimization problems because of its excellent performance; however, it easily falls into local optima. Therefore, firstly, a mixed-strategy-based improved whale optimization algorithm (MSWOA) is proposed using the k-point initialization algorithm, the nonlinear convergence factor, and the adaptive weight coefficient to improve the optimization ability of the algorithm. Then, the MSWOA is combined with the Otsu method and Kapur entropy, respectively, to search for the optimal thresholds for multilevel thresholding gray image segmentation. The results of performance evaluation experiments on benchmark functions demonstrate that the MSWOA has higher search accuracy and faster convergence speed than the comparison algorithms and that it can effectively jump out of local optima. In addition, the image segmentation experiments on benchmark images show that the MSWOA–Kapur image segmentation technique can effectively and accurately search multilevel thresholds.

1. Introduction

With the rapid development of computer technology, computer vision has gradually formed its own scientific system. Computer vision usually involves the evaluation of images or videos, including image classification, object detection [1], image segmentation, image enhancement [2], and other subtasks. In recent years, deep learning technology has been applied to almost every field of computer vision. Researchers have proposed various network optimization models for different tasks and achieved a series of remarkable research results [3].
Image segmentation, one of the computer vision research hotspots, is a fundamental stage in image processing, visual analysis, and pattern recognition. At present, image segmentation has been applied in a variety of fields, including medical image processing and analysis [4], traffic images [5], remote sensing images [6], and satellite images [7]. Image segmentation makes an image easier to analyze by simplifying or changing the representation of it. The most classical method for image segmentation is binarization. Shaikh et al. [8] discussed the method of image segmentation using binarization. The numerous image segmentation methods [9] can be broadly categorized into thresholding segmentation, image edge detection, region growth-based segmentation, feature space-based clustering segmentation, and morphology-based image segmentation. Among these methods, thresholding segmentation has become the most widely used segmentation method in image segmentation due to its stable performance, simple algorithm, easy implementation, and high efficiency. It has been a hot research point in the field of image segmentation.
Histogram-based thresholding segmentation is one of the most studied image segmentation techniques. The most famous threshold selection criteria include the Otsu method [10] and Kapur entropy [11]. These segmentation methods usually use a specific objective function to find the optimal threshold via an analytic formula under certain conditions. Due to the discrete characteristics of image pixels, these objective-function-based multilevel thresholding methods are essentially solved using exhaustive search. However, as the number of thresholds grows, solving for them analytically suffers from a large search space, high computational complexity, and high time consumption. Therefore, it is challenging to employ the conventional exhaustive method for multilevel thresholding segmentation when the number of thresholds to be selected is large. How to increase real-time performance without lowering search accuracy is a hot issue in research on this kind of algorithm.
Swarm intelligence (SI)-based optimization algorithms have gained popularity in recent years as standard methods for multilevel thresholding image segmentation due to their ability to significantly increase the speed and stability of the optimization process. Segmentation methods based on swarm intelligence optimization algorithms often use unique computing techniques to improve algorithm performance, and the objective function is often the Otsu method or an entropy function such as Kapur, Tsallis, or Minimum Cross Entropy (MCE). Swarm intelligence optimization algorithms have become a popular research topic in optimization mainly because they can find a workable solution at a very low computational cost, which reduces operation time and improves solution accuracy. They have the advantages of simple and effective computation, fast operation speed, high precision, and suitability for large-scale parallel solutions. Common swarm intelligence optimization algorithms include particle swarm optimization (PSO) [12], the artificial bee colony algorithm (ABC) [13], the firefly algorithm (FA), the bat algorithm (BA) [14], moth–flame optimization (MFO) [15], cuckoo search (CS) [16], the bacterial foraging optimization algorithm (BFO) [17], gray wolf optimizer (GWO) [18], etc. These optimization algorithms have since been applied to the field of image segmentation. Wu Peng applied the firefly algorithm (FA) combined with maximum entropy for image segmentation; the experimental results demonstrate that the method can quickly and accurately find the optimal thresholds and improve accuracy and anti-noise performance. Chao Yuan et al. [19] proposed a multilevel thresholding image segmentation method based on the generalized inverse particle swarm optimization and gravity search hybrid algorithm (GOPSOGSA) by combining PSO and the gravitational search algorithm (GSA). 
The experimental results indicate that the proposed method is better than the GSA in segmentation accuracy but needs more computing time. Pare et al. [16] proposed ELR-CS (Egg Laying Radius) on the basis of cuckoo search (CS) and extended the energy curve of gray-scale images to color images. The experimental results prove that the color image segmentation method using the CS–Kapur energy curve can obtain higher segmentation accuracy. Prateek Agarwal [20] applied histograms based on bimodal and multimodal thresholds, together with the social spider algorithm (SSA), to the multilevel thresholding segmentation of gray-scale images, effectively improving computing time and realizing the optimization of the optimal threshold. Hao Gao et al. [13] applied a modified artificial bee colony algorithm to multilevel thresholding image segmentation to solve the problems of large computation and long computing time. A gray image segmentation method [21] using Kapur entropy based on the moth swarm algorithm (MSA) was proposed; it effectively addresses the large amount of computation in multilevel thresholding segmentation. Ashish Kumar Bhandari [22] proposed a multilevel thresholding segmentation method named Energy-Masi-MSA based on Masi entropy. Compared with other metaheuristic algorithms, the MSA performs better on threshold evaluation indices and reduces computational complexity. Abdul Kayom et al. [12] used GWO to identify the best thresholds of gray images under the conditions of Kapur entropy and Otsu. According to the experimental results, the suggested method has higher convergence precision and is more stable than PSO and BFO. In the same year, Shubham Kapoor [23] applied the GWO algorithm to satellite image segmentation and achieved advantages in computational efficiency and accuracy. K.P. Baby Resma et al. 
[24] applied the krill herd optimization algorithm (KHO) to gray-scale image segmentation; Lifang He [25] proposed an efficient krill herd algorithm (EKH) for color image segmentation. Comparative experiments on Otsu, Kapur entropy, and Tsallis entropy showed that EKH–Kapur entropy is more effective in color image segmentation and is more accurate and robust. GuoShen Ding et al. [26] improved the fruit fly optimization algorithm (FOA); the experimental results demonstrate that the suggested method outperforms other algorithms in terms of segmentation efficiency and global convergence. Pankaj Upadhyay et al. [27] proposed a multilevel thresholding segmentation method for gray images based on the crow search algorithm (CSA) and Kapur entropy. The experimental findings prove that the suggested approach has superior solution accuracy and segmentation effect when comparing PSNR, SSIM, FSIM, and computation time. Zhikai Xing [28] improved emperor penguin optimization (EPO) by introducing strategies such as Levy flight, Gaussian mutation, and opposition-based learning to improve its search ability; the improved EPO was then applied to the segmentation of various types of color images. Erick Rodríguez-Esparza [29], Aneesh Wunnava [30], and Mohamed Abd Elaziz [31] applied Harris Hawks Optimization (HHO) and improved HHO variants to multilevel thresholding image segmentation and achieved good results. A modified remora optimization algorithm (MROA) [32] was proposed for global optimization and image segmentation tasks; the experimental results prove that it is a promising method for both. As intelligent optimization algorithms quickly advance, more and more academics are applying them to multilevel thresholding image segmentation, which can partially address the issue of operating efficiency. 
These research results also demonstrate the viability and efficiency of intelligent optimization methods for multilevel thresholding image segmentation.
The whale optimization algorithm (WOA) [33] is a recent swarm intelligence optimization algorithm that simulates the predation behavior of whales. Compared with other classical intelligent optimization algorithms, such as PSO and the GSA, it has the advantages of a straightforward structure, fewer parameters, and strong optimization capacity. The WOA has attracted the attention of numerous academics and has been used to address a variety of real-world issues, such as constrained engineering design problems [34], forecasting water resource demand [35], multiobjective optimization problems [36], large-scale and high-dimensional global optimization problems [37,38], optimal single mobile robot scheduling [39], real-time task scheduling in multiprocessor systems [40], talent stability prediction [41], and multilevel thresholding image segmentation [42]. Similar to other swarm intelligence optimization algorithms, the WOA also has some problems, such as premature convergence and easily falling into local optima. Numerous scholars have attempted to modify it to overcome these shortcomings, mostly concentrating on three aspects. The first is improving population initialization: for example, Zhang Yong et al. [43] used piecewise logistic chaotic maps to generate a chaotic sequence for initializing the population positions, so as to maintain the diversity of the initial population in the global search. The second is improving the search strategy and search process to balance local and global search. The third is using other optimization algorithms or strategies to improve the original algorithm: for example, Seyed et al. [44] introduced differential evolution (DE) into the search process of the whale optimization algorithm to obtain better solutions; this yields better experimental results but also makes the algorithm more complex.
The main work and contribution of this paper are as follows:
  • The definition of the k-point and a k-point-based initialization algorithm are proposed.
  • The mixed-strategy whale optimization algorithm (MSWOA) is proposed using the k-point-based initialization strategy, the nonlinear convergence factor, and the adaptive weight coefficient to improve the optimization ability.
  • The proposed algorithm was applied to the multilevel thresholding segmentation of gray images using the Otsu method and Kapur entropy, respectively. PSNR, SSIM, FSIM, and CPU time were chosen to evaluate the segmentation results.
The rest of the paper is arranged as follows: In Section 2, the mathematical description of the multilevel thresholding segmentation of gray images and the mathematical formulas of the Otsu method and Kapur entropy are described. In Section 3, a mixed-strategy whale optimization algorithm is proposed and applied to gray image multilevel thresholding segmentation. In Section 4, the performance of the MSWOA is discussed using experiments based on benchmark functions. Section 5 presents the comparative experiment and analysis of image segmentation. Finally, Section 6 presents the conclusion and future research directions.

2. System Model and Definitions

In this work, the type of image we discuss is gray images. Assuming that a given gray image $I$ has $L$ gray levels with gray range $\{0, 1, 2, \ldots, L-1\}$, then $m$ thresholds divide the image into $m+1$ classes, which can be described as follows:
$$
\begin{aligned}
C_0 &= \{\, g(i,j) \in I \mid 0 \le g(i,j) \le t_1 - 1 \,\},\\
C_1 &= \{\, g(i,j) \in I \mid t_1 \le g(i,j) \le t_2 - 1 \,\},\\
C_2 &= \{\, g(i,j) \in I \mid t_2 \le g(i,j) \le t_3 - 1 \,\},\\
&\;\;\vdots\\
C_m &= \{\, g(i,j) \in I \mid t_m \le g(i,j) \le L - 1 \,\},
\end{aligned}
$$
where $t_i$ ($i = 1, 2, \ldots, m$) is the $i$th threshold, $C_i$ is the $i$th pixel set of image $I$, and $g(i,j)$ is the gray level of image $I$ at pixel $(i,j)$. When $m = 1$, there is only one threshold, also known as single-level thresholding, while if $m \ge 2$, it is called multilevel thresholding. In this study, we mainly discuss multilevel thresholding image segmentation.
Assuming that $f(t_1, t_2, \ldots, t_m)$ is an objective function of a set of thresholds $\{t_1, t_2, \ldots, t_m\}$ ($t_1 < t_2 < \cdots < t_m$) of image $I$, the goal of multilevel thresholding segmentation is to find a set of optimal thresholds $\{t_1^*, t_2^*, \ldots, t_m^*\}$ that maximizes objective function $f(t_1, t_2, \ldots, t_m)$ for a given threshold number $m$, which can be described as follows:
$$
\{t_1^*, t_2^*, \ldots, t_m^*\} = \arg\max_{t_1, t_2, \ldots, t_m} f(t_1, t_2, \ldots, t_m).
$$
In this paper, among the common multilevel thresholding image segmentation methods, the Otsu method and Kapur entropy are selected and combined with the MSWOA to achieve image segmentation. The Otsu method and Kapur entropy are detailed below.

2.1. Otsu Method

The Otsu method [10] was proposed by Japanese scholar Nobuyuki Otsu in 1979. Its basic idea is to use the one-dimensional probability histogram of the image gray level to select the segmentation threshold when the between-class variance of the image reaches the maximum. The best threshold divides the image into sections with the largest between-class variance. The Otsu method considers that the image segmentation effect is the best under this condition.
Assuming that image $I$ has $N$ pixels, probability $p_i$ of the $i$th gray level can be expressed as shown in Equation (3):
$$
p_i = \frac{h_i}{N}, \quad 0 \le i \le L-1,
$$
where $h_i$ represents the number of pixels with the $i$th gray level and $\sum_{i=0}^{L-1} p_i = 1$. Threshold set $\{t_1, t_2, \ldots, t_m\}$ divides the image into $m+1$ parts. If the probability of the $i$th part is $\omega_i$ and the average gray level of the $i$th part is $\mu_i$, then the between-class variance of image $I$ can be expressed as shown in Equation (4):
$$
f_{Otsu}(t_1, t_2, \ldots, t_m) = \sigma_0^2 + \sigma_1^2 + \sigma_2^2 + \cdots + \sigma_m^2,
$$
where
$$
\begin{aligned}
\sigma_0^2 &= \omega_0 (\mu_0 - \mu_T)^2, & \omega_0 &= \sum_{i=0}^{t_1-1} p_i, & \mu_0 &= \frac{1}{\omega_0}\sum_{i=0}^{t_1-1} i\,p_i,\\
\sigma_1^2 &= \omega_1 (\mu_1 - \mu_T)^2, & \omega_1 &= \sum_{i=t_1}^{t_2-1} p_i, & \mu_1 &= \frac{1}{\omega_1}\sum_{i=t_1}^{t_2-1} i\,p_i,\\
\sigma_2^2 &= \omega_2 (\mu_2 - \mu_T)^2, & \omega_2 &= \sum_{i=t_2}^{t_3-1} p_i, & \mu_2 &= \frac{1}{\omega_2}\sum_{i=t_2}^{t_3-1} i\,p_i,\\
&\;\;\vdots\\
\sigma_m^2 &= \omega_m (\mu_m - \mu_T)^2, & \omega_m &= \sum_{i=t_m}^{L-1} p_i, & \mu_m &= \frac{1}{\omega_m}\sum_{i=t_m}^{L-1} i\,p_i.
\end{aligned}
$$
In Equation (4), $\mu_T = \sum_{i=0}^{L-1} i\,p_i$ represents the average gray level of the whole image, and $\omega_0\mu_0 + \omega_1\mu_1 + \omega_2\mu_2 + \cdots + \omega_m\mu_m = \mu_T$.
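To make Equation (4) concrete, the following is a minimal Python/NumPy sketch of the between-class variance objective computed from a gray-level histogram. This is an illustration, not the paper's code (the experiments in this paper used MATLAB); the function name `otsu_objective` and its histogram-based interface are our own assumptions.

```python
import numpy as np

def otsu_objective(hist, thresholds):
    """Between-class variance of Equation (4).

    hist: array of L pixel counts h_i; thresholds: sorted list of m cut points.
    Class c covers gray levels [t_c, t_{c+1}); returns sum of
    omega_c * (mu_c - mu_T)**2 over the m+1 classes.
    """
    p = hist / hist.sum()                       # Equation (3): p_i = h_i / N
    levels = np.arange(len(p))
    mu_T = (levels * p).sum()                   # mean gray level of whole image
    bounds = [0] + list(thresholds) + [len(p)]  # class boundaries
    variance = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        omega = p[lo:hi].sum()                  # class probability omega_c
        if omega > 0:                           # skip empty classes
            mu = (levels[lo:hi] * p[lo:hi]).sum() / omega
            variance += omega * (mu - mu_T) ** 2
    return variance
```

For a bimodal histogram, a threshold placed between the two modes scores strictly higher than one placed inside a mode, which is exactly the property the optimizer exploits.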

2.2. Kapur Entropy

Kapur entropy [11], also known as the maximum entropy method, was proposed by Kapur in 1985. Its basic idea is to divide the gray histogram of an image into independent classes with several thresholds, so as to maximize the sum of the entropy values. The function of Kapur entropy of image I can be expressed as shown in Equation (5):
$$
f_{Kapur}(t_1, t_2, \ldots, t_m) = H_0 + H_1 + H_2 + \cdots + H_m,
$$
where
$$
\begin{aligned}
H_0 &= -\sum_{i=0}^{t_1-1} \frac{p_i}{\omega_0} \ln \frac{p_i}{\omega_0}, & \omega_0 &= \sum_{i=0}^{t_1-1} p_i,\\
H_1 &= -\sum_{i=t_1}^{t_2-1} \frac{p_i}{\omega_1} \ln \frac{p_i}{\omega_1}, & \omega_1 &= \sum_{i=t_1}^{t_2-1} p_i,\\
H_2 &= -\sum_{i=t_2}^{t_3-1} \frac{p_i}{\omega_2} \ln \frac{p_i}{\omega_2}, & \omega_2 &= \sum_{i=t_2}^{t_3-1} p_i,\\
&\;\;\vdots\\
H_m &= -\sum_{i=t_m}^{L-1} \frac{p_i}{\omega_m} \ln \frac{p_i}{\omega_m}, & \omega_m &= \sum_{i=t_m}^{L-1} p_i.
\end{aligned}
$$
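The Kapur objective of Equation (5) admits an equally short sketch. Again this is illustrative Python/NumPy, not the paper's code; `kapur_objective` is a hypothetical name, and empty histogram bins are skipped using the convention $0 \cdot \ln 0 = 0$.

```python
import numpy as np

def kapur_objective(hist, thresholds):
    """Sum of per-class entropies H_c of Equation (5)."""
    p = hist / hist.sum()
    bounds = [0] + list(thresholds) + [len(p)]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        omega = p[lo:hi].sum()                  # class probability omega_c
        if omega > 0:
            q = p[lo:hi][p[lo:hi] > 0] / omega  # within-class distribution
            total += -(q * np.log(q)).sum()     # class entropy H_c
    return total
```

As a sanity check, a uniform histogram split into two equal halves yields entropy $\ln 8$ per class when each half spans 8 bins.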

3. Our Proposed MSWOA Scheme

This section first introduces the basic whale optimization algorithm. Then, a mixed-strategy-based improved whale optimization algorithm (MSWOA) is proposed. Finally, the proposed algorithm is applied to the multilevel thresholding segmentation of gray images. As mentioned above, the Otsu method and Kapur entropy are combined with the MSWOA to achieve image segmentation. When the function of between-class variance or the Kapur entropy function obtains the maximum value, image segmentation works best. In this work, a gray image segmentation task is proposed as shown in Equation (6):
m a x f O t s u ( t 1 , t 2 , . . . , t m ) o r f K a p u r ( t 1 , t 2 , . . . , t m ) ,
where the mathematical expressions of the objective functions are shown in Equations (4) and (5).

3.1. Whale Optimization Algorithm

The whale optimization algorithm is a metaheuristic optimization algorithm simulating the behavior of whales in the ocean. It includes three main stages: prey encircling, bubble hunting, and prey searching. In the whale optimization algorithm, assuming that the size of the whale population is $N$ and the dimension of the solution space is $D$, the corresponding position of the $i$th whale in the $D$-dimensional space is $X_i = (x_i^1, x_i^2, \ldots, x_i^D)$, $i = 1, 2, 3, \ldots, N$, and the position of the optimal whale (prey) corresponds to the global optimal solution.
Prey encircling: The whale locates prey and surrounds it throughout the predation phase. Other members of the group move to the best position, which is assumed to be where the prey is currently, and the position is updated using the formula below:
$$
D = |C \cdot X^*(t) - X(t)|,
$$
$$
X(t+1) = X^*(t) - A \cdot D,
$$
where $t$ indicates the current iteration; $X^*(t)$ represents the optimal whale position in the $t$th iteration; $X(t)$ is the position of the individual whale in the $t$th iteration; and $A$ and $C$ are coefficients calculated with the following equations:
$$
A = 2a \cdot r - a,
$$
$$
C = 2 \cdot r,
$$
where r is a random number in [0, 1] and a decreases linearly from 2 to 0 during iteration.
Bubble hunting: In the WOA, whale predation behavior is classified as shrinking encircling and spiral updating. Of these, shrinking encircling is realized through Equation (9), where convergence factor $a$ decreases linearly with iterations and $A$ is a random value in $[-a, a]$. If $|A| < 1$, the individual whale updates its position as it approaches the prey and completes the contraction in accordance with Equation (8). In spiral updating, an individual whale first calculates its distance from the prey and then approaches the prey along a spiral path. The following is the mathematical expression of this behavior:
$$
D' = |X^*(t) - X(t)|,
$$
$$
X(t+1) = D' \cdot e^{bl} \cdot \cos(2\pi l) + X^*(t),
$$
where $D'$ in Equation (11) indicates the distance between the $i$th whale and its prey, $l$ is a random number in $[-1, 1]$, and $b$ is a constant that defines the shape of the spiral.
In the whale predation process, spiral updating is carried out at the same time as shrinking encircling. To simulate this synchronization process, the WOA assumes that probability p of performing these two predation behaviors is 50%. The mathematical model is as follows:
$$
X(t+1) =
\begin{cases}
X^*(t) - A \cdot D, & p < 0.5,\\
D' \cdot e^{bl} \cdot \cos(2\pi l) + X^*(t), & p \ge 0.5.
\end{cases}
$$
Prey searching: When $|A| \ge 1$, a member of the population is randomly chosen as prey for the global search to avoid falling into the local optimum. The mathematical model is shown as follows:
$$
D = |C \cdot X_{rand} - X|,
$$
$$
X(t+1) = X_{rand} - A \cdot D,
$$
where X r a n d indicates the position of a random whale (or prey) in the current population.
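The three stages above combine into a single update loop. The following Python/NumPy sketch is a plain re-implementation of the basic WOA under these equations, written here for illustration only (the paper's experiments used MATLAB; the function name, defaults, and the minimization convention are our own choices):

```python
import numpy as np

def woa_minimize(f, dim, lb, ub, n_whales=30, max_iter=200, b=1.0, seed=0):
    """Basic WOA: minimize f over the box [lb, ub]^dim."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_whales, dim))
    best = min(X, key=f).copy()                   # current prey position X*
    for t in range(max_iter):
        a = 2 - 2 * t / max_iter                  # a decreases linearly 2 -> 0
        for i in range(n_whales):
            A = 2 * a * rng.random() - a          # Equation (9)
            C = 2 * rng.random(dim)               # Equation (10)
            if rng.random() < 0.5:
                if abs(A) < 1:                    # shrinking encircling, Eq. (8)
                    D = np.abs(C * best - X[i])
                    X[i] = best - A * D
                else:                             # random prey search, Eq. (15)
                    X_rand = X[rng.integers(n_whales)]
                    D = np.abs(C * X_rand - X[i])
                    X[i] = X_rand - A * D
            else:                                 # spiral updating, Eq. (12)
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(b * l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)          # keep whales inside bounds
        cand = min(X, key=f)
        if f(cand) < f(best):
            best = cand.copy()
    return best, f(best)
```

On a low-dimensional sphere function this loop contracts onto the origin quickly, which matches the convergence behavior reported for unimodal benchmarks.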

3.2. Mixed-Strategy-Based Improved Whale Optimization Algorithm

A mixed-strategy-based improved whale optimization algorithm (MSWOA) is presented using the k-point initialization algorithm, the nonlinear convergence factor, and the adaptive weight coefficient to enhance the search ability of the WOA. In addition, the MSWOA is applied to multilevel thresholding gray image segmentation combined with the Otsu method and Kapur entropy, respectively.

3.2.1. Population Initialization Based on k-Point Search Strategy

Haupt et al. [45] and Gondro et al. [46] pointed out that the accuracy and convergence speed of swarm intelligent optimization algorithms depend on the quality of the initial population. A better initial solution can accelerate the convergence speed of the algorithm and help to find the global optimal solution. However, the global optimal solution of the problem to be optimized is usually unknown. Without any prior knowledge, swarm intelligence optimization algorithms, including the WOA, usually use random methods to generate initial population individuals. The search accuracy and efficiency of an algorithm is somewhat impacted by the variety of the initial population, which cannot be guaranteed using the random initialization method.
In this work, inspired by the idea of the opposition-based learning strategy [47] and the binary search strategy, we propose a new search strategy called k-point and apply it to the algorithm for population initialization.
Definition of k-point: If there is a real number $x$ in interval $[lb, ub]$, the left k-point of $x$ is defined as $x_l = (lb + x)/k$, and the right k-point is defined as $x_r = (x + ub)/k$. The definition extends to $d$-dimensional space. Let $p = (x_1, x_2, \ldots, x_d)$ be a point in $d$-dimensional space, where $x_i \in [lb_i, ub_i]$, $i = 1, 2, \ldots, d$; the left k-point of $p$ is defined as $p_l = (x_{1l}, x_{2l}, \ldots, x_{dl})$, where $x_{il} = (lb_i + x_i)/k$, and the right k-point of $p$ is defined as $p_r = (x_{1r}, x_{2r}, \ldots, x_{dr})$, where $x_{ir} = (x_i + ub_i)/k$. Here, $k$ is an integer larger than 1.
According to the above definition, the specific steps for generating the initial population using the k-point search strategy are shown in Algorithm 1.
Algorithm 1: Population initialization based on k-point search strategy.
  • Set population size $N$
  • for $i = 1$ to $N$ do
  •       for $j = 1$ to $d$ do
  •              $X_{ij} = lb_{ij} + rand(0,1) \cdot (ub_{ij} - lb_{ij})$
  •       end for
  • end for
  • for $i = 1$ to $N$ do
  •       for $j = 1$ to $d$ do
  •              $LX_{ij} = (lb_{ij} + X_{ij})/k$
  •              $RX_{ij} = (X_{ij} + ub_{ij})/k$
  •       end for
  • end for
  • Merge $\{X(N) \cup LX(N) \cup RX(N)\}$ and select the $N$ individuals with the best fitness as the initial population.
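Algorithm 1 can be sketched in vectorized form as follows. This is illustrative Python/NumPy under our own assumptions (the name `kpoint_init`, scalar bounds, and a minimization convention for fitness); the essential point is that $3N$ candidates are generated and the best $N$ survive.

```python
import numpy as np

def kpoint_init(f, n, dim, lb, ub, k=2, seed=1):
    """k-point population initialization (Algorithm 1), minimizing fitness f."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))     # N random candidates
    LX = (lb + X) / k                     # left k-points
    RX = (X + ub) / k                     # right k-points
    pool = np.vstack([X, LX, RX])         # merged pool of 3N candidates
    fit = np.apply_along_axis(f, 1, pool)
    return pool[np.argsort(fit)[:n]]      # keep the N fittest as the population
```

Because the pool is three times oversampled, the selected initial population is never worse than plain random initialization under the same random draw.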

3.2.2. Nonlinear Convergence Factor and Adaptive Weight Coefficient

Similar to other swarm intelligence optimization algorithms, the basic WOA suffers from an imbalance between global and local search capabilities when trying to identify the optimal solution, so coordinating its exploration and exploitation abilities is very important. Exploration means that the group searches a wider area to prevent the algorithm from falling into the local optimum; exploitation primarily uses the existing information of the group to search some neighborhood of the solution space and has a significant impact on the convergence speed of the algorithm. In the basic WOA, Equation (9) controls the choice between global and local search. When $|A| \ge 1$, the algorithm conducts a random global search with a probability of 0.5. When $|A| < 1$, the algorithm performs a local search, and the value of $A$ is mostly determined by convergence factor $a$. Therefore, the way convergence factor $a$ changes is crucial for discovering the best solution. In the basic WOA, $a$ decreases linearly with the number of iterations, which makes the algorithm's convergence speed excessively slow.
In the initial stage of iteration, the algorithm is better able to escape the local extremum if a is larger; in the middle stage of iteration, in order to ensure faster convergence speed, a should be rapidly reduced to a smaller value with the increase in the number of iterations; while the search range of the optimal solution is basically determined in the later stage, a smaller a should be selected to improve the convergence accuracy of the algorithm. In order to accelerate the convergence speed while maintaining the algorithm’s potential for both global exploration and local exploitation, we introduce a nonlinear adjustment strategy without altering the changing trend of the original convergence factor. The specific formula is as follows:
$$
a = 2 - 2 \cdot \left( 2^{\,t/T_{max}} - 1 \right)^{u},
$$
where $u$ is a constant coefficient greater than 0 that adjusts the attenuation of convergence factor $a$; $t$ is the number of iterations; and $T_{max}$ is the maximum number of iterations. When $T_{max} = 500$, the curve of $a$ for different values of $u$ is shown in Figure 1.
As shown in Figure 1, when the attenuation of convergence factor $a$ changes from slow to rapid, $|A| \ge 1$ accounts for a larger proportion of iterations than in the basic WOA, so the random global exploration ability of the algorithm is enhanced while the local exploitation ability is weakened; conversely, when the attenuation of $a$ changes from fast to slow, $|A| \ge 1$ takes up a smaller proportion of iterations, and the local exploitation ability is enhanced while the random global exploration ability is weakened. In this paper, $u = 0.4$ was selected to enhance the local exploitation ability of the algorithm and improve its convergence speed and accuracy.
In the basic WOA, the position of prey is the location of the optimal solution. However, in the process of algorithm execution, the location of prey X * ( t ) in Equations (8), (13) and (15) is not fully utilized. In this paper, the adaptive weight coefficient is presented to make use of the optimal solution to increase the accuracy of the algorithm. The adaptive weight coefficient, ω , is defined as follows:
$$
\omega(t) = 1 - \frac{2}{\pi} \arccos\!\left( \frac{t}{T_{max}} \right),
$$
$$
X(t+1) = \omega(t) \cdot X^*(t) - A \cdot D, \quad |A| < 1,\ p < 0.5,
$$
$$
X(t+1) = \omega(t) \cdot X_{rand} - A \cdot D, \quad |A| \ge 1,\ p < 0.5,
$$
$$
X(t+1) = D' \cdot e^{bl} \cdot \cos(2\pi l) + [1 - \omega(t)] \cdot X^*(t), \quad p \ge 0.5,
$$
where $t$ is the number of iterations and $T_{max}$ is the maximum number of iterations. The value of $\omega$ increases from 0 to 1. In Equations (18)–(20), adaptive weight coefficient $\omega$ increases with the number of iterations, which means that after each iteration the position of the prey moves closer to the theoretical optimal solution, improving the optimization accuracy of the algorithm. In the spiral update, the whale continues to approach its prey as the number of iterations rises; to enhance the local search capability of the algorithm, a smaller weight $[1 - \omega(t)]$ is applied to the prey position, which makes it possible to discover whether a better solution exists near the prey while updating the location.
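The nonlinear convergence factor and the adaptive weight reduce to two one-line functions. This sketch follows Equations (16) and (17) as written above (the function names are our own; `convergence_factor` should decrease from 2 to 0 and `adaptive_weight` should rise from 0 to 1 over the run):

```python
import numpy as np

def convergence_factor(t, t_max, u=0.4):
    """Nonlinear convergence factor of Eq. (16): decreases from 2 to 0."""
    return 2 - 2 * (2 ** (t / t_max) - 1) ** u

def adaptive_weight(t, t_max):
    """Adaptive weight of Eq. (17): increases from 0 at t=0 to 1 at t=t_max."""
    return 1 - (2 / np.pi) * np.arccos(t / t_max)
```

With $u = 0.4$ the factor drops quickly in early iterations and then flattens, which is the fast-to-slow attenuation profile the paper selects to favor local exploitation.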

3.2.3. Multilevel Thresholding Using MSWOA

In this work, the proposed MSWOA was applied to the threshold segmentation of gray images to obtain the optimal threshold and realize the segmentation of images. The task of image segmentation is described in Equation (6). The basic idea of using the MSWOA to search for the optimal threshold is to transform the threshold search into the minimum or maximum search of the objective function using the Otsu or Kapur method. In this process, a group of thresholds can be regarded as the position values of individual whales in the search space. Therefore, the continuous adjustment of the whale position means that the threshold is searched for continuously until convergence accuracy is reached or the stop condition of the optimization algorithm is satisfied. The flow chart is shown in Figure 2.
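The decoding step described above (a whale position is a candidate threshold set) can be sketched as follows. This is an illustrative Python/NumPy fragment under our own naming assumptions; it shows how a real-valued position becomes a sorted integer threshold set, and how the thresholds then partition a gray image (each class is assumed to contain at least one pixel):

```python
import numpy as np

def decode(position, L=256):
    """Decode a whale's real-valued position into a sorted integer threshold set."""
    t = np.clip(np.rint(position).astype(int), 1, L - 1)
    return np.unique(t)  # sorted, duplicates removed

def apply_thresholds(gray, thresholds):
    """Quantize a gray image: each pixel is replaced by its class's mean level."""
    labels = np.digitize(gray, thresholds)  # class index per pixel
    means = [gray[labels == c].mean() for c in range(len(thresholds) + 1)]
    return np.array(means)[labels]
```

In the full pipeline, `decode` would sit between the MSWOA position update and the Otsu or Kapur objective of Section 2, so the optimizer only ever evaluates valid, sorted threshold sets.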

4. Performance Evaluation

In order to test the performance of the proposed algorithm, especially in terms of solution accuracy and convergence speed, 13 benchmark functions [33] (as shown in Table 1) were selected for experiments. The benchmark functions in Table 1 include seven unimodal functions and six multimodal functions. We compared the proposed algorithm with the WOA, PSO, the SSA [48], the GOA [49], MFO [15], MVO [50], and the DA [51] to prove its effectiveness. The simulation experiments in this paper were conducted on an Intel(R) Core (TM) i7-8700 CPU, 3.20 GHz, with 16 GB RAM and the Windows 10 (64-bit) operating system, and the programming software was MATLAB R2016b.

4.1. Parameter Settings

The population size ($PopSize$) of all algorithms in this section was set to 30. Each algorithm was run independently on each function 30 times to obtain the average value and standard deviation. Due to the slow convergence speed of some algorithms (e.g., MFO and the DA), more iterations were needed to achieve a certain convergence accuracy, so both 500 and 800 were used as the maximum number of iterations ($Max\_Iter$) in independent experiments for comparison. The core parameter settings of all algorithms and their values are shown in Table 2. Other parameters and values not listed in the table were set according to the original papers.

4.2. Experimental Results and Analysis

Table 3 and Table 4 show the results of running the eight optimization algorithms on the 13 benchmark functions in Table 1. Table 3 shows the results for 500 iterations over 30 independent runs; Table 4 shows the results for 800 iterations over 30 independent runs. It can be seen from Table 3 that, compared with the WOA, the MSWOA had better solution accuracy on all functions except F12; in particular, it converged to the theoretical minimum value of 0 on F1, F3, F9, and F11. Compared with the other six algorithms, PSO had advantages in solving F6, F12, and F13, and the SSA in solving F6, while the MSWOA performed better on the other benchmark functions. Similarly, it can be seen from Table 4 that, compared with the WOA, the MSWOA performed slightly worse only on F6 and F12, but the order of magnitude was the same and there was no significant difference in numerical value. As in Table 3, compared with the other six algorithms in Table 4, the solution accuracy of the MSWOA on F6, F12, and F13 was not as good as that of PSO and the SSA, with differences of several orders of magnitude, but it was the best at solving the other functions. In general, the accuracy of the MSWOA was clearly better than that of the other seven optimization algorithms.
To illustrate the convergence speed of the MSWOA and its capability to jump out of the local optimum, Figure 3 shows the convergence curves of six typical functions for a maximum of 800 iterations, with the ordinate (solution accuracy) plotted on a base-10 logarithmic scale. It can be seen from Figure 3 that the curves of the MSWOA descend and converge the fastest. The curves of F1 and F9 break off before 500 iterations, indicating that the algorithm converged to exactly 0 (whose logarithm is undefined), which is consistent with the values in Table 4. The curves of F5 and F10 show that the MSWOA and the WOA reach similar accuracy on these two functions, but the MSWOA converges markedly faster. Combined with the data in Table 3 and Table 4, the MSWOA and the WOA generally converge within 500 iterations, and raising the limit from 500 to 800 iterations yields little further improvement in accuracy, whereas other algorithms, especially the GOA and the DA, need more than 500 iterations to reach comparable accuracy. The curves of F6 and F12 show that the MSWOA and the WOA converge faster (within about 100 iterations) than the other algorithms but are less accurate than PSO and the SSA. Generally speaking, the MSWOA converges faster than the other algorithms in Table 3 and Table 4, and the abrupt drops in its curves show that it can effectively jump out of the local optimum.

5. Multilevel Image Thresholding Based on MSWOA

5.1. Benchmark Images

In this work, we selected eight common images from the Berkeley Segmentation Data Set and Benchmarks 500 (BSDS500) for a histogram-based multilevel thresholding segmentation experiment; the color images were converted to grayscale. Figure 4 shows the original benchmark images: (a) Lena, (b) Baboon, (c) Starfish, (d) Couple, (e) Cameraman, (f) Pepper, (g) Tree, and (h) Building. Lena and Baboon are 512 × 512, Cameraman and Pepper are 256 × 256, and the remaining images are 481 × 321. Figure 5 shows the histogram corresponding to each image in Figure 4.

5.2. Experimental Settings

In this experiment, we used m to denote the number of thresholds. We mainly considered four threshold levels: m = 2, 3, 5, and 8 (Table 5, Table 6, Table 7 and Table 8). For the proposed algorithm, m = 10, 15, 20, 25, and 30 were also considered (Table 9). Since this work focuses on gray images, the maximum gray level L was 255, that is, each pixel takes a value in [0, 255]; for the m-level optimization problem, the search space is therefore [0, 255]^m.
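Kapur entropy, one of the two objective functions used in this work, scores a candidate m-threshold vector by the summed Shannon entropies of the histogram classes it induces; the optimizer then searches [0, 255]^m for the vector maximizing this score. A minimal sketch, assuming a 256-bin histogram (the function name and data are illustrative, not the paper's implementation):

```python
import numpy as np

def kapur_entropy(hist, thresholds):
    """Kapur-entropy objective (to be maximized) for an m-threshold vector.

    hist: 256-bin gray-level histogram; thresholds: m cut points in [0, 255]
    splitting the gray range into m + 1 classes.
    """
    p = hist / hist.sum()                          # normalized histogram
    bounds = [0] + sorted(int(t) for t in thresholds) + [256]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()                         # class probability
        if w <= 0:
            continue                               # empty class: no entropy
        q = p[lo:hi] / w                           # within-class distribution
        q = q[q > 0]
        total += -(q * np.log(q)).sum()            # Shannon entropy of class
    return total

# A flat histogram split at 128 gives two uniform classes of 128 bins each.
flat = np.ones(256)
score = kapur_entropy(flat, [128])                 # = 2 * ln(128)
```

An optimizer such as the MSWOA would evaluate this objective for every candidate threshold vector and keep the maximizer.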
To compare the image segmentation effect and prove the effectiveness of the proposed algorithm, we compared the MSWOA with the WOA, PSO, the SSA, the GOA, MFO, MVO, and the DA. The parameter settings of each algorithm are shown in Table 2. The population size of all algorithms was 30, the maximum iteration number was 800, and the experimental environment was the same as in Section 4.

5.3. Segmented Image Quality Metrics

To evaluate the segmentation effect of the eight algorithms, the commonly used peak signal-to-noise ratio (PSNR), structural similarity (SSIM), feature similarity (FSIM), and CPU time were selected as the criteria of image segmentation quality [16]. PSNR, a standard evaluation measure in image segmentation, compares the signal-to-noise ratio of the original image against the segmented image and is calculated as follows:
$PSNR = 20 \log_{10}\left(\dfrac{255}{\sqrt{MSE}}\right),$
where MSE is the mean square error, which is defined as follows:
$MSE = \dfrac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[I(i,j)-\tilde{I}(i,j)\right]^{2},$
where M and N are the dimensions of the images, I is the original image, and $\tilde{I}$ is the segmented image.
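The two formulas above translate directly into code; a minimal sketch for 8-bit gray images (`psnr` is a name chosen here, not from the paper):

```python
import numpy as np

def psnr(original, segmented):
    """PSNR (dB) between an 8-bit gray image and its segmented version."""
    original = np.asarray(original, dtype=np.float64)
    segmented = np.asarray(segmented, dtype=np.float64)
    mse = np.mean((original - segmented) ** 2)   # mean squared error
    if mse == 0.0:
        return float("inf")                      # identical images
    return 20.0 * np.log10(255.0 / np.sqrt(mse))
```

Higher is better: identical images give an infinite PSNR, while maximally different 8-bit images (all 0 versus all 255) give 0 dB.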
SSIM is a metric used to measure the similarity between two images [52]. Given images x and y, their structural similarity is calculated as follows:
$SSIM(x,y) = \dfrac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)},$
where $\mu_x$ and $\mu_y$ are the average values of images x and y, $\sigma_x^2$ and $\sigma_y^2$ are their variances, $\sigma_{xy}$ is their covariance, and $c_1$ and $c_2$ are small stabilizing constants that prevent the denominator from being 0. SSIM takes values in [0, 1]; the larger the value, the more similar the structures of the two images, and SSIM equals 1 when the two images are identical.
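As a sketch, the formula can be evaluated with image-wide statistics (practical SSIM implementations compute these over local windows and average the results; the constants c1 = (0.01·255)² and c2 = (0.03·255)² are the conventional choices, not values stated in the paper):

```python
import numpy as np

def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """SSIM from the means, variances, and covariance of images x and y."""
    x = np.asarray(x, dtype=np.float64).ravel()
    y = np.asarray(y, dtype=np.float64).ravel()
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()    # covariance of x and y
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    )
```

Comparing an image with itself yields exactly 1, matching the property stated above.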
FSIM is a variant of SSIM based on the principle that not all pixels in an image are of equal importance: special pixels, such as those on the edges of objects, contribute more to defining object structure than pixels in background areas, so they should receive more weight in the calculation to highlight the important features of an image. FSIM is defined as follows [53]:
$FSIM = \dfrac{\sum_{x \in \Omega} S_L(x)\, PC_m(x)}{\sum_{x \in \Omega} PC_m(x)},$
where $\Omega$ denotes the entire image domain, $S_L(x)$ indicates the similarity between the segmented image and the original image at pixel x, and the phase congruency $PC_m(x)$ serves as the weight.
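Given precomputed similarity and phase congruency maps (whose derivation is outside this sketch), the pooling in the equation above reduces to a weighted average:

```python
import numpy as np

def fsim_pool(s_l, pc_m):
    """Pool a similarity map s_l weighted by the phase congruency map pc_m."""
    s_l = np.asarray(s_l, dtype=np.float64)
    pc_m = np.asarray(pc_m, dtype=np.float64)
    return (s_l * pc_m).sum() / pc_m.sum()
```

Pixels with high phase congruency (strong features such as edges) dominate the score, while uniform background regions contribute little.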

5.4. Experimental Results and Analysis

5.4.1. Analysis of Otsu and Kapur Methods

According to the PSNR, SSIM, and FSIM results with m = 2, 3, 5, and 8 (Table 5, Table 6 and Table 7), the Kapur entropy-based method obtained better metric values than the Otsu-based method for the same number of thresholds, indicating higher segmentation quality and accuracy. Figure 6 shows bar charts for Baboon with MSWOA–Otsu and MSWOA–Kapur for m = 2, 3, 5, and 8; the values of the Otsu-based method are clearly smaller than those of the Kapur entropy-based method. Figure 7 and Figure 8 show the segmentation results of MSWOA–Kapur and MSWOA–Otsu with m = 2, 3, 5, and 8, respectively, and the segmentation in Figure 7 is clearer than that in Figure 8. In terms of CPU time (Table 8), the Kapur-based method took slightly longer than the Otsu-based method, but the difference was not significant and the values were of the same order of magnitude. Table 9 gives the segmentation results of MSWOA–Otsu and MSWOA–Kapur with m = 10, 15, 20, 25, and 30, which further shows that the Kapur-based method remained better in high dimensions (more thresholds), with a significant numerical difference. Figure 9 shows line charts of PSNR, SSIM, FSIM, and CPU time for Lena with MSWOA–Otsu and MSWOA–Kapur for m = 10, 15, 20, 25, and 30. Generally speaking, for the same optimization algorithm, segmented image, and number of thresholds m, the Kapur entropy-based method was significantly better than the Otsu-based method.

5.4.2. Analysis of the MSWOA and the Other Seven Optimization Algorithms

With the Kapur entropy-based method, when the number of thresholds was small (e.g., m = 2 and 3), the segmentation results of the algorithms were almost the same; when it was large (e.g., m = 5 and 8), the numerical differences were obvious. Comparing the Kapur-based results of the eight optimization algorithms in Table 5, Table 6, Table 7 and Table 8, it is not difficult to see that the segmentation results of the MSWOA were the best among all algorithms apart from a few outliers. Figure 10, Figure 11, Figure 12 and Figure 13 show images segmented with the different methods. In terms of program time consumption (CPU time), except for the GOA and the DA, the other six optimization algorithms did not differ significantly and were on the same level. The reason is that, although the solution time of intelligent optimization algorithms grows with the size of the solution space, the growth is far from proportional (let alone exponential); compared with the traditional exhaustive method, multilevel thresholding based on an intelligent optimization algorithm is therefore much faster, especially in high dimensions (many thresholds). The overall trends of PSNR, SSIM, and FSIM are consistent, and the MSWOA achieved the best values in the majority of cases. For the Otsu-based method, the conclusions are similar to those for the Kapur entropy-based method. Generally speaking, the MSWOA produced the best segmentation among the eight algorithms, whether the Otsu or the Kapur method was used.

5.4.3. Statistical Analysis with Wilcoxon Rank Sum Test

The Wilcoxon rank sum test [18] is a nonparametric test. In this paper, it was used to test, at the 5% significance level, whether the distributions of the results of two algorithms differed significantly. The null hypothesis states that there is no significant difference between the algorithms, and the alternative hypothesis states that there is. The MSWOA proposed in this paper was compared with the other seven algorithms in terms of the PSNR of the segmented images, with Otsu and Kapur as the objective functions. Each algorithm was run 100 times for each case with m = 5 and 8, and the experimental results are shown in Table 10 and Table 11. A p-value less than 0.05 (or h = 1) means that the null hypothesis is rejected at the 5% significance level, indicating a clear difference between the algorithms; conversely, a p-value greater than 0.05 (or h = 0) means that the null hypothesis cannot be rejected, i.e., there is no clear difference. The superscript '+' marks a significant difference (p-value < 0.05) in which the MSWOA performed better than the compared algorithm, while '#' marks cases where the MSWOA performed similarly to or worse than it. As can be seen from Table 10 (Otsu-based), the numbers of instances with p-value < 0.05 (or h = 1) were 12 (MSWOA vs. WOA), 13 (vs. PSO), 14 (vs. SSA), 13 (vs. GOA), 13 (vs. MFO), 14 (vs. MVO), and 14 (vs. DA). Similarly, in Table 11 (Kapur-based), they were 13 (vs. WOA), 16 (vs. PSO), 15 (vs. SSA), 14 (vs. GOA), 14 (vs. MFO), 14 (vs. MVO), and 14 (vs. DA). The results show that there were significant differences between the MSWOA and the other seven algorithms.
Therefore, in most cases, the multilevel thresholding segmentation effect of the MSWOA was better than that of the other seven algorithms.
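The testing procedure can be illustrated with SciPy's rank sum implementation on synthetic PSNR samples (the data below are illustrative, not the paper's measurements):

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# 100 runs per algorithm, as in the experiment; synthetic PSNR values.
psnr_mswoa = rng.normal(loc=24.0, scale=0.3, size=100)
psnr_woa = rng.normal(loc=23.5, scale=0.3, size=100)

stat, p_value = ranksums(psnr_mswoa, psnr_woa)  # two-sided rank sum test
h = int(p_value < 0.05)  # h = 1: reject the null at the 5% level
```

A '+' entry in Table 10 or Table 11 corresponds to h = 1 together with the MSWOA having the better PSNR.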
Based on all the above analysis, the MSWOA–Kapur multilevel thresholding image segmentation method proposed in this paper can quickly and accurately find the best thresholds of an image and outperforms the other compared algorithms in terms of both segmentation effect and statistical analysis.

6. Conclusions and Future Work

To address the heavy computation and time consumption of multilevel thresholding image segmentation, this paper adopted the WOA, an intelligent optimization algorithm with a simple structure, few parameters, and fast solving speed that handles high-dimensional optimization problems effectively but tends to fall into local optima. Firstly, the MSWOA was designed by improving the basic WOA in terms of initialization (with a new strategy named k-point), the nonlinear convergence factor, and adaptive weight. Its effectiveness was demonstrated on 13 benchmark functions. Then, the MSWOA was applied to the multilevel thresholding segmentation of gray images, with the Otsu method and Kapur entropy as objective functions and the WOA, PSO, the SSA, the GOA, MFO, MVO, and the DA as comparative algorithms in a series of experiments. According to PSNR, SSIM, FSIM, CPU time, and statistical analysis (the Wilcoxon rank sum test), we believe that the MSWOA–Kapur multilevel thresholding image segmentation proposed in this paper is an effective method that reduces the amount of calculation, improves operational efficiency, and obtains relatively good thresholds. In general, the main contribution of this paper is the MSWOA, whose effectiveness and advantages are demonstrated both in algorithm performance and in multilevel thresholding image segmentation.
This paper is based on histogram-based gray image segmentation; as further study, we will improve the histogram by taking spatial properties (e.g., the energy curve) [53,54] into consideration to improve segmentation quality and extend the method to color image segmentation. Meanwhile, we plan to apply more intelligent optimization algorithms (e.g., the Seagull Optimization Algorithm (SOA) [55]) and segmentation criteria (e.g., Tsallis entropy and Rényi entropy) to multilevel thresholding image segmentation.

Author Contributions

Conceptualization, C.W., C.T., S.W., L.Y. and F.W.; methodology, C.W. and C.T.; supervision, S.W.; validation, L.Y. and F.W.; writing—original draft, C.W., C.T., S.W. and L.Y.; writing—review and editing, C.W., C.T., S.W., L.Y. and F.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China under grant No. 61772180 and the Key R&D Plan of Hubei Province (2020BHB004, 2020BAB012).

Data Availability Statement

The data are not publicly available due to trade secrets.

Acknowledgments

The authors are grateful to the editors and reviewers for their insightful comments and suggestions. The authors also acknowledge Neal Xiong for help with language editing and figure artwork.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Yan, L.; Li, K.; Gao, R.; Wang, C.; Xiong, N. An Intelligent Weighted Object Detector for Feature Extraction to Enrich Global Image Information. Appl. Sci. 2022, 12, 7825.
2. Yan, L.; Fu, J.; Wang, C.; Ye, Z.; Chen, H.; Ling, H. Enhanced network optimized generative adversarial network for image enhancement. Multimed. Tools Appl. 2021, 80, 14363–14381.
3. Yan, L.; Sheng, M.; Wang, C.; Gao, R.; Yu, H. Hybrid neural networks based facial expression recognition for smart city. Multimed. Tools Appl. 2022, 81, 319–342.
4. Wu, C.; Luo, C.; Xiong, N.; Zhang, W.; Kim, T.H. A greedy deep learning method for medical disease analysis. IEEE Access 2018, 6, 20021–20030.
5. Li, L.; Qian, B.; Lian, J.; Zheng, W.; Zhou, Y. Traffic scene segmentation based on RGB-D image and deep learning. IEEE Trans. Intell. Transp. Syst. 2017, 19, 1664–1669.
6. Kotaridis, I.; Lazaridou, M. Remote sensing image segmentation advances: A meta-analysis. ISPRS J. Photogramm. Remote Sens. 2021, 173, 309–322.
7. Jia, H.; Lang, C.; Oliva, D.; Song, W.; Peng, X. Dynamic harris hawks optimization with mutation mechanism for satellite image segmentation. Remote Sens. 2019, 11, 1421.
8. Shaikh, S.H.; Saeed, K.; Chaki, N. Moving Object Detection Using Background Subtraction; Springer: Berlin/Heidelberg, Germany, 2014.
9. Sharma, T.; Singh, H. Computational approach to image segmentation analysis. Int. J. Mod. Educ. Comput. Sci. 2017, 9, 30.
10. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
11. Kapur, J.N.; Sahoo, P.K.; Wong, A.K. A new method for gray-level picture thresholding using the entropy of the histogram. Comput. Vis. Graph. Image Process. 1985, 29, 273–285.
12. Khairuzzaman, A.K.M.; Chaudhury, S. Brain MR image multilevel thresholding by using particle swarm optimization, Otsu method and anisotropic diffusion. Int. J. Appl. Metaheuristic Comput. 2019, 10, 91–106.
13. Gao, H.; Fu, Z.; Pun, C.M.; Hu, H.; Lan, R. A multi-level thresholding image segmentation based on an improved artificial bee colony algorithm. Comput. Electr. Eng. 2018, 70, 931–938.
14. Yue, X.; Zhang, H. Modified hybrid bat algorithm with genetic crossover operation and smart inertia weight for multilevel image segmentation. Appl. Soft Comput. 2020, 90, 106157.
15. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249.
16. Pare, S.; Kumar, A.; Bajaj, V.; Singh, G.K. A multilevel color image segmentation technique based on cuckoo search algorithm and energy curve. Appl. Soft Comput. 2016, 47, 76–102.
17. Bakhshali, M.A.; Shamsi, M. Segmentation of color lip images by optimal thresholding using bacterial foraging optimization (BFO). J. Comput. Sci. 2014, 5, 251–257.
18. Khairuzzaman, A.K.M.; Chaudhury, S. Multilevel thresholding using grey wolf optimizer for image segmentation. Expert Syst. Appl. 2017, 86, 64–76.
19. Chao, Y.; Dai, M.; Chen, K.; Chen, P.; Zhang, Z.S. Image segmentation of multilevel threshold using hybrid PSOGSA with generalized opposition-based learning. Guangxue Jingmi Gongcheng/Opt. Precis. Eng. 2015, 23, 879–886.
20. Agarwal, P.; Singh, R.; Kumar, S.; Bhattacharya, M. Social spider algorithm employed multi-level thresholding segmentation approach. In Proceedings of the First International Conference on Information and Communication Technology for Intelligent Systems: Volume 2; Springer: Berlin/Heidelberg, Germany, 2016; pp. 249–259.
21. Zhou, Y.; Yang, X.; Ling, Y.; Zhang, J. Meta-heuristic moth swarm algorithm for multilevel thresholding image segmentation. Multimed. Tools Appl. 2018, 77, 23699–23727.
22. Bhandari, A.K.; Rahul, K. A context sensitive Masi entropy for multilevel image segmentation using moth swarm algorithm. Infrared Phys. Technol. 2019, 98, 132–154.
23. Kapoor, S.; Zeya, I.; Singhal, C.; Nanda, S.J. A grey wolf optimizer based automatic clustering algorithm for satellite image segmentation. Procedia Comput. Sci. 2017, 115, 415–422.
24. Resma, K.B.; Nair, M.S. Multilevel thresholding for image segmentation using Krill Herd Optimization algorithm. J. King Saud Univ.-Comput. Inf. Sci. 2021, 33, 528–541.
25. He, L.; Huang, S. An efficient krill herd algorithm for color image multilevel thresholding segmentation problem. Appl. Soft Comput. 2020, 89, 106063.
26. Ding, G.; Dong, F.; Zou, H. Fruit fly optimization algorithm based on a hybrid adaptive-cooperative learning and its application in multilevel image thresholding. Appl. Soft Comput. 2019, 84, 105704.
27. Upadhyay, P.; Chhabra, J.K. Kapur's entropy based optimal multilevel image segmentation using crow search algorithm. Appl. Soft Comput. 2020, 97, 105522.
28. Xing, Z. An improved emperor penguin optimization based multilevel thresholding for color image segmentation. Knowl.-Based Syst. 2020, 194, 105570.
29. Rodríguez-Esparza, E.; Zanella-Calzada, L.A.; Oliva, D.; Heidari, A.A.; Zaldivar, D.; Pérez-Cisneros, M.; Foong, L.K. An efficient Harris hawks-inspired image segmentation method. Expert Syst. Appl. 2020, 155, 113428.
30. Wunnava, A.; Naik, M.K.; Panda, R.; Jena, B.; Abraham, A. An adaptive Harris hawks optimization technique for two dimensional grey gradient based multilevel image thresholding. Appl. Soft Comput. 2020, 95, 106526.
31. Abd Elaziz, M.; Heidari, A.A.; Fujita, H.; Moayedi, H. A competitive chain-based Harris Hawks Optimizer for global optimization and multi-level image thresholding problems. Appl. Soft Comput. 2020, 95, 106347.
32. Liu, Q.; Li, N.; Jia, H.; Qi, Q.; Abualigah, L. Modified remora optimization algorithm for global optimization and multilevel thresholding image segmentation. Mathematics 2022, 10, 1014.
33. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
34. Chen, H.; Xu, Y.; Wang, M.; Zhao, X. A Balanced Whale Optimization Algorithm for Constrained Engineering Design Problems. Appl. Math. Model. 2019, 71, 45–59.
35. Gharehchopogh, S. A comprehensive survey: Whale Optimization Algorithm and its applications. Swarm Evol. Comput. 2019, 48, 1–24.
36. Got, A.; Moussaoui, A.; Zouache, D. A guided population archive whale optimization algorithm for solving multiobjective optimization problems. Expert Syst. Appl. 2020, 141, 112972.1–112972.15.
37. Sun, Y.; Wang, X.; Chen, Y.; Liu, Z. A modified whale optimization algorithm for large-scale global optimization problems. Expert Syst. Appl. 2018, 114, 563–577.
38. Sun, Y.; Yang, T.; Liu, Z. A whale optimization algorithm based on quadratic interpolation for high-dimensional global optimization problems. Appl. Soft Comput. 2019, 85, 105744.
39. Petrović, M.; Miljković, Z.; Jokić, A. A novel methodology for optimal single mobile robot scheduling using whale optimization algorithm. Appl. Soft Comput. 2019, 81, 105520.
40. Abdel-Basset, M.; El-Shahat, D.; Deb, K.; Abouhawwash, M. Energy-aware whale optimization algorithm for real-time task scheduling in multiprocessor systems. Appl. Soft Comput. 2020, 93, 106349.
41. Li, H.; Ke, S.; Rao, X.; Li, C.; Chen, D.; Kuang, F.; Chen, H.; Liang, G.; Liu, L. An Improved Whale Optimizer with Multiple Strategies for Intelligent Prediction of Talent Stability. Electronics 2022, 11, 4224.
42. Abd El Aziz, M.; Ewees, A.A.; Hassanien, A.E. Whale optimization algorithm and moth-flame optimization for multilevel thresholding image segmentation. Expert Syst. Appl. 2017, 83, 242–256.
43. Zhang, Y.; Chen, F. An improved whale optimization algorithm. Comput. Eng. 2018, 44, 208–213+219.
44. Mostafa Bozorgi, S.; Yazdani, S. IWOA: An improved whale optimization algorithm for optimization problems. J. Comput. Des. Eng. 2019, 6, 243–259.
45. Haupt, R.L.; Haupt, S.E. Practical Genetic Algorithms; John Wiley & Sons: Hoboken, NJ, USA, 2004.
46. Gondro, C.; Kinghorn, B.P. A simple genetic algorithm for multiple sequence alignment. Genet. Mol. Res. 2007, 6, 964–982.
47. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC'06), Vienna, Austria, 28–30 November 2005; IEEE: New York, NY, USA, 2005; Volume 1, pp. 695–701.
48. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191.
49. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper optimisation algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47.
50. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513.
51. Mirjalili, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2016, 27, 1053–1073.
52. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
53. Patra, S.; Gautam, R.; Singla, A. A novel context sensitive multilevel thresholding for image segmentation. Appl. Soft Comput. 2014, 23, 122–127.
54. Kandhway, P.; Bhandari, A.K. Spatial context-based optimal multilevel energy curve thresholding for image segmentation using soft computing techniques. Neural Comput. Appl. 2020, 32, 8901–8937.
55. Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 2019, 165, 169–196.
Figure 1. The curve of a changing with u.
Figure 2. The flow chart of the proposed algorithm.
Figure 3. Convergence curves of typical benchmark functions.
Figure 4. Original images: (a) Lena; (b) Baboon; (c) Starfish; (d) Couple; (e) Cameraman; (f) Pepper; (g) Tree; (h) Building.
Figure 5. Histograms corresponding to the original images: (a) Lena; (b) Baboon; (c) Starfish; (d) Couple; (e) Cameraman; (f) Pepper; (g) Tree; (h) Building.
Figure 6. Bar charts for Baboon with m = 2, 3, 5, and 8: (a) PSNR; (b) SSIM; (c) FSIM; (d) CPU time.
Figure 7. Segmentation results of MSWOA–Kapur (m = 2, 3, 5, and 8).
Figure 8. Segmentation results of MSWOA–Otsu (m = 2, 3, 5, and 8).
Figure 9. Line charts for Lena with m = 10, 15, 20, 25, and 30: (a) PSNR; (b) SSIM; (c) FSIM; (d) CPU time.
Figure 10. Segmentation results of Baboon using the Otsu method with m = 5: (a) MSWOA; (b) WOA; (c) PSO; (d) SSA; (e) GOA; (f) MFO; (g) MVO; (h) DA.
Figure 11. Segmentation results of Baboon using the Kapur method with m = 5: (a) MSWOA; (b) WOA; (c) PSO; (d) SSA; (e) GOA; (f) MFO; (g) MVO; (h) DA.
Figure 12. Segmentation results of Cameraman using the Otsu method with m = 8: (a) MSWOA; (b) WOA; (c) PSO; (d) SSA; (e) GOA; (f) MFO; (g) MVO; (h) DA.
Figure 13. Segmentation results of Cameraman using the Kapur method with m = 8: (a) MSWOA; (b) WOA; (c) PSO; (d) SSA; (e) GOA; (f) MFO; (g) MVO; (h) DA.
Table 1. The 13 benchmark functions.

Function | Dim | Range | $f_{min}$ | Type
$F_1(x)=\sum_{i=1}^{n} x_i^2$ | 30 | [−100, 100] | 0 | Unimodal
$F_2(x)=\sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | 30 | [−10, 10] | 0 | Unimodal
$F_3(x)=\sum_{i=1}^{n} \left(\sum_{j=1}^{i} x_j\right)^2$ | 30 | [−100, 100] | 0 | Unimodal
$F_4(x)=\max_i \{|x_i|, 1 \le i \le n\}$ | 30 | [−100, 100] | 0 | Unimodal
$F_5(x)=\sum_{i=1}^{n-1} \left[100(x_{i+1}-x_i^2)^2 + (x_i-1)^2\right]$ | 30 | [−30, 30] | 0 | Unimodal
$F_6(x)=\sum_{i=1}^{n} (\lfloor x_i+0.5 \rfloor)^2$ | 30 | [−100, 100] | 0 | Unimodal
$F_7(x)=\sum_{i=1}^{n} i x_i^4 + random[0,1)$ | 30 | [−1.28, 1.28] | 0 | Unimodal
$F_8(x)=\sum_{i=1}^{n} -x_i \sin(\sqrt{|x_i|})$ | 30 | [−500, 500] | −418.9829 × 5 | Multimodal
$F_9(x)=\sum_{i=1}^{n} \left[x_i^2 - 10\cos(2\pi x_i) + 10\right]$ | 30 | [−5.12, 5.12] | 0 | Multimodal
$F_{10}(x)=-20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right) + 20 + e$ | 30 | [−32, 32] | 0 | Multimodal
$F_{11}(x)=\frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | 30 | [−600, 600] | 0 | Multimodal
$F_{12}(x)=\frac{\pi}{n}\left\{10\sin^2(\pi y_1) + \sum_{i=1}^{n-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right] + (y_n-1)^2\right\} + \sum_{i=1}^{n} u(x_i,10,100,4)$, where $y_i = 1+\frac{x_i+1}{4}$ and $u(x_i,a,k,m)=\begin{cases} k(x_i-a)^m, & x_i>a \\ 0, & -a \le x_i \le a \\ k(-x_i-a)^m, & x_i<-a \end{cases}$ | 30 | [−50, 50] | 0 | Multimodal
$F_{13}(x)=0.1\left\{\sin^2(3\pi x_1) + \sum_{i=1}^{n}(x_i-1)^2\left[1+\sin^2(3\pi x_i+1)\right] + (x_n-1)^2\left[1+\sin^2(2\pi x_n)\right]\right\} + \sum_{i=1}^{n} u(x_i,5,100,4)$ | 30 | [−50, 50] | 0 | Multimodal
Table 2. Parameter settings.

Algorithm | Parameter | Value
MSWOA | u | 0.4
MSWOA | ω | [0, 1]
WOA | convergence factor a | [0, 2]
WOA | l | [−1, 1]
WOA | b | 1
PSO | cognitive and social constants | c1 = c2 = 2
PSO | inertia weight ω | linearly decreases from 0.9 to 0.2
SSA | c1 | nonlinearly decreases from 2 to 0
SSA | c2 | [0, 1]
SSA | c3 | [0, 1]
GOA | cmax | 1
GOA | cmin | 0.00004
MFO | b | 1
MFO | l | [−1, 1]
MFO | a | linearly decreases from −1 to −2
MVO | WEPmax | 1
MVO | WEPmin | 0.2
DA | f | [0, 2]
Table 3. Comparison of optimization results obtained on benchmark functions ( P o p S i z e = 30 ; M a x _ I t e r = 500 ).
Table 3. Comparison of optimization results obtained on benchmark functions ( P o p S i z e = 30 ; M a x _ I t e r = 500 ).
| F | MSWOA Avg. | MSWOA Std. | WOA Avg. | WOA Std. | PSO Avg. | PSO Std. | SSA Avg. | SSA Std. | GOA Avg. | GOA Std. | MFO Avg. | MFO Std. | MVO Avg. | MVO Std. | DA Avg. | DA Std. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | 0.00E+00 | 0.00E+00 | 5.68E-74 | 2.47E-73 | 1.14E-04 | 1.28E-04 | 1.94E-07 | 2.44E-07 | 3.88E+01 | 2.54E+01 | 2.68E+03 | 5.83E+03 | 2.68E+03 | 5.83E+03 | 2.73E+03 | 1.39E+03 |
| F2 | 2.18E-219 | 1.58E-219 | 2.85E-51 | 7.99E-51 | 7.31E-02 | 1.61E-01 | 1.71E+00 | 1.32E+00 | 1.82E+01 | 1.95E+01 | 2.91E+01 | 2.02E+01 | 2.91E+01 | 2.02E+01 | 1.65E+01 | 6.19E+00 |
| F3 | 0.00E+00 | 0.00E+00 | 4.32E+04 | 1.05E+04 | 7.36E+01 | 2.74E+01 | 2.14E+03 | 1.21E+03 | 3.36E+03 | 1.71E+03 | 1.57E+04 | 9.57E+03 | 1.57E+04 | 9.57E+03 | 1.32E+04 | 7.45E+03 |
| F4 | 2.48E-205 | 0.00E+00 | 4.89E+01 | 3.08E+01 | 1.11E+00 | 2.42E-01 | 1.22E+01 | 3.35E+00 | 1.39E+01 | 4.07E+00 | 6.81E+01 | 8.40E+00 | 6.81E+01 | 8.40E+00 | 3.14E+01 | 1.08E+01 |
| F5 | 2.76E+01 | 5.08E+00 | 2.79E+01 | 4.07E-01 | 9.62E+01 | 6.65E+01 | 2.59E+02 | 4.92E+02 | 6.60E+03 | 9.31E+03 | 1.42E+04 | 3.33E+04 | 1.42E+04 | 3.33E+04 | 2.46E+05 | 2.36E+05 |
| F6 | 4.04E-01 | 1.38E-01 | 4.40E-01 | 2.30E-01 | 2.01E-04 | 2.50E-04 | 9.92E-08 | 8.94E-08 | 3.73E+01 | 2.33E+01 | 3.68E+03 | 6.67E+03 | 3.68E+03 | 6.67E+03 | 2.42E+03 | 1.60E+03 |
| F7 | 6.88E-05 | 6.47E-05 | 3.51E-03 | 5.05E-03 | 1.61E-01 | 6.37E-02 | 1.99E-01 | 9.15E-02 | 5.08E-02 | 2.13E-02 | 2.07E+00 | 4.19E+00 | 2.07E+00 | 4.19E+00 | 6.89E-01 | 7.28E-01 |
| F8 | -1.25E+04 | 1.17E+02 | -9.89E+03 | 1.63E+03 | -4.82E+03 | 1.22E+03 | -7.47E+03 | 5.49E+02 | -7.30E+03 | 5.12E+02 | -8.66E+03 | 8.76E+02 | -8.66E+03 | 8.76E+02 | -5.35E+03 | 6.50E+02 |
| F9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 5.53E+01 | 1.53E+01 | 6.08E+01 | 2.00E+01 | 1.03E+02 | 5.22E+01 | 1.62E+02 | 4.43E+01 | 1.62E+02 | 4.43E+01 | 1.69E+02 | 4.61E+01 |
| F10 | 8.88E-16 | 0.00E+00 | 5.03E-15 | 2.48E-15 | 2.29E-01 | 4.55E-01 | 2.70E+00 | 8.49E-01 | 5.65E+00 | 1.53E+00 | 1.43E+01 | 7.18E+00 | 1.43E+01 | 7.18E+00 | 1.01E+01 | 1.97E+00 |
| F11 | 0.00E+00 | 0.00E+00 | 2.02E-02 | 6.48E-02 | 8.05E-03 | 9.65E-03 | 2.02E-02 | 1.50E-02 | 1.15E+00 | 7.73E-02 | 1.13E+00 | 1.09E+00 | 1.13E+00 | 1.09E+00 | 1.73E+01 | 7.20E+00 |
| F12 | 2.19E-02 | 1.47E-02 | 2.12E-02 | 1.15E-02 | 3.12E-06 | 4.67E-06 | 9.04E+00 | 5.79E+00 | 1.11E+01 | 6.63E+00 | 8.59E+00 | 9.46E+00 | 8.59E+00 | 9.46E+00 | 3.06E+04 | 1.53E+05 |
| F13 | 3.01E-01 | 1.51E-01 | 4.67E-01 | 2.42E-01 | 7.38E-03 | 1.27E-02 | 1.83E+01 | 1.51E+01 | 4.15E+01 | 2.43E+01 | 1.37E+07 | 7.49E+07 | 1.37E+07 | 7.49E+07 | 2.75E+05 | 4.14E+05 |
Bold indicates the algorithm that performs best on each benchmark function (Avg. or Std. metric).
Table 4. Comparison of optimization results obtained on benchmark functions ( P o p S i z e = 30 ; M a x _ I t e r = 800 ).
| F | MSWOA Avg. | MSWOA Std. | WOA Avg. | WOA Std. | PSO Avg. | PSO Std. | SSA Avg. | SSA Std. | GOA Avg. | GOA Std. | MFO Avg. | MFO Std. | MVO Avg. | MVO Std. | DA Avg. | DA Std. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | 0.00E+00 | 0.00E+00 | 8.88E-120 | 4.54E-119 | 3.20E-06 | 1.21E-05 | 1.53E-08 | 3.79E-09 | 1.36E+01 | 1.15E+01 | 3.33E+02 | 1.83E+03 | 3.33E+02 | 1.83E+03 | 1.38E+03 | 7.31E+02 |
| F2 | 0.00E+00 | 0.00E+00 | 1.77E-83 | 5.71E-83 | 1.45E-03 | 1.76E-03 | 1.53E+00 | 1.34E+00 | 1.62E+01 | 2.58E+01 | 3.70E+01 | 1.88E+01 | 3.70E+01 | 1.88E+01 | 1.73E+01 | 7.80E+00 |
| F3 | 0.00E+00 | 0.00E+00 | 3.01E+04 | 1.31E+04 | 2.61E+01 | 9.84E+00 | 6.41E+02 | 4.62E+02 | 2.72E+03 | 1.47E+03 | 2.25E+04 | 1.78E+04 | 2.25E+04 | 1.78E+04 | 1.07E+04 | 6.00E+03 |
| F4 | 0.00E+00 | 0.00E+00 | 3.77E+01 | 3.16E+01 | 7.26E-01 | 2.09E-01 | 9.20E+00 | 4.01E+00 | 1.38E+01 | 3.81E+00 | 6.54E+01 | 9.36E+00 | 6.54E+01 | 9.36E+00 | 2.77E+01 | 9.06E+00 |
| F5 | 2.74E+01 | 5.07E+00 | 2.77E+01 | 5.08E-01 | 5.65E+01 | 4.64E+01 | 9.81E+01 | 1.26E+02 | 2.83E+03 | 4.98E+03 | 1.28E+04 | 3.09E+04 | 1.28E+04 | 3.09E+04 | 2.69E+05 | 3.48E+05 |
| F6 | 2.83E-01 | 1.15E-01 | 1.65E-01 | 1.21E-01 | 1.71E-06 | 8.05E-06 | 1.81E-08 | 5.81E-09 | 1.17E+01 | 8.30E+00 | 2.34E+03 | 4.32E+03 | 2.34E+03 | 4.32E+03 | 1.38E+03 | 7.37E+02 |
| F7 | 3.86E-05 | 3.29E-05 | 2.06E-03 | 3.69E-03 | 1.03E-01 | 4.20E-02 | 1.13E-01 | 3.82E-02 | 2.67E-02 | 1.21E-02 | 4.81E+00 | 6.84E+00 | 4.81E+00 | 6.84E+00 | 3.92E-01 | 2.74E-01 |
| F8 | -1.26E+04 | 1.14E+00 | -1.07E+04 | 1.75E+03 | -5.95E+03 | 1.31E+03 | -7.36E+03 | 8.10E+02 | -7.52E+03 | 7.54E+02 | -8.67E+03 | 9.27E+02 | -8.67E+03 | 9.27E+02 | -5.47E+03 | 6.24E+02 |
| F9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 4.82E+01 | 1.27E+01 | 5.62E+01 | 1.60E+01 | 1.02E+02 | 3.64E+01 | 1.55E+02 | 4.04E+01 | 1.55E+02 | 4.04E+01 | 1.62E+02 | 3.29E+01 |
| F10 | 8.88E-16 | 0.00E+00 | 3.61E-15 | 2.41E-15 | 1.37E-01 | 4.26E-01 | 2.35E+00 | 4.92E-01 | 4.55E+00 | 1.13E+00 | 1.41E+01 | 7.19E+00 | 1.41E+01 | 7.19E+00 | 9.16E+00 | 1.70E+00 |
| F11 | 0.00E+00 | 0.00E+00 | 1.79E-02 | 5.81E-02 | 1.55E-01 | 8.85E-03 | 8.04E-03 | 8.04E-03 | 8.77E-01 | 1.55E-01 | 1.51E+01 | 4.15E+01 | 1.51E+01 | 4.15E+01 | 1.32E+01 | 5.95E+00 |
| F12 | 1.59E-02 | 1.06E-02 | 1.38E-02 | 1.88E-02 | 6.99E-09 | 1.66E-08 | 5.03E+00 | 2.90E+00 | 8.22E+00 | 5.74E+00 | 1.47E+00 | 1.49E+00 | 1.47E+00 | 1.49E+00 | 5.44E+01 | 1.28E+02 |
| F13 | 2.30E-01 | 9.58E-02 | 3.17E-01 | 2.25E-01 | 2.93E-03 | 4.94E-03 | 7.53E+00 | 1.57E+01 | 2.76E+01 | 1.57E+01 | 4.24E+00 | 6.60E+00 | 4.24E+00 | 6.60E+00 | 1.15E+05 | 2.62E+05 |
Bold indicates the algorithm that performs best on each benchmark function (Avg. or Std. metric).
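The segmentation experiments that follow maximize the Otsu between-class variance and the Kapur entropy over candidate threshold vectors. For reference, the two objectives can be sketched for a gray-level histogram as follows; the function names and the bin-index threshold convention are ours, a simplified illustration rather than the paper's implementation:

```python
import math

def otsu_objective(hist, thresholds):
    """Multilevel Otsu objective: sum over classes of w_k * (mu_k - mu)^2,
    where thresholds are histogram bin indices splitting the gray levels."""
    total = sum(hist)
    p = [h / total for h in hist]                      # normalized histogram
    bounds = [0] + sorted(thresholds) + [len(hist)]
    mu_total = sum(i * pi for i, pi in enumerate(p))
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(p[lo:hi])                              # class probability
        if w > 0:
            mu = sum(i * p[i] for i in range(lo, hi)) / w
            var += w * (mu - mu_total) ** 2
    return var

def kapur_objective(hist, thresholds):
    """Kapur objective: sum of the Shannon entropies of each class."""
    total = sum(hist)
    p = [h / total for h in hist]
    bounds = [0] + sorted(thresholds) + [len(hist)]
    ent = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(p[lo:hi])
        if w > 0:
            ent -= sum((pi / w) * math.log(pi / w)
                       for pi in p[lo:hi] if pi > 0)
    return ent
```

In the paper's setup, the MSWOA searches the space of threshold vectors for the one that maximizes whichever of these two objectives is in use.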
Table 5. Comparison of PSNR computed with eight algorithms using the Otsu and Kapur methods with m of 2, 3, 5, and 8.
Image | m | MSWOA (Otsu, Kapur) | WOA (Otsu, Kapur) | PSO (Otsu, Kapur) | SSA (Otsu, Kapur) | GOA (Otsu, Kapur) | MFO (Otsu, Kapur) | MVO (Otsu, Kapur) | DA (Otsu, Kapur)
Lena211.902514.484511.902514.388311.902514.388311.902514.388311.902514.388311.902514.388311.902514.388311.902514.3883
314.102716.955214.102716.825514.102716.825514.102716.825513.757916.351614.102716.848114.102716.825514.102716.8255
519.211319.872919.161419.792316.964819.792316.964819.792318.984419.788217.202519.866216.964819.792316.964819.8662
822.579724.974120.575621.826919.975222.701820.712622.709320.321622.708520.366722.523420.336822.708320.428622.9321
Baboon211.583712.338111.583712.338111.583712.338111.583712.338111.583712.338111.583712.338111.583712.338111.583712.3381
313.632114.344113.541314.344113.541314.344113.541314.344113.309914.27513.541314.344113.541314.344113.541314.3441
517.177122.933617.272322.85916.172822.854416.172822.854417.158921.607816.172822.85916.172822.854416.172822.859
820.386930.380720.01830.360918.358630.293318.568330.294718.716930.334118.608730.325918.566830.306319.906430.3144
Starfish212.69715.160212.623715.160212.623715.160212.623715.160212.623715.160212.623715.160212.623715.160212.623715.1602
315.258718.157415.258718.157415.258718.157415.258718.157414.478817.747615.365418.157415.258718.157415.269118.1574
519.5922.221719.051822.085819.051821.97119.051821.97119.194722.112119.051822.085819.051821.97119.051821.9724
825.227928.307123.171125.277722.243425.394222.785825.380522.589325.413922.252825.719922.243425.186822.713525.3236
Couple212.112114.288912.112114.288912.112114.288912.112114.288912.112114.288912.112114.288912.112114.288912.112114.2889
314.272616.978514.272616.926514.272616.926514.272616.926517.517118.174314.272616.926514.272616.926514.272616.9265
518.117320.726116.891420.530216.891420.436416.891420.436419.841621.967317.657620.436416.556920.436416.798220.5302
825.315328.113820.738222.076219.91922.076221.749522.076220.567922.048220.713822.076221.486922.076221.453922.241
Cameraman212.23114.901612.23114.72912.23114.72912.23114.72912.23114.72912.23114.72912.23114.72912.23114.729
315.720421.757414.547221.283314.547221.283314.547221.283313.770921.018314.547221.41814.547221.283314.547221.2833
518.622726.067618.620725.68317.633925.829918.559525.667518.509325.385217.874425.833218.431525.667518.527125.6894
820.99329.130320.894329.054320.693229.026720.8129.026720.828529.026520.697629.045420.861828.989223.921829.0309
Pepper212.298113.21112.298113.21112.298113.21112.298113.21112.298113.21112.298113.21112.298113.21112.298113.211
314.483715.418814.408215.418814.408215.418814.408215.418814.412413.677114.408215.418814.408215.418814.408215.4188
519.572223.56118.019523.372116.790923.209916.888823.209918.336221.756917.197823.305817.197823.209917.197823.2954
822.727629.510322.348329.360820.130327.570321.944428.521220.432328.662220.135528.870720.284128.536720.116129.0532
Tree213.993416.573113.993416.411413.993416.411413.993416.411413.993416.411413.993416.411413.993416.411413.993416.4114
316.739319.896816.509119.775316.509119.775316.509119.775315.96820.557816.509119.775316.509119.775316.509119.8155
521.378322.503420.039122.175319.887322.410219.966522.14120.113621.688120.039422.410220.007522.14119.948622.2266
825.462528.379625.023626.023223.518225.552124.41825.554324.308625.508924.23325.565424.43725.393125.515925.6348
Building212.055712.150812.055712.150812.055712.150812.055712.150812.055712.150812.055712.150812.055712.150812.055712.1508
314.077213.431813.927613.431813.927613.431813.927613.431813.681212.796713.927613.431813.927613.431813.927613.4318
518.405219.234817.748519.232516.580119.232516.580119.232518.309222.114716.580119.232516.580119.232517.363519.2325
820.967130.372120.813829.826718.264728.171320.184228.173818.588828.391118.46628.180118.269528.171319.670628.7902
Bold indicates the algorithm with the best PSNR at each threshold level (m = 2, 3, 5, 8).
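The PSNR values above follow the standard definition over the 8-bit gray range, 10 log10(255²/MSE), computed between the original image and its thresholded version. A minimal sketch (the function name and flat pixel-list interface are ours for brevity):

```python
import math

def psnr(original, segmented, max_val=255.0):
    """Peak signal-to-noise ratio between two images given as flat
    sequences of pixel intensities; higher means closer to the original."""
    mse = sum((o - s) ** 2 for o, s in zip(original, segmented)) / len(original)
    if mse == 0:
        return float("inf")   # identical images
    return 10 * math.log10(max_val ** 2 / mse)
```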
Table 6. Comparison of SSIM computed with eight algorithms using the Otsu and Kapur methods with m of 2, 3, 5, and 8.
Image | m | MSWOA (Otsu, Kapur) | WOA (Otsu, Kapur) | PSO (Otsu, Kapur) | SSA (Otsu, Kapur) | GOA (Otsu, Kapur) | MFO (Otsu, Kapur) | MVO (Otsu, Kapur) | DA (Otsu, Kapur)
Lena20.384690.483540.384690.478560.384690.478560.384690.478560.384690.478560.384690.478560.384690.478560.384690.47856
30.468630.572610.468630.56610.468630.56610.468630.56610.539530.57470.468630.563270.468630.56610.468630.5661
50.702990.648550.698930.650790.59530.650790.59530.650790.655740.649610.610270.650610.59530.650790.59530.65061
80.801570.789090.723080.726530.708730.725560.727370.7250.713180.729930.719640.725350.716220.724840.71530.7316
Baboon20.444180.517380.444180.517380.444180.517380.444180.517380.444180.517380.444180.517380.444180.517380.444180.51738
30.521430.622950.521150.622950.521150.622950.521150.622950.563170.618090.521150.622950.521150.622950.521150.62295
50.705010.791250.710250.790120.641160.789580.641160.789580.678260.770050.641160.790120.641160.789580.641160.79012
80.803490.907190.800740.906030.72610.905790.734940.905830.735760.906050.727690.906440.73490.905980.751960.90688
Starfish20.324020.376770.319970.376770.319970.376770.319970.376770.319970.376770.319970.376770.319970.376770.319970.37677
30.441350.504110.441350.504110.441350.504110.441350.504110.448560.531170.443720.504110.441350.504110.438920.50411
50.63840.651760.616660.650190.616660.648420.616660.648420.61720.66690.616660.650190.616660.648420.616660.64841
80.854470.845010.758910.760510.735280.766040.749510.769020.750640.767870.735790.774220.735280.764350.724050.76448
Couple20.424240.481480.424240.481480.424240.481480.424240.481480.424240.481480.424240.481480.424240.481480.424240.48148
30.50460.575050.50460.575230.50460.575230.50460.575230.639140.604930.50460.575230.50460.575230.50460.57523
50.658320.699710.626970.696180.626970.694690.626970.694690.680410.717340.644380.694690.613970.694690.621530.69618
80.833630.858120.741420.770940.732740.770940.7560.770940.740620.769770.746410.770940.754150.770940.752710.77335
Cameraman20.43370.545660.43370.542190.43370.542190.43370.542190.43370.542190.43370.542190.43370.542190.43370.54219
30.524010.654350.482180.670890.482180.670890.482180.670890.584830.662030.482180.671770.482180.670890.482180.67089
50.614090.727280.613660.72180.58380.722220.612560.720730.630540.729170.583820.722260.611940.720730.612610.7224
80.675760.753980.67270.754460.665870.75360.669450.75360.669860.751090.66630.753560.672070.751460.725460.75374
Pepper20.372590.525090.372590.525090.372590.525090.372590.525090.372590.525090.372590.525090.372590.525090.372590.52509
30.474070.603680.47090.603680.47090.603680.47090.603680.543240.558670.47090.603680.47090.603680.47090.60368
50.665390.734770.616950.728910.568220.72540.56630.72540.625830.674350.58940.72720.58940.72540.58940.72665
80.760630.827850.763890.821460.69470.810310.746050.809940.699940.813310.694960.816270.698250.809770.690470.82057
Tree20.495660.516680.495660.510240.495660.510240.495660.510240.495660.510240.495660.510240.495660.510240.495660.51024
30.546130.633480.519890.626260.519890.626260.519890.626260.664170.703630.519890.626260.519890.626260.519890.62858
50.670560.667740.643650.657680.628070.668910.644190.655340.658170.669710.636990.668910.643960.655340.63290.66488
80.8140.844060.795440.754820.732690.743230.748580.742790.738250.742410.756440.742480.797210.738170.750120.74028
Building20.352670.5730.352670.5730.352670.5730.352670.5730.352670.5730.352670.5730.352670.5730.352670.573
30.477780.61410.46020.61410.46020.61410.46020.61410.462910.613320.46020.61410.46020.61410.46020.6141
50.6980.723730.688390.722540.597420.722540.597420.722540.680950.785250.597420.722540.597420.722540.622130.72254
80.760080.910420.758120.896990.668330.881930.741630.881850.684330.884790.674210.879960.669620.881930.713730.88609
Bold indicates the algorithm with the best SSIM at each threshold level (m = 2, 3, 5, 8).
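SSIM is normally computed over sliding local windows and averaged; the single-window (global-statistics) variant below only illustrates the formula's structure with the usual constants C1 = (0.01L)² and C2 = (0.03L)², so its values will not match the windowed numbers in Table 6. The function name is ours:

```python
def ssim_global(x, y, L=255.0):
    """Global-statistics SSIM sketch between two images given as flat
    pixel sequences; 1.0 means identical."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n                      # means
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)         # variances
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2            # stabilizers
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```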
Table 7. Comparison of FSIM computed with eight algorithms using the Otsu and Kapur methods with m of 2, 3, 5, and 8.
Image | m | MSWOA (Otsu, Kapur) | WOA (Otsu, Kapur) | PSO (Otsu, Kapur) | SSA (Otsu, Kapur) | GOA (Otsu, Kapur) | MFO (Otsu, Kapur) | MVO (Otsu, Kapur) | DA (Otsu, Kapur)
Lena20.667090.685020.667090.683550.667090.683550.667090.683550.667090.683550.667090.683550.667090.683550.667090.68355
30.714610.755830.714610.754240.714610.754240.714610.754240.735910.742240.714610.752450.714610.754240.714610.75424
50.804620.843610.801610.843270.80610.843270.80610.843270.80270.844420.810270.841350.80610.843270.80610.84135
80.907540.906610.875680.884620.88020.890710.874030.891370.878870.891460.880620.89510.877780.891790.877370.88891
Baboon20.6880.733460.6880.733460.6880.733460.6880.733460.6880.733460.6880.733460.6880.733460.6880.73346
30.750930.796050.750050.796050.750050.796050.750050.796050.760460.793470.750050.796050.750050.796050.750050.79605
50.814790.898070.81790.896760.804840.896630.804840.896630.823660.897960.804840.896760.804840.896630.804840.89676
80.866570.979010.863990.978650.840490.978360.843820.978330.846150.978790.845620.978190.843860.978260.89320.97857
Starfish20.554420.570510.552950.570510.552950.570510.552950.570510.552950.570510.552950.570510.552950.570510.552950.57051
30.640860.663350.640860.663350.640860.663350.640860.663350.636240.662630.642280.663350.640860.663350.640220.66335
50.774690.774910.767940.775840.767940.776180.767940.776180.767850.773940.767940.775840.767940.776180.767940.77644
80.865230.866840.856180.859090.851640.862860.855820.865490.849190.86430.852350.866680.851640.862910.839650.86261
Couple20.626210.645810.626210.645810.626210.645810.626210.645810.626210.645810.626210.645810.626210.645810.626210.64581
30.690870.723180.690870.723360.690870.723360.690870.723360.686010.667140.690870.723360.690870.723360.690870.72336
50.78960.812380.768130.812340.768130.81270.768130.81270.782280.770210.787540.81270.76040.81270.76480.81234
80.84120.890050.846380.864960.846030.864960.835530.864960.842710.864090.847740.864960.836880.864960.839350.86422
Cameraman20.628110.656340.628110.654670.628110.654670.628110.654670.628110.654670.628110.654670.628110.654670.628110.65467
30.710290.784830.675580.795620.675580.795620.675580.795620.762630.791890.675580.795460.675580.795620.675580.79562
50.792430.868970.791290.854910.76140.866270.789380.855960.808230.873040.759290.866230.786030.855960.787820.8592
80.837470.909690.836110.910340.824320.90930.830930.90930.830060.908180.82590.909330.835860.907830.818130.90993
Pepper20.611030.645320.611030.645320.611030.645320.611030.645320.611030.645320.611030.645320.611030.645320.611030.64532
30.666320.70080.66620.70080.66620.70080.66620.70080.676320.684210.66620.70080.66620.70080.66620.7008
50.771340.789930.750420.7880.727920.788840.725390.788840.742030.767810.73570.788810.73570.788840.73570.78903
80.812460.857760.807730.857520.797610.871350.824210.863030.799610.863980.79790.862850.798010.863320.794460.85741
Tree20.69010.694920.69010.69250.69010.69250.69010.69250.69010.69250.69010.69250.69010.69250.69010.6925
30.714420.76640.702640.76470.702640.76470.702640.76470.750910.783280.702640.76470.702640.76470.702640.76547
50.793450.798370.776360.793790.773160.796480.774060.793640.77120.796780.776380.796480.775420.793640.773630.79743
80.846850.864260.832140.852640.829450.849840.835140.849840.835490.84950.843280.849940.833320.848860.837780.84956
Building20.620240.606930.620240.606930.620240.606930.620240.606930.620240.606930.620240.606930.620240.606930.620240.60693
30.67930.659260.678370.659260.678370.659260.678370.659260.644770.647410.678370.659260.678370.659260.678370.65926
50.746980.769230.738420.768160.731810.768160.731810.768160.761160.797630.731810.768160.731810.768160.778780.76816
80.818050.904620.797380.903650.772420.900490.801830.900360.777530.90080.776920.900170.773270.900490.818650.90051
Bold indicates the algorithm with the best FSIM at each threshold level (m = 2, 3, 5, 8).
Table 8. Comparison of CPU time computed with eight algorithms using the Otsu and Kapur methods with m of 2, 3, 5, and 8.
Image | m | MSWOA (Otsu, Kapur) | WOA (Otsu, Kapur) | PSO (Otsu, Kapur) | SSA (Otsu, Kapur) | GOA (Otsu, Kapur) | MFO (Otsu, Kapur) | MVO (Otsu, Kapur) | DA (Otsu, Kapur)
Lena21.7512.5281.6732.4921.2731.8511.8332.3118.2459.0941.3831.9611.5832.05915.75315.838
31.7892.5671.7232.6071.3371.9161.9332.40013.61914.5111.4252.0211.6282.14117.07417.197
51.8912.6781.8572.6921.4392.0251.9692.48918.83220.0811.5362.1151.7142.26717.59420.126
81.9732.8861.9732.8941.5252.2192.0672.50723.85024.9071.6092.2851.7802.46819.03721.626
Baboon21.7472.5771.7242.4971.2911.8701.8712.2898.3059.1621.3681.9751.5462.10514.49914.509
31.7852.6091.7332.5651.3641.9831.9252.34913.53914.4701.4612.0761.5732.21915.97318.692
51.8472.6811.8322.6541.4162.0581.9802.48118.54119.8241.5452.1291.7072.30817.62719.636
81.9892.8951.9692.8301.5282.1722.0642.54224.00924.9031.6432.2391.7362.43018.61721.149
Starfish22.0482.8402.0762.8511.7232.1832.1242.6378.5799.6311.6532.3061.8022.40213.98314.644
32.2092.9072.1372.9811.8272.2432.3772.73313.86514.6791.7412.4161.9582.48814.43317.130
52.2513.1192.1653.0551.8442.3272.3612.80819.00819.9751.8842.5372.0932.65916.49319.225
82.2723.2202.2363.2271.9142.4902.4152.87224.08225.4642.0332.6102.1682.77119.15022.101
Couple22.0242.8352.0142.8121.6192.1392.1952.6488.6019.3821.6902.2621.7992.38216.99018.928
32.1322.9192.0282.8881.6962.2752.2182.72513.76214.8501.7382.3471.9212.45217.76619.570
52.2373.0292.1583.0941.7982.4222.3312.87519.01420.2801.8872.5841.9672.54918.21820.176
82.2893.2292.3333.2621.8532.5342.3833.03324.01925.2341.9852.6142.0542.68719.39421.061
Cameraman21.6862.5861.6482.5111.2411.8591.8192.2828.2689.0201.3781.9511.4572.09016.55217.429
31.7742.6171.7262.5881.3511.9551.8972.41813.52514.3071.4022.0681.5292.15317.06519.041
51.8702.7091.7822.6761.3962.0321.9662.49618.53219.8061.4852.1431.5992.31317.67520.150
81.8702.8891.9082.8701.5542.1062.0192.54323.57624.7411.5872.2441.7232.41618.67923.891
Pepper21.8702.4841.6392.5161.2961.8431.8352.2818.3579.1101.3571.9841.4492.07412.87914.165
31.7632.5851.7072.6081.3321.9741.8462.38113.52814.4971.4292.0571.5072.14013.74115.968
51.8342.6841.7612.6891.4392.0091.9002.40218.41719.6951.5152.1621.6342.30416.07817.843
81.9172.8711.9632.8851.4882.1311.9862.57723.56524.9611.6202.3171.7692.45618.57819.317
Tree22.0062.8732.0292.7971.6532.1802.0952.6798.5499.4781.6902.3561.8552.45216.56118.706
32.1722.9482.1052.9081.7782.3872.1842.77813.83814.8031.7392.4461.9982.53717.56519.387
52.2013.1612.1933.0441.8222.4202.3282.81318.77620.0211.8822.5562.0232.75918.08920.541
82.2423.2392.2713.2551.9242.5222.5362.92224.15225.5661.9332.6392.2072.81919.12122.506
Building22.0792.8192.0422.7981.5742.1502.1552.6348.5709.4161.7042.3491.8522.39613.70614.570
32.1242.9372.0882.8981.6962.2932.2342.73913.73114.7191.8482.3491.9082.49515.16617.454
52.2263.0902.1042.9671.7072.3012.3202.88318.78620.1561.9502.5611.9412.58417.88120.206
82.3573.2532.2473.2081.8352.4332.4742.93524.11825.2502.0132.6342.1632.72518.91621.613
Bold indicates the algorithm with the best (lowest) CPU time at each threshold level (m = 2, 3, 5, 8).
Table 9. Quality metrics using MSWOA with m of 10, 15, 20, 25, and 30.
| Image | m | MSWOA–Otsu PSNR | SSIM | FSIM | CPU Time | MSWOA–Kapur PSNR | SSIM | FSIM | CPU Time |
|---|---|---|---|---|---|---|---|---|---|
| Lena | 10 | 24.4184 | 0.82938 | 0.92279 | 2.087 | 29.2312 | 0.82569 | 0.93433 | 2.957 |
| | 15 | 26.4413 | 0.8489 | 0.92419 | 2.239 | 34.9205 | 0.89644 | 0.97552 | 3.250 |
| | 20 | 29.1939 | 0.90849 | 0.96019 | 2.348 | 37.3848 | 0.92921 | 0.98806 | 3.372 |
| | 25 | 29.4113 | 0.9203 | 0.96336 | 2.479 | 39.524 | 0.95595 | 0.99283 | 3.583 |
| | 30 | 30.0487 | 0.93176 | 0.966 | 2.646 | 40.8332 | 0.9651 | 0.9959 | 3.833 |
| Baboon | 10 | 21.4973 | 0.84181 | 0.8908 | 2.111 | 32.2484 | 0.93157 | 0.98597 | 2.984 |
| | 15 | 23.0685 | 0.88173 | 0.91105 | 2.193 | 35.7915 | 0.96434 | 0.99435 | 3.225 |
| | 20 | 24.1548 | 0.89982 | 0.9228 | 2.355 | 38.2818 | 0.97834 | 0.99687 | 3.429 |
| | 25 | 24.204 | 0.90305 | 0.9248 | 2.465 | 40.0448 | 0.98568 | 0.998 | 3.648 |
| | 30 | 25.6349 | 0.92119 | 0.93523 | 2.650 | 41.493 | 0.98992 | 0.99848 | 3.895 |
| Starfish | 10 | 27.3784 | 0.8874 | 0.89176 | 2.548 | 30.2351 | 0.88828 | 0.90864 | 3.464 |
| | 15 | 30.4003 | 0.9303 | 0.93793 | 2.705 | 34.3505 | 0.93709 | 0.95418 | 3.651 |
| | 20 | 32.2543 | 0.94946 | 0.95344 | 2.834 | 36.8757 | 0.96021 | 0.97281 | 3.902 |
| | 25 | 34.2589 | 0.95466 | 0.95788 | 2.908 | 39.0146 | 0.97414 | 0.98361 | 4.380 |
| | 30 | 35.2437 | 0.96846 | 0.96848 | 3.070 | 40.0975 | 0.97948 | 0.98779 | 4.806 |
| Couple | 10 | 27.5822 | 0.86965 | 0.87914 | 2.580 | 31.3583 | 0.90111 | 0.92346 | 3.430 |
| | 15 | 30.6577 | 0.91347 | 0.9203 | 2.681 | 35.1567 | 0.9371 | 0.95049 | 3.716 |
| | 20 | 33.0988 | 0.94778 | 0.94782 | 2.809 | 37.8 | 0.96125 | 0.97169 | 3.922 |
| | 25 | 33.8256 | 0.94957 | 0.95569 | 2.936 | 39.4134 | 0.97072 | 0.97952 | 4.147 |
| | 30 | 34.0727 | 0.9524 | 0.96171 | 3.037 | 41.2076 | 0.97959 | 0.98637 | 4.238 |
| Cameraman | 10 | 21.7303 | 0.76076 | 0.84978 | 2.070 | 30.3116 | 0.7792 | 0.93556 | 2.970 |
| | 15 | 24.872 | 0.81106 | 0.8802 | 2.218 | 31.6061 | 0.80175 | 0.95446 | 3.179 |
| | 20 | 25.0229 | 0.851 | 0.88759 | 2.295 | 32.5066 | 0.82242 | 0.96354 | 3.377 |
| | 25 | 25.289 | 0.89029 | 0.90829 | 2.426 | 38.0123 | 0.94764 | 0.97557 | 3.724 |
| | 30 | 26.6468 | 0.91999 | 0.92832 | 2.534 | 40.4888 | 0.96649 | 0.9856 | 3.818 |
| Pepper | 10 | 23.0539 | 0.79687 | 0.8315 | 2.048 | 31.0602 | 0.84417 | 0.88818 | 2.968 |
| | 15 | 24.5456 | 0.83395 | 0.87439 | 2.273 | 34.3132 | 0.90889 | 0.93283 | 3.156 |
| | 20 | 25.7144 | 0.89303 | 0.90093 | 2.290 | 37.2635 | 0.94199 | 0.96404 | 3.387 |
| | 25 | 26.5311 | 0.89389 | 0.90447 | 2.475 | 39.1598 | 0.96048 | 0.97767 | 3.600 |
| | 30 | 27.9315 | 0.93563 | 0.93786 | 2.560 | 40.7748 | 0.97119 | 0.98638 | 3.924 |
| Tree | 10 | 25.4787 | 0.84004 | 0.85797 | 2.576 | 28.1936 | 0.79808 | 0.87612 | 3.494 |
| | 15 | 27.6307 | 0.89688 | 0.90849 | 2.738 | 34.2033 | 0.90838 | 0.93295 | 3.752 |
| | 20 | 28.6342 | 0.89984 | 0.91491 | 2.824 | 37.0525 | 0.93887 | 0.95831 | 4.139 |
| | 25 | 31.0213 | 0.93606 | 0.94288 | 2.936 | 38.5149 | 0.9525 | 0.97065 | 4.439 |
| | 30 | 32.2795 | 0.94655 | 0.95309 | 3.025 | 40.0247 | 0.96254 | 0.97842 | 4.512 |
| Building | 10 | 21.4909 | 0.78388 | 0.81508 | 2.564 | 31.7135 | 0.91946 | 0.93261 | 3.562 |
| | 15 | 23.1007 | 0.81757 | 0.84871 | 2.699 | 35.7692 | 0.94988 | 0.96582 | 3.644 |
| | 20 | 24.509 | 0.84078 | 0.86457 | 2.826 | 37.7051 | 0.96485 | 0.97479 | 3.832 |
| | 25 | 25.5752 | 0.88356 | 0.8924 | 2.915 | 37.79 | 0.96457 | 0.98052 | 4.078 |
| | 30 | 26.2617 | 0.90559 | 0.90474 | 3.118 | 40.0328 | 0.9744 | 0.98708 | 4.268 |
Bold indicates the better of MSWOA–Otsu and MSWOA–Kapur on each quality metric (PSNR, SSIM, FSIM, CPU time) at each threshold level (m = 10, 15, 20, 25, 30).
Table 10. Wilcoxon rank sum test results based on multilevel thresholding methods using the Otsu method.
Each cell gives the p-value and hypothesis result h for MSWOA vs. the column algorithm.
| Image | m | WOA | PSO | SSA | GOA | MFO | MVO | DA |
|---|---|---|---|---|---|---|---|---|
| Lena | 5 | >0.05 (#), 0 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
| | 8 | >0.05 (#), 0 | >0.05 (#), 0 | >0.05 (#), 0 | <0.05 (+), 0 | >0.05 (#), 0 | >0.05 (#), 0 | <0.05 (+), 1 |
| Baboon | 5 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | >0.05 (#), 0 | >0.05 (#), 0 | <0.05 (+), 1 | <0.05 (+), 1 |
| | 8 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | >0.05 (#), 0 |
| Starfish | 5 | <0.05 (+), 1 | >0.05 (#), 0 | <0.05 (+), 1 | <0.05 (+), 1 | >0.05 (#), 0 | >0.05 (#), 0 | <0.05 (+), 1 |
| | 8 | >0.05 (#), 0 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | >0.05 (#), 0 |
| Couple | 5 | <0.05 (+), 1 | <0.05 (+), 1 | >0.05 (#), 0 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
| | 8 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
| Cameraman | 5 | <0.05 (+), 1 | >0.05 (#), 0 | <0.05 (+), 1 | <0.05 (+), 1 | >0.05 (#), 0 | <0.05 (+), 1 | <0.05 (+), 1 |
| | 8 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
| Pepper | 5 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
| | 8 | >0.05 (#), 0 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
| Tree | 5 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | >0.05 (#), 0 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
| | 8 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
| Building | 5 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
| | 8 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
‘+’ indicates that MSWOA performed significantly better than the compared algorithm. ‘#’ indicates that MSWOA’s performance was similar to or worse than that of the compared algorithm.
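The rank sum test behind Tables 10 and 11 can be sketched with the usual two-sided normal approximation. The helper below is ours and, for brevity, assumes untied samples; a full implementation would use tie-corrected average ranks:

```python
import math
from statistics import NormalDist

def rank_sum_test(a, b, alpha=0.05):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.
    Returns (p, h): h = 1 means the samples differ significantly at alpha."""
    n1, n2 = len(a), len(b)
    pooled = sorted(list(a) + list(b))
    # 1-based ranks of sample a within the pooled ordering (assumes no ties)
    r1 = sum(pooled.index(v) + 1 for v in a)
    mean = n1 * (n1 + n2 + 1) / 2                       # E[R1] under H0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)        # SD[R1] under H0
    z = (r1 - mean) / sd
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return p, int(p < alpha)
```

Applied to two runs' fitness samples, it yields exactly the (p, h) pairs reported in the tables, with h = 1 corresponding to p < 0.05.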
Table 11. Wilcoxon rank sum test results based on multilevel thresholding methods using Kapur entropy.
Each cell gives the p-value and hypothesis result h for MSWOA vs. the column algorithm.
| Image | m | WOA | PSO | SSA | GOA | MFO | MVO | DA |
|---|---|---|---|---|---|---|---|---|
| Lena | 5 | >0.05 (#), 0 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
| | 8 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
| Baboon | 5 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
| | 8 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
| Starfish | 5 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
| | 8 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
| Couple | 5 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
| | 8 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
| Cameraman | 5 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
| | 8 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
| Pepper | 5 | >0.05 (#), 0 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | >0.05 (#), 0 | >0.05 (#), 0 |
| | 8 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | >0.05 (#), 0 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
| Tree | 5 | >0.05 (#), 0 | <0.05 (+), 1 | >0.05 (#), 0 | <0.05 (+), 1 | >0.05 (#), 0 | >0.05 (#), 0 | >0.05 (#), 0 |
| | 8 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
| Building | 5 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | >0.05 (#), 0 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 |
| | 8 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | <0.05 (+), 1 | >0.05 (#), 0 | <0.05 (+), 1 | <0.05 (+), 1 |
‘+’ indicates that MSWOA performed significantly better than the compared algorithm. ‘#’ indicates that MSWOA’s performance was similar to or worse than that of the compared algorithm.

Wang, C.; Tu, C.; Wei, S.; Yan, L.; Wei, F. MSWOA: A Mixed-Strategy-Based Improved Whale Optimization Algorithm for Multilevel Thresholding Image Segmentation. Electronics 2023, 12, 2698. https://doi.org/10.3390/electronics12122698
