Article

Modified Remora Optimization Algorithm for Global Optimization and Multilevel Thresholding Image Segmentation

1 School of Computer Science and Technology, Hainan University, Haikou 570228, China
2 School of Mathematics and Statistics, Hainan Normal University, Haikou 571158, China
3 Key Laboratory of Data Science and Intelligence Education of Ministry of Education, Hainan Normal University, Haikou 571158, China
4 School of Information Engineering, Sanming University, Sanming 365004, China
5 Faculty of Computer Sciences and Informatics, Amman Arab University, Amman 11953, Jordan
6 School of Computer Science, Universiti Sains Malaysia, Pulau Pinang 11800, Malaysia
* Authors to whom correspondence should be addressed.
Mathematics 2022, 10(7), 1014; https://doi.org/10.3390/math10071014
Submission received: 9 February 2022 / Revised: 12 March 2022 / Accepted: 18 March 2022 / Published: 22 March 2022

Abstract: Image segmentation is a key stage in image processing because it simplifies the representation of the image and facilitates subsequent analysis. The multi-level thresholding image segmentation technique is considered one of the most popular methods because it is efficient and straightforward. Many related works use meta-heuristic algorithms (MAs) to determine threshold values, but they suffer from issues such as poor convergence accuracy and stagnation in local optima. Therefore, to alleviate these shortcomings, this paper presents a modified remora optimization algorithm (MROA) for global optimization and image segmentation tasks. First, we use Brownian motion to promote the exploration ability of ROA and provide a greater opportunity to find the optimal solution. Second, lens opposition-based learning is introduced to enhance the ability of search agents to jump out of local optima. To substantiate the performance of MROA, we first evaluate it on 23 benchmark functions, comparing it with seven well-known algorithms in terms of optimization accuracy, convergence speed, and statistical significance. Subsequently, we test the segmentation quality of MROA on eight grayscale images with cross-entropy as the objective function. The experimental metrics include peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and feature similarity (FSIM). A series of experimental results prove that MROA has significant advantages over the compared algorithms. Consequently, the proposed MROA is a promising method for global optimization problems and image segmentation.

1. Introduction

The image segmentation technique is a primary step in computer vision and pattern recognition for pre-processing and analyzing images in fields such as remote sensing and medicine [1,2,3,4,5]. This technique divides an image into several homogeneous regions or segments with similar characteristics according to features, color, texture, and contrast [6,7]. In the literature, there are four common types of image segmentation techniques, which can be categorized into (1) region-based techniques, (2) graph-based techniques, (3) clustering-based techniques, and (4) thresholding-based techniques [8]. Among them, the thresholding-based method has gained the most attention from researchers due to its efficiency and ease of implementation. Thresholding-based techniques can be further categorized into bi-level and multi-level thresholding [9]. As the name suggests, bi-level thresholding uses a single threshold value to segment an image into two homogeneous foreground and background areas. By contrast, multi-level thresholding divides an image into more than two regions based on pixel intensities [10,11,12]. Thus, the multi-level thresholding technique has been widely applied in various fields by researchers. However, selecting suitable threshold values is still the most challenging problem and requires further research.
To determine the optimal threshold values that segment a given image into several regions, two approaches are commonly used: Otsu’s method (maximizing the between-class variance) and Kapur’s entropy (maximizing the entropy of the classes) [13]. These methods are suitable for determining a single threshold value. However, when extended to multi-level thresholding, their fatal shortcomings are computational time and complexity. Therefore, various nature-inspired MAs have been proposed in the literature to handle these problems and have achieved excellent performance [14,15,16].
MAs are stochastic algorithms that use randomly generated search agents and specific operators, instead of gradient information, to determine optimal solutions in the search space. These operators are inspired by nature, such as swarm behavior, social behavior, physics, evolutionary rules, etc. [17,18,19,20,21,22,23,24,25]. Over recent years, various MAs have been proposed and applied to complex real-world problems. Such MAs can be grouped into three main categories: (1) swarm intelligence-based methods, (2) nature evolution-based methods, and (3) physics-based methods. The first category mainly simulates the swarm behavior of biological entities, such as birds, slime mould, or the grey wolf. Particle Swarm Optimization (PSO) [26] is one of the most popular MAs, inspired by the foraging behavior of birds. The Grey Wolf Optimizer (GWO) [27] simulates the grey wolf’s hunting behavior and leadership hierarchy. The Slime Mould Algorithm (SMA) [28] imitates the slime mould’s contraction and oscillation modes during foraging. Other popular MAs in the first category include the Whale Optimization Algorithm (WOA) [29], Cuckoo Search Algorithm (CSA) [30], Salp Swarm Algorithm (SSA) [31], Ant Colony Optimization (ACO) [32], Harris Hawks Optimization (HHO) [33], Moth Flame Optimization (MFO) [34], and Aquila Optimizer (AO) [35]. The second category mainly imitates the evolution process in nature. The Genetic Algorithm (GA) [36] is the most famous one, developed from the Darwinian law of evolution. Other popular algorithms include Evolutionary Strategy (ES) [37], Differential Evolution (DE) [38], Genetic Programming (GP) [39], Bio-geography-Based Optimizer (BBO) [40], and Evolution Strategy (ES) [41]. The physics-based methods simulate the physical laws of the universe. One of the most popular algorithms in this category is Simulated Annealing (SA) [42], which mimics the physical annealing process: it starts from a high initial temperature, and the search narrows as the temperature parameter decreases. Other algorithms established on physical principles include the Gravity Search Algorithm (GSA) [43], Gradient-Based Optimizer (GBO) [44], Sine Cosine Algorithm (SCA) [45], Golden Sine Algorithm (Gold-SA) [46], Arithmetic Optimization Algorithm (AOA) [47], Henry Gas Solubility Optimization (HGSO) [48], Atom Search Optimization (ASO) [49], Central Force Optimization (CFO) [50], and Multi-Verse Optimizer (MVO) [51]. Although many nature-inspired MAs have been proposed to solve numerical optimization problems, real-world problems continue to grow in complexity and difficulty, so more effective MAs are still needed.
Jia et al. [52] presented a new nature-inspired meta-heuristic, namely the Remora Optimization Algorithm (ROA). The main inspiration of ROA is the parasitic behavior of remora when foraging in the ocean. ROA integrates different position update rules based on different hosts. As one of the intelligent creatures in the world, the remora usually attaches to different hosts to achieve movement and foraging, so its location update rules are the same as those of the host. Moreover, the remora also attempts small movements of its own, which are used to determine whether to change hosts. The experimental results demonstrated that ROA is superior to other algorithms in terms of optimization accuracy and convergence speed. Zheng et al. [53] proposed an improved ROA, namely IROA, for solving global optimization problems. In this work, a mechanism called autonomous foraging was proposed. This mechanism gives each remora a small chance to find food randomly or according to the current food position, which improves ROA’s optimization accuracy. Then, IROA’s performance was evaluated on function optimization and five constrained engineering design problems. The experimental results demonstrated that IROA has superior performance to other selected MAs in terms of optimization accuracy and convergence speed. However, the No-Free-Lunch (NFL) theorem [54] states that no single MA can solve every optimization problem, which encourages us to develop more efficient methods and apply them in various fields such as multi-level thresholding problems.
This paper presents a modified Remora Optimization Algorithm, called MROA, for global optimization and multi-level thresholding image segmentation tasks. The motivation of this work is to alleviate ROA’s weaknesses and enhance its performance in solving image segmentation tasks. Two major improvements are contained in this work. First, Brownian motion is used to enhance ROA’s exploration ability and provide a greater opportunity to find the optimal or near-optimal solution in the search space. Second, the lens opposition-based learning strategy is used to improve the exploitation ability of ROA and help the search agents jump out of local optima. We first used 23 benchmark functions, including unimodal and multimodal types, to validate its performance.
Furthermore, we selected eight grayscale images as benchmark images, and cross-entropy is employed as the objective function to evaluate the segmentation quality of the search agents. In all tests, we compared MROA with its original version and other well-known algorithms, namely ROA, RSA, AOA, AO, SSA, SCA, and GWO. The experimental results reveal that MROA achieves significant improvements over these algorithms in segmentation precision and convergence speed. Overall, it can be concluded that MROA is an effective method for global optimization and multi-level thresholding.
The main contributions of this paper can be summarized as follows:
  • A new modified ROA, namely MROA, is proposed based on Brownian motion (BM) and lens opposition-based learning (LOBL) strategies.
  • The optimization performance of the MROA is evaluated on 23 benchmark functions.
  • MROA is applied for thresholding segmentation using the cross-entropy method.
  • The segmentation quality of MROA is validated in terms of PSNR, SSIM, FSIM, and a statistical test.
  • The performance of MROA is compared with seven well-known MAs.
The rest of this paper is organized as follows. Section 2 introduces the related work of multi-level thresholding image segmentation using MAs. Section 3 briefly describes the background knowledge. Section 4 presents the proposed MROA. The experimental results are introduced and analyzed in Section 5 and Section 6. Finally, Section 7 concludes this paper and gives future research directions.

2. Related Works

The MAs-based multi-level thresholding segmentation technique has gained much attention from scholars because of its reliable performance, small computational cost, and ease of implementation. In this field, determining the threshold values is the core of the entire segmentation process. In many application scenarios, scholars traditionally use the exhaustive method to determine the optimal threshold values. However, this method has fatal disadvantages, such as complexity, time-consuming computation, and poor threshold values. Therefore, to alleviate these shortcomings, many works demonstrate the efficiency and performance of MAs in obtaining optimal threshold values; the following are a few outstanding research works. In [55], Jia et al. presented an efficient satellite image segmentation approach, called DHHO/M, based on dynamic HHO with a mutation mechanism. This work introduces a dynamic control parameter mechanism and mutation operators to improve HHO’s performance and avoid falling into local optima. Kapur’s entropy, Otsu’s between-class variance, and Tsallis entropy were employed to determine the optimal threshold values. In the experiments, eight color satellite images and four oil pollution images were used to validate DHHO/M’s performance. The experimental results demonstrated that DHHO/M provides competitive performance in fitness evaluation, segmentation quality, and statistical tests compared with eight advanced thresholding techniques.
Ewees et al. [56] proposed a new multi-level thresholding method based on a modified artificial ecosystem-based optimization (AEO), namely AEODE. AEODE integrates differential evolution as a local search operator to overcome AEO’s shortcomings. In this way, AEODE was used to determine a set of optimal threshold values, with fuzzy entropy as the objective function. Then, AEODE was assessed on different grayscale images at six different threshold levels. The experimental results show that AEODE outperforms the others, including the original AEO and DE, in terms of fitness values, PSNR, SSIM, and FSIM.
In [57], an improved version of MPA with an opposition-based learning strategy, called MPA-OBL, was proposed to determine optimal threshold values at different threshold levels. Otsu’s between-class variance and Kapur’s entropy were employed as the objective functions in this work. The proposed method was evaluated on the CEC 2020 test suite and several benchmark images. Compared with ELSHADE-SPACMA-OBL, CMA-ES-OBL, DE-OBL, HHO-OBL, SCA-OBL, SSA-OBL, and the standard MPA, the proposed MPA-OBL technique produced better results in different evaluation measurements.
Su et al. [58] presented a variant of ABC, namely CCABC, which introduced vertical and horizontal search mechanisms to improve ABC’s optimization performance. Furthermore, CCABC was used to find appropriate threshold values in COVID-19 X-ray images based on Kapur’s entropy. The proposed method was compared with a set of advanced and classical MAs. The evaluation results show that CCABC is an excellent method in most quality measurements.
Liu et al. [59] proposed a novel multi-level thresholding segmentation technique based on a modified differential evolution algorithm, called MDE, and applied it to a breast cancer image segmentation task. In this work, the foraging behavior of slime mould is introduced to enhance DE’s optimization accuracy and convergence speed. Furthermore, three image quality metrics, including PSNR, SSIM, and FSIM, are used to evaluate its segmentation results and show its segmentation performance.
Li et al. [60] proposed a partitioned and cooperative quantum-behaved particle swarm optimization namely SCQPSO for processing stomach CT images. In this work, SCQPSO was used to determine the optimal threshold values according to Otsu’s method. Additionally, the inter-class variance of Otsu and its variance was used to show SCQPSO’s segmentation quality.
In the literature, we find that more and more scholars have presented variants of the multi-level thresholding image segmentation method that use MAs to determine optimal threshold values instead of the traditional exhaustive method. Consequently, we confirm that the meta-heuristic-based multi-level thresholding method is a promising research direction whose theory needs further study. However, as real-world problems grow in complexity and difficulty, these methods will inevitably fail to find the optimal threshold values during segmentation. Therefore, image segmentation using MAs still has considerable room for improvement to obtain better segmentation quality.

3. Background Knowledge

3.1. Remora Optimization Algorithm (ROA)

ROA [52] is a well-known bionics-based meta-heuristic algorithm inspired by the parasitic behavior of remora when foraging in the ocean. Unlike other fish, remora usually attach to hosts (humpback whales or sailfish) to complete long- and short-distance movements in the ocean. Like other MAs, ROA has three different phases: initialization, exploration, and exploitation.

3.1.1. Initialization

Like various other meta-heuristic algorithms, ROA initializes the search agents randomly in the search space, which is calculated by:
$X_i = lb + rand \times (ub - lb), \quad i \in \{1, 2, \ldots, N\}$
where rand denotes a random variable in [0, 1], ub and lb indicate the upper and lower bounds of the search space, i represents the index of the remora, and N denotes the population size.
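For illustration, the following is a minimal Python (NumPy) sketch of this random initialization, assuming the bounds are given as per-dimension vectors; it is not the authors' reference code.

```python
import numpy as np

def initialize_population(N, dim, lb, ub, rng=None):
    """Randomly place N remora (search agents) inside the box [lb, ub]."""
    rng = np.random.default_rng() if rng is None else rng
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    # X_i = lb + rand * (ub - lb), one row per search agent
    return lb + rng.random((N, dim)) * (ub - lb)
```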

3.1.2. Free Travel (Exploration Phase)

ROA achieves long-distance and small global movements, respectively, in the search space based on the free travel method. This method includes two main sub-methods: SFO strategy and experience attack.
  • SFO strategy
The sailfish is one of the fastest fish in the ocean. Thus, remora attach to them to achieve quick long-distance movement, and their position update is the same as that of the host, which is calculated by:
$X(t+1) = X_{best}(t) - \left( rand \times \frac{X_{best}(t) + X_{rand}(t)}{2} - X_{rand}(t) \right)$
where Xbest(t) indicates the global best position of the remora and Xrand(t) represents a randomly selected position.
  • Experience attack
To determine whether the remora changes the host, it needs to take a small step around it. This activity is considered an experience attack and the formula is expressed by:
$X_{att}(t+1) = X(t) + randn \times (X(t) - X_{pre}(t))$
where Xpre(t) represents the position of the previous generation and randn indicates a normally distributed random number.

3.1.3. Eat Thoughtfully (Exploitation Phase)

  • WOA strategy
When remora is attached to the humpback whale, its attack strategy is the same as the bubble-net attack strategy in WOA. The mathematical formula can be described as follows:
$X(t+1) = D \times e^{\alpha} \times \cos(2\pi\alpha) + X(t)$
$\alpha = rand \times (a - 1) + 1$
$a = -\left(1 + \frac{t}{T}\right)$
$D = \left| X_{best}(t) - X(t) \right|$
where D denotes the distance between the best position and current position. α is a random value between [−1, 1].
  • Host feeding
This method is a subdivision of the exploitation process. In this phase, the movement of the remora around the host can be thought of as a small step, which can be calculated as follows:
$X(t+1) = X(t) + A$
$A = B \times (X(t) - C \times X_{best}(t))$
$B = 2 \times V \times rand - V$
$V = 2 \times \left(1 - \frac{t}{T}\right)$
where A denotes a small movement step of the remora, which is related to the volumes of the host and the remora. The remora factor C is used to limit the position of the remora so as to distinguish between the host and the remora.
Figure 1 vividly illustrates the main process of ROA, and the pseudo-code of ROA is presented in Algorithm 1.
Algorithm 1 The pseudo-code of ROA
  • Inputs: N and T
  • Outputs: best solution
  • Initialize the positions of remora Xi (i = 1, 2, …, N);
  • While (t ≤ T)
  • Check if any search agent goes beyond the search space and amend it;
  • Calculate the fitness of each remora;
  • For each remora indexed by i
  • H(i) = round(rand);
  • If H(i) == 0 then
  • Using Equation (4) to update the position of attached whales;
  • Elseif H(i) == 1 then
  • Using Equation (2) to update the position of attached sailfishes;
  • End if
  • Make a one-step prediction by experience attack (Equation (3));
  • If f(Xatt) < f(X) then
  • X = Xatt;
  • H(i) = round(rand);
  • Else
  • Using Equation (8) to update the position of search agents;
  • End if
  • End for
  • End While
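To make the control flow of Algorithm 1 concrete, the following is a minimal Python (NumPy) sketch of one possible implementation. It is a reading of the pseudo-code above, not the authors' reference code; in particular, how the host update interacts with the host-feeding step when the experience attack fails is an interpretation, and `fitness`, `lb`, and `ub` are assumed inputs.

```python
import numpy as np

def roa(fitness, lb, ub, N=30, T=500, C=0.1, rng=None):
    """A compact sketch of Algorithm 1; variable names follow the text above."""
    rng = np.random.default_rng() if rng is None else rng
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    X = lb + rng.random((N, dim)) * (ub - lb)            # random initialization
    X_pre = X.copy()                                      # positions of the previous generation
    H = rng.integers(0, 2, N)                             # host flag of each remora
    best = min(X, key=fitness).copy()

    for t in range(T):
        a = -(1 + t / T)
        V = 2 * (1 - t / T)
        for i in range(N):
            x_rand = X[rng.integers(N)]
            if H[i] == 0:                                 # WOA strategy (attached to a whale)
                alpha = rng.random() * (a - 1) + 1
                D = np.abs(best - X[i])
                X_host = D * np.exp(alpha) * np.cos(2 * np.pi * alpha) + X[i]
            else:                                         # SFO strategy (attached to a sailfish)
                X_host = best - (rng.random() * (best + x_rand) / 2 - x_rand)
            X_host = np.clip(X_host, lb, ub)
            # Experience attack: one small exploratory step around the current position
            X_att = np.clip(X[i] + rng.standard_normal(dim) * (X[i] - X_pre[i]), lb, ub)
            if fitness(X_att) < fitness(X_host):
                X_new, H[i] = X_att, rng.integers(0, 2)
            else:                                         # host feeding (small step toward the host)
                B = 2 * V * rng.random() - V
                X_new = np.clip(X_host + B * (X_host - C * best), lb, ub)
            X_pre[i], X[i] = X[i].copy(), X_new
            if fitness(X[i]) < fitness(best):
                best = X[i].copy()
    return best, fitness(best)
```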

3.2. Brownian Motion

Brownian motion (BM) is a stochastic process in which the step length is drawn from a probability density function defined by a normal distribution with zero mean (μ = 0) and unit variance (σ2 = 1) [61,62]. The probability density function at point x for BM is calculated via:
$f_B(x; \mu, \sigma) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}{2}\right)$
where x indicates a point following this motion; the distribution and the 2D and 3D trajectories of BM are shown in Figure 2. As these figures show, BM can cover areas of the domain with more uniform and controlled steps in the search space. On the other hand, BM's trajectory can also reach and explore distant areas of the neighborhood.
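As a small illustration, a minimal Python (NumPy) sketch of sampling BM steps and a trajectory, assuming steps are drawn independently per dimension:

```python
import numpy as np

def brownian_steps(n_steps, dim=2, rng=None):
    """Draw Brownian-motion steps from a standard normal distribution (zero mean, unit variance)."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.standard_normal((n_steps, dim))

# A 2D trajectory such as the one in Figure 2 is the cumulative sum of the steps.
trajectory = np.cumsum(brownian_steps(1000), axis=0)
```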

3.3. Lens Opposition-Based Learning

Lens opposition-based learning (LOBL) [63] is a novel strategy that integrates both opposition-based learning (OBL) [64] strategy and lens imaging principle to enhance the searchability of MAs. The LOBL strategy shows more effective performance during the optimization process than OBL when finding optimal or near-optimal solutions in the search space. The mathematical principle of LOBL is described as follows:
Lens imaging theory states that when the distance from an object to the lens exceeds twice the focal length, an inverted and shrunken real image is formed between one and two focal lengths on the other side of the lens. As shown in Figure 3, O represents the midpoint of the interval [lb, ub] and the y-axis is treated as a convex lens. In addition, an object of height h is located at point x, beyond twice the lens's focal length. Through lens imaging, the coordinates of the image vertex become (x*, h*). The formula is listed as follows:
$\frac{(lb + ub)/2 - x}{x^* - (lb + ub)/2} = \frac{h}{h^*}$
Let k = h/h*; then, Equation (14) can be simplified as follows:
$x^* = \frac{lb + ub}{2} + \frac{lb + ub}{2k} - \frac{x}{k}$
In general, optimization tasks are multi-dimensional, so Equation (15) can be extended as follows:
$x_i^* = \frac{lb_i + ub_i}{2} + \frac{lb_i + ub_i}{2k} - \frac{x_i}{k}$
where $x_i^*$ denotes the opposite point of $x_i$ in the i-th dimension. Interestingly, if k = 1, Equation (16) is the same as the OBL strategy. Therefore, LOBL can be considered a variant of OBL. The difference is that LOBL uses an adjustable parameter k to realize dynamic search behavior when solving optimization tasks, which further improves the probability of the algorithm escaping from local optima.
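A minimal sketch of this mapping in Python (NumPy) is shown below; the bounds may be per-dimension vectors, and the default k = 10,000 follows the MROA setting reported later in Table 2. In MROA, the lens-opposite candidate is kept only if its fitness is better than the original solution (greedy selection, as in Algorithm 2).

```python
import numpy as np

def lobl_opposite(x, lb, ub, k=10_000):
    """Lens-opposition point of a candidate solution x; classical OBL is recovered with k = 1."""
    x, lb, ub = (np.asarray(a, dtype=float) for a in (x, lb, ub))
    return (lb + ub) / 2 + (lb + ub) / (2 * k) - x / k
```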

3.4. Cross Entropy

In 1968, cross-entropy was proposed by Kullback [65]. As an essential concept in Shannon’s information theory, cross-entropy is mainly used to measure the difference in information between two probability distributions [66,67]. Let P = {p1, p2, …, pn} and Q = {q1, q2, …, qn} be two probability distributions defined over the same set of values. The cross-entropy between P and Q can be calculated as follows:
$D(P, Q) = \sum_{i=1}^{n} p_i \log \frac{p_i}{q_i}$
The minimum cross-entropy algorithm determines the threshold value by minimizing the cross-entropy between the original image and the thresholded image [68]. The lower the cross-entropy value, the lower the uncertainty and the greater the homogeneity. Let the original image be I, and h(i), i = 1, 2, …, L, is the corresponding histogram with L being the number of grey levels. Then the thresholded image Ith is created by the threshold value th using the following equations:
$I_{th}(x, y) = \begin{cases} \mu(1, th), & \text{if } I(x, y) < th \\ \mu(th, L+1), & \text{if } I(x, y) \geq th \end{cases}$
$\mu(a, b) = \frac{\sum_{i=a}^{b-1} i\, h(i)}{\sum_{i=a}^{b-1} h(i)}$
The thresholded image is generated based on Equation (18); then, we can calculate the cross-entropy by rewriting Equation (17) as an objective function (also called fitness), which is listed below:
$f_{cross}(th) = \sum_{i=1}^{th-1} i\, h(i) \log\left(\frac{i}{\mu(1, th)}\right) + \sum_{i=th}^{L} i\, h(i) \log\left(\frac{i}{\mu(th, L+1)}\right)$
The above objective function considers a single threshold value to segment a given image, i.e., bi-level thresholding. It can also be extended to a multi-level approach, called multi-level thresholding image segmentation. Thus, Equation (19) can be expressed as:
$f_{cross}(th) = \sum_{i=1}^{L} i\, h(i) \log(i) - \sum_{i=1}^{nt} H_i$
where th = [th1, th2, …, thnt] indicates an array containing nt different threshold values. Hi is defined as follows:
$H_1 = \sum_{i=1}^{th_1 - 1} i\, h(i) \log\left(\mu(1, th_1)\right)$
$H_k = \sum_{i=th_{k-1}}^{th_k - 1} i\, h(i) \log\left(\mu(th_{k-1}, th_k)\right), \quad 1 < k < nt$
$H_{nt} = \sum_{i=th_{nt}}^{L} i\, h(i) \log\left(\mu(th_{nt}, L+1)\right)$
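A minimal Python (NumPy) sketch of this multi-level objective is given below. The 1-based indexing convention for grey levels and the guards against empty regions are implementation assumptions, not part of the original formulation.

```python
import numpy as np

def cross_entropy_fitness(thresholds, hist):
    """Multi-level minimum cross-entropy objective (the f_cross and H_k expressions above).

    `hist` is the grey-level histogram h(1..L); grey levels are treated as
    1-based, so hist[i-1] corresponds to h(i). Lower values are better."""
    L = len(hist)
    i = np.arange(1, L + 1, dtype=float)
    h = np.asarray(hist, dtype=float)

    def mu(a, b):
        # Mean grey level of the region covering levels a .. b-1
        s = slice(a - 1, b - 1)
        return np.sum(i[s] * h[s]) / max(np.sum(h[s]), 1e-12)

    th = sorted(int(t) for t in thresholds)
    bounds = [1] + th + [L + 1]
    total = np.sum(i * h * np.log(i))
    H_sum = 0.0
    for a, b in zip(bounds[:-1], bounds[1:]):
        s = slice(a - 1, b - 1)
        H_sum += np.sum(i[s] * h[s]) * np.log(max(mu(a, b), 1e-12))
    return total - H_sum
```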

4. The Proposed Algorithm

4.1. The Details of MROA

Although ROA is easy to implement, widely applicable to optimization tasks, and provides more reliable results than many other MAs, it has insufficient search ability (exploration and exploitation) when solving complex optimization problems; for example, it easily stagnates in local optima and has a slow convergence speed. Motivated by these considerations, this paper proposes a modified ROA algorithm called MROA for global optimization and multi-level thresholding image segmentation. Two major strategies are used for the modification. First, Brownian motion is introduced to improve the exploration ability and provide a greater opportunity to find the optimal or near-optimal solution in the search space. Second, LOBL is used to improve the exploitation ability and accelerate the convergence rate of ROA. Therefore, the exploration phase of ROA is modified using Brownian motion, which is expressed as:
$X(t+1) = Brownian \times X_{best}(t) - \left( rand \times \frac{X_{best}(t) + X_{rand}(t)}{2} - X_{rand}(t) \right)$
where Brownian indicates Brownian motion.
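A minimal sketch of how Equation (24) can be implemented is shown below. Whether Brownian is a scalar or a per-dimension vector is not stated explicitly in the text, so the per-dimension choice here is an assumption.

```python
import numpy as np

def mroa_exploration_step(x_best, x_rand, rng=None):
    """MROA exploration update (Equation (24)): the SFO move scaled by Brownian motion."""
    rng = np.random.default_rng() if rng is None else rng
    brownian = rng.standard_normal(x_best.shape)   # per-dimension N(0, 1) step (an assumption)
    return brownian * x_best - (rng.random() * (x_best + x_rand) / 2 - x_rand)
```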
The pseudo-code of MROA is presented in Algorithm 2, and Figure 4 illustrates the flowchart of MROA.
Algorithm 2 The pseudo-code of MROA
  • Inputs: N and T
  • Outputs: best solution
  • Initialize the positions of remora Xi (i = 1,2, …, N);
  • While (t ≤ T)
  •   Check if any search agent goes beyond the search space and amend it;
  •   Apply LOBL strategy to generate lens-opposite solution by Equation (15);
  •   Evaluate candidate solution and select the best position by greedy strategy.
  •   For each remora indexed by i
  •     If H(i) == 0 then
  •       Using Equation (4) to update the position of attached whales;
  •     Elseif H(i) == 1 then
  •       Using Equation (24) to update the position of attached sailfishes;
  •     End if
  •     If f(Xatt) < f(X) then
  •       X = Xatt;
  •       H(i) = round(rand);
  •     Else
  •       Using Equation (8) to update the position of search agents;
  •     End if
  •   End for
  • End While

4.2. The Proposed MROA for Solving Multi-Level Thresholding Image Segmentation Task

In this subsection, the MROA is employed to solve the multi-level thresholding image segmentation task and cross-entropy is used to determine optimal threshold values. We present the different steps of this method as below:
Step 1: Read image I and convert it to grayscale Ig, calculate the histogram h of Ig.
Step 2: Initialize the parameters of MROA: N, T, C, k, and random array H.
Step 3: Initialize the location of search agents with N population size and nTh dimensions.
Step 4: Check whether any search agent goes beyond the search space and amend it.
Step 5: Evaluate the cross-entropy with Equation (20) for each search agent.
Step 6: Apply LOBL strategy to generate lens opposite solution and select the best one between the original and its opposite solution according to the fitness value.
Step 7: Select the appropriate host to update the location of the search agent according to the value of H.
Step 8: The iteration index t is increased by 1; if the stop criterion (t > T) is satisfied, output the best threshold values, otherwise jump to Step 5.
Step 9: Generate the segmented image Seg with the best threshold values obtained by MROA.
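To make Steps 1 and 9 concrete, the following is a minimal Python (NumPy) sketch. Replacing each region by its mean grey level is an assumption consistent with the bi-level thresholded-image definition above; other representative values could equally be used.

```python
import numpy as np

def segment_with_thresholds(gray, thresholds):
    """Step 9: build the segmented image from the best threshold values."""
    gray = np.asarray(gray)
    th = np.sort(np.asarray(thresholds))
    labels = np.digitize(gray, th)                 # region index 0..nTh for every pixel
    seg = np.zeros_like(gray, dtype=float)
    for r in range(len(th) + 1):
        mask = labels == r
        if mask.any():
            seg[mask] = gray[mask].mean()          # replace the region by its mean intensity
    return seg.astype(gray.dtype)

# Step 1: the histogram fed to the cross-entropy objective
# hist, _ = np.histogram(gray, bins=256, range=(0, 256))
```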

4.3. Computation Complexity of MROA

In the initialization phase, MROA produces the search agents randomly in the search space, so the computational complexity of this phase is O(N × D), where N denotes the population size and D denotes the dimension size. Afterward, MROA evaluates each individual's fitness during the whole iteration process with complexity O(T × N × D), where T indicates the number of iterations. Finally, for position updating, the complexity is also O(T × N × D). In summary, the total computational complexity of MROA is O(T × N × D), which is the same as that of the original ROA.

5. Experimental Results for Global Optimization

This section evaluates the optimization performance of the proposed MROA using 23 benchmark functions. First, the definitions of the 23 benchmark functions are introduced. Second, the experimental setup and comparison group including other well-known MAs are described in detail. Finally, the experimental results are analyzed and discussed.

5.1. Definition of 23 Benchmark Functions

To evaluate the search ability of the proposed MROA, 23 benchmark functions are introduced in this paper [15]. These functions are categorized into: (1) unimodal functions (F1–F7), (2) multimodal functions (F8–F14), and (3) fixed-dimension multimodal functions (F15–F23). These functions are defined in Table 1, where D, UM, and MM indicate the dimension, unimodal, and multimodal functions, respectively. Figure 5 shows the visualization of these functions. The unimodal functions have only one global optimum, which makes them suitable for evaluating the exploitation ability of MAs. By contrast, the multimodal and fixed-dimension multimodal functions have many local optima and only one global optimum, which makes them suitable for evaluating the exploration ability of MAs and their ability to escape from local optima.

5.2. Experimental Setup

As stated above, the 23 benchmark functions are utilized to evaluate the optimization performance of the proposed work. To show the representativeness of the experimental results, the proposed algorithm MROA is compared with the original ROA [52] and other well-known MAs, including RSA [69], AOA [47], AO [35], SSA [31], SCA [45], and GWO [27]. For a fair evaluation, we set the maximum number of iterations T = 500, the population size N = 30, and the dimension size D = 30. Moreover, all the tests are run 30 times independently. The best results are highlighted in bold. Table 2 presents the parameter settings of each algorithm, and the details of these algorithms are listed as follows:
  • ROA [52]: simulates the remora's parasitic behavior on hosts, such as whales and sailfish, to complete predation.
  • RSA [69]: mimics the encircling and hunting mechanism of a crocodile swarm during foraging.
  • AOA [47]: simulates the behavior of arithmetic operators in nature, which include multiplication (×), division (÷), addition (+), subtraction (−).
  • AO [35]: inspired by the different hunting behavior of Aquila hawks in nature.
  • SSA [31]: simulates the swarming behavior of salps during moving and foraging in the ocean.
  • SCA [45]: simulates the common mathematical models, sine and cosine functions.
  • GWO [27]: mimics the hunting strategy and leadership hierarchy of gray wolves.
Table 2. Parameter settings for each algorithm.
Algorithm | Parameters
MROA | c = 0.1; k = 10,000
ROA [52] | c = 0.1
RSA [69] | α = 0.1; β = 0.1
AOA [47] | α = 5; μ = 0.5
AO [35] | U = 0.00565; c = 10; ω = 0.005; α = 0.1; δ = 0.1
SSA [31] | c1 = [1, 0]; c2 ∈ [0, 1]; c3 ∈ [0, 1]
SCA [45] | A = 2
GWO [27] | a = [2, 0]

5.3. Statistical Results on 23 Benchmark Functions

This subsection compares MROA with the seven compared algorithms on the 23 benchmark functions in terms of the mean value (Mean) and standard deviation (Std). The experimental results are given in Table 3. By observing this table, we can see that MROA has the smallest Mean and Std values on most functions. Specifically, for F1–F4, MROA obtains the theoretical optimal solution, whereas ROA only finds a near-optimal solution, which shows that the LOBL strategy effectively enhances the exploitation ability of MROA and that the lens-opposite solution increases the diversity of the population, helping the search agents jump out of local optima. For the MM functions, MROA also provides more competitive results than the seven well-known algorithms; the Brownian motion strategy improves ROA's exploration capability and provides search agents with a greater opportunity to find the optimal or near-optimal solution in the search space. For F8, although MROA only finds a near-optimal solution, its convergence accuracy is still better than that of the others. For F9, F10, and F11, MROA obtains the theoretical optimal solution. For F12, F13, and F14, the accuracy of MROA is weaker than that of AO and SSA, respectively. For F15–F23, MROA also provides higher convergence accuracy than the others. Moreover, Figure 6 shows the ranking of MROA and the competitor algorithms on the UM and MM functions. We can see from the results that the proposed MROA outperforms on most benchmark functions.
Furthermore, since MAs are stochastic algorithms, we use the Wilcoxon rank-sum test to supplement the statistical analysis and evaluate the statistical significance of the results [70]. The Wilcoxon rank-sum test is a non-parametric test used to compare the results of each pair of algorithms. The test is based on two hypotheses: the null hypothesis, which states that there is no difference between the two samples, and the alternative hypothesis, which states that there is a significant difference between them. We set the significance level at 5%: if the p-value is higher than 0.05, the null hypothesis cannot be rejected and the difference is not significant; if the p-value is less than 0.05, the null hypothesis is rejected and there is a significant difference. Furthermore, NaN denotes that there is no difference between the two samples. Table 4 shows the p-values obtained using the Wilcoxon rank-sum test, calculated from the fitness values between MROA and each of the included algorithms, presenting seven different pairs: MROA vs. ROA, MROA vs. RSA, MROA vs. AOA, MROA vs. AO, MROA vs. SSA, MROA vs. SCA, and MROA vs. GWO. This table shows that the results of MROA are statistically significant for most benchmark functions, which indicates that the performance of MROA is not random. Consequently, based on the above analysis, we conclude that MROA outperforms the other algorithms on most benchmark functions in terms of convergence accuracy and statistical significance.
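For reference, this pairwise test can be computed with SciPy as in the following sketch; the fitness arrays here are synthetic placeholders, not the paper's data.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Placeholder samples standing in for the 30 independent best-fitness values per algorithm.
mroa_runs = rng.normal(loc=0.01, scale=0.005, size=30)
roa_runs = rng.normal(loc=0.05, scale=0.010, size=30)

stat, p_value = ranksums(mroa_runs, roa_runs)
print(f"p = {p_value:.4g}; significant at the 5% level: {p_value < 0.05}")
```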

5.4. Boxplot Behavior Analysis

To further validate the performance of the proposed MROA, Figure 7 presents the boxplot behavior of MROA and the other MAs for some functions, including UM and MM types. Since boxplots indicate the data distribution, they can describe the consistency of the results. The boxplots of MROA are very narrow compared with those of the other algorithms and maintain the lowest values for most functions. It is also observed that ROA outperforms the other comparison algorithms only on F6, F8, and F15; elsewhere, its data distribution is mediocre.

5.5. Convergence Behavior Analysis

This subsection analyzes the convergence behavior of the proposed MROA and the compared algorithms. The convergence curves of these algorithms on the 23 benchmark functions are shown in Figure 8. From this figure, we can conclude that MROA outperforms the other well-known MAs in terms of convergence speed and optimization accuracy. For the unimodal functions F1, F2, and F4, MROA achieves the fastest convergence speed and obtains the theoretical optimal solutions. However, for F6, the performance of MROA is not excellent and it may be trapped in a local optimum. For F7, although MROA cannot obtain the theoretical optimal solution, it finds a near-optimal solution and shows a faster convergence speed than the others. For the MM functions, MROA also shows better optimization performance than the other popular methods. For F8, MROA has a significant convergence capability that exceeds the ROA and SSA methods. For F9 and F11, MROA finds the theoretical optimal solution and has a faster convergence speed than the others. We conclude that MROA has excellent convergence ability on the UM and MM functions.

6. Experimental Results for Multi-Level Thresholding Image Segmentation

This section uses MROA to minimize the cross-entropy for solving multi-level thresholding image segmentation tasks. First, we select eight images from the Berkeley segmentation dataset [71] as the benchmark images and describe the experimental setup in detail. Second, three image measurements, including PSNR, SSIM, and FSIM, are introduced to measure the segmentation performance (quality, consistency, and accuracy) of MROA and the other algorithms. Finally, the experimental results are discussed and analyzed.

6.1. Berkeley Segmentation Dataset

The proposed MROA and the other algorithms are evaluated on eight images from the Berkeley segmentation dataset [71]. All eight benchmark images are grayscale. These images and their histograms are presented in Figure 9. These images were selected only to test the algorithms' segmentation performance and quality.

6.2. Experimental Setup

The proposed MROA is compared with seven well-known algorithms: ROA [52], RSA [69], AOA [47], AO [35], SSA [31], SCA [45], and GWO [27]. The parameter settings of these algorithms are described in Section 5.2. For a fair evaluation, the maximum number of iterations is T = 500, the population size is N = 30, and all the algorithms were run 30 times independently. The best values are highlighted in bold. All the images are segmented using different numbers of threshold values, nTh = [4, 8, 12, 16].

6.3. Segmentation Quality Evaluation Measurements

6.3.1. Peak Signal-to-Noise Ratio (PSNR)

PSNR is used to calculate the ratio between the maximum possible signal power and the distorted noise power that affects the quality of its representation. The signal is regarded as the original data, and the noise is the error caused by segmentation. The PSNR is the most commonly used quality assessment technique to compare an original image and its segmented images using root mean square error (RMSE) per pixel. A higher PSNR indicates more similarity between the images, which is reflected in a better segmentation process [71,72]. It can be described as follows:
$PSNR = 20 \log_{10}\left(\frac{255}{RMSE}\right)$
$RMSE = \sqrt{\frac{\sum_{i=1}^{M}\sum_{j=1}^{N} \left(I_g(i,j) - Seg(i,j)\right)^2}{M \times N}}$
where Ig and Seg denote the original and segmented image, respectively. M and N are the sizes of the image.
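A minimal Python (NumPy) sketch of this computation, assuming 8-bit grayscale inputs, is shown below.

```python
import numpy as np

def psnr(original, segmented):
    """PSNR (in dB) between the original and segmented grayscale images, via per-pixel RMSE."""
    diff = original.astype(float) - segmented.astype(float)
    rmse = np.sqrt(np.mean(diff ** 2))
    return 20 * np.log10(255.0 / max(rmse, 1e-12))
```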

6.3.2. Structural Similarity (SSIM)

The SSIM is used to measure the degree of distortion of an image, and it can also measure the similarity between the original and segmented images from three perspectives: luminance, contrast, and structural content. Unlike the mean-square error (MSE) and PSNR, which measure absolute error, SSIM is more in line with the intuitive perception of the human eye [72,73]. SSIM is computed by the following equation:
$SSIM(I_g, Seg) = \frac{(2\mu_{I_g}\mu_{Seg} + c_1)(2\sigma_{I_g, Seg} + c_2)}{(\mu_{I_g}^2 + \mu_{Seg}^2 + c_1)(\sigma_{I_g}^2 + \sigma_{Seg}^2 + c_2)}$
where μIg and μSeg indicate the mean intensities of the original and the segmented image, respectively; σIg and σSeg are the standard deviations of the original and segmented images; and σIg,Seg is the covariance between the original and segmented images. c1 and c2 are two constants equal to 0.065.

6.3.3. Feature Similarity (FSIM)

FSIM is a variant of SSIM that considers low-level features such as the gradient magnitude and phase congruency to measure the similarity of two images [74,75]. It can be described as follows:
$FSIM = \frac{\sum_{\omega \in \Omega} S_L(\omega)\, PC_m(\omega)}{\sum_{\omega \in \Omega} PC_m(\omega)}$
$S_L(\omega) = S_{PC}(\omega)\, S_G(\omega)$
$S_{PC}(\omega) = \frac{2\, PC_1(\omega)\, PC_2(\omega) + T_1}{PC_1^2(\omega) + PC_2^2(\omega) + T_1}$
$S_G(\omega) = \frac{2\, G_1(\omega)\, G_2(\omega) + T_2}{G_1^2(\omega) + G_2^2(\omega) + T_2}$
where Ω indicates the entire domain of the image. T1 and T2 are constants equal to 0.85 and 160, respectively. G indicates the gradient magnitude of an image, and PC denotes the phase congruency.
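In practice, PSNR and SSIM can be computed with scikit-image as sketched below (this uses the library's default SSIM constants rather than the c1 = c2 = 0.065 values quoted above); FSIM has no scikit-image implementation and typically requires separate code based on phase-congruency and gradient-magnitude maps.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_metrics(original, segmented):
    """PSNR and SSIM between an original 8-bit grayscale image and its segmented version."""
    original = np.asarray(original, dtype=np.uint8)
    segmented = np.asarray(segmented, dtype=np.uint8)
    psnr_val = peak_signal_noise_ratio(original, segmented, data_range=255)
    ssim_val = structural_similarity(original, segmented, data_range=255)
    return psnr_val, ssim_val
```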

6.4. Experimental Results Analysis of Benchmark Images

We compared MROA with the other well-known algorithms for solving multi-level thresholding image segmentation tasks in this subsection. Cross-entropy is used as the objective function. Table A1 (in Appendix A) lists the best threshold values obtained by the algorithms at different threshold levels. Table A2 contains the segmented images obtained by the proposed MROA at nTh = [4, 8, 12, 16]. Table A3, Table A4, Table A5 and Table A6 show the mean and standard deviation values for fitness, PSNR, SSIM, and FSIM obtained by the algorithms, respectively. Higher mean values indicate greater optimization accuracy and efficiency of an algorithm.
On the other hand, the smaller the standard deviation values, the more stable the algorithm. Table A7 shows the Wilcoxon rank-sum results for fitness; the significance level is the same as in Section 5.3. Moreover, Figure 10, Figure 11, Figure 12 and Figure 13 show the summarized average results obtained by the algorithms in terms of fitness, PSNR, SSIM, and FSIM, respectively. Figure 14 summarizes the best cases in terms of fitness, PSNR, SSIM, and FSIM.
It can be seen from Table A1 that, in most cases, when the threshold level is 4 or 8, similar values are obtained by the algorithms, especially for ROA, AO, SSA, and GWO. When the threshold level is 12 or 16, the difference is significant. According to Figure 10 and Table A3, the proposed work outperforms the others in terms of average fitness and the summarized results. We conclude that MROA achieves the smallest fitness value in most cases at threshold levels 8, 12, and 16.
Table A4 and Figure 11 show the mean PSNR values and a summary obtained by the algorithms at different threshold levels, respectively. PSNR evaluates the quality of the segmented image relative to the original. From this figure and table, we can see that MROA has a significant advantage over ROA, AO, SSA, and GWO in most cases. RSA, AOA, and SCA do not show excellent PSNR values. In the case of MROA, compared with the other algorithms, as shown in Table A4, the PSNR values obtained for image 1, image 6, and image 8 present higher results at all levels. While segmenting image 2, image 3, image 4, and image 7, MROA outperforms the others at three different levels.
Table A5 and Figure 12 indicate the mean SSIM values and a summary obtained by the algorithms at different threshold levels, respectively. SSIM measures the similarity between two images. It can be seen that MROA outperforms the others in terms of SSIM values. Specifically, for image 4, image 5, and image 8, MROA produces higher results at all levels. For image 1, image 2, image 6, and image 7, MROA outperforms the others at three levels.
Table A6 and Figure 13 present the mean FSIM values and a summary obtained by the algorithms at different threshold levels, respectively. FSIM is a variant of SSIM used to evaluate the feature similarity of two images. We notice from these results that MROA obtains more competitive results than the others. MROA produces better results at all threshold levels for image 1, image 2, image 5, and image 8. For image 4, image 6, and image 7, MROA outperforms the others at three levels. For image 3, MROA only provides high accuracy at two levels.
Figure 14 summarizes the number of best cases obtained by algorithms in terms of fitness, PSNR, SSIM, and FSIM. According to this figure, MROA is significantly improved upon its original. AO and GWO are ranked second and third, respectively.
Table A7 shows the p-values obtained using the Wilcoxon rank-sum test with a 5% significance level. According to this table, we can see that MROA produces statistically significant results compared with the other algorithms. Especially when compared with RSA, AOA, and SCA, MROA shows a significant difference, which indicates that MROA has been improved considerably.
Table A8 shows the CPU time of each algorithm under different thresholds. As can be observed, in most cases a shorter time is obtained by RSA, AOA, and SCA. However, as shown in Table A3, Table A4, Table A5 and Table A6, RSA, AOA, and SCA may not produce threshold values that segment the given images well. Although the proposed MROA increases the CPU time over the original ROA when segmenting images, considering the NFL theorem, it is acceptable to pay some additional CPU time to obtain a better solution.
Overall, the proposed MROA obtains better fitness, PSNR, SSIM, and FSIM values for most images at different levels, showing that the MROA-based multi-level thresholding segmentation technique achieves better evaluation results and can therefore better segment complex grayscale images. First, it significantly improves upon the basic version of ROA. Second, it also shows significantly superior performance in terms of fitness, PSNR, SSIM, and FSIM compared with the other selected well-known algorithms. It is an excellent approach for solving multi-level thresholding image segmentation: the BM and LOBL strategies significantly enhance the performance of ROA in determining the optimal threshold values, providing excellent search ability and the capacity to jump out of local optima. In the future, we will apply it to other datasets such as medical images and remote sensing images.

7. Conclusions and Future Works

This paper introduces a modified remora optimization algorithm, namely MROA, for global optimization and multi-level thresholding image segmentation tasks. The original ROA has some serious issues, such as premature convergence and easy stagnation in local optima, which limit its ability to solve complex real-world problems. In this work, Brownian motion first improves ROA's exploration ability and provides a higher opportunity to find the optimal or near-optimal solution in the search space. Second, lens opposition-based learning enhances the exploitation capability and helps the search agents jump out of local optima.
We first evaluate the optimization performance of the proposed MROA using 23 benchmark functions. The experimental results demonstrate that MROA has broader search ability and a greater opportunity to obtain high-quality solutions than the original ROA. In addition, based on the analysis and comparison of MROA with other well-known MAs, it can be seen that MROA also has better overall ability than its peers in terms of optimization accuracy, convergence speed, and statistical significance. Therefore, we consider MROA an excellent meta-heuristic for function optimization.
Subsequently, MROA is applied to solve multi-level thresholding image segmentation tasks and obtain optimal threshold values using cross-entropy. The performance of the proposed work is also compared with its peers. Experimental evaluation metrics, including PSNR, SSIM, and FSIM, are used to test the segmented image quality. Furthermore, a statistical test is used to evaluate the significant difference between each pair of methods. The experimental results proved the following: (1) MROA significantly improves the original ROA's performance on the image thresholding segmentation task. (2) MROA produces excellent results in terms of PSNR, SSIM, and FSIM compared with the other compared MAs. (3) The Wilcoxon rank-sum results indicate that MROA shows significant performance differences compared with the other selected MAs. Finally, we conclude that the proposed MROA can generate high-quality segmented images, outperforms the other selected MAs in terms of segmentation accuracy, and is more stable and promising.
While the proposed MROA is a significant improvement over the original ROA and shows excellent results compared with the other selected MAs in terms of benchmark function optimization and image segmentation, its CPU time also increases when solving the image segmentation task. In the future, we will: (1) reduce the CPU time without degrading MROA's performance, e.g., by designing a new effective strategy or hybridizing the proposed method with other MAs; (2) extend the image dataset to include medical images and remote sensing images and increase the number of threshold values to further prove the performance of MROA; (3) since MROA is a high-performance optimizer, apply it in other fields such as feature selection, engineering design problems, and maximum power point tracking.

Author Contributions

Q.L., Conceptualization, methodology, software, formal analysis, investigation, data curation, visualization, writing-original draft preparation, funding acquisition. N.L., conceptualization, methodology, resources, review and editing, supervision, funding acquisition. H.J., conceptualization, supervision, methodology, review and editing, funding acquisition. Q.Q., supervision, methodology, review and editing, funding acquisition. L.A., review and editing, supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Innovative Research Project for Graduate Students of Hainan Province under grant No. Qhys2021-190, the Natural Science Foundation Project of Hainan Province for High Level Talents under grant No. 621RC511, the National Natural Science Foundation of China under grant No. 11861030, the National Key Research and Development Program of China under grant No. 2018YFB1404400, the Natural Science Foundation of Hainan Province under grant No. 2019RC176, the Natural Science Foundation of Fujian Province under grant No. 2021J011128, the Sanming University National Natural Science Foundation Breeding Project under grant No. PYT2105, and the Sanming University Introduces High Level Talents to Start Scientific Research Funding Support Project under grant No. 20YG14.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding authors.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. The best threshold values obtained by algorithms over all images.
Image | nTh | MROA | ROA | RSA | AOA | AO | SSA | SCA | GWO
Image 1 | 4 | 48 84 123 176 | 48 84 123 176 | 47 72 113 158 | 60 88 144 210 | 48 84 123 176 | 48 84 123 176 | 45 76 119 167 | 48 84 123 176
8 | 34 52 73 94
115 138 164 201
34 52 73 94
115 138 164 201
6 8 28 42 61
104 132 195
29 54 80 93
102 138 211 243
34 52 73 94
115 138 164 201
35 53 74 95
116 139 166 202
37 54 69 100
116 131 163 191
34 52 73 94
115 138 164 201
12 | 29 40 53 68
82 96 111 126
143 162 187 216
30 41 54 68
83 97 111 126
143 162 187 216
39 59 63 82
88 101 103 115
144 175 199 225
30 39 43 48
77 99 111 125
159 199 210 245
29 40 53 68
83 97 111 126
142 162 187 216
29 40 53 68
82 96 111 127
144 164 190 218
1 9 26 42
51 63 84 102
113 138 168 191
23 33 45 59
74 90 105 121
138 157 182 212
16 | 27 35 44 54
65 76 87 97
108 119 131 144
159 177 199 223
13 28 37 47
58 70 82 93
105 117 130 143
158 176 198 222
12 12 13 22
31 42 59 74
82 94 105 115
124 150 173 209
12 27 41 53
67 71 79 92
109 120 137 139
149 167 191 254
27 35 44 54
65 76 87 98
110 122 136 148
162 179 200 223
26 34 43 54
66 78 90 102
114 127 141 156
173 193 213 232
15 22 23 33
46 61 73 92
110 119 140 161
181 212 227 231
2 3 28 37
47 58 70 83
95 108 121 135
151 170 194 220
Image 2 | 4 | 58 88 141 196 | 58 88 141 196 | 43 75 126 175 | 64 81 144 193 | 58 88 141 196 | 58 88 141 196 | 55 87 144 197 | 58 88 141 196
8 | 47 61 78 99
126 162 193 220
46 60 76 98
124 157 191 221
46 72 97 113
118 160 200 235
53 79 122 166
174 192 200 219
47 62 81 103
130 165 193 218
46 60 77 98
125 161 192 219
10 48 61 84
106 149 185 207
47 61 78 99
126 162 193 220
12 | 43 53 63 75
89 104 122 146
170 190 207 231
43 53 63 75
90 105 124 148
172 190 208 231
9 32 40 44
58 76 92 104
132 158 200 215
13 27 42 61
78 94 117 141
143 164 198 218
43 53 63 75
87 102 118 139
165 190 209 233
45 57 69 83
100 119 140 163
183 197 212 234
10 37 48 65
88 99 109 128
132 153 189 214
1 45 57 69
83 100 119 143
168 189 207 231
16 | 40 48 56 64
73 84 96 107
120 137 156 174
189 202 216 236
41 50 59 68
80 92 105 119
134 152 167 181
192 204 217 236
4 13 19 38
44 52 58 67
74 86 103 121
147 151 191 222
21 31 45 59
64 75 76 95
129 143 164 193
200 213 246 247
16 42 51 60
69 81 94 107
123 143 164 179
191 202 216 235
5 40 48 56
64 73 84 96
108 122 141 162
184 199 215 238
4 30 43 55
65 75 83 86
106 134 160 187
191 213 217 229
2 37 44 51
57 65 74 86
98 110 126 147
171 191 208 231
Image 3 | 4 | 60 90 129 175 | 60 90 129 175 | 60 90 129 175 | 63 84 137 169 | 60 90 129 175 | 60 90 129 175 | 60 90 124 176 | 60 90 129 175
8 | 46 62 79 98
121 143 159 188
46 62 79 98
120 142 159 188
45 62 79 98
120 142 159 188
41 51 57 85
117 141 198 225
46 62 79 98
120 142 159 188
46 62 80 100
123 146 165 202
40 51 67 82
89 125 149 172
45 62 79 98
120 142 159 188
12 | 40 52 63 75
87 100 115 131
146 158 172 199
41 53 64 76
88 101 116 132
146 158 172 199
10 23 30 50
64 67 85 103
125 139 165 188
8 28 55 66
74 88 98 112
141 169 186 203
40 52 64 75
87 101 116 131
145 157 170 197
40 52 63 75
87 100 115 131
146 159 176 208
32 51 58 70
79 97 107 128
145 175 211 255
38 49 59 70
82 95 110 127
144 157 171 198
16 | 37 46 55 63
72 81 90 100
110 122 134 145
154 163 177 203
32 40 50 58
67 76 85 96
107 120 134 145
154 163 177 203
1 20 33 39
43 57 61 73
75 95 115 124
147 155 161 206
20 34 52 68
76 78 82 93
105 118 142 153
188 200 238 245
12 38 47 56
65 74 83 93
105 119 132 144
154 164 182 208
17 40 51 61
72 83 96 110
127 143 154 164
179 199 218 242
1 2 40 59
65 72 82 92
97 104 119 140
161 178 193 255
36 43 50 58
65 73 82 92
102 114 127 139
149 159 173 200
Image 4 | 4 | 35 66 97 137 | 35 66 97 137 | 20 57 91 127 | 31 60 114 131 | 35 66 97 137 | 35 66 97 137 | 34 65 93 134 | 35 66 97 137
8 | 23 42 61 77
94 114 137 168
23 42 61 77
94 114 137 168
4 15 23 48
63 87 109 151
39 62 72 100
110 124 149 179
23 42 61 77
93 113 136 167
23 42 61 77
94 114 137 168
17 30 51 70
94 122 155 188
23 42 61 77
94 114 137 168
12 | 18 29 42 55
66 77 89 102
117 133 150 175
17 27 40 53
65 77 89 102
117 133 150 175
11 19 24 31
49 55 60 70
96 113 129 157
20 29 51 62
92 104 117 125
151 168 183 232
17 27 39 51
63 74 85 98
113 129 149 175
22 39 54 68
81 94 110 128
146 168 193 226
1 2 12 17
33 51 66 84
99 111 145 168
17 28 41 54
65 76 88 101
116 132 149 174
16 | 14 21 30 39
49 58 67 75
84 93 105 118
132 147 166 188
15 23 32 42
52 61 69 77
85 94 105 118
132 147 166 188
8 9 21 24
42 43 55 56
60 69 85 99
119 128 144 157
1 10 23 32
34 50 59 76
83 105 120 127
161 188 206 247
13 21 29 39
49 58 67 75
84 93 105 117
131 147 164 185
14 21 31 42
53 63 73 83
95 108 122 138
153 172 189 205
1 1 1 5
15 20 27 32
45 60 74 90
98 119 150 167
8 15 23 32
42 53 63 73
82 92 104 118
132 147 166 188
Image 5 | 4 | 31 71 118 171 | 31 71 118 171 | 26 48 89 144 | 35 95 137 175 | 31 71 118 171 | 31 71 118 171 | 30 72 118 178 | 31 71 118 171
8 | 18 33 54 78
104 132 163 206
18 33 54 78
104 132 163 206
18 26 41 47
71 100 130 172
25 60 97 114
124 151 167 189
18 34 56 81
107 135 166 208
20 38 62 89
116 143 173 215
23 39 51 71
98 121 160 216
20 38 62 88
115 143 173 214
12 | 15 24 37 52
69 87 105 123
142 162 186 221
14 23 36 52
69 87 105 123
142 162 186 221
12 16 30 38
40 52 76 108
113 158 181 228
14 25 43 64
103 105 132 162
169 194 235 255
15 24 36 51
66 83 101 120
140 161 186 222
15 25 39 55
73 92 111 129
149 171 196 230
10 14 20 35
51 65 91 116
127 151 189 220
15 25 39 55
72 89 106 124
143 163 187 222
16 | 13 19 27 37
49 61 74 87
101 114 128 143
158 175 195 226
10 15 21 30
41 53 66 80
94 109 124 139
155 173 194 225
5 9 16 24
38 52 54 75
81 83 89 98
105 131 151 196
3 15 26 42
71 71 97 121
142 176 187 190
224 236 240 250
13 19 28 39
52 64 78 92
105 117 130 143
158 174 194 224
14 22 34 49
64 78 93 107
121 135 150 165
180 197 218 235
1 9 15 26
26 40 53 65
88 116 140 144
163 178 198 236
10 14 19 26
34 44 56 70
84 99 115 131
149 168 190 223
Image 6 | 4 | 52 91 139 181 | 52 91 139 181 | 31 59 101 161 | 49 74 113 178 | 52 91 139 181 | 52 91 139 181 | 54 90 134 174 | 52 91 139 181
8 | 35 53 75 102
131 159 183 198
34 51 72 99
129 157 182 198
6 42 55 65
95 114 158 188
36 67 87 105
137 186 195 212
35 53 74 100
129 157 182 198
34 50 71 98
128 157 182 198
32 51 79 104
137 176 198 199
36 53 75 102
131 159 183 198
12 | 29 40 52 67
84 104 125 146
166 183 195 203
30 41 53 68
85 105 126 146
166 183 195 203
23 36 37 40
50 52 70 106
128 158 187 194
28 32 44 59
66 99 102 136
145 177 178 199
32 45 59 77
98 117 137 156
172 185 195 203
31 44 58 77
97 117 139 160
179 191 200 210
1 3 41 49
69 88 118 144
167 172 194 204
29 38 49 62
79 99 120 141
162 181 194 202
1626 35 44 53
64 76 91 107
123 140 156 171
184 193 199 205
25 34 42 50
60 70 84 99
116 131 147 162
175 187 196 203
30 41 61 63
79 91 112 122
146 164 176 189
196 228 245 254
25 32 40 46
68 68 97 119
127 129 149 152
180 182 183 201
29 38 48 61
76 91 106 119
130 141 152 164
176 187 196 203
25 33 41 49
59 71 86 101
118 135 154 170
184 194 200 206
1 13 26 34
45 52 72 90
90 116 142 159
180 186 198 238
9 11 27 37
46 57 70 84
101 118 137 156
172 186 196 203
Image 7455 82 113 14955 82 113 14940 60 90 12454 90 142 18255 82 113 14955 82 113 14952 76 109 14255 82 113 149
843 58 73 88
106 127 148 169
43 58 73 88
106 127 148 169
5 43 60 62
85 89 119 148
6 46 79 90
116 133 153 183
43 58 73 88
106 127 148 169
44 59 74 89
107 128 149 170
3 46 67 86
95 121 150 176
43 58 73 87
104 125 147 168
1239 49 59 69
78 88 99 112
127 141 156 175
39 49 59 69
78 88 99 113
127 141 156 175
23 36 37 39
51 61 71 81
99 114 140 150
52 72 94 103
112 123 138 142
154 174 195 201
39 49 60 71
80 90 101 114
128 143 157 176
41 53 65 76
87 100 116 133
150 166 187 248
1 2 32 39
51 64 69 86
109 137 160 185
39 48 58 68
78 88 99 113
128 142 157 175
1637 44 52 60
68 75 82 90
99 109 121 132
143 155 168 185
37 45 53 61
69 77 84 93
103 114 126 137
148 159 170 186
14 18 28 36
49 56 64 65
78 83 107 129
146 150 168 210
11 45 46 54
58 60 67 84
98 114 129 153
181 189 209 235
37 45 53 61
69 76 83 91
98 107 118 130
142 155 169 186
2 39 49 59
68 77 86 96
107 118 130 142
154 167 183 212
1 1 2 38
51 60 74 90
97 114 124 145
156 181 213 238
12 15 38 47
56 65 74 83
92 102 114 127
140 153 166 183
Image 8447 93 115 13247 93 115 13243 65 99 12254 112 130 15947 93 115 13247 93 115 13238 78 113 13047 93 115 132
818 37 66 94
110 121 130 140
17 35 64 93
110 121 130 140
11 17 50 61
81 97 121 134
12 25 68 108
119 134 161 236
18 39 69 94
111 122 131 140
18 37 66 94
112 125 137 237
3 16 33 64
96 98 114 132
18 34 59 84
100 114 126 137
1215 25 38 58
78 93 104 113
121 128 134 142
15 26 39 60
80 93 104 113
121 128 134 142
4 16 20 27
32 63 74 75
90 100 114 131
12 26 44 58
84 97 103 120
134 134 230 235
16 26 38 58
78 92 103 112
121 129 136 143
16 29 48 72
90 102 113 122
129 136 143 197
1 7 13 15
16 31 62 87
104 117 128 134
17 26 38 59
78 92 103 112
120 127 133 141
1614 23 31 44
60 75 85 94
103 110 116 122
127 132 138 145
13 20 27 35
47 62 78 89
98 106 113 120
126 131 137 144
10 14 17 21
28 34 39 47
50 67 81 93
111 120 135 177
15 37 66 68
71 95 97 110
116 126 137 144
161 181 210 233
12 18 25 30
36 44 53 69
84 97 107 115
123 130 137 144
14 24 36 55
74 87 98 109
117 124 132 141
202 233 248 253
1 1 1 10
14 18 27 41
62 81 91 103
107 120 128 135
4 13 20 27
37 54 70 82
93 103 112 119
125 131 137 144
Table A2. The segmented images obtained by MROA using cross-entropy (segmented results for Images 1–8 at nTh = 4, 8, 12, and 16; the images themselves are not reproduced here).
Table A3. The best fitness values obtained by algorithms over all images.
Image | nTh | MROA (Mean, Std) | ROA (Mean, Std) | RSA (Mean, Std) | AOA (Mean, Std) | AO (Mean, Std) | SSA (Mean, Std) | SCA (Mean, Std) | GWO (Mean, Std)
Image 140.784100.784101.07710.15651.12330.18720.784100.784100.82200.08210.78410
80.242900.242900.46220.06900.48360.07370.242900.24300.00010.37230.04760.24290
120.12070.00370.12130.00500.25670.03720.28780.05530.12220.00640.12820.00810.22680.03640.13370.0144
160.07130.00160.07440.00490.17600.02270.17840.02690.07310.00480.08410.00970.16590.02390.08140.0096
Image 240.621600.621600.78380.08360.85720.08850.621600.621600.63890.00840.62160
80.20520.01070.20750.01490.37990.05780.38680.05200.20540.01070.20810.01210.30640.03810.20910.0179
120.10060.00630.09880.00330.20580.02660.21520.03500.10190.00850.10600.00780.19700.01850.11180.0116
160.05860.00210.06220.00520.14760.02070.14990.02490.06400.00760.06660.00420.13400.01550.07360.0085
Image 340.514600.514600.68020.07790.72920.10520.514600.514600.52760.00700.51460
80.160900.16780.01570.31590.05450.36880.05220.16250.00760.16760.01420.25280.04700.16790.0157
120.07880.00370.07980.00650.18660.03460.21860.03670.08060.00680.09270.01090.16590.02250.08940.0084
160.04620.00150.04810.00640.13220.01580.14390.02670.04900.00470.06310.00930.12070.01670.05850.0050
Image 440.742100.742101.00300.13261.14310.16460.742100.742100.75800.01040.74210
80.233200.233200.44830.05610.52390.10760.23320.00010.25680.03600.36320.05570.23520.0104
120.11790.00020.11800.00030.27140.04290.31570.04890.11910.00340.15170.01710.23480.03290.12430.0081
160.06990.00040.07070.00220.18460.02930.21720.03890.07110.00200.10220.00960.17540.02440.07950.0057
Image 541.111901.111901.38080.17141.50840.24391.111901.111901.13240.01281.11190
80.35300.00030.35290.00020.56020.08590.61330.07560.35330.00040.35390.00040.48620.05500.35630.0158
120.16390.00010.16480.00510.32970.03480.38190.03450.16470.00090.18110.01340.30130.03560.17080.0110
160.09740.00020.09770.00130.23040.03520.23940.03470.09870.00130.11450.00870.20930.02560.10740.0076
Image 640.315500.315500.42610.08310.46970.04870.315500.315500.32840.00630.31550
80.113500.11450.00540.19800.02080.22380.03600.11670.00900.11670.00930.17010.01540.11360.0001
120.05830.00010.06160.00540.12830.01900.13450.02120.06130.00380.06460.00450.11750.01540.06570.0055
160.03530.00030.03630.00230.09110.01210.09190.01310.03870.00330.04180.00280.08380.01020.04260.0040
Image 740.435900.435900.70590.11290.75710.13610.435900.435900.47960.07090.43590
80.15000.00650.15600.01460.31890.04330.36560.07340.15360.01240.16000.01690.24520.03810.15500.0135
120.07310.00210.07970.00880.20300.03250.23010.04030.07410.00340.09100.00770.16610.02620.08330.0083
160.04350.00220.04580.00390.13440.02150.14870.02650.04760.00690.07120.01040.12440.02210.05520.0059
Image 840.222300.222300.33450.05250.38550.06720.222300.222300.24710.03180.22230
80.07730.00150.07760.00160.17440.03270.19590.03720.08320.00720.10080.01810.14310.02530.08020.0098
120.03900.00130.03950.00260.11210.02040.12430.01810.04700.00590.05910.00920.10010.01510.04330.0039
160.02380.00050.02410.00120.07820.01360.09210.01970.03060.00370.03950.00600.07210.01330.03000.0039
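For readers who wish to reproduce fitness values of the kind reported in Table A3, the sketch below evaluates a multilevel minimum cross-entropy objective from a grayscale histogram. It follows one common formulation of the criterion; the exact normalization used in this paper may differ, and the histogram and threshold set in the example are synthetic placeholders.

```python
import numpy as np

def min_cross_entropy(hist, thresholds, eps=1e-12):
    # Multilevel minimum cross-entropy objective (one common formulation;
    # the normalization used in the paper may differ). Smaller is better.
    levels = np.arange(1, len(hist) + 1, dtype=float)      # gray levels shifted to 1..L to avoid log(0)
    const = float(np.sum(levels * hist * np.log(levels)))  # term independent of the thresholds
    bounds = [0] + sorted(int(t) for t in thresholds) + [len(hist)]
    seg_term = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        h, g = hist[lo:hi], levels[lo:hi]
        mass = float(np.sum(g * h))
        if mass > 0.0:
            mu = mass / (float(np.sum(h)) + eps)           # mean gray level of the segment
            seg_term += mass * np.log(mu)
    return const - seg_term

# Example with a synthetic 256-bin histogram and a candidate threshold set
rng = np.random.default_rng(1)
hist = rng.integers(0, 500, size=256).astype(float)
print(min_cross_entropy(hist, [64, 128, 192]))
```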
Table A4. The PSNR values obtained by algorithms over all images.
Image | nTh | MROA (Mean, Std) | ROA (Mean, Std) | RSA (Mean, Std) | AOA (Mean, Std) | AO (Mean, Std) | SSA (Mean, Std) | SCA (Mean, Std) | GWO (Mean, Std)
Image 1419.3376019.3376018.30180.524918.10870.702619.3376019.3376019.21210.350919.33760
824.14910.000424.11630.102221.77780.774921.79910.792024.14750.011724.12810.025422.72410.667924.12080.0330
1227.12460.050627.11750.127824.50010.810024.22000.956427.09690.127827.08280.233625.00320.817426.99680.2814
1629.31170.306829.18760.137726.15450.734726.26960.870929.12900.195828.97160.442326.46060.871829.26710.4458
Image 2418.0038018.0038017.33951.034616.60570.879818.0038018.0038017.88710.404418.00380
822.92340.294022.98310.227921.40670.925020.77841.450022.98350.155722.94210.244121.94601.032323.01980
1226.24920.813225.96590.070624.19380.984424.38991.096725.90850.279226.10720.582324.54980.976825.96650.1556
1628.67020.900328.51020.496026.05871.059125.94701.518028.09760.528128.58080.906126.16461.101928.22400.3534
Image 3419.2818019.2818019.66281.071618.38151.312419.2818019.2818019.38640.433419.28180
825.08310.198225.10370.127923.17080.990022.16171.544425.07720.140525.06450.275323.88521.111925.08250.0498
1228.31610.335928.31350.123425.92991.049725.04631.240328.23510.246927.96320.639826.04841.051128.20850.1756
1630.76710.588330.46440.326027.56990.820626.64351.456130.38970.258530.04810.736427.64471.072930.32870.2183
Image 4421.5871021.5871020.07170.626919.60250.770021.5871021.58730.001521.48090.112121.58710
826.55890.006926.55890.006923.55890.565323.08390.943026.55710.011226.12290.610424.60630.693626.51220.1869
1229.64290.027529.63860.026425.66950.726825.45870.745629.52740.187728.65630.489626.53540.683329.34600.2953
1631.83820.045231.82890.095327.26670.699227.11550.927631.84840.185830.52780.413127.91950.713531.30870.3760
Image 5419.2461019.2461017.95380.676718.17940.793419.2461019.2461019.17120.143619.24610
823.95140.093123.85990.053321.86290.887621.89070.694723.94890.132424.11040.109222.72690.455523.89730.1859
1227.29290.018927.25130.141724.08020.630124.24500.620327.18370.112427.35530.160524.69150.542027.00840.2825
1629.45400.065029.44780.052025.57300.745525.96130.805729.36350.164229.49270.248326.25970.729629.01700.3157
Image 6421.3607021.3607018.18141.873219.28811.630421.3607021.3607021.06570.531921.36070
826.72720.093926.66530.188122.97151.321022.95151.440726.55350.348626.64760.318624.76931.049126.69110.0932
1230.15610.092830.01080.302826.13091.216326.20531.410329.91280.388729.82470.391826.34761.210229.73590.4487
1632.74560.169832.70020.218627.70641.400828.28151.149232.23480.696131.84800.402828.39611.037331.86060.5641
Image 7420.6291020.6291019.71230.766819.44051.153320.6291020.6291020.45340.459320.62910
824.97440.329025.06480.464323.75910.730322.79881.409824.92770.197824.78880.506223.61550.890124.93330.1963
1227.87780.793427.68870.691726.04480.977024.77381.493927.33200.369626.94120.868725.91861.100927.55520.7761
1630.10651.051629.87791.472627.95100.819126.62081.748429.05730.878228.78371.348727.59471.479829.87871.1793
Image 8425.3640025.36280.006922.70341.218821.20111.645525.3640025.3640024.45531.068925.36400
830.47850.143530.47550.148026.00741.332625.85191.073030.12590.586629.08710.942327.26031.130130.27030.5812
1234.19240.280434.11840.347828.35941.136028.30500.923932.98630.804832.03250.826028.87061.175833.42890.5617
1636.72450.303136.61730.484329.86481.145329.62251.303935.10340.683333.83460.843830.87301.113335.20550.7466
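As a reference for the PSNR values in Table A4, the following minimal sketch computes PSNR between an original grayscale image and its segmented counterpart using the standard definition for 8-bit images; the array names are illustrative.

```python
import numpy as np

def psnr(original, segmented, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit grayscale images."""
    original = original.astype(np.float64)
    segmented = segmented.astype(np.float64)
    mse = np.mean((original - segmented) ** 2)   # mean squared error
    if mse == 0:                                 # identical images
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```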
Table A5. The SSIM values obtained by algorithms over all images.
Image | nTh | MROA (Mean, Std) | ROA (Mean, Std) | RSA (Mean, Std) | AOA (Mean, Std) | AO (Mean, Std) | SSA (Mean, Std) | SCA (Mean, Std) | GWO (Mean, Std)
Image 140.650600.650600.64750.01980.59620.03330.650600.650600.64480.01450.65060
80.81410.00720.812500.78670.03110.74210.03490.81240.00020.81140.00160.78010.02250.81150.0014
120.88630.00910.88100.00910.85420.02380.82210.03520.87920.00470.87110.01140.84530.02600.88580.0126
160.91790.01050.91180.00330.89460.01350.86520.03640.90980.00330.90500.01270.88710.02210.92380.0125
Image 240.624900.624900.67770.03200.60100.05810.624900.624900.62700.02090.62490
80.79480.01040.79690.00690.79580.02980.74890.05800.79660.00380.79510.00820.78480.03670.79750
120.87370.01910.86780.00140.84420.03350.84000.03450.86590.00640.87040.01420.84640.02840.86780.0028
160.91340.01410.91150.00750.87970.02130.87030.03240.90540.00870.91180.01440.87740.02540.90740.0056
Image 340.728000.728000.77000.02760.68310.06060.728000.728000.71740.01460.72800
80.84980.00250.85140.00690.84730.02830.78560.04940.84910.00210.84970.00610.83470.03200.84880.0011
120.90610.00520.90650.00220.88190.01180.85180.03090.90540.00400.90000.01120.87240.02250.90540.0023
160.93690.00650.93450.00360.90190.01380.87950.03020.93320.00330.92670.00870.89610.02230.93240.0030
Image 440.747800.747800.76780.01460.71990.03270.747800.747800.74690.00850.74780
80.83990.00060.83990.00060.83270.00990.78450.02290.83910.00050.83590.00710.82530.01100.83980.0008
120.89140.00100.89110.00110.86040.00920.83140.01930.89060.00190.87040.00850.85810.01310.89110.0031
160.92320.00100.92300.00190.88390.00940.86330.01360.92280.00190.89770.00720.88230.01000.92150.0034
Image 540.685000.685000.67880.01390.63600.02970.685000.685000.68270.00400.68500
80.82190.00120.82150.00070.80760.02460.76780.02880.82080.00170.81900.00120.81360.01910.82070.0032
120.89450.00020.88710.00090.87200.02190.82530.02890.88650.00100.87940.00690.86320.02190.89080.0058
160.94010.00910.92040.00190.90240.01640.87230.02360.92030.00140.91050.00610.90450.01770.93870.0123
Image 640.874100.874100.88580.01100.85680.03640.874100.874100.87540.00630.87410
80.88660.00150.88590.00210.91730.01550.88010.02820.88430.00470.88620.00410.90280.02120.88630.0013
120.92060.00140.92000.00270.91590.02300.89940.02250.91900.00330.91740.00430.90360.02300.92040.0038
160.94050.00180.94010.00190.91540.01970.91690.01380.93760.00550.93420.00360.91900.01230.93590.0043
Image 740.689600.689600.71460.02810.65980.05240.689600.689600.68480.01170.68960
80.82370.01250.82610.01700.83520.02350.78000.05620.82140.00770.81720.01550.79180.02930.82200.0078
120.88910.01660.88490.01430.87290.02120.82610.04560.87710.00740.86640.01890.85070.03300.88150.0155
160.92140.01490.91720.02000.90220.01410.86020.04030.90640.01340.90040.02320.88030.03210.91850.0173
Image 840.856700.85660.00050.86570.02270.83520.03820.856700.856700.84680.01120.85670
80.91130.00320.91070.00350.88710.01450.87190.01660.90650.00490.89730.00780.88570.01060.90960.0054
120.94000.00230.93950.00260.90530.01360.89360.01010.93030.00630.92250.00680.89910.01370.93490.0042
160.95710.00220.95660.00310.91530.01130.90960.01080.94580.00480.93690.00600.91930.01090.94820.0049
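The SSIM values in Table A5 can be checked with an off-the-shelf implementation such as the one in scikit-image. The snippet below is only a convenience sketch with placeholder arrays, not the implementation used to produce the reported numbers.

```python
import numpy as np
from skimage.metrics import structural_similarity

# img and seg stand in for an original image and its segmented version
img = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
seg = img.copy()

score = structural_similarity(img, seg, data_range=255)
print(score)   # 1.0 for identical images; decreases as structure diverges
```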
Table A6. The FSIM values obtained by algorithms over all images.
Image | nTh | MROA (Mean, Std) | ROA (Mean, Std) | RSA (Mean, Std) | AOA (Mean, Std) | AO (Mean, Std) | SSA (Mean, Std) | SCA (Mean, Std) | GWO (Mean, Std)
Image 140.769300.769300.73330.01730.72470.02190.769300.769300.76300.01050.76930
80.890200.88950.00250.83420.01840.82590.01840.89020.00010.89020.00030.85270.01350.89010.0002
120.93480.00040.93470.00140.88470.01280.87360.02020.93460.00180.93350.00250.89430.01340.93290.0034
160.95480.00250.95380.00050.91290.00990.91040.01270.95400.00160.95350.00310.91750.01120.95440.0037
Image 240.766600.766600.77400.01370.76180.01580.766600.766600.77050.00780.76660
80.87040.00260.87010.00360.84100.01240.83030.02090.87020.00260.87040.00260.84980.01540.87070
120.92110.00640.92020.00100.88580.01330.88400.01510.91900.00370.92050.00410.88850.01370.92010.0026
160.94740.00580.94610.00310.90860.01350.90650.01820.94320.00460.94690.00540.91180.01390.94490.0023
Image 340.790300.790300.79330.07790.75290.02780.790300.790300.78600.00490.79030
80.88940.00290.89010.00330.86920.05450.83100.02760.88910.00210.88910.00470.86900.02230.88910.0008
120.93140.00430.93270.00160.90340.03460.88210.02160.93170.00320.92930.00760.90120.01560.93150.0018
160.95540.00570.95130.00280.92260.01580.90760.02250.95090.00250.95000.00630.92210.01530.95010.0021
Image 440.798200.798200.80320.00750.78420.01310.798200.79820.00020.79910.00440.79820
80.86390.00010.86390.00010.83710.00820.82230.01460.86400.00030.85980.00660.84310.00890.86360.0016
120.90550.00040.90540.00050.86300.01080.85410.01000.90340.00320.89020.00620.86640.00850.90120.0037
160.92770.00070.92740.00100.88090.00870.87650.01210.92740.00210.91290.00550.88380.00920.92530.0032
Image 540.826400.826400.79210.01850.78610.02350.826400.826400.82390.00330.82640
80.90630.00240.90570.00130.87500.01330.86860.01570.90610.00350.91160.00240.88620.00890.90730.0036
120.94740.00020.94710.00110.91010.00850.90430.01000.94680.00080.94740.00200.91560.00740.94570.0021
160.96490.00050.96420.00090.92700.00930.92580.00810.96460.00090.96420.00200.93620.00650.96470.0024
Image 640.866900.866900.86310.00660.84960.01350.866900.866900.86590.00210.86690
80.89030.00070.89000.00090.89240.00710.87490.01400.88970.00210.89030.00180.89050.01150.89030.0006
120.92230.00080.92050.00340.90390.01280.89450.01180.91900.00390.91830.00380.90100.01160.91780.0037
160.93970.00100.93960.00120.91270.01180.91140.00920.93790.00430.93500.00260.91510.00850.93560.0037
Image 740.822300.822300.80360.01130.77930.02500.822300.822300.81160.00760.82230
80.88890.00360.89020.00510.87190.01230.84830.01670.88850.00150.88440.00480.86850.01210.88850.0021
120.92260.00770.92140.00640.90050.01270.88050.01760.91760.00220.91540.00850.88980.01150.92080.0093
160.94240.00900.94210.01170.92270.00940.90510.01500.93430.00380.93540.01110.91410.01510.94110.0084
Image 840.858700.85860.00050.86230.00890.85520.01200.858700.858700.85480.00600.85870
80.88890.00430.88790.00460.87100.00960.86710.01160.88350.00500.87610.00580.86840.00650.88710.0058
120.91990.00390.91880.00390.88120.01030.87740.00930.90770.00780.89980.00750.87790.01030.91200.0059
160.94460.00340.94370.00520.89000.00890.88790.00980.92810.00740.91590.00770.89500.01030.92930.0077
Table A7. The p-values obtained by algorithms over all images.
Image | nTh | ROA | RSA | AOA | AO | SSA | SCA | GWO
Image 146.54 × 10−013.15 × 10−123.15 × 10−124.59 × 10−024.59 × 10−013.15 × 10−122.89 × 10−01
83.34 × 10−011.21 × 10−121.21 × 10−124.19 × 10−024.38 × 10−121.21 × 10−122.89 × 10−05
127.19 × 10−017.87 × 10−127.87 × 10−122.25 × 10−087.87 × 10−127.87 × 10−121.19 × 10−11
162.24 × 10−032.92 × 10−112.92 × 10−118.70 × 10−112.92 × 10−112.92 × 10−118.70 × 10−11
Image 24NaN1.21 × 10−121.21 × 10−12NaNNaN1.21 × 10−12NaN
81.76 × 10−011.25 × 10−111.25 × 10−112.69 × 10−011.30 × 10−061.25 × 10−114.69 × 10−02
125.08 × 10−032.37 × 10−112.37 × 10−111.07 × 10−067.87 × 10−112.37 × 10−112.88 × 10−09
161.80 × 10−023.01 × 10−113.01 × 10−115.48 × 10−113.01 × 10−113.01 × 10−113.01 × 10−11
Image 34NaN1.21 × 10−121.21 × 10−12NaNNaN1.21 × 10−12NaN
81.10 × 10−014.10 × 10−124.10 × 10−123.99 × 10−015.00 × 10−084.10 × 10−127.38 × 10−07
122.30 × 10−012.83 × 10−112.83 × 10−112.42 × 10−059.33 × 10−112.83 × 10−117.65 × 10−11
165.45 × 10−063.01 × 10−113.01 × 10−112.15 × 10−103.01 × 10−113.01 × 10−113.01 × 10−11
Image 44NaN1.21 × 10−121.21 × 10−12NaNNaN1.21 × 10−12NaN
81.98 × 10−021.04 × 10−111.04 × 10−118.90 × 10−011.05 × 10−071.04 × 10−114.98 × 10−02
124.65 × 10−029.04 × 10−129.04 × 10−122.10 × 10−059.04 × 10−129.04 × 10−129.04 × 10−12
168.92 × 10−032.95 × 10−112.95 × 10−112.62 × 10−093.26 × 10−112.95 × 10−112.95 × 10−11
Image 54NaN1.21 × 10−121.21 × 10−12NaNNaN1.21 × 10−12NaN
84.98 × 10−011.76 × 10−111.76 × 10−112.68 × 10−024.15 × 10−091.76 × 10−112.37 × 10−02
122.13 × 10−012.20 × 10−112.20 × 10−111.51 × 10−036.01 × 10−112.20 × 10−111.94 × 10−08
165.72 × 10−022.86 × 10−112.86 × 10−114.00 × 10−092.86 × 10−112.86 × 10−117.73 × 10−10
Image 64NaN1.21 × 10−121.21 × 10−12NaNNaN1.21 × 10−12NaN
83.38 × 10−022.82 × 10−112.82 × 10−113.03 × 10−046.93 × 10−012.82 × 10−112.15 × 10−01
121.79 × 10−023.00 × 10−113.00 × 10−119.02 × 10−083.00 × 10−113.00 × 10−113.98 × 10−09
163.33 × 10−013.02 × 10−113.02 × 10−113.01 × 10−073.02 × 10−113.02 × 10−111.56 × 10−08
Image 74NaN1.21 × 10−121.21 × 10−12NaNNaN1.21 × 10−12NaN
84.36 × 10−028.87 × 10−128.87 × 10−124.91 × 10−029.87 × 10−088.87 × 10−121.42 × 10−04
124.56 × 10−022.22 × 10−112.22 × 10−115.89 × 10−041.22 × 10−102.22 × 10−113.05 × 10−10
164.49 × 10−022.96 × 10−112.96 × 10−112.06 × 10−082.96 × 10−112.96 × 10−114.00 × 10−11
Image 847.08 × 10−015.14 × 10−125.14 × 10−121.78 × 10−021.05 × 10−015.14 × 10−124.59 × 10−01
85.63 × 10−012.94 × 10−112.94 × 10−116.64 × 10−027.78 × 10−082.94 × 10−112.90 × 10−02
122.51 × 10−023.01 × 10−113.01 × 10−118.88 × 10−103.01 × 10−113.01 × 10−113.33 × 10−11
163.87 × 10−013.02 × 10−113.02 × 10−118.15 × 10−113.02 × 10−113.02 × 10−113.02 × 10−11
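The p-values in Table A7 come from pairwise Wilcoxon rank-sum tests at the 5% significance level; the NaN entries presumably correspond to cases in which both algorithms produced identical results in every run, so no ranking is possible. A minimal sketch of such a test with SciPy, using hypothetical run data, is given below.

```python
import numpy as np
from scipy.stats import ranksums

# Hypothetical fitness values from 30 independent runs of two algorithms
rng = np.random.default_rng(0)
mroa_runs = rng.normal(loc=0.12, scale=0.004, size=30)
rsa_runs  = rng.normal(loc=0.26, scale=0.037, size=30)

stat, p_value = ranksums(mroa_runs, rsa_runs)
print(p_value, "significant" if p_value < 0.05 else "not significant")
```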
Table A8. The CPU time for the MROA and comparison algorithms.
Image | nTh | MROA | ROA | RSA | AOA | AO | SSA | SCA | GWO
Image 141.83791.27740.76730.58871.15240.69320.64090.6846
81.90901.29220.78950.59151.18070.70110.54790.6018
122.13671.42430.98460.60581.19190.70980.68520.6668
162.29431.58391.09030.64251.21710.71630.66300.7468
Image 241.88841.34000.79260.65561.10920.65370.64340.6246
81.91551.25440.73680.60921.00520.62360.55400.6029
122.19561.49250.88880.68441.11970.62100.58610.6386
162.37951.53261.04080.67021.25120.65860.68190.6626
Image 341.84301.30690.77110.59501.13760.69220.65400.6255
81.91251.31680.74360.55380.99630.60330.58040.6135
122.11901.52180.88060.59971.17440.64860.58270.6043
162.29361.53521.03340.70551.20350.72960.64590.6791
Image 441.86301.36880.79570.67221.13090.67260.69450.6642
81.93961.31000.74780.58431.03900.54980.57350.6127
122.14221.40250.91280.64301.07080.60950.59390.6114
162.56281.52491.05770.71741.16870.66390.71920.6939
Image 541.86421.32740.82880.62421.13960.67300.63440.6666
81.87931.39110.77550.58291.05910.54510.56880.5600
122.17811.43720.91060.58941.07380.58730.58870.5966
162.26331.52031.05750.63011.17800.64320.71610.6593
Image 641.85821.41310.83800.61211.12320.68060.65700.6517
82.05191.27560.73700.55011.00230.62400.53830.5697
122.11201.45080.88680.59591.10900.69990.59640.6241
162.30201.56461.02700.64631.27600.70850.64450.6864
Image 741.86901.37770.78550.62321.12370.69440.63290.6491
81.92701.32810.74360.53530.99740.59330.53320.5566
122.18311.38440.88670.58381.08960.65090.59470.6173
162.32131.50321.15250.64191.19590.74080.65770.6742
Image 841.86111.33400.78800.63871.17180.71020.64070.6419
81.90481.33560.73450.53701.06520.55390.54550.5838
122.11671.38820.88580.57771.17870.61690.58210.6676
162.45021.52551.04840.64791.28390.73110.65000.6957

References

  1. Bhandari, A.K. A novel beta differential evolution algorithm-based fast multilevel thresholding for color image segmentation. Neural Comput. Appl. 2020, 32, 4583–4613. [Google Scholar] [CrossRef]
  2. He, L.; Huang, S. An efficient krill herd algorithm for color image multilevel thresholding segmentation problem. Appl. Soft Comput. 2020, 89, 106063. [Google Scholar] [CrossRef]
  3. Bhandari, A.K.; Rahul, K. A novel local contrast fusion-based fuzzy model for color image multilevel thresholding using grasshopper optimization. Appl. Soft Comput. 2019, 81, 105515. [Google Scholar] [CrossRef]
  4. Bhattacharyya, S.; Maulik, U.; Dutta, P. Multilevel image segmentation with adaptive image context based thresholding. Appl. Soft Comput. 2011, 11, 946–962. [Google Scholar] [CrossRef]
  5. Anitha, J.; Pandian, I.A.; Agnes, S.A. An efficient multilevel color image thresholding based on modified whale optimization algorithm. Expert Syst. Appl. 2021, 178, 115003. [Google Scholar] [CrossRef]
  6. Lei, B.; Fan, J. Multilevel minimum cross entropy thresholding: A comparative study. Appl. Soft Comput. 2020, 96, 106588. [Google Scholar] [CrossRef]
  7. Lin, S.; Jia, H.; Abualigah, L.; Altalhi, M. Enhanced slime mould algorithm for multilevel thresholding image segmentation using entropy measures. Entropy 2021, 23, 1700. [Google Scholar] [CrossRef]
  8. Kotte, S.; Pullakura, R.K.; Injeti, S.K. Optimal multilevel thresholding selection for brain MRI image segmentation based on adaptive wind driven optimization. Measurement 2018, 130, 340–361. [Google Scholar] [CrossRef]
  9. Houssein, E.H.; Emam, M.M.; Ali, A.A. An efficient multilevel thresholding segmentation method for thermography breast cancer imaging based on improved chimp optimization algorithm. Expert Syst. Appl. 2021, 185, 115651. [Google Scholar] [CrossRef]
  10. Zhou, Y.; Yang, X.; Ling, Y.; Zhang, J. Meta-heuristic moth swarm algorithm for multilevel thresholding image segmentation. Multimed. Tools Appl. 2018, 77, 23699–23727. [Google Scholar] [CrossRef]
  11. Jiang, Y.; Yeh, W.; Hao, Z.; Yang, Z. A cooperative honey bee mating algorithm and its application in multi-threshold image segmentation. Inform. Sci. 2016, 369, 171–183. [Google Scholar] [CrossRef]
  12. Sarkar, S.; Das, S. Multilevel Image Thresholding Based on 2D Histogram and Maximum Tsallis Entropy—A Differential Evolution Approach. IEEE Trans. Image Process 2013, 22, 4788–4797. [Google Scholar] [CrossRef] [PubMed]
  13. Ahmadi, M.; Kazemi, K.; Aarabi, A.; Niknam, T.; Helfroush, M.S. Image segmentation using multilevel thresholding based on modified bird mating optimization. Multimed. Tools Appl. 2017, 78, 23003–23027. [Google Scholar] [CrossRef]
  14. Abualigah, L.; Diabat, A.; Sumari, P.; Gandomi, A.H. A novel evolutionary arithmetic optimization algorithm for multilevel thresholding segmentation of COVID-19 CT images. Processes 2021, 9, 1155. [Google Scholar] [CrossRef]
  15. Chen, H.; Li, W.; Yang, X. A whale optimization algorithm with chaos mechanism based on quasi-opposition for global optimization problems. Expert Syst. Appl. 2020, 158, 113612. [Google Scholar] [CrossRef]
  16. Rao, R.V.; Vakharia, D.P. Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  17. Wang, S.; Liu, Q.; Liu, Y.; Jia, H.; Abualigah, L.; Zheng, R.; Wu, D. A hybrid SSA and SMA with mutation opposition-based learning for constrained engineering problems. Comput. Intell. Neurosci. 2021, 2021, 6379469. [Google Scholar] [CrossRef]
  18. Dinkar, S.K.; Deep, K.; Mirjalili, S.; Thapliyal, S. Opposition-based Laplacian Equilibrium Optimizer with application in image segmentation using multilevel thresholding. Expert Syst. Appl. 2021, 174, 114766. [Google Scholar] [CrossRef]
  19. Zheng, R.; Jia, H.; Abualigah, L.; Liu, Q.; Wang, S. Deep ensemble of slime mold algorithm and arithmetic optimization algorithm for global optimization. Processes 2021, 9, 1774. [Google Scholar] [CrossRef]
  20. Wang, S.; Jia, H.; Liu, Q.; Zheng, R. An improved hybrid aquila optimizer and harris hawks optimization for global optimization. Math. Biosci. Eng. 2021, 18, 7076–7109. [Google Scholar] [CrossRef]
  21. Houssein, E.H.; Mahdy, M.A.; Blondin, M.J.; Shebl, D.; Mohamed, W.M. Hybrid slime mould algorithm with adaptive guided differential evolution algorithm for combinatorial and global optimization problems. Expert Syst. Appl. 2021, 174, 114689. [Google Scholar] [CrossRef]
  22. Zheng, R.; Jia, H.; Abualigah, L.; Liu, Q.; Wang, S. An improved arithmetic optimization algorithm with forced switching mechanism for global optimization problems. Math. Biosci. Eng. 2021, 19, 473–512. [Google Scholar] [CrossRef] [PubMed]
  23. Wang, S.; Jia, H.; Abualigah, L.; Liu, Q.; Zheng, R. An improved hybrid aquila optimizer and harris hawks algorithm for solving industrial engineering optimization problems. Processes 2021, 9, 1551. [Google Scholar] [CrossRef]
  24. Li, Y.; Zhao, Y.; Liu, J. Dynamic sine cosine algorithm for large-scale global optimization problems. Expert Syst. Appl. 2021, 177, 114950. [Google Scholar] [CrossRef]
  25. Zhang, Z.; Ding, S.; Jia, W. A hybrid optimization algorithm based on cuckoo search and differential evolution for solving constrained engineering problems. Eng. Appl. Artif. Intel. 2019, 85, 254–268. [Google Scholar] [CrossRef]
  26. Khare, A.; Rangnekar, S. A review of particle swarm optimization and its applications in solar photovoltaic system. Appl. Soft Comput. 2013, 13, 2997–3006. [Google Scholar] [CrossRef]
  27. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  28. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  29. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  30. Abdel-Basset, M.; Hessin, A.N.; Abdel-Fatah, L. A comprehensive study of cuckoo-inspired algorithms. Neural Comput. Appl. 2018, 29, 345–361. [Google Scholar] [CrossRef]
  31. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp swarm algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  32. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  33. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  34. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl. Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  35. Abualigah, L.; Yousri, D.; Abd, E.M.; Ewees, A.A. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  36. Katoch, S.; Chauhan, S.S.; Kumar, V. A review on genetic algorithm: Past, present, and future. Multimed. Tools Appl. 2021, 80, 8091–8126. [Google Scholar] [CrossRef]
  37. Rakshit, P.; Konar, A.; Das, S. Noisy evolutionary optimization algorithms—A comprehensive survey. Swarm Evol. Comput. 2017, 33, 18–45. [Google Scholar] [CrossRef]
  38. Slowik, A.; Kwasnicka, H. Evolutionary algorithms and their applications to engineering problems. Neural Comput. Appl. 2020, 32, 12363–12379. [Google Scholar] [CrossRef] [Green Version]
  39. Nguyen, S.; Mei, Y.; Zhang, M. Genetic programming for production scheduling: A survey with a unified framework. Complex Intell. Syst. 2017, 3, 41–66. [Google Scholar] [CrossRef] [Green Version]
  40. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef] [Green Version]
  41. Hansen, N.; Ostermeier, A. Completely Derandomized Self-Adaptation in Evolution Strategies. Evol. Comput. 2001, 9, 159–195. [Google Scholar] [CrossRef]
  42. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  43. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inform. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  44. Ahmadianfar, I.; Haddad, O.B.; Chu, X. Gradient-based optimizer: A new metaheuristic optimization algorithm. Inform. Sci. 2020, 540, 131–159. [Google Scholar] [CrossRef]
  45. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  46. Tanyildizi, E.; Demir, G. Golden sine algorithm: A novel math-inspired algorithm. Adv. Electr. Comput. Eng. 2017, 17, 71–78. [Google Scholar] [CrossRef]
  47. Abualigah, L.; Diabat, A.; Mirjalili, S.; Elaziz, A.E.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  48. Neggaz, N.; Houssein, E.H.; Hussain, K. An efficient henry gas solubility optimization for feature selection. Expert Syst. Appl. 2020, 152, 113364. [Google Scholar] [CrossRef]
  49. Sun, P.; Liu, H.; Zhang, Y.; Meng, Q.; Tu, L.; Zhao, J. An improved atom search optimization with dynamic opposite learning and heterogeneous comprehensive learning. Appl. Soft Comput. 2021, 103, 107140. [Google Scholar] [CrossRef]
  50. Liu, Y.; Tian, P. A multi-start central force optimization for global optimization. Appl. Soft Comput. 2015, 27, 92–98. [Google Scholar] [CrossRef]
  51. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural. Comput. Appl. 2015, 27, 495–513. [Google Scholar] [CrossRef]
  52. Jia, H.; Peng, X.; Lang, C. Remora optimization algorithm. Expert Syst. Appl. 2021, 185, 115665. [Google Scholar] [CrossRef]
  53. Zheng, R.; Jia, H.; Abualigah, L.; Wang, S.; Wu, D. An improved remora optimization algorithm with autonomous foraging mechanism for global optimization problems. Math. Biosci. Eng. 2022, 19, 3994–4037. [Google Scholar] [CrossRef]
  54. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
  55. Jia, H.; Lang, C.; Oliva, D.; Song, W.; Peng, X. Dynamic harris hawks optimization with mutation mechanism for satellite image segmentation. Remote Sens. 2019, 11, 1421. [Google Scholar] [CrossRef] [Green Version]
  56. Ewees, A.A.; Abualigah, L.; Yousri, D.; Sahlol, A.T.; Al-qaness, A.A.; Alshathri, S.; Elaziz, M.A. Modified artificial ecosystem-based optimization for multilevel thresholding image segmentation. Mathematics 2021, 9, 2363. [Google Scholar] [CrossRef]
  57. Houssein, E.H.; Hussain, K.; Abualigah, L.; Elaziz, M.A.; Alomoush, W.; Dhiman, G.; Djenouri, Y.; Cuevas, E. An improved opposition-based marine predators algorithms for global optimization and multilevel thresholding image segmentation. Knowl.-Based Syst. 2021, 229, 107348. [Google Scholar] [CrossRef]
  58. Su, H.; Zhao, D.; Yu, F.; Heidari, A.A.; Zhang, Y.; Chen, H.; Li, C.; Pan, J.; Quan, S. Horizontal and vertical search artificial bee colony for image segmentation of COVID-19 X-ray images. Comput. Biol. Med. 2021, 142, 105181. [Google Scholar] [CrossRef]
  59. Liu, L.; Zhao, D.; Yu, F.; Heidari, A.A.; Ru, J.; Chen, H.; Mafarja, M.; Turabieh, H.; Pan, Z. Performance optimization of differential evolution with slime mould algorithm for multilevel breast cancer image segmentation. Comput. Biol. Med. 2021, 138, 104910. [Google Scholar] [CrossRef]
  60. Li, Y.; Bai, X.; Jiao, L.; Xue, Y. Partitioned-cooperative quantum-behaved particle swarm optimization based on multilevel thresholding applied to medical image segmentation. Appl. Soft Comput. 2017, 56, 345–356. [Google Scholar] [CrossRef]
  61. Sun, K.; Jia, H.; Li, Y.; Jiang, Z. Hybrid improved slime mould algorithm with adaptive β hill climbing for numerical optimization. J. Intell. Fuzzy Syst. 2021, 40, 1667–1679. [Google Scholar] [CrossRef]
  62. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine predators algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  63. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the Computational Intelligence for Modelling, Control & Automation, Vienna, Austria, 28–30 November 2005. [Google Scholar]
  64. Chauhan, S.; Vashishtha, G.; Kumar, A. A symbiosis of arithmetic optimizer with slime mould algorithm for improving global optimization and conventional design problem. J. Supercomput. 2022, 78, 6234–6274. [Google Scholar] [CrossRef]
  65. Kullback, S. Information Theory and Statistics; Dover: New York, NY, USA, 1968. [Google Scholar]
  66. Oliva, D.; Hinojosa, S.; Cuevas, E.; Pajares, G.; Avalos, O.; Galvez, J. Cross entropy based thresholding for magnetic resonance brain images using Crow Search Algorithm. Expert Syst. Appl. 2017, 79, 164–180. [Google Scholar] [CrossRef]
  67. Esparza, E.R.; Calzada, L.A.Z.; Oliva, D.; Heidari, A.A.; Zaldivar, D.; Cisneros, M.P.; Foong, L.K. An efficient harris hawks-inspired image segmentation method. Expert Syst. Appl. 2020, 155, 113428. [Google Scholar] [CrossRef]
  68. Gill, H.S.; Khehra, B.S.; Singh, A.; Kaur, L. Teaching-learning-based optimization algorithm to minimize cross entropy for Selecting multilevel threshold values. Egypt. Inform. J. 2019, 20, 11–25. [Google Scholar] [CrossRef]
  69. Abualigah, L.; Elaziz, M.A.; Sumari, P.; Geem, Z.; Gandomi, A.H. Reptile search algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158. [Google Scholar] [CrossRef]
  70. Houssein, E.H.; Neggaz, N.; Hosney, M.E.; Mohamed, W.M.; Hassaballah, M. Enhanced Harris hawks optimization with genetic operators for selection chemical descriptors and compounds activities. Neural Comput. Appl. 2021, 33, 13601–13618. [Google Scholar] [CrossRef]
  71. Bao, X.; Jia, H.; Lang, C. A Novel Hybrid Harris Hawks Optimization for Color Image Multilevel Thresholding Segmentation. IEEE Access 2019, 7, 76529–76546. [Google Scholar] [CrossRef]
  72. Sara, U.; Akter, M.; Uddin, M.S. Image quality assessment through FSIM, SSIM, MSE and PSNR—A comparative study. J. Comput. Commun. 2019, 7, 8–18. [Google Scholar] [CrossRef] [Green Version]
  73. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  74. Jia, H.; Ma, J.; Song, W. Multilevel Thresholding Segmentation for Color Image Using Modified Moth-Flame Optimization. IEEE Access 2019, 7, 44097–44134. [Google Scholar] [CrossRef]
  75. Xing, Z. An improved emperor penguin optimization based multilevel thresholding for color image segmentation. Knowl. Based Syst. 2020, 194, 105570. [Google Scholar] [CrossRef]
Figure 1. The different processes of ROA.
Figure 2. Brownian distribution, 2D Brownian trajectory, and 3D Brownian trajectory.
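The Brownian motion illustrated in Figure 2 is driven by standard normal increments. The following sketch shows one way to generate such a trajectory; the step count and the random seed are illustrative choices, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
steps = rng.standard_normal((1000, 2))   # N(0, 1) increments in two dimensions
trajectory = np.cumsum(steps, axis=0)    # cumulative sum gives a 2D Brownian path
# Each step follows the density f(x) = exp(-x**2 / 2) / sqrt(2 * pi)
```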
Figure 3. Diagram of LOBL.
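Lens opposition-based learning (LOBL), sketched in Figure 3, maps a candidate solution to the opposite side of the midpoint of the search range through a scaling factor k, which helps an agent escape a local optimum. The code below uses the commonly cited LOBL formula; the particular value of k is an assumption and may not match the setting used in this paper.

```python
import numpy as np

def lens_opposition(x, lb, ub, k=1000.0):
    """Lens opposition-based candidate (common formulation, k assumed):
    x* = (lb + ub) / 2 + (lb + ub) / (2 * k) - x / k."""
    x = np.asarray(x, dtype=float)
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    x_opp = (lb + ub) / 2.0 + (lb + ub) / (2.0 * k) - x / k
    return np.clip(x_opp, lb, ub)   # keep the opposite candidate inside the bounds
```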
Figure 4. Flowchart of the proposed MROA.
Figure 5. View of 23 benchmark functions.
Figure 6. The radar graphs of algorithms on 23 benchmark functions.
Figure 7. Boxplot behavior of algorithms on some functions.
Figure 8. Convergence behavior of algorithms on some functions.
Figure 9. Berkeley image dataset and its histogram.
Figure 10. Average fitness values obtained by the algorithms over all images.
Figure 11. Average PSNR values obtained by the algorithms over all images.
Figure 12. Average SSIM values obtained by the algorithms over all images.
Figure 13. Average FSIM values obtained by the algorithms over all images.
Figure 14. The number of best cases based on the fitness, PSNR, SSIM, FSIM obtained by algorithms over all images.
Table 1. Definition of 23 benchmark functions (D = dimension, UM = unimodal, MM = multimodal).
No | Name | D | Range | Type | fmin
F1 | Sphere | 30 | [−100, 100] | UM | 0
F2 | Schwefel 2.22 | 30 | [−10, 10] | UM | 0
F3 | Schwefel 1.2 | 30 | [−100, 100] | UM | 0
F4 | Schwefel 2.21 | 30 | [−100, 100] | UM | 0
F5 | Rosenbrock | 30 | [−30, 30] | UM | 0
F6 | Step | 30 | [−100, 100] | UM | 0
F7 | Quartic | 30 | [−1.28, 1.28] | UM | 0
F8 | Schwefel | 30 | [−500, 500] | MM | −12,569.487
F9 | Rastrigin | 30 | [−5.12, 5.12] | MM | 0
F10 | Ackley | 30 | [−32, 32] | MM | 0
F11 | Griewank | 30 | [−600, 600] | MM | 0
F12 | Penalized | 30 | [−50, 50] | MM | 0
F13 | Penalized 2 | 30 | [−50, 50] | MM | 0
F14 | Foxholes | 2 | [−65.536, 65.536] | MM | 0.998004
F15 | Kowalik | 4 | [−5, 5] | MM | 0.0003075
F16 | Six-hump camel-back | 2 | [−5, 5] | MM | −1.0316285
F17 | Branin | 2 | [−5, 5] | MM | 0.398
F18 | Goldstein-Price | 2 | [−2, 2] | MM | 3
F19 | Hartman 3 | 3 | [−1, 2] | MM | −3.862782
F20 | Hartman 6 | 6 | [0, 1] | MM | −3.32236
F21 | Shekel 5 | 4 | [0, 10] | MM | −10.1532
F22 | Shekel 7 | 4 | [0, 10] | MM | −10.4029
F23 | Shekel 10 | 4 | [0, 10] | MM | −10.5364
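To make the definitions in Table 1 concrete, the sketch below implements two representative functions, the unimodal Sphere (F1) and the multimodal Rastrigin (F9); both reach their minimum of 0 at the origin. These are the standard textbook forms rather than code taken from the paper.

```python
import numpy as np

def sphere(x):
    # F1, unimodal, search range [-100, 100]^30
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def rastrigin(x):
    # F9, multimodal, search range [-5.12, 5.12]^30
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

print(sphere(np.zeros(30)), rastrigin(np.zeros(30)))   # both print 0.0
```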
Table 3. Statistical results of algorithms on 23 benchmark functions.
Function | Statistic (Mean/Std) | MROA | ROA | RSA | AOA | AO | SSA | SCA | GWO
F1Mean0.00 × 10+008.88 × 10−3140.00 × 10+005.22 × 10−066.58 × 10−1022.03 × 10−071.76 × 10+011.23 × 10−27
Std0.00 × 10+000.00 × 10+000.00 × 10+002.58 × 10−063.59 × 10−1012.30 × 10−075.31 × 10+012.21 × 10−27
F2Mean0.00 × 10+001.01 × 10−1680.00 × 10+001.86 × 10−031.72 × 10−592.02 × 10+002.95 × 10−021.19 × 10−16
Std0.00 × 10+000.00 × 10+000.00 × 10+001.77 × 10−039.40 × 10−591.56 × 10+005.07 × 10−028.16 × 10−17
F3Mean0.00 × 10+001.15 × 10−2680.00 × 10+001.10 × 10−033.11 × 10−1011.67 × 10+031.01 × 10+041.60 × 10−04
Std0.00 × 10+000.00 × 10+000.00 × 10+008.16 × 10−041.64 × 10−1008.80 × 10+026.49 × 10+038.16 × 10−04
F4Mean0.00 × 10+004.59 × 10−1640.00 × 10+001.86 × 10−022.13 × 10−531.07 × 10+013.79 × 10+016.77 × 10−07
Std0.00 × 10+000.00 × 10+000.00 × 10+001.30 × 10−029.90 × 10−534.18 × 10+001.10 × 10+016.21 × 10−07
F5Mean2.74 × 10+012.63 × 10+012.13 × 10+012.81 × 10+017.19 × 10−033.22 × 10+026.92 × 10+042.70 × 10+01
Std6.30 × 10−014.96 × 10+001.30 × 10+012.22 × 10−011.18 × 10−026.84 × 10+021.81 × 10+056.69 × 10−01
F6Mean5.72 × 10−011.05 × 10−017.35 × 10+003.10 × 10+001.23 × 10−042.33 × 10−071.96 × 10+017.63 × 10−01
Std2.48 × 10−017.73 × 10−022.90 × 10−012.37 × 10−011.84 × 10−043.20 × 10−072.77 × 10+013.27 × 10−01
F7Mean5.95 × 10−051.64 × 10−041.02 × 10−047.00 × 10−051.12 × 10−041.68 × 10−019.58 × 10−021.80 × 10−03
Std6.00 × 10−051.88 × 10−041.16 × 10−047.00 × 10−058.14 × 10−056.99 × 10−028.56 × 10−029.21 × 10−04
F8Mean−1.24 × 10+04−1.23 × 10+04−5.34 × 10+03−5.50 × 10+03−8.43 × 10+03−7.10 × 10+03−3.80 × 10+03−6.09 × 10+03
Std2.60 × 10+024.96 × 10+023.88 × 10+024.21 × 10+023.55 × 10+038.77 × 10+022.56 × 10+028.92 × 10+02
F9Mean0.00 × 10+000.00 × 10+000.00 × 10+001.60 × 10−060.00 × 10+005.90 × 10+012.98 × 10+013.34 × 10+00
Std0.00 × 10+000.00 × 10+000.00 × 10+001.51 × 10−060.00 × 10+002.59 × 10+012.55 × 10+014.09 × 10+00
F10Mean8.88 × 10−168.88 × 10−168.88 × 10−164.71 × 10−048.88 × 10−162.49 × 10+001.42 × 10+011.07 × 10−13
Std0.00 × 10+000.00 × 10+000.00 × 10+001.40 × 10−040.00 × 10+009.35 × 10−018.86 × 10+001.60 × 10−14
F11Mean0.00 × 10+000.00 × 10+000.00 × 10+003.13 × 10−030.00 × 10+001.97 × 10−021.17 × 10+005.43 × 10−03
Std0.00 × 10+000.00 × 10+000.00 × 10+001.19 × 10−020.00 × 10+001.53 × 10−027.83 × 10−018.53 × 10−03
F12Mean4.41 × 10−021.11 × 10−021.55 × 10+007.46 × 10−012.73 × 10−066.64 × 10+004.05 × 10+043.83 × 10−02
Std2.34 × 10−021.09 × 10−023.31 × 10−013.10 × 10−023.94 × 10−063.35 × 10+001.54 × 10+051.52 × 10−02
F13Mean2.44 × 10+002.13 × 10−013.85 × 10−012.96 × 10+006.09 × 10−051.81 × 10+013.82 × 10+056.87 × 10−01
Std1.07 × 10+001.29 × 10−019.99 × 10−011.99 × 10−021.03 × 10−041.52 × 10+011.23 × 10+061.90 × 10−01
F14Mean9.57 × 10+003.22 × 10+004.54 × 10+001.07 × 10+012.82 × 10+001.10 × 10+001.66 × 10+005.50 × 10+00
Std4.85 × 10+003.89 × 10+003.51 × 10+003.15 × 10+003.97 × 10+003.03 × 10−019.48 × 10−014.28 × 10+00
F15Mean4.32 × 10−044.33 × 10−042.85 × 10−031.29 × 10−024.57 × 10−042.81 × 10−031.22 × 10−033.75 × 10−03
Std1.59 × 10−041.96 × 10−041.63 × 10−032.88 × 10−029.57 × 10−055.96 × 10−033.42 × 10−047.56 × 10−03
F16Mean−1.03 × 10+00−1.03 × 10+00−1.03 × 10+00−1.03 × 10+00−1.03 × 10+00−1.03 × 10+00−1.03 × 10+00−1.03 × 10+00
Std2.30 × 10−083.47 × 10−081.97 × 10−032.22 × 10−113.91 × 10−041.81 × 10−144.59 × 10−052.85 × 10−08
F17Mean3.98 × 10−013.98 × 10−014.17 × 10−014.03 × 10−013.98 × 10−013.98 × 10−013.99 × 10−013.98 × 10−01
Std7.33 × 10−065.30 × 10−061.59 × 10−022.50 × 10−021.88 × 10−041.12 × 10−141.15 × 10−039.65 × 10−07
F18Mean3.00 × 10+003.00 × 10+006.68 × 10+003.06 × 10+013.04 × 10+003.00 × 10+003.00 × 10+003.00 × 10+00
Std1.23 × 10−047.58 × 10−059.55 × 10+003.49 × 10+014.04 × 10−022.32 × 10−131.92 × 10−043.62 × 10−05
F19Mean−3.86 × 10+00−3.86 × 10+00−3.76 × 10+00−3.77 × 10+00−3.86 × 10+00−3.86 × 10+00−3.85 × 10+00−3.86 × 10+00
Std1.27 × 10−042.73 × 10−031.65 × 10−015.23 × 10−017.21 × 10−032.50 × 10−128.81 × 10−033.81 × 10−03
F20Mean−3.30 × 10+00−3.24 × 10+00−2.71 × 10+00−3.27 × 10+00−3.16 × 10+00−3.23 × 10+00−2.89 × 10+00−3.27 × 10+00
Std4.51 × 10−028.27 × 10−022.93 × 10−015.93 × 10−021.09 × 10−015.83 × 10−023.16 × 10−017.81 × 10−02
F21Mean−1.02 × 10+01−1.01 × 10+01−5.06 × 10+00−7.71 × 10+00−1.01 × 10+01−7.23 × 10+00−2.53 × 10+00−8.98 × 10+00
Std3.00 × 10−031.45 × 10−023.61 × 10−072.69 × 10+001.02 × 10−023.48 × 10+001.81 × 10+002.44 × 10+00
F22Mean−1.04 × 10+01−1.04 × 10+01−5.09 × 10+00−8.26 × 10+00−1.04 × 10+01−8.76 × 10+00−3.26 × 10+00−1.04 × 10+01
Std2.02 × 10−032.45 × 10−027.16 × 10−072.92 × 10+001.45 × 10−023.05 × 10+001.85 × 10+002.40 × 10−03
F23Mean−1.05 × 10+01−1.05 × 10+01−5.13 × 10+00−7.35 × 10+00−1.05 × 10+01−8.85 × 10+00−3.65 × 10+00−1.01 × 10+01
Std2.19 × 10−031.97 × 10−021.44 × 10−063.38 × 10+002.93 × 10−022.90 × 10+001.86 × 10+001.85 × 10+00
Table 4. Statistical results of algorithms on 23 benchmark functions using Wilcoxon rank-sum test.
Function | MROA vs. ROA | MROA vs. RSA | MROA vs. AOA | MROA vs. AO | MROA vs. SSA | MROA vs. SCA | MROA vs. GWO
F14.19 × 10−02NaN1.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−12
F21.21 × 10−12NaN1.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−12
F31.21 × 10−12NaN1.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−12
F41.21 × 10−12NaN1.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−12
F52.46 × 10−012.50 × 10−021.70 × 10−083.02 × 10−115.57 × 10−103.02 × 10−113.18 × 10−01
F63.20 × 10−091.79 × 10−114.08 × 10−113.02 × 10−113.02 × 10−113.02 × 10−118.77 × 10−02
F72.13 × 10−053.92 × 10−026.41 × 10−018.50 × 10−023.02 × 10−113.02 × 10−113.02 × 10−11
F89.06 × 10−083.01 × 10−113.02 × 10−119.33 × 10−028.99 × 10−113.02 × 10−113.02 × 10−11
F9NaNNaN1.21 × 10−12NaN1.21 × 10−121.21 × 10−121.18 × 10−12
F10NaNNaN1.21 × 10−12NaN1.21 × 10−121.21 × 10−121.02 × 10−12
F11NaNNaN1.21 × 10−12NaN1.21 × 10−121.21 × 10−121.10 × 10−02
F123.16 × 10−101.10 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−111.30 × 10−03
F133.16 × 10−106.71 × 10−059.51 × 10−063.02 × 10−112.75 × 10−033.02 × 10−111.03 × 10−06
F143.37 × 10−063.34 × 10−043.91 × 10−018.07 × 10−072.18 × 10−103.20 × 10−074.56 × 10−05
F153.18 × 10−015.49 × 10−111.22 × 10−016.36 × 10−056.70 × 10−111.86 × 10−095.69 × 10−01
F161.77 × 10−035.57 × 10−103.02 × 10−115.57 × 10−103.01 × 10−115.57 × 10−101.08 × 10−02
F173.71 × 10−016.07 × 10−113.02 × 10−111.17 × 10−092.86 × 10−113.34 × 10−112.90 × 10−01
F185.08 × 10−031.09 × 10−013.99 × 10−043.02 × 10−113.02 × 10−117.98 × 10−027.24 × 10−02
F199.06 × 10−083.02 × 10−111.86 × 10−036.07 × 10−113.02 × 10−113.02 × 10−112.25 × 10−04
F204.01 × 10−023.69 × 10−116.77 × 10−053.50 × 10−033.26 × 10−013.47 × 10−102.05 × 10−03
F213.01 × 10−073.02 × 10−113.37 × 10−047.30 × 10−041.86 × 10−013.02 × 10−115.26 × 10−04
F221.17 × 10−093.02 × 10−119.53 × 10−071.47 × 10−073.99 × 10−043.02 × 10−116.05 × 10−07
F238.48 × 10−093.02 × 10−111.27 × 10−025.19 × 10−079.51 × 10−063.02 × 10−111.75 × 10−05