Article

Improving Prostate Image Segmentation Based on Equilibrium Optimizer and Cross-Entropy

by Omar Zarate 1, Salvador Hinojosa 2,* and Daniel Ortiz-Joachin 2
1 Information Technology Department, School of Engineering and Science, Universidad Tecnológica de Jalisco, Guadalajara 44979, Mexico
2 Depto. de Computación, Escuela de Ingeniería y Ciencias, Tecnologico de Monterrey, Zapopan 45121, Mexico
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(21), 9785; https://doi.org/10.3390/app14219785
Submission received: 10 September 2024 / Revised: 13 October 2024 / Accepted: 16 October 2024 / Published: 25 October 2024

Abstract

Over the past decade, the development of computer-aided detection tools for medical image analysis has seen significant advancements. However, tasks such as the automatic differentiation of tissues or regions in medical images remain challenging. Magnetic resonance imaging (MRI) has proven valuable for early diagnosis, particularly in conditions like prostate cancer, yet it often struggles to produce high-resolution images with clearly defined boundaries. In this article, we propose a novel segmentation approach based on minimum cross-entropy thresholding using the equilibrium optimizer (MCE-EO) to enhance the visual differentiation of tissues in prostate MRI scans. To validate our method, we conducted two experiments. The first evaluated the overall performance of MCE-EO using standard grayscale benchmark images, while the second focused on a set of transaxial-cut prostate MRI scans. MCE-EO’s performance was compared against six stochastic optimization techniques. Statistical analysis of the results demonstrates that MCE-EO offers superior performance for prostate MRI segmentation, providing a more effective tool for distinguishing between various tissue types.

1. Introduction

Medical image processing has become an active research topic due to its importance in the detection, diagnosis, and staging of health conditions. Medical imaging covers a wide range of technologies designed to generate a visual representation of internal morphology or function with the specific purpose of assisting health care practitioners. Many of these technologies are non-invasive, such as magnetic resonance (MR) imaging, ultrasonography, and positron emission tomography (PET), among others. Of these alternatives, MR images (MRIs) are often used to analyze anatomical structures because they provide high spatial resolution. However, acquiring high-quality MRIs is challenging, since MRIs are susceptible to patient motion and artifacts [1], which can compromise the quality of the scans and, consequently, the accuracy of diagnoses and treatment plans. These challenges necessitate the use of advanced techniques for image segmentation, a crucial step in interpreting medical images and assisting in diagnosis, particularly for diseases like prostate cancer, the second most common and deadly cancer in men after lung cancer [2]. Prostate cancer is known for its silent development and is often diagnosed at an advanced stage. According to global cancer statistics from 2022, 1,466,680 new cases of prostate cancer were diagnosed that year [3]. Early detection is crucial, as it significantly improves survival rates; however, challenges remain due to the subtle appearance of early-stage tumors in imaging. Accurate analysis of prostate MRIs is therefore essential to improving diagnostic accuracy and reducing the need for invasive biopsies.
The use of computer-aided diagnostic (CAD) tools as a complement to medical imaging can help health practitioners avoid or reduce biopsies while also saving time [4]. Research on the design of CAD systems centers on three main topics: denoising, segmentation, and classification.
Segmentation is a fundamental task in medical image processing and plays a critical role in separating different regions of interest within an image, especially when the boundaries are not clearly defined [5]. Traditional segmentation methods, such as thresholding, offer simplicity but face limitations in the context of complex medical images, particularly when distinguishing between subtle tissue variations in MRI scans. Thresholding is a simple yet powerful methodology for separating two or more regions of an image. This approach first transforms a given image into a frequency representation over an intensity histogram. The histogram is the input of thresholding methods, and the core idea is to find a threshold value (th) that accurately divides the histogram into two or more classes, each of which homogeneously shares a range of intensity values. Traditionally, thresholding approaches are termed bilevel or multilevel thresholding (MTH) according to the number of thresholds used. Solving an MTH problem involves finding the optimal threshold values (th) that segment a given image properly. As the number of thresholds increases, so does the computational complexity of the task; thus, the efficient selection of threshold values is non-trivial. The literature offers numerous approaches to determine the best threshold values for an image. First, the work proposed by Otsu used the between-class variance as a non-parametric criterion [6]. Entropy-based approaches draw on information theory to determine whether a given partition of a histogram is good enough. This area started with the basic formulation of the Shannon entropy [7], followed by a multitude of variants, including Kapur [8], Tsallis [9], Renyi [10], and cross-entropy [11]. Cross-entropy has been widely adopted by the image processing community since it was proposed by Li and Lee, and it is often referred to as the minimum cross-entropy criterion (MCE). Approaches that use MCE determine whether a particular set of thresholds minimizes the cross-entropy of the histogram’s partition. Therefore, MCE is only an entropic criterion, not a complete thresholding methodology by itself. MCE could be used in an exhaustive search to evaluate all possible combinations of threshold values, but this is, of course, impractical. To alleviate the computational burden of such a search, stochastic optimization algorithms (SOAs) are often applied to this task. SOAs use stochasticity in their operators to find optimal solutions to complex problems in reasonable times. The literature contains multiple approaches that employ randomness and domain-specific rules to identify optimal solutions quickly; these are often referred to as metaheuristic algorithms. Examples of SOAs applied to the multilevel thresholding of images can be found in [12,13,14,15,16,17,18]. Specific to the field of health care, there is a plethora of implementations. For instance, a modification of the Slime Mould Algorithm (SMA) has been used to perform image thresholding over breast thermograms [19]. Ref. [20] uses a variant of the manta ray foraging optimization algorithm (MRFO) to segment computed tomography (CT) scans from COVID-19 patients using Otsu as the objective function. Avalos et al.
proposed an accurate cluster chaotic optimization approach for digital mammograms, lymphoblastic leukemia images, and brain MRIs using cross-entropy as the non-parametric criterion [21]. Panda et al. presented a hybrid algorithm called the hybrid adaptive cuckoo search-squirrel search algorithm (ACS-SSA) to segment T-2 weighted axial brain MR images [22]. Houssein et al. used a modification of the chimp optimization algorithm (COA) for the segmentation of breast thermographic images [23].
Despite these advances, challenges remain in achieving both high accuracy and computational efficiency in medical image segmentation. The balance between exploration and exploitation in SOAs is critical for their success in specific applications, but it is not always straightforward to adapt general-purpose algorithms to the domain of medical imaging. Moreover, the performance of these algorithms is highly dependent on the problem at hand, making it necessary to assess their applicability in the context of specific medical conditions and imaging modalities. SOAs, and specifically metaheuristic algorithms, are often presented in the academic literature using a set of standard benchmark functions designed to stress the algorithm and to determine whether the proposed approach is better than previous ones in a general sense. Although such comparisons give a glimpse of an algorithm’s potential, its actual performance can only be assessed when it is applied to a specific problem [24]. Unfortunately, it is impractical to evaluate every new algorithm on all possible applications. However, guided by the general results of a proposal and by analyzing how the algorithm has been applied in other areas, we can identify promising candidates for successful implementation.
The equilibrium optimizer (EO) is an SOA inspired by control volume mass balance models [25]. EO has been successfully applied to a wide range of applications, including topics in energy, manufacturing design, artificial intelligence, and even image processing. In electrical and energy applications, the EO has been used to solve the optimal power flow in hybrid AC/DC grids, where many objectives are considered, such as generation cost, environmental emissions, power losses, and the deviation of bus voltages [26]. In [27], the EO is used to model a proton exchange membrane fuel cell; the model is tested on various commercial fuel cells with competitive results. In [28], the EO helps to determine critical parameters of Schottky barrier diodes. Ref. [29] introduces the use of EO for solar photovoltaic parameter estimation. For industrial applications, the EO has been applied to optimize the structural design of vehicle components, specifically the vehicle’s seat bracket [30]. The EO has also been applied to the laser cutting of polymethylmethacrylate sheets, where it enhances the accuracy of a random vector functional link network at estimating the laser-cutting properties of a given design [31]. In the field of artificial intelligence and machine learning, the EO has been used to train CNNs; in [32], the EO is used in the training process of a CNN, and the model is evaluated on real-time traffic data from IoT sensors in Minnesota with effective results. Moreover, EO modifications have been applied to perform feature selection [33]. EO has been explored widely since its recent publication. For instance, Ref. [34] explores the effect of incorporating a chaotic map into the structure of EO, with promising results. Following a similar idea, the authors also proposed an improved version of the EO by controlling the parameter t [35]. In [36], a mutation strategy was incorporated into the EO to modify the balance between exploration and exploitation. Lastly, EO was modified to address multi-objective problems in [37]. In [38], the authors proposed an EO-based thresholding approach for image segmentation using Kapur’s entropy as the objective function over the Berkeley image segmentation dataset; their results indicate that EO is able to outperform seven prominent metaheuristic algorithms, proving that EO is suitable for the segmentation task. Then, in [39], the authors presented an adaptive EO for image thresholding and tested their proposal with the BSDS500 dataset.
Given the above research, the relevance of EO in many areas is clear, making it a strong candidate for a large number of implementations. This evidence helped us to select the EO as the basis of the multilevel cross-entropy thresholding technique for the segmentation of prostate MRIs (MCE-EO). The proposed approach is designed to select an optimal set of threshold values using cross-entropy as the objective function. The proposed method is analyzed to evaluate its effectiveness over relevant images. The methodology contains two experiments. The first is designed to determine whether the proposed MCE-EO is superior to related techniques on a set of images commonly used in the thresholding literature; the second provides a more in-depth analysis of the performance of MCE-EO in the specific domain of prostate image segmentation, using transaxial-cut prostate MR images. In both cases, the experiments consider six other metaheuristic algorithms: the sunflower optimization (SFO) algorithm [40], the sine cosine algorithm (SCA) [41], particle swarm optimization (PSO) [13], differential evolution (DE) [42], the genetic algorithm (GA) [43], and the Hirschberg–Sinclair algorithm (HS) [44]. The significance of the results is statistically analyzed, providing strong evidence of the suitability of MCE-EO for the segmentation of prostate MRIs.
The article is structured as follows. Section 2 and Section 3 present the theoretical foundations of the MCE-EO, including the formulation of cross-entropy in Section 2 and the description of the original EO in Section 3. Then, in Section 4 we propose the MCE-EO approach and all of its implications. Section 5 describes the experimental methodology and results. Finally, Section 6 concludes this article.

2. Image Segmentation and Minimum Cross-Entropy

Kullback [45] introduced the concept of entropy in the context of comparing two probability distributions, defined over the same set. The measure of information-theoretic distance between these two distributions is known as cross-entropy or divergence.
In image thresholding, the primary objective is to identify threshold values that effectively segment regions within an image. The image histogram can be interpreted as a probability distribution representing the occurrence of pixel intensity values. Statistical methods are then applied to distinguish between different regions or classes within the image. One such method is the parametric criterion based on cross-entropy, as proposed by Kullback [45]. This criterion compares two distributions, J = { j 1 , j 2 , , j N } and G = { g 1 , g 2 , , g N } , and calculates cross-entropy using the following expression:
D(J, G) = \sum_{i=1}^{N} j_i \log \frac{j_i}{g_i}
In 1993, Li and Lee [11] introduced the use of cross-entropy, a concept rooted in information theory, for solving binary segmentation problems. Their approach, known as the minimum cross-entropy thresholding (MCET) method, processes the image histogram by dividing it into subsets and computing the cross-entropy for each subset. The goal is to determine the set of threshold values that minimizes the cross-entropy, resulting in optimal segmentation. For a grayscale 8-bit image, pixel intensity values range from 0 to 255, with the maximum value denoted by L = 255 . A given threshold t h divides the image into two regions as follows:
I_t(x, y) = \begin{cases} \mu(1, th), & I(x, y) < th \\ \mu(th, L + 1), & I(x, y) \geq th \end{cases}
where
\mu(a, b) = \sum_{i=a}^{b-1} i\, h^{Gr}(i) \Big/ \sum_{i=a}^{b-1} h^{Gr}(i)
Following this idea, the expression can be rewritten in the form of an objective function:
f_{Cross}(th) = \sum_{i=1}^{th-1} i\, h^{Gr}(i) \log\left(\frac{i}{\mu(1, th)}\right) + \sum_{i=th}^{L} i\, h^{Gr}(i) \log\left(\frac{i}{\mu(th, L + 1)}\right)
Originally, the minimum cross-entropy thresholding (MCET) method was developed to handle a single threshold value, which divides the image histogram into two distinct classes. However, in many cases, segmenting an image into only two regions is insufficient for accurately representing more complex scenes. To overcome this limitation, the MCET problem can be extended into a multilevel formulation, which allows for partitioning the histogram into more than two classes, as shown below:
f_{Cross}(th) = \sum_{i=1}^{L} i\, h^{Gr}(i) \log(i) - \sum_{i=1}^{th-1} i\, h^{Gr}(i) \log\big(\mu(1, th)\big) - \sum_{i=th}^{L} i\, h^{Gr}(i) \log\big(\mu(th, L + 1)\big)
The multilevel version of the MCET takes a set of k thresholds in the form of a vector th = [th_1, th_2, …, th_k]:
f_{Cross}(\mathbf{th}) = \sum_{i=1}^{L} i\, h^{Gr}(i) \log(i) - \sum_{q=1}^{k} H_q
where each H_q is the partial cross-entropy associated with the class defined by the corresponding thresholds, computed as follows:
H_1 = \sum_{i=1}^{th_1 - 1} i\, h^{Gr}(i) \log\big(\mu(1, th_1)\big)
H_q = \sum_{i=th_{q-1}}^{th_q - 1} i\, h^{Gr}(i) \log\big(\mu(th_{q-1}, th_q)\big), \quad 1 < q < k
H_k = \sum_{i=th_k}^{L} i\, h^{Gr}(i) \log\big(\mu(th_k, L + 1)\big)
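To make the multilevel criterion concrete, the following Python sketch evaluates the objective of Equation (6) for a 256-bin grayscale histogram. It is a minimal illustration only: gray levels are indexed from 1 to L to avoid log(0), thresholds are clamped to the valid range, and the function and variable names are ours, not part of the original implementation.

```python
import numpy as np

def mce_objective(hist, thresholds, L=256):
    """Multilevel minimum cross-entropy of a gray-level histogram for a
    given set of thresholds (smaller is better)."""
    h = np.asarray(hist, dtype=float)
    i = np.arange(1, L + 1)                      # gray levels shifted to 1..L

    def mu(a, b):
        # Mean intensity of the histogram slice covering levels a..b-1
        sl = slice(a - 1, b - 1)
        den = h[sl].sum()
        return (i[sl] * h[sl]).sum() / den if den > 0 else 1.0

    # Clamp thresholds to [1, L] so the class boundaries stay valid
    ths = sorted(min(max(int(t), 1), L) for t in thresholds)
    bounds = [1] + ths + [L + 1]

    total = (i * h * np.log(i)).sum()            # constant term of Eq. (6)
    for a, b in zip(bounds[:-1], bounds[1:]):    # one partial entropy per class
        sl = slice(a - 1, b - 1)
        total -= (i[sl] * h[sl]).sum() * np.log(mu(a, b))
    return total
```

A stochastic optimizer such as the EO can then search for the threshold vector that minimizes this function.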

3. Equilibrium Optimizer

The EO is a nature-inspired optimization algorithm that models the process of reaching equilibrium in a control volume, akin to the balancing of mass in a physical system [25]. In this article, each particle within the search space represents a potential segmentation of the prostate region in an MRI scan. EO iteratively updates these segmentation solutions using three key mechanisms: the equilibrium concentration, which corresponds to the best segmentation found so far; the deviation of a particle (segmentation) from this equilibrium; and a refinement process that further improves the accuracy of the segmentation. These components work in tandem to balance exploration (trying new segmentation possibilities) and exploitation (refining existing segmentations), ensuring efficient convergence on an optimal prostate segmentation. The mathematical model guarantees that the EO efficiently narrows down the best segmentation, making it suitable for complex medical image analysis tasks such as MRI prostate segmentation.
Diving deeper, the whole model can be described as a first-order differential equation in which the rate of change of mass in the control volume equals the mass entering the system, minus the mass leaving it, plus the mass generated inside it. This is defined in the following equation:
V \frac{dC}{dt} = Q\, C_{eq} - Q\, C + G
When an equilibrium is reached, the rate of change is equal to 0. The equation can then be rearranged to obtain the concentration in the control volume (C), as in Equation (9):
C = C_{eq} + \left(C_0 - C_{eq}\right) F + \frac{G}{\lambda V}\left(1 - F\right)
where λ = Q/V is the turnover rate, and F is computed as:
F = \exp\left(-\lambda \left(t - t_0\right)\right)
The concentration of particles in the EO is updated using the three terms described in Equation (9). The first term is the equilibrium concentration, one of the best-so-far solutions randomly selected from the equilibrium pool; the second is the difference between the concentration of a particle and the equilibrium state; and the third term acts as an exploiter, or solution refiner. The initial concentrations are generated randomly within the search space according to the number of particles and dimensions. This process is modelled as:
C_i^{initial} = C_{min} + rand_i \left(C_{max} - C_{min}\right), \quad i = 1, 2, \ldots, n
The equilibrium state is the final convergence state of the algorithm. At the beginning of the process, there is no knowledge of the equilibrium, so the equilibrium candidates are determined by trial and error. Depending on the experimentation, one might use a different number of candidates; in this work, the equilibrium pool contains the four best-so-far candidates plus their average.
The next key component in the concentration update rule is the exponential term F. Defining this term precisely helps the EO maintain an effective balance between exploration and exploitation. Given that the turnover rate in a real control volume can fluctuate over time, λ is modeled as a random vector in the range [0, 1].
F = e^{-\lambda \left(t - t_0\right)}
From Equation (10), time t is defined as a function of the current iteration Iter, where Max_iter is the maximum budget of iterations available to find the best solution:
t = \left(1 - \frac{Iter}{Max\_iter}\right)^{\left(a_2 \frac{Iter}{Max\_iter}\right)}
where a_2 is a constant that controls the exploitation ability. The EO also considers:
t_0 = \frac{1}{\lambda} \ln\left(-a_1\, \mathrm{sign}\left(r - 0.5\right)\left[1 - e^{-\lambda t}\right]\right) + t
to slow down the search speed. The constants a_1 and a_2 are set to 2 and 1, respectively, and r is a random vector in the range [0, 1].
The generation rate function G is used to refine solutions by improving the exploitation phase. One multipurpose model adjusted for the EO is represented by:
G = G_0\, e^{-\lambda \left(t - t_0\right)} = G_0 F
where:
G_0 = GCP \left(C_{eq} - \lambda C\right)
GCP = \begin{cases} 0.5\, r_1, & r_2 \geq GP \\ 0, & r_2 < GP \end{cases}
where r_1 and r_2 are random numbers in the range [0, 1] and GP is the generation probability. Finally, the concentration can be calculated as follows:
C = C_{eq} + \left(C - C_{eq}\right) F + \frac{G}{\lambda V}\left(1 - F\right)
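As a reading aid, the update rules above can be condensed into a single generation of the optimizer. The following Python sketch is a simplified, illustrative implementation under our own naming conventions; it folds the t_0 term into F, as is done in the reference EO implementation [25], and should not be taken as the authors' exact code.

```python
import numpy as np

def eo_step(C, eq_pool, it, max_it, lb, ub, a1=2.0, a2=1.0, GP=0.5):
    """One EO generation: C is the (n, d) matrix of particle concentrations,
    eq_pool holds the four best-so-far particles plus their average."""
    n, d = C.shape
    t = (1.0 - it / max_it) ** (a2 * it / max_it)            # iteration-dependent time
    C_new = np.empty_like(C)
    for k in range(n):
        Ceq = eq_pool[np.random.randint(len(eq_pool))]       # random equilibrium candidate
        lam = np.random.rand(d)                              # turnover rate in [0, 1]
        r = np.random.rand(d)
        F = a1 * np.sign(r - 0.5) * (np.exp(-lam * t) - 1.0) # exponential term with t0 folded in
        r1, r2 = np.random.rand(), np.random.rand()
        GCP = 0.5 * r1 if r2 >= GP else 0.0                  # generation-rate control parameter
        G = GCP * (Ceq - lam * C[k]) * F                     # generation rate
        C_new[k] = Ceq + (C[k] - Ceq) * F + (G / lam) * (1.0 - F)   # update rule, V = 1
        C_new[k] = np.clip(C_new[k], lb, ub)                 # keep particles inside the bounds
    return C_new
```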

4. Minimum Cross-Entropy by EO for MRIs

This section describes the application of the equilibrium optimizer (EO) combined with minimum cross-entropy (MCE) for prostate MRI segmentation. One of the key challenges in diagnosing prostate cancer is the subtlety of early-stage abnormalities, which are often difficult to detect with conventional imaging techniques. EO’s ability to balance exploration and exploitation during optimization enables it to identify optimal threshold values, enhancing image clarity and precision, especially on older imaging equipment. This improves the delineation of prostate boundaries and helps detect small lesions. Early detection of prostate cancer is critical, as it significantly improves treatment outcomes by enabling less invasive interventions and better patient prognoses. EO’s contribution to providing clearer, more accurate segmentations aids radiologists in distinguishing between soft tissues, leading to more informed clinical decisions. From a practical perspective, incorporating EO and MCE thresholding as a preprocessing step in computer-aided diagnostic tools can streamline workflow efficiency in radiology departments.
The key elements of the proposed algorithm are detailed in the following subsections.

4.1. Problem Formulation

The proposed method adopts a multilevel thresholding approach for image segmentation, where a prostate image is segmented based on pixel intensity, dividing the histogram into a finite number of classes. This partitioning is achieved by selecting a set of threshold values across the histogram. The equilibrium optimizer (EO) is employed to generate candidate segmentation configurations, while the minimum cross-entropy (MCE) criterion evaluates the effectiveness of these configurations. Through an iterative process, EO and MCE collaborate to identify optimal threshold values that yield effective image segmentation results.
As outlined in Section 2, the minimum cross-entropy thresholding can be formulated as an optimization problem, which is expressed as:
\underset{\mathbf{th}}{\arg\min}\ f_{Cross}(\mathbf{th}) \quad \text{subject to} \quad \mathbf{th} \in \mathbf{X}
This corresponds to the MCE formulation from Equation (6). The constraints for the feasible solution space are defined by the possible pixel intensity values in an 8-bit representation, where the pixel values range from 0 to 255. The set of feasible thresholds is given by X = {th ∈ R^n | 0 ≤ th_j ≤ 255, j = 1, 2, …, n}.

4.2. Encoding

In stochastic optimization algorithms, how solutions are encoded plays a crucial role. While the equilibrium optimizer (EO) typically uses particles referred to as concentrations C_i, in this context the set of thresholds th is used to represent particles instead. Therefore, the population TH is defined as a set of particles th, where each particle consists of k threshold values. The population and candidate solutions are described as follows:
\mathbf{TH} = \left[\mathbf{th}_1, \mathbf{th}_2, \ldots, \mathbf{th}_N\right], \qquad \mathbf{th}_i = \left[th_1, th_2, \ldots, th_k\right]

4.3. Initialization

In a population of N particles, each particle consists of k dimensions that are initialized randomly within the gray-level boundaries of the image as follows:
\mathbf{th}_i = L_{min} + rand_i(0, 1) \times \left(L_{max} - L_{min}\right)
where L_min and L_max are the minimum and maximum gray-level values in the image histogram (0 and 255, respectively), and rand_i(0, 1) is a random number in the range [0, 1].
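A minimal sketch of this initialization, with the additional (our own) choice of keeping each threshold vector sorted, could look as follows:

```python
import numpy as np

def init_population(n_particles, k, l_min=0, l_max=255, rng=None):
    """Random population of N threshold vectors with k levels each."""
    rng = np.random.default_rng() if rng is None else rng
    TH = l_min + rng.random((n_particles, k)) * (l_max - l_min)
    return np.sort(TH, axis=1)   # sorting is an implementation convenience, not required by the text
```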

4.4. MCE-EO Implementation

The equilibrium optimizer (EO) is implemented to solve thresholding problems on both standard and prostate MRI images. The entire process is summarized as follows. First, the image I(x, y) is loaded into memory, and its grayscale histogram h^{Gr}(I) is computed. Then, the configuration parameters of the MCE-EO are selected. Afterwards, the iterative process of the EO starts with a randomly generated population, and the solutions are iteratively improved through the EO operators and the evaluation of the MCE fitness function. The MCE-EO approach stops iterating when a stop criterion is met; in this case, a fixed number of generations. Finally, the optimal set of thresholds th_best is used to generate the segmented image. A graphical representation of the MCE-EO approach is depicted in Figure 1. The detailed implementation of MCE-EO is described in the pseudo-code of Figure 2.
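The loop just described can be sketched end to end by combining the helper functions from the previous sections. The snippet below is a hypothetical outline, not the authors' MATLAB implementation; the greedy memory-saving step mirrors common EO practice, and all names and default values are illustrative.

```python
import numpy as np

def mce_eo(hist, k, n_particles=30, max_iter=500, seed=None):
    """Search for k thresholds that minimize the MCE of a 256-bin histogram."""
    rng = np.random.default_rng(seed)
    TH = init_population(n_particles, k, rng=rng)                      # Section 4.3
    fit = np.array([mce_objective(hist, th) for th in TH])

    for it in range(1, max_iter + 1):
        best4 = TH[np.argsort(fit)[:4]]                                # equilibrium pool:
        eq_pool = list(best4) + [best4.mean(axis=0)]                   # four best + average
        TH_new = eo_step(TH, eq_pool, it, max_iter, lb=0.0, ub=255.0)  # EO operators (Section 3)
        fit_new = np.array([mce_objective(hist, th) for th in TH_new])
        improved = fit_new < fit                                       # greedy memory saving
        TH[improved], fit[improved] = TH_new[improved], fit_new[improved]

    best = np.sort(TH[np.argmin(fit)]).round().astype(int)
    return best, fit.min()
```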

4.5. Thresholded Image

Once the EO identifies the optimal set of threshold values for the image I ( x , y ) , it applies multilevel thresholding to generate the segmented image I t h ( x , y ) . The process for performing this segmentation is outlined in Equation (22). The overall MCE-EO methodology is illustrated in the flowchart shown in Figure 1.
I_{th}(x, y) = \begin{cases} I(x, y), & \text{if } I(x, y) \leq th_1 \\ th_{j-1}, & \text{if } th_{j-1} \leq I(x, y) < th_j \\ I(x, y), & \text{if } I(x, y) > th_n \end{cases} \qquad j = 2, 3, \ldots, n
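A direct way to apply Equation (22) with NumPy is sketched below; the handling of pixels lying exactly on a threshold is an implementation detail and may differ slightly from the authors' code.

```python
import numpy as np

def apply_thresholds(image, thresholds):
    """Replace intermediate intensities with the lower threshold of their
    interval; pixels below the first or above the last threshold keep their value."""
    ths = np.sort(np.asarray(thresholds)).astype(image.dtype)
    out = image.copy()
    for lo, hi in zip(ths[:-1], ths[1:]):
        out[(image >= lo) & (image < hi)] = lo
    return out
```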

4.6. Computational Complexity

The proposed MCE-EO uses two algorithms with different properties. First, the calculation of the minimum cross-entropy (MCE) for t thresholds is O(L^{t+1}) [12]. The second algorithm is the equilibrium optimizer (EO), with a reported polynomial complexity of O(idn + cin), where c is the cost of the objective function (in this case MCE), i is the number of iterations, d indicates the dimensions, and n indicates the number of solutions at each generation [25].

5. Experiments

In this section, the proposed method is analyzed to evaluate its effectiveness over relevant images. The methodology contains two experiments. The first is designed to determine whether the proposed approach is superior to related techniques over a set of eleven images commonly used in the field of thresholding. After concluding that the MCE-EO is competent for general image thresholding, we move towards a more in-depth analysis of the performance of MCE-EO in the specific domain of prostate image segmentation with six magnetic resonance transaxial-cut prostate images. In both cases, and for comparative purposes, the experiments consider six metaheuristic algorithms: the sunflower optimization (SFO) algorithm [40], the sine cosine algorithm (SCA) [41], particle swarm optimization (PSO) [13], differential evolution (DE) [42], the genetic algorithm (GA) [43], and the Hirschberg–Sinclair algorithm (HS) [44].
In this study, all of the metaheuristic (stochastic) algorithms used the minimum cross-entropy criterion as an objective function with the same solution representation. It is important to note that the proposed thresholding-based segmentation method does not rely on ground truth data, as is often required for metrics like pixel accuracy (PA) or intersection over union (IoU). Instead, we evaluate segmentation effectiveness using no-reference image quality metrics such as peak signal-to-noise ratio (PSNR) [14], structural similarity index measure (SSIM) [46], and feature similarity index measure (FSIM) [47]. These metrics provide a robust analysis of image fidelity, structure preservation, and overall segmentation quality, which are crucial for medical imaging applications where precise anatomical boundaries are essential for diagnosis.
All experiments were performed with MATLAB R2016a on a device equipped with a 2.4 GHz Intel Core i5 CPU and 12 GB of RAM. This approach allowed us to assess the performance of the segmentation process without the need for manually annotated reference images.

5.1. Parameter Settings

Stochastic optimization algorithms (SOAs) are quite sensitive to the parameter configuration of their mechanisms. The correct selection of parameters is a time-consuming task that is often avoided by using the original parameters selected by the creators of the algorithm. In most cases, the parameters provided by the original authors are good enough and give an overall idea of the performance of an algorithm on a given task. Thus, in this paper, the parameters are set as recommended by the original authors (see Table 1).
Another element with a significant impact on the performance of an algorithm is the maximum number of iterations it is allowed to use. In this study, we chose 500 iterations, as we observed convergence of most approaches before this limit. As their name indicates, SOAs show variability in their results. To properly compare the performance of each methodology, we performed thirty runs of each experiment to generate statistically valid data.
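For reproducibility, the statistics reported later can be gathered with a simple harness such as the one below. It is an illustrative sketch that reuses the MCE-EO outline from Section 4 and assumes, for the sake of the example, that the run index is used as the random seed.

```python
import numpy as np

def run_trials(hist, k, n_runs=30, max_iter=500):
    """Best-fitness statistics over independent runs of MCE-EO."""
    best_fits = np.array([mce_eo(hist, k, max_iter=max_iter, seed=run)[1]
                          for run in range(n_runs)])
    return best_fits.mean(), best_fits.std(), best_fits
```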

5.2. Evaluating Image Quality

Image quality after processing can be assessed through both objective and subjective methods. Objective evaluations include numerical metrics, either with or without a reference image, where the reference is often referred to as ground truth. While having a ground truth is ideal in many cases, creating annotated datasets is time-intensive and can sometimes introduce subjectivity. To address this limitation, many methods employ no-reference quality metrics such as PSNR, SSIM, and FSIM. In MRI, a high PSNR indicates that critical anatomical details are preserved, which is crucial since noise can obscure important structures. A high SSIM value ensures that important clinical features remain intact after processing. Given the importance of features such as tissue boundaries in MRI images, a high FSIM value indicates that essential diagnostic details have been retained.

5.2.1. PSNR

The peak signal-to-noise ratio (PSNR) measures the amount of distortion noise relative to the signal power [14]. PSNR is commonly used to evaluate image quality after processing, comparing the original image with its processed (or distorted) version on a logarithmic scale. Higher PSNR values indicate better image quality. The formula to calculate PSNR is given by:
PSNR = 20 \log_{10}\left(\frac{255}{RMSE}\right) \quad \text{(dB)}
RMSE = \sqrt{\frac{\sum_{i=1}^{ro}\sum_{j=1}^{co}\left(I_{Gr}(i, j) - I_{th}(i, j)\right)^2}{ro \times co}}
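The two expressions above translate directly into code; the following sketch assumes 8-bit grayscale inputs of equal size.

```python
import numpy as np

def psnr(original, segmented):
    """PSNR in dB between the original image and its thresholded version."""
    diff = original.astype(np.float64) - segmented.astype(np.float64)
    rmse = np.sqrt(np.mean(diff ** 2))
    return float("inf") if rmse == 0 else 20.0 * np.log10(255.0 / rmse)
```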

5.2.2. SSIM

The structural similarity index measure (SSIM) is another no-reference quality metric, similar to PSNR, but it incorporates aspects of human visual perception. SSIM models image distortion by assessing changes in structural information [46]. It evaluates luminance, contrast, and structural similarity, measuring the correlation between the original and processed images. The SSIM value ranges from 0 to 1, with higher values indicating better image quality. The SSIM is computed as follows:
SSIM(I_{Gr}, I_{th}) = \frac{\left(2 \mu_{I_{Gr}} \mu_{I_{th}} + C_1\right)\left(2 \sigma_{I_{Gr} I_{th}} + C_2\right)}{\left(\mu_{I_{Gr}}^2 + \mu_{I_{th}}^2 + C_1\right)\left(\sigma_{I_{Gr}}^2 + \sigma_{I_{th}}^2 + C_2\right)}
\sigma_{I_{Gr} I_{th}} = \frac{1}{N - 1} \sum_{i=1}^{N} \left(I_{Gr_i} - \mu_{I_{Gr}}\right)\left(I_{th_i} - \mu_{I_{th}}\right)
where μ_{I_Gr} and μ_{I_th} are the mean values of the original and the thresholded image, respectively, and σ_{I_Gr} and σ_{I_th} are the corresponding standard deviations. C_1 and C_2 are constants used to avoid instability when μ_{I_Gr}^2 + μ_{I_th}^2 approaches zero; following Agrawal et al. [48], both values are set to C_1 = C_2 = 0.065.
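A literal, single-window reading of the expression above can be coded as follows. Note that the widely used SSIM is computed over local windows and then averaged (e.g., scikit-image's structural_similarity); this sketch applies the formula globally, as the equations here suggest, with the C_1 = C_2 = 0.065 values from [48].

```python
import numpy as np

def global_ssim(original, segmented, c1=0.065, c2=0.065):
    """Single-window SSIM between the original and thresholded images."""
    x = original.astype(np.float64).ravel()
    y = segmented.astype(np.float64).ravel()
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(ddof=1), y.var(ddof=1)   # sample variances
    cov_xy = np.cov(x, y, ddof=1)[0, 1]           # sample covariance
    return (((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) /
            ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```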

5.2.3. FSIM

The feature similarity index method (FSIM) evaluates the quality of an image by comparing the original and processed versions based on the features present within the image. Features, such as edges and corners, are regions that contain significant information. Preserving these features during image processing is crucial for accurately interpreting the image. FSIM identifies features using two common methods: phase congruency ( P C ) and gradient magnitude ( G M ) [47]. The FSIM value ranges from 0 to 1, with higher values indicating better image quality. The formula for calculating FSIM is as follows:
FSIM = \frac{\sum_{w \in \Omega} S_L(w)\, PC_m(w)}{\sum_{w \in \Omega} PC_m(w)}
where Ω denotes the whole image domain, and
S_L(w) = S_{PC}(w)\, S_G(w)
S_G(w) = \frac{2 G_1(w) G_2(w) + T_2}{G_1^2(w) + G_2^2(w) + T_2}
G = \sqrt{G_x^2 + G_y^2}
PC(w) = \frac{E(w)}{\varepsilon + \sum_n A_n(w)}

5.3. Results from Standard Test Images

This experiment is designed to determine if the proposed approach is superior to related approaches over a set of eleven generally used images in the field of thresholding. For this purpose, we will examine the fitness and standard deviation results of the first experimental dataset, which combines eleven well-known benchmark images (Cameraman, Lenna, Baboon, Man, Jet, Peppers, Living Room, Blonde, Walk Bridge, Butterfly, and Lake).
For the experiments, the threshold levels considered for the first set of images are n_t = 2, 3, 4, 5, and 8. The qualitative results can be observed in Table 2.
As shown in Table 3, the EO algorithm consistently outperformed the other six algorithms on average, with the performance gap becoming more apparent as the number of thresholds increased. Additionally, EO exhibited near-zero standard deviation in many tests, indicating its stability. In related studies, threshold searches have typically focused on a maximum of five levels, as increasing the number of thresholds leads to a substantial rise in computational complexity [49].
The numerical results demonstrate that EO significantly outperformed other stochastic optimization algorithms when applied to the image thresholding problem using cross-entropy. Figure 3 provides an example of a segmented image produced by the proposed method, showing the results for six different threshold levels for qualitative comparison.
So far, the results confirm EO’s excellent performance in image thresholding. However, since the primary focus of this work is the application of EO combined with cross-entropy for prostate MRI segmentation, the image quality analysis is presented in the second experiment.

5.4. Results from Magnetic Resonance Prostate Images

In this subsection, we discuss the experiment designed to evaluate the performance of EO with cross-entropy for the segmentation of prostate MRI images. To this end, we use a group of reference images formed by a set of six prostate MRI images; see Figure 4. All the images were extracted from the Ferenc Jolesz National Center for Image-Guided Therapy, Harvard Medical School, or Brigham Health Hospital datasets with no additional preprocessing [50]. Prostate MRI images are primarily used for disease diagnosis or to establish treatment for prostate-related conditions such as prostatitis, benign prostatic hyperplasia (BPH), and prostate cancer, among others. In the context of this article, the images were used to test the efficiency of the equilibrium optimizer and compare it with the other six chosen algorithms. The segmentation of the MRIs is carried out over four different threshold levels: th = 3, 4, 5, and 8. Due to the nature of the images, there is a limited number of distinct tissues present; thus, there was no point in evaluating a larger number of thresholds.

5.5. Statistical Analysis of Standard Test Images

Within stochastic optimization algorithms (SOAs), it is possible that two different approaches produce the same results. If this is true, we could say that the two methods are not statistically different. To determine whether this happens, and to what extent, we use the Kruskal–Wallis test, a non-parametric method that discerns whether the medians of two or more distributions are statistically different [51]. Kruskal–Wallis can be regarded as the non-parametric counterpart of the one-way analysis of variance (ANOVA) [52]. In this article, the analysis is applied to the prostate MRIs, where each experiment considers thirty independent runs of the same configuration. Each configuration includes an algorithm and a specific number of thresholds. All statistical significance tests require the definition of a significance level to accept or reject the null hypothesis, which states that all data come from the same distribution. The test outputs a p-value; if it is lower than the significance level, the null hypothesis is rejected, and the algorithms can be considered significantly different. In Table 4, the column named Ranking indicates the rank of each method in terms of fitness; the left-most algorithm is considered the best, while the right-most is the worst. In the notation, horizontal lines spanning two or more algorithms indicate that those algorithms are not significantly different.
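In practice, the test can be run with SciPy as sketched below; the arrays are placeholders standing in for the thirty best-fitness values logged per algorithm for one image and threshold configuration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder samples only; in the study these come from the experiment logs.
fitness_by_algorithm = {
    "EO":  rng.normal(100.0, 0.5, 30),
    "SFO": rng.normal(101.0, 1.0, 30),
    "PSO": rng.normal(100.8, 0.8, 30),
}

h_stat, p_value = stats.kruskal(*fitness_by_algorithm.values())
alpha = 0.05                     # significance level
print(f"H = {h_stat:.3f}, p = {p_value:.4g}, "
      f"significant difference: {p_value < alpha}")
```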
Table 5 presents the segmentation of the MRIs using EO for qualitative inspection. From Figure 5, it is clear that two lumps in the prostate have been highlighted by the thresholding process. Prostatic MRIs present noisy conditions, which makes it difficult to assess the thresholding with the naked eye, so in Figure 5 we present the thresholded image as well as the histogram with the threshold values generated by the EO. It can be observed in the histogram that the thresholds present an adequate distribution, even though this particular image has impulsive noise and a simple shape. Our findings indicate that four thresholds are typically sufficient for this application, which corresponds to identifying five different tissue types in the image. A smaller number of thresholds may result in a lack of sufficient contrast to highlight relevant anatomical structures, such as the prostate capsule. In contrast, a higher number of thresholds may lead to the incorrect differentiation of anatomical regions that should be connected.

5.6. Segmentation Quality

The information in Table 4 indicates that EO outperformed its counterparts in terms of the fitness function for prostate MRIs. However, excellent performance over the search space does not necessarily translate into better image quality. Thus, an evaluation not tied to the fitness function is required. Here, the objective quality of the segmented image is evaluated using the three no-reference metrics described in Section 5.2: PSNR, SSIM, and FSIM. For all metrics, a higher value points to a higher-quality segmentation. Figure 6 shows a graphical comparison of the PSNR values over the different methodologies, represented as seven groups along the horizontal axis. Inside each group, the average PSNR value is given for all images, considering thirty runs for a given number of thresholds (color-coded). To facilitate the interpretation of results, the best value is marked with an asterisk (*). In terms of PSNR, EO outperformed the other approaches at most threshold levels; only GA managed to have a slightly better score with three thresholds.
In Figure 7, the same type of graphical representation is used to compare the SSIM values of the experiments along the horizontal axis. Each group takes the average SSIM value for all images for a given number of thresholds. In this case, the GA was able to outperform EO with five thresholds; in the rest of the experiments, EO consistently performed better than the other approaches. Following the same representation, the FSIM is analyzed in Figure 8, where GA has a marginal victory with five thresholds, while EO scores better in the other cases. In summary, PSNR, SSIM, and FSIM objectively indicate that EO is better than the other approaches for the segmentation of prostate MRIs.

6. Conclusions and Future Work

This paper introduces a novel approach, MCE-EO, designed to determine an optimal set of threshold values for effectively segmenting prostate MRI images. The method is based on the equilibrium optimizer (EO), a stochastic optimization algorithm, and employs minimum cross-entropy as a non-parametric criterion. The efficacy of MCE-EO is assessed through two experiments. The first experiment evaluates the performance of EO in segmenting general-purpose images, while the second focuses on prostate MRI segmentation. In both cases, MCE-EO is compared against several other stochastic optimization algorithms, including the sunflower optimization (SFO) algorithm, the sine cosine algorithm (SCA), particle swarm optimization (PSO), differential evolution (DE), the genetic algorithm (GA), and the Hirschberg–Sinclair algorithm (HS). Additionally, statistical significance was assessed using a post-hoc test. The segmentation quality for MRI images was evaluated using key objective quality metrics such as the peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and feature similarity index measure (FSIM).
In conclusion, the proposed MCE-EO method for prostate MRI segmentation outperformed several other metaheuristic algorithms, including SFO, SCA, and PSO. In experiments involving standard datasets and transaxial-cut prostate MRIs, MCE-EO achieved superior segmentation accuracy, with higher values of PSNR, SSIM, and FSIM, demonstrating its robustness and effectiveness in handling complex medical images. These results confirm the potential of MCE-EO in improving the early detection of prostate cancer, where accurate image segmentation is critical for diagnosis and treatment planning.

Future Work

The combination of the equilibrium optimizer (EO) and minimum cross-entropy (MCE) has demonstrated favorable outcomes for prostate MRI segmentation. However, there are a few limitations that require attention. A significant challenge is the computational time required by EO, particularly when working with large datasets or high-resolution MRI scans. While the iterative nature of the process is an effective approach, it can result in slower processing times, which may be a disadvantage in clinical settings where rapid analysis is essential. Further research could concentrate on optimizing EO through parallel computing, hybrid algorithms, or reducing the number of iterations without affecting accuracy.
A further limitation results from the variability in segmentation quality across different imaging types. EO performs well on prostate MRI but may require adjustments for other modalities, such as ultrasound or CT scans. Further work could investigate adapting EO to a range of imaging techniques by incorporating mechanisms that tailor the algorithm to the specific characteristics of each modality.

Author Contributions

Conceptualization, O.Z. and S.H.; methodology, O.Z. and D.O.-J.; software, D.O.-J.; writing—original draft preparation, O.Z. and S.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The results generated during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Krupa, K.; Bekiesińska-Figatowska, M. Artifacts in Magnetic Resonance Imaging. Pol. J. Radiol. 2015, 80, 93–106. [Google Scholar] [CrossRef] [PubMed]
  2. Ghose, S.; Oliver, A.; Martí, R.; Lladó, X.; Vilanova, J.C.; Freixenet, J.; Mitra, J.; Sidibé, D.; Meriaudeau, F. A survey of prostate segmentation methodologies in ultrasound, magnetic resonance and computed tomography images. Comput. Methods Programs Biomed. 2012, 108, 262–287. [Google Scholar] [CrossRef] [PubMed]
  3. Bray, F.; Laversanne, M.; Sung, H.; Ferlay, J.; Siegel, R.L.; Soerjomataram, I.; Jemal, A. Global cancer statistics 2022: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2024, 74, 229–263. [Google Scholar] [CrossRef]
  4. Juneja, M.; Saini, S.K.; Gupta, J.; Garg, P.; Thakur, N.; Sharma, A.; Mehta, M.; Jindal, P. Survey of denoising, segmentation and classification of magnetic resonance imaging for prostate cancer. Multimed. Tools Appl. 2021, 80, 29199–29249. [Google Scholar] [CrossRef]
  5. Wang, G.; Li, Z.; Weng, G.; Chen, Y. An optimized denoised bias correction model with local pre-fitting function for weak boundary image segmentation. Signal Process. 2024, 220, 109448. [Google Scholar] [CrossRef]
  6. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  7. Menendez, M. Shannon’s entropy in exponential families: Statistical applications. Appl. Math. Lett. 2000, 13, 37–42. [Google Scholar] [CrossRef]
  8. Kapur, J.N.; Sahoo, P.K.; Wong, A.K. A new method for gray-level picture thresholding using the entropy of the histogram. Comput. Vis. Graph. Image Process. 1985, 29, 273–285. [Google Scholar] [CrossRef]
  9. Tsallis, C. Computational applications of nonextensive statistical mechanics. J. Comput. Appl. Math. 2009, 227, 51–58. [Google Scholar] [CrossRef]
  10. Beadle, E.; Schroeder, J.; Moran, B.; Suvorova, S. An overview of Renyi Entropy and some potential applications. In Proceedings of the 2008 42nd Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 26–29 October 2008; pp. 1698–1704. [Google Scholar]
  11. Li, C.H.; Lee, C. Minimum cross entropy thresholding. Pattern Recognit. 1993, 26, 617–625. [Google Scholar] [CrossRef]
  12. Tang, K.; Yuan, X.; Sun, T.; Yang, J.; Gao, S. An improved scheme for minimum cross entropy threshold selection based on genetic algorithm. Knowl.-Based Syst. 2011, 24, 1131–1138. [Google Scholar] [CrossRef]
  13. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  14. Avcibas, I.; Sankur, B.; Sayood, K. Statistical evaluation of image quality measures. J. Electron. Imaging 2002, 11, 206–223. [Google Scholar]
  15. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  16. Liu, Y.; Mu, C.; Kou, W.; Liu, J. Modified particle swarm optimization-based multilevel thresholding for image segmentation. Soft Comput. 2015, 19, 1311–1327. [Google Scholar] [CrossRef]
  17. Khairuzzaman, A.K.M.; Chaudhury, S. Multilevel thresholding using grey wolf optimizer for image segmentation. Expert Syst. Appl. 2017, 86, 64–76. [Google Scholar] [CrossRef]
  18. Miller, B.L.; Goldberg, D.E. Genetic algorithms, selection schemes, and the varying effects of noise. Evol. Comput. 1996, 4, 113–131. [Google Scholar] [CrossRef]
  19. Naik, M.K.; Panda, R.; Abraham, A. An entropy minimization based multilevel colour thresholding technique for analysis of breast thermograms using equilibrium slime mould algorithm. Appl. Soft Comput. 2021, 113, 107955. [Google Scholar] [CrossRef]
  20. Houssein, E.H.; Emam, M.M.; Ali, A.A. Improved manta ray foraging optimization for multi-level thresholding using COVID-19 CT images. Neural Comput. Appl. 2021, 33, 16899–16919. [Google Scholar] [CrossRef]
  21. Avalos, O.; Ayala, E.; Wario, F.; Pérez-Cisneros, M. An accurate Cluster chaotic optimization approach for digital medical image segmentation. Neural Comput. Appl. 2021, 33, 10057–10091. [Google Scholar] [CrossRef]
  22. Panda, R.; Samantaray, L.; Das, A.; Agrawal, S.; Abraham, A. A novel evolutionary row class entropy based optimal multi-level thresholding technique for brain MR images. Expert Syst. Appl. 2021, 168, 114426. [Google Scholar] [CrossRef]
  23. Houssein, E.H.; Emam, M.M.; Ali, A.A. An efficient multilevel thresholding segmentation method for thermography breast cancer imaging based on improved chimp optimization algorithm. Expert Syst. Appl. 2021, 185, 115651. [Google Scholar] [CrossRef]
  24. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  25. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  26. Abdul-hamied, D.T.; Shaheen, A.M.; Salem, W.A.; Gabr, W.I.; El-sehiemy, R.A. Equilibrium optimizer based multi dimensions operation of hybrid AC/DC grids. Alex. Eng. J. 2020, 59, 4787–4803. [Google Scholar] [CrossRef]
  27. Menesy, A.S.; Sultan, H.M.; Kamel, S. Extracting model parameters of proton exchange membrane fuel cell using equilibrium optimizer algorithm. In Proceedings of the 2020 International Youth Conference on Radio Electronics, Electrical and Power Engineering (REEPE), Moscow, Russia, 12–14 March 2020; pp. 1–7. [Google Scholar]
  28. Rabehi, A.; Nail, B.; Helal, H.; Douara, A.; Ziane, A.; Amrani, M.; Akkal, B.; Benamara, Z. Optimal estimation of Schottky diode parameters using a novel optimization algorithm: Equilibrium optimizer. Superlattices Microstruct. 2020, 146, 106665. [Google Scholar] [CrossRef]
  29. Abdel-Basset, M.; Mohamed, R.; Mirjalili, S.; Chakrabortty, R.K.; Ryan, M.J. Solar photovoltaic parameter estimation using an improved equilibrium optimizer. Sol. Energy 2020, 209, 694–708. [Google Scholar] [CrossRef]
  30. Yıldız, A.R.; Özkaya, H.; Yıldız, M.; Bureerat, S.; Yıldız, B.; Sait, S.M. The equilibrium optimization algorithm and the response surface-based metamodel for optimal structural design of vehicle components. Mater. Test. 2020, 62, 492–496. [Google Scholar] [CrossRef]
  31. Elsheikh, A.H.; Shehabeldeen, T.A.; Zhou, J.; Showaib, E.; Abd Elaziz, M. Prediction of laser cutting parameters for polymethylmethacrylate sheets using random vector functional link network integrated with equilibrium optimizer. J. Intell. Manuf. 2021, 32, 1377–1388. [Google Scholar] [CrossRef]
  32. Nguyen, T.; Nguyen, G.; Nguyen, B.M. EO-CNN: An enhanced CNN model trained by equilibrium optimization for traffic transportation prediction. Procedia Comput. Sci. 2020, 176, 800–809. [Google Scholar] [CrossRef]
  33. Gao, Y.; Zhou, Y.; Luo, Q. An efficient binary equilibrium optimizer algorithm for feature selection. IEEE Access 2020, 8, 140936–140963. [Google Scholar] [CrossRef]
  34. Zheng-Ming, G.; Juan, Z.; Su-Ruo, L.; Ru-Rong, H. The improved Equilibrium Optimization Algorithm with Tent Map. In Proceedings of the 2020 5th International Conference on Computer and Communication Systems (ICCCS), Shanghai, China, 15–18 May 2020; pp. 343–346. [Google Scholar]
  35. Zhao, J.; Gao, Z. The Improved Equilibrium Optimization Algorithm with Best Candidates. J. Phys. Conf. Ser. 2020, 1575, 012089. [Google Scholar] [CrossRef]
  36. Gupta, S.; Deep, K.; Mirjalili, S. An efficient equilibrium optimizer with mutation strategy for numerical optimization. Appl. Soft Comput. 2020, 96, 106542. [Google Scholar] [CrossRef]
  37. Abdel-Basset, M.; Mohamed, R.; Abouhawwash, M. Balanced multi-objective optimization algorithm using improvement based reference points approach. Swarm Evol. Comput. 2021, 60, 100791. [Google Scholar] [CrossRef]
  38. Abdel-Basset, M.; Chang, V.; Mohamed, R. A novel equilibrium optimization algorithm for multi-thresholding image segmentation problems. Neural Comput. Appl. 2021, 33, 10685–10718. [Google Scholar] [CrossRef]
  39. Wunnava, A.; Naik, M.K.; Panda, R.; Jena, B.; Abraham, A. A novel interdependence based multilevel thresholding technique using adaptive equilibrium optimizer. Eng. Appl. Artif. Intell. 2020, 94, 103836. [Google Scholar] [CrossRef]
  40. Gomes, G.F.; da Cunha, S.S.; Ancelotti, A.C. A sunflower optimization (SFO) algorithm applied to damage identification on laminated composite plates. Eng. Comput. 2019, 35, 619–626. [Google Scholar] [CrossRef]
  41. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  42. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  43. Goldberg, D.E.; Holland, J.H. Genetic algorithms and machine learning. Mach. Learn. 1988, 3, 95–99. [Google Scholar] [CrossRef]
  44. Peterson, G.L. An O (n log n) unidirectional algorithm for the circular extrema problem. ACM Trans. Program. Lang. Syst. (TOPLAS) 1982, 4, 758–762. [Google Scholar] [CrossRef]
  45. Kullback, S. Information Theory and Statistics; Courier Corporation: North Chelmsford, MA, USA, 1997. [Google Scholar]
  46. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  47. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386. [Google Scholar] [CrossRef] [PubMed]
  48. Agrawal, S.; Panda, R.; Bhuyan, S.; Panigrahi, B.K. Tsallis entropy based optimal multilevel thresholding using cuckoo search algorithm. Swarm Evol. Comput. 2013, 11, 16–30. [Google Scholar] [CrossRef]
  49. Rodriguez-Esparza, E.; Zanella-Calzada, L.A.; Oliva, D.; Heidari, A.A.; Zaldivar, D.; Pérez-Cisneros, M.; Foong, L.K. An efficient Harris hawks-inspired image segmentation method. Expert Syst. Appl. 2020, 155, 113428. [Google Scholar] [CrossRef]
  50. The Ferenc Jolesz National Center for Image Guided Therapy, Harvard Medical School, B.H.H. Prostate MR Image Database. Available online: https://prostatemrimagedatabase.com/index.html (accessed on 15 March 2023).
  51. Theodorsson-Norheim, E. Kruskal-Wallis test: BASIC computer program to perform nonparametric one-way analysis of variance and multiple comparisons on ranks of several independent samples. Comput. Methods Programs Biomed. 1986, 23, 57–62. [Google Scholar] [CrossRef]
  52. Scheffe, H. The Analysis of Variance; John Wiley & Sons: Hoboken, NJ, USA, 1999; Volume 72. [Google Scholar]
Figure 1. Overview of the proposed MCE-EO methodology for prostate MRI segmentation. This figure illustrates the stages of the multilevel cross-entropy equilibrium optimizer (MCE-EO) approach, from MRI image acquisition and preprocessing to threshold selection and segmentation. The methodology optimizes threshold values using cross-entropy, enhancing segmentation accuracy.
Figure 2. Pseudocode for the implementation of the MCE-EO algorithm. This figure presents the pseudocode outlining the key steps of the MCE-EO algorithm. It includes initialization of parameters, the optimization process using the equilibrium optimizer (EO), and the evaluation of threshold values based on the minimum cross-entropy criterion. The pseudocode highlights the iterative process that leads to the selection of optimal thresholds for accurate prostate MRI segmentation.
Figure 3. Cameraman image segmented with 2, 3, 4, 5, 8, and 16 thresholds. This figure displays the segmentation results of the cameraman image using different numbers of thresholds, demonstrating the progressive refinement of image regions as the number of thresholds increases.
Figure 4. Eleven transaxial-cut prostate MRI images. This figure presents a set of eleven transaxial-cut magnetic resonance (MR) images of the prostate. These images serve as the input dataset for evaluating the segmentation performance of the proposed algorithm.
Figure 5. MRI prostatic 01 segmented with 8 thresholds and corresponding histogram. The left image shows the segmentation of the MRI prostatic 01 image using 8 thresholds. On the right, the corresponding image histogram is displayed, with vertical lines marking the selected threshold values, illustrating how the thresholds divide the intensity levels for segmentation.
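As Figure 5 suggests, once the optimizer has returned the threshold values that partition the histogram, the segmentation itself reduces to mapping each pixel intensity to the interval it falls into. A minimal sketch (assuming NumPy and the hypothetical helper below; not the authors' code) is:

```python
# Illustrative only: label each pixel by the interval its intensity falls into and
# display each class at the mid-gray of its interval.
import numpy as np

def apply_thresholds(img, thresholds):
    cuts = np.sort(np.asarray(thresholds))
    labels = np.digitize(img, cuts)                 # class index 0..len(cuts)
    edges = np.concatenate(([0], cuts, [255]))
    shades = (edges[:-1] + edges[1:]) / 2.0         # one gray level per class
    return shades[labels].astype(np.uint8)
```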
Figure 6. PSNR comparison across methodologies. This figure shows the PSNR values of seven segmentation methods. Each group represents average PSNR values for different thresholds, with the best marked by an asterisk (*).
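For reference, PSNR for 8-bit images follows the standard definition; a minimal computation (not the paper's evaluation script, and the variable names are placeholders) is:

```python
# Standard 8-bit PSNR between the original and the segmented image, in dB.
import numpy as np

def psnr(reference, segmented, peak=255.0):
    mse = np.mean((reference.astype(np.float64) - segmented.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```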
Figure 7. Comparative study of SSIM across different methodologies. This figure presents a comparison of SSIM values for the segmentation methodologies. Each group along the horizontal axis represents a different method, with SSIM values evaluated over multiple thresholds; higher SSIM values indicate better preservation of image structure during segmentation. The best result is marked by an asterisk (*).
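SSIM here refers to the structural similarity index; under the usual Wang et al. formulation (restated for reference, since the caption does not repeat the window and constant settings used in the paper), it is computed over local windows of the reference image x and the segmented image y as

```latex
\mathrm{SSIM}(x, y) =
\frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}
     {(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)},
\qquad C_1 = (k_1 L)^2,\; C_2 = (k_2 L)^2,
```

where the means, variances, and covariance are taken per window, L is the dynamic range (255 for 8-bit images), and typically k1 = 0.01 and k2 = 0.03; the per-window values are averaged to obtain the scores plotted in Figure 7.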
Figure 8. FSIM comparison across methodologies. This figure compares FSIM values for the different segmentation methods, with each group representing a method evaluated over various threshold settings. Higher FSIM values indicate better feature preservation during segmentation. The best result is marked by an asterisk (*).
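FSIM is conventionally built from phase-congruency (PC) and gradient-magnitude similarity maps; assuming the standard Zhang et al. formulation (restated for reference, not taken from this paper),

```latex
\mathrm{FSIM} =
\frac{\sum_{\mathbf{x} \in \Omega} S_L(\mathbf{x})\, \mathrm{PC}_m(\mathbf{x})}
     {\sum_{\mathbf{x} \in \Omega} \mathrm{PC}_m(\mathbf{x})},
\qquad S_L(\mathbf{x}) = S_{\mathrm{PC}}(\mathbf{x})\, S_G(\mathbf{x}),
```

where S_PC and S_G are the local similarities of phase congruency and gradient magnitude between the two images, and PC_m(x) = max(PC_1(x), PC_2(x)) weights each location by its feature salience.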
Table 1. Parameter Settings for the algorithms. This table lists the key parameter configurations for the algorithms used in the experiments, including the equilibrium optimizer (EO) and comparison methods.
Algorithm | Parameter | Value
Sunflower Optimization (SFO) | Number of sunflowers | 60
 | Number of experiments | 30
 | Pollination values | 0.05
 | Mortality rate, best values | 0.1
 | Survival rate | 1 − (p + m)
 | Iterations/generations | 1000
Sine Cosine Algorithm (SCA) | Number of search agents | 60
 | Number of experiments | 30
 | Iterations | 1000
Particle Swarm Optimization (PSO) | Social coefficient | 2
 | Cognitive coefficient | 2
 | Velocity clamp | 2
 | Maximum inertia value | 0.9
 | Minimum inertia value | 0.2
Equilibrium Optimizer (EO) | Number of runs | 20
 | Population size | 30
 | Maximum number of iterations | 1000
 | a2 | 1
 | a1 | 2
Differential Evolution (DE) | Crossover rate | 0.5
 | Scale factor | 0.2
Genetic Algorithm (GA) | Crossover percentage | 70
 | Mutation percentage | 20
Harmony Search (HS) | Length of solution vector | 20
 | Harmony memory (HM) accepting rate | 0.95
 | Pitch adjusting rate | 0.40
Table 2. Segmentation of benchmark images.
Image | nt = 3 | nt = 4 | nt = 5 | nt = 8
Cameraman | i001 | i002 | i003 | i004
Lenna | i005 | i006 | i007 | i008
Baboon | i009 | i010 | i011 | i012
Butterfly | i013 | i014 | i015 | i016
Jet | i017 | i018 | i019 | i020
Peppers | i021 | i022 | i023 | i024
Living Room | i025 | i026 | i027 | i028
Blonde | i029 | i030 | i031 | i032
Walk Bridge | i033 | i034 | i035 | i036
Man | i037 | i038 | i039 | i040
Lake | i041 | i042 | i043 | i044
(Each cell is a segmented image; entries refer to the corresponding article image assets, Applsci 14 09785 i001–i044.)
Table 3. Fitness values of SOAs on the general-purpose image dataset. This table shows the fitness values of various stochastic optimization algorithms (SOAs) applied to the general-purpose image dataset, where lower values indicate better segmentation performance.
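The fitness reported in Table 3 is the multilevel minimum cross-entropy criterion minimized by every algorithm in the comparison; in the standard Li–Lee formulation (restated here for reference, so the paper's notation may differ slightly), for thresholds t1 < … < tk over an L-bin histogram h(i),

```latex
\eta(t_1,\dots,t_k)
= \sum_{j=0}^{k} \sum_{i=t_j+1}^{t_{j+1}} i\,h(i)\,\ln\!\frac{i}{\mu_j},
\qquad
\mu_j = \frac{\sum_{i=t_j+1}^{t_{j+1}} i\,h(i)}{\sum_{i=t_j+1}^{t_{j+1}} h(i)},
```

with t0 = 0 and t(k+1) = L; lower values of η therefore correspond to better threshold sets.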
Values are reported as mean/standard deviation over the independent runs; nt is the number of thresholds.

Cameraman
nt | SFO mean/std | SCA mean/std | PSO mean/std | EO mean/std | DE mean/std | GA mean/std | HS mean/std
2 | 1.4117/0.01096 | 1.4018/0.00015 | 1.6651/0.17473 | 1.4017/0.00000 | 1.8029/0.03612 | 1.4022/0.00114 | 1.4477/0.08109
3 | 0.8469/0.06988 | 0.7657/0.00118 | 0.9549/0.07221 | 0.7638/0.00000 | 1.0170/0.08735 | 0.7649/0.00123 | 0.8745/0.14120
4 | 0.6054/0.04051 | 0.5478/0.00534 | 0.8119/0.14055 | 0.5385/0.00000 | 0.7439/0.04196 | 0.5411/0.00247 | 0.5816/0.04190
5 | 0.4578/0.03770 | 0.4274/0.02373 | 0.6349/0.09115 | 0.4040/0.00248 | 0.6161/0.07341 | 0.4102/0.00488 | 0.4449/0.04243
8 | 0.2606/0.03200 | 0.2767/0.03286 | 0.3809/0.06041 | 0.2064/0.00179 | 0.3664/0.02945 | 0.2101/0.00481 | 0.2547/0.03730
16 | 0.1025/0.01772 | 0.1278/0.01575 | 0.1524/0.02066 | 0.0609/0.00428 | 0.1571/0.02406 | 0.0710/0.00366 | 0.1113/0.01438
32 | 0.0404/0.00601 | 0.0507/0.00650 | 0.0567/0.00783 | 0.0186/0.00170 | 0.0589/0.00774 | 0.0235/0.00151 | 0.0398/0.00498

Lenna
nt | SFO mean/std | SCA mean/std | PSO mean/std | EO mean/std | DE mean/std | GA mean/std | HS mean/std
2 | 1.3773/0.01259 | 1.3664/0.00019 | 1.5693/0.14865 | 1.3663/0.00000 | 2.0904/0.01884 | 1.3666/0.00074 | 1.4117/0.05143
3 | 0.8373/0.14342 | 0.7204/0.00227 | 0.9636/0.05595 | 0.7174/0.00000 | 1.1725/0.04331 | 0.7190/0.00160 | 0.7610/0.05498
4 | 0.5989/0.08566 | 0.5133/0.08311 | 0.7460/0.10954 | 0.4687/0.00000 | 0.8508/0.04771 | 0.4737/0.00431 | 0.5256/0.06262
5 | 0.4730/0.09972 | 0.3751/0.04824 | 0.6213/0.11400 | 0.3272/0.00012 | 0.6493/0.07692 | 0.3377/0.00456 | 0.3901/0.05983
8 | 0.2775/0.04431 | 0.2478/0.03314 | 0.3437/0.05068 | 0.1609/0.00948 | 0.3879/0.04012 | 0.1771/0.00970 | 0.2333/0.04870
16 | 0.1004/0.01243 | 0.1159/0.01462 | 0.1425/0.02351 | 0.0558/0.00492 | 0.1625/0.01876 | 0.0649/0.00413 | 0.1073/0.01446
32 | 0.0354/0.00503 | 0.0444/0.00462 | 0.0526/0.00660 | 0.0175/0.00160 | 0.0576/0.00691 | 0.0208/0.00097 | 0.0404/0.00462

Baboon
nt | SFO mean/std | SCA mean/std | PSO mean/std | EO mean/std | DE mean/std | GA mean/std | HS mean/std
2 | 1.2168/0.01205 | 1.2050/0.00018 | 1.3544/0.16955 | 1.2049/0.00000 | 1.3406/0.01567 | 1.2055/0.00084 | 1.2335/0.06351
3 | 0.8669/0.11554 | 0.7433/0.00184 | 0.9427/0.07690 | 0.7407/0.00000 | 0.8646/0.05344 | 0.7426/0.00239 | 0.7963/0.07087
4 | 0.6093/0.10167 | 0.5193/0.00557 | 0.7613/0.12657 | 0.5073/0.00001 | 0.6382/0.07473 | 0.5143/0.00493 | 0.5565/0.05481
5 | 0.4853/0.07232 | 0.4100/0.04572 | 0.6102/0.08525 | 0.3681/0.00017 | 0.4888/0.04138 | 0.3821/0.00829 | 0.4149/0.03846
8 | 0.2767/0.04355 | 0.2663/0.02277 | 0.3715/0.07544 | 0.1840/0.00272 | 0.2880/0.03291 | 0.2006/0.00672 | 0.2524/0.03568
16 | 0.0978/0.01556 | 0.1190/0.01455 | 0.1478/0.02284 | 0.0595/0.00415 | 0.1199/0.01372 | 0.0694/0.00321 | 0.1087/0.01361
32 | 0.0322/0.00674 | 0.0464/0.00546 | 0.0540/0.00655 | 0.0187/0.00149 | 0.0432/0.00454 | 0.0211/0.00084 | 0.0417/0.00496

Man
nt | SFO mean/std | SCA mean/std | PSO mean/std | EO mean/std | DE mean/std | GA mean/std | HS mean/std
2 | 2.7579/0.01938 | 2.7354/0.00000 | 3.2564/0.52431 | 2.7354/0.00000 | 2.7639/0.05671 | 2.7360/0.00117 | 2.8214/0.11497
3 | 1.8046/0.15348 | 1.6243/0.00172 | 2.0578/0.34861 | 1.6222/0.00000 | 1.7008/0.07634 | 1.6251/0.00437 | 1.6996/0.13574
4 | 1.2426/0.16414 | 1.0388/0.00700 | 1.6754/0.28346 | 1.0255/0.00000 | 1.2238/0.18571 | 1.0443/0.01609 | 1.2176/0.14436
5 | 0.9323/0.09134 | 0.7820/0.01330 | 1.0223/0.13226 | 0.7509/0.00073 | 0.9512/0.10419 | 0.7724/0.02051 | 0.9494/0.13691
8 | 0.5177/0.05345 | 0.4969/0.05141 | 0.7609/0.14181 | 0.3612/0.00555 | 0.5861/0.07717 | 0.4101/0.01812 | 0.5736/0.09095
16 | 0.2047/0.03880 | 0.2434/0.02906 | 0.3100/0.06165 | 0.1085/0.00661 | 0.2373/0.03906 | 0.1466/0.01190 | 0.2363/0.03691
32 | 0.0733/0.01163 | 0.0995/0.01565 | 0.1244/0.02077 | 0.0324/0.00314 | 0.0937/0.01297 | 0.0473/0.00526 | 0.0898/0.01440

Jet
nt | SFO mean/std | SCA mean/std | PSO mean/std | EO mean/std | DE mean/std | GA mean/std | HS mean/std
2 | 0.8261/0.00680 | 0.8209/0.00005 | 0.8803/0.04723 | 0.8209/0.00000 | 1.1744/0.01227 | 0.8210/0.00027 | 0.8287/0.00900
3 | 0.5408/0.03280 | 0.5102/0.00097 | 0.6280/0.08462 | 0.5086/0.00000 | 0.7362/0.02088 | 0.5089/0.00031 | 0.5223/0.01465
4 | 0.3855/0.04167 | 0.3428/0.00398 | 0.4628/0.06827 | 0.3369/0.00000 | 0.5104/0.03430 | 0.3386/0.00156 | 0.3599/0.02291
5 | 0.3032/0.04318 | 0.2555/0.02571 | 0.3740/0.05803 | 0.2292/0.00007 | 0.3987/0.03661 | 0.2327/0.00258 | 0.2721/0.03798
8 | 0.1655/0.02981 | 0.1644/0.02274 | 0.2194/0.03520 | 0.1107/0.00446 | 0.2164/0.02063 | 0.1147/0.00453 | 0.1420/0.02595
16 | 0.0680/0.00965 | 0.0756/0.01122 | 0.0891/0.01411 | 0.0347/0.00289 | 0.0846/0.00740 | 0.0396/0.00165 | 0.0612/0.00843
32 | 0.0221/0.00335 | 0.0288/0.00344 | 0.0309/0.00483 | 0.0113/0.00088 | 0.0310/0.00262 | 0.0127/0.00060 | 0.0238/0.00246

Peppers
nt | SFO mean/std | SCA mean/std | PSO mean/std | EO mean/std | DE mean/std | GA mean/std | HS mean/std
2 | 1.7490/0.02017 | 1.7333/0.00012 | 2.0570/0.27966 | 1.7332/0.00000 | 1.7507/0.02651 | 1.7334/0.00032 | 1.7827/0.09471
3 | 1.2225/0.04887 | 1.1629/0.00150 | 1.4030/0.15742 | 1.1609/0.00000 | 1.2141/0.05578 | 1.1623/0.00251 | 1.1970/0.02819
4 | 0.8439/0.10218 | 0.7338/0.00688 | 0.9587/0.07184 | 0.7222/0.00000 | 0.8553/0.12449 | 0.7274/0.00364 | 0.7972/0.08052
5 | 0.6524/0.06552 | 0.5904/0.04739 | 0.8468/0.10640 | 0.5458/0.00501 | 0.6715/0.06556 | 0.5568/0.01081 | 0.6332/0.06993
8 | 0.3488/0.03234 | 0.3616/0.03461 | 0.4765/0.07546 | 0.2712/0.00137 | 0.3855/0.04521 | 0.2893/0.00729 | 0.3572/0.04407
16 | 0.1191/0.01599 | 0.1627/0.01641 | 0.2037/0.03138 | 0.0819/0.00615 | 0.1625/0.01691 | 0.0917/0.00434 | 0.1428/0.01652
32 | 0.0399/0.00540 | 0.0644/0.00627 | 0.0701/0.00975 | 0.0239/0.00144 | 0.0561/0.00637 | 0.0263/0.00096 | 0.0514/0.00679

Living Room
nt | SFO mean/std | SCA mean/std | PSO mean/std | EO mean/std | DE mean/std | GA mean/std | HS mean/std
2 | 1.8863/0.01391 | 1.8735/0.00010 | 2.0615/0.13908 | 1.8735/0.00000 | 1.8934/0.02997 | 1.8741/0.00107 | 1.9206/0.06988
3 | 1.3089/0.09341 | 1.1719/0.00104 | 1.4043/0.17023 | 1.1704/0.00000 | 1.2133/0.04039 | 1.1714/0.00084 | 1.2182/0.08456
4 | 0.8777/0.08782 | 0.7674/0.00546 | 0.9637/0.06581 | 0.7571/0.00000 | 0.8441/0.07446 | 0.7621/0.00534 | 0.8126/0.07143
5 | 0.6526/0.06112 | 0.5884/0.06332 | 0.7911/0.10943 | 0.5391/0.00003 | 0.6560/0.06048 | 0.5533/0.00552 | 0.6214/0.08986
8 | 0.3827/0.04906 | 0.3560/0.03995 | 0.4671/0.07450 | 0.2548/0.00333 | 0.3618/0.03683 | 0.2726/0.00519 | 0.3518/0.05181
16 | 0.1450/0.02095 | 0.1580/0.01744 | 0.1800/0.02144 | 0.0790/0.00585 | 0.1458/0.01802 | 0.0877/0.00333 | 0.1358/0.01611
32 | 0.0438/0.00603 | 0.0591/0.00678 | 0.0677/0.00827 | 0.0230/0.00152 | 0.0548/0.00662 | 0.0252/0.00123 | 0.0503/0.00557

Blonde
nt | SFO mean/std | SCA mean/std | PSO mean/std | EO mean/std | DE mean/std | GA mean/std | HS mean/std
2 | 1.5260/0.00606 | 1.5194/0.00002 | 1.6165/0.07292 | 1.5194/0.00000 | 1.9460/0.04128 | 1.5201/0.00119 | 1.5991/0.07200
3 | 0.8453/0.06209 | 0.7823/0.00120 | 0.9486/0.06697 | 0.7803/0.00000 | 1.0230/0.02881 | 0.7804/0.00017 | 0.8219/0.09166
4 | 0.6174/0.06585 | 0.5389/0.00517 | 0.7670/0.11354 | 0.5279/0.00000 | 0.7390/0.05686 | 0.5292/0.00173 | 0.5605/0.05526
5 | 0.4721/0.06131 | 0.4072/0.02812 | 0.6041/0.08157 | 0.3767/0.00001 | 0.5620/0.03376 | 0.3859/0.00623 | 0.4360/0.03942
8 | 0.2609/0.03099 | 0.2488/0.02602 | 0.3293/0.06367 | 0.1692/0.00633 | 0.2911/0.03381 | 0.1819/0.00874 | 0.2317/0.03366
16 | 0.0947/0.01182 | 0.1124/0.01549 | 0.1351/0.01792 | 0.0511/0.00414 | 0.1057/0.01057 | 0.0622/0.00384 | 0.0969/0.01390
32 | 0.0343/0.00557 | 0.0435/0.00647 | 0.0487/0.00785 | 0.0159/0.00133 | 0.0374/0.00434 | 0.0186/0.00089 | 0.0358/0.00458

Walk Bridge
nt | SFO mean/std | SCA mean/std | PSO mean/std | EO mean/std | DE mean/std | GA mean/std | HS mean/std
2 | 2.4590/0.01534 | 2.4430/0.00000 | 2.6081/0.15354 | 2.4430/0.00000 | 2.4595/0.02102 | 2.4435/0.00148 | 2.4920/0.08550
3 | 1.6342/0.09068 | 1.4715/0.00159 | 1.7945/0.21712 | 1.4698/0.00000 | 1.5030/0.03612 | 1.4701/0.00056 | 1.5317/0.10586
4 | 1.1647/0.07638 | 1.0233/0.00506 | 1.3004/0.12764 | 1.0145/0.00002 | 1.1007/0.06774 | 1.0169/0.00204 | 1.0661/0.05113
5 | 0.9076/0.09779 | 0.7558/0.02761 | 0.9680/0.06440 | 0.7285/0.00182 | 0.8412/0.06239 | 0.7302/0.00332 | 0.7871/0.05790
8 | 0.4764/0.06096 | 0.4499/0.04801 | 0.5923/0.09927 | 0.3327/0.00176 | 0.4668/0.04121 | 0.3458/0.00709 | 0.4318/0.05728
16 | 0.1683/0.02217 | 0.1897/0.02285 | 0.2070/0.03383 | 0.0925/0.00675 | 0.1736/0.01663 | 0.0934/0.00375 | 0.1568/0.02784
32 | 0.0517/0.00773 | 0.0641/0.00747 | 0.0684/0.00880 | 0.0254/0.00170 | 0.0600/0.00861 | 0.0233/0.00077 | 0.0499/0.00617

Butterfly
nt | SFO mean/std | SCA mean/std | PSO mean/std | EO mean/std | DE mean/std | GA mean/std | HS mean/std
2 | 1.1879/0.01958 | 1.1752/0.00000 | 1.4027/0.22217 | 1.1752/0.00000 | 1.7714/0.01460 | 1.1755/0.00065 | 1.2016/0.03019
3 | 0.7805/0.12572 | 0.6259/0.00227 | 0.8734/0.12416 | 0.6228/0.00000 | 1.0249/0.07751 | 0.6249/0.00234 | 0.6804/0.08202
4 | 0.5489/0.08162 | 0.4347/0.05196 | 0.6900/0.13235 | 0.4116/0.00000 | 0.7238/0.05561 | 0.4213/0.00742 | 0.4746/0.05711
5 | 0.4154/0.06613 | 0.3588/0.04880 | 0.5441/0.13045 | 0.3017/0.00000 | 0.5722/0.07480 | 0.3160/0.00768 | 0.3723/0.06380
8 | 0.2535/0.02965 | 0.2270/0.02708 | 0.3345/0.07068 | 0.1417/0.00661 | 0.3248/0.04042 | 0.1658/0.01194 | 0.2299/0.04721
16 | 0.1014/0.01845 | 0.1122/0.01172 | 0.1322/0.02140 | 0.0512/0.00517 | 0.1339/0.01636 | 0.0621/0.00313 | 0.1001/0.01548
32 | 0.0347/0.00723 | 0.0421/0.00549 | 0.0530/0.00581 | 0.0160/0.00169 | 0.0495/0.00618 | 0.0199/0.00110 | 0.0403/0.00382

Lake
nt | SFO mean/std | SCA mean/std | PSO mean/std | EO mean/std | DE mean/std | GA mean/std | HS mean/std
2 | 1.4479/0.01300 | 1.4383/0.00011 | 1.6250/0.13768 | 1.4382/0.00000 | 1.5808/0.02428 | 1.4386/0.00069 | 1.4863/0.10538
3 | 1.0189/0.03851 | 0.9644/0.00138 | 0.9995/0.00187 | 0.9625/0.00000 | 1.0782/0.02996 | 0.9628/0.00037 | 0.9916/0.04004
4 | 0.7319/0.06024 | 0.6436/0.00687 | 0.8416/0.10998 | 0.6313/0.00003 | 0.7722/0.05688 | 0.6329/0.00132 | 0.6815/0.05902
5 | 0.5566/0.05256 | 0.4886/0.06373 | 0.7024/0.12129 | 0.4363/0.00020 | 0.5588/0.05048 | 0.4391/0.00329 | 0.4853/0.04585
8 | 0.3018/0.04395 | 0.2869/0.03832 | 0.4203/0.07043 | 0.2013/0.00277 | 0.3286/0.03865 | 0.2115/0.00861 | 0.2607/0.03733
16 | 0.1098/0.01976 | 0.1390/0.01497 | 0.1591/0.02422 | 0.0669/0.00407 | 0.1308/0.01341 | 0.0743/0.00409 | 0.1177/0.01813
32 | 0.0384/0.00701 | 0.0511/0.00435 | 0.0559/0.00560 | 0.0212/0.00201 | 0.0474/0.00542 | 0.0227/0.00109 | 0.0441/0.00431
Table 4. Mean fitness and standard deviation for magnetic resonance prostate images. This table presents the mean fitness values and standard deviations obtained from applying various algorithms to the segmentation of magnetic resonance prostate images. The results indicate the consistency and accuracy of each algorithm in optimizing the segmentation task.
Values are mean/standard deviation; nt is the number of thresholds. Ranking entries refer to the article's ranking graphics, Applsci 14 09785 i045–i088.

Image | nt | SFO mean/std | SCA mean/std | PSO mean/std | EO mean/std | DE mean/std | GA mean/std | HS mean/std | Ranking
1 | 3 | 0.8469/0.0699 | 0.7657/0.0012 | 0.9549/0.0722 | 0.7638/0.0000 | 1.0170/0.0873 | 0.7649/0.0012 | 0.8745/0.1412 | i045
1 | 4 | 0.6054/0.0405 | 0.5478/0.0053 | 0.8119/0.1406 | 0.5385/0.0000 | 0.7439/0.0420 | 0.5411/0.0025 | 0.5816/0.0419 | i046
1 | 5 | 0.4578/0.0377 | 0.4274/0.0237 | 0.6349/0.0911 | 0.4040/0.0025 | 0.6161/0.0734 | 0.4102/0.0049 | 0.4449/0.0424 | i047
1 | 8 | 0.2606/0.0320 | 0.2767/0.0329 | 0.3809/0.0604 | 0.2064/0.0018 | 0.3664/0.0295 | 0.2101/0.0048 | 0.2547/0.0373 | i048
2 | 3 | 0.8373/0.1434 | 0.7204/0.0023 | 0.9636/0.0559 | 0.7174/0.0000 | 1.1725/0.0433 | 0.7190/0.0016 | 0.7610/0.0550 | i049
2 | 4 | 0.5989/0.0857 | 0.5133/0.0831 | 0.7460/0.1095 | 0.4687/0.0000 | 0.8508/0.0477 | 0.4737/0.0043 | 0.5256/0.0626 | i050
2 | 5 | 0.4730/0.0997 | 0.3751/0.0482 | 0.6213/0.1140 | 0.3272/0.0001 | 0.6493/0.0769 | 0.3377/0.0046 | 0.3901/0.0598 | i051
2 | 8 | 0.2775/0.0443 | 0.2478/0.0331 | 0.3437/0.0507 | 0.1609/0.0095 | 0.3879/0.0401 | 0.1771/0.0097 | 0.2333/0.0487 | i052
3 | 3 | 0.8669/0.1155 | 0.7433/0.0018 | 0.9427/0.0769 | 0.7407/0.0000 | 0.8646/0.0534 | 0.7426/0.0024 | 0.7963/0.0709 | i053
3 | 4 | 0.6093/0.1017 | 0.5193/0.0056 | 0.7613/0.1266 | 0.5073/0.0000 | 0.6382/0.0747 | 0.5143/0.0049 | 0.5565/0.0548 | i054
3 | 5 | 0.4853/0.0723 | 0.4100/0.0457 | 0.6102/0.0852 | 0.3681/0.0002 | 0.4888/0.0414 | 0.3821/0.0083 | 0.4149/0.0385 | i055
3 | 8 | 0.2767/0.0436 | 0.2663/0.0228 | 0.3715/0.0754 | 0.1840/0.0027 | 0.2880/0.0329 | 0.2006/0.0067 | 0.2524/0.0357 | i056
4 | 3 | 1.8046/0.1535 | 1.6243/0.0017 | 2.0578/0.3486 | 1.6222/0.0000 | 1.7008/0.0763 | 1.6251/0.0044 | 1.6996/0.1357 | i057
4 | 4 | 1.2426/0.1641 | 1.0388/0.0070 | 1.6754/0.2835 | 1.0255/0.0000 | 1.2238/0.1857 | 1.0443/0.0161 | 1.2176/0.1444 | i058
4 | 5 | 0.9323/0.0913 | 0.7820/0.0133 | 1.0223/0.1323 | 0.7509/0.0007 | 0.9512/0.1042 | 0.7724/0.0205 | 0.9494/0.1369 | i059
4 | 8 | 0.5177/0.0534 | 0.4969/0.0514 | 0.7609/0.1418 | 0.3612/0.0055 | 0.5861/0.0772 | 0.4101/0.0181 | 0.5736/0.0910 | i060
5 | 3 | 0.5408/0.0328 | 0.5102/0.0010 | 0.6280/0.0846 | 0.5086/0.0000 | 0.7362/0.0209 | 0.5089/0.0003 | 0.5223/0.0147 | i061
5 | 4 | 0.3855/0.0417 | 0.3428/0.0040 | 0.4628/0.0683 | 0.3369/0.0000 | 0.5104/0.0343 | 0.3386/0.0016 | 0.3599/0.0229 | i062
5 | 5 | 0.3032/0.0432 | 0.2555/0.0257 | 0.3740/0.0580 | 0.2292/0.0001 | 0.3987/0.0366 | 0.2327/0.0026 | 0.2721/0.0380 | i063
5 | 8 | 0.1655/0.0298 | 0.1644/0.0227 | 0.2194/0.0352 | 0.1107/0.0045 | 0.2164/0.0206 | 0.1147/0.0045 | 0.1420/0.0260 | i064
6 | 3 | 1.2225/0.0489 | 1.1629/0.0015 | 1.4030/0.1574 | 1.1609/0.0000 | 1.2141/0.0558 | 1.1623/0.0025 | 1.1970/0.0282 | i065
6 | 4 | 0.8439/0.1022 | 0.7338/0.0069 | 0.9587/0.0718 | 0.7222/0.0000 | 0.8553/0.1245 | 0.7274/0.0036 | 0.7972/0.0805 | i066
6 | 5 | 0.6524/0.0655 | 0.5904/0.0474 | 0.8468/0.1064 | 0.5458/0.0050 | 0.6715/0.0656 | 0.5568/0.0108 | 0.6332/0.0699 | i067
6 | 8 | 0.3488/0.0323 | 0.3616/0.0346 | 0.4765/0.0755 | 0.2712/0.0014 | 0.3855/0.0452 | 0.2893/0.0073 | 0.3572/0.0441 | i068
7 | 3 | 1.3089/0.0934 | 1.1719/0.0010 | 1.4043/0.1702 | 1.1704/0.0000 | 1.2133/0.0404 | 1.1714/0.0008 | 1.2182/0.0846 | i069
7 | 4 | 0.8777/0.0878 | 0.7674/0.0055 | 0.9637/0.0658 | 0.7571/0.0000 | 0.8441/0.0745 | 0.7621/0.0053 | 0.8126/0.0714 | i070
7 | 5 | 0.6526/0.0611 | 0.5884/0.0633 | 0.7911/0.1094 | 0.5391/0.0000 | 0.6560/0.0605 | 0.5533/0.0055 | 0.6214/0.0899 | i071
7 | 8 | 0.3827/0.0491 | 0.3560/0.0399 | 0.4671/0.0745 | 0.2548/0.0033 | 0.3618/0.0368 | 0.2726/0.0052 | 0.3518/0.0518 | i072
8 | 3 | 0.8453/0.0621 | 0.7823/0.0012 | 0.9486/0.0670 | 0.7803/0.0000 | 1.0230/0.0288 | 0.7804/0.0002 | 0.8219/0.0917 | i073
8 | 4 | 0.6174/0.0659 | 0.5389/0.0052 | 0.7670/0.1135 | 0.5279/0.0000 | 0.7390/0.0569 | 0.5292/0.0017 | 0.5605/0.0553 | i074
8 | 5 | 0.4721/0.0613 | 0.4072/0.0281 | 0.6041/0.0816 | 0.3767/0.0000 | 0.5620/0.0338 | 0.3859/0.0062 | 0.4360/0.0394 | i075
8 | 8 | 0.2609/0.0310 | 0.2488/0.0260 | 0.3293/0.0637 | 0.1692/0.0063 | 0.2911/0.0338 | 0.1819/0.0087 | 0.2317/0.0337 | i076
9 | 3 | 1.6342/0.0907 | 1.4715/0.0016 | 1.7945/0.2171 | 1.4698/0.0000 | 1.5030/0.0361 | 1.4701/0.0006 | 1.5317/0.1059 | i077
9 | 4 | 1.1647/0.0764 | 1.0233/0.0051 | 1.3004/0.1276 | 1.0145/0.0000 | 1.1007/0.0677 | 1.0169/0.0020 | 1.0661/0.0511 | i078
9 | 5 | 0.9076/0.0978 | 0.7558/0.0276 | 0.9680/0.0644 | 0.7285/0.0018 | 0.8412/0.0624 | 0.7302/0.0033 | 0.7871/0.0579 | i079
9 | 8 | 0.4764/0.0610 | 0.4499/0.0480 | 0.5923/0.0993 | 0.3327/0.0018 | 0.4668/0.0412 | 0.3458/0.0071 | 0.4318/0.0573 | i080
10 | 3 | 0.7805/0.1257 | 0.6259/0.0023 | 0.8734/0.1242 | 0.6228/0.0000 | 1.0249/0.0775 | 0.6249/0.0023 | 0.6804/0.0820 | i081
10 | 4 | 0.5489/0.0816 | 0.4347/0.0520 | 0.6900/0.1324 | 0.4116/0.0000 | 0.7238/0.0556 | 0.4213/0.0074 | 0.4746/0.0571 | i082
10 | 5 | 0.4154/0.0661 | 0.3588/0.0488 | 0.5441/0.1304 | 0.3017/0.0000 | 0.5722/0.0748 | 0.3160/0.0077 | 0.3723/0.0638 | i083
10 | 8 | 0.2535/0.0297 | 0.2270/0.0271 | 0.3345/0.0707 | 0.1417/0.0066 | 0.3248/0.0404 | 0.1658/0.0119 | 0.2299/0.0472 | i084
11 | 3 | 1.0189/0.0385 | 0.9644/0.0014 | 0.9995/0.0019 | 0.9625/0.0000 | 1.0782/0.0300 | 0.9628/0.0004 | 0.9916/0.0400 | i085
11 | 4 | 0.7319/0.0602 | 0.6436/0.0069 | 0.8416/0.1100 | 0.6313/0.0000 | 0.7722/0.0569 | 0.6329/0.0013 | 0.6815/0.0590 | i086
11 | 5 | 0.5566/0.0526 | 0.4886/0.0637 | 0.7024/0.1213 | 0.4363/0.0002 | 0.5588/0.0505 | 0.4391/0.0033 | 0.4853/0.0458 | i087
11 | 8 | 0.3018/0.0439 | 0.2869/0.0383 | 0.4203/0.0704 | 0.2013/0.0028 | 0.3286/0.0386 | 0.2115/0.0086 | 0.2607/0.0373 | i088
Table 5. Segmentation of transaxial-cut prostate MRI images using EO and cross-entropy. This table presents the segmentation results of transaxial-cut prostate MRI images using the equilibrium optimizer (EO) and cross-entropy. Each row corresponds to a distinct MRI image, while the columns nt represent the number of thresholds applied during segmentation. The results illustrate the performance of the EO algorithm across different threshold levels for each image.
Image | nt = 3 | nt = 4 | nt = 5 | nt = 8
1 | i089 | i090 | i091 | i092
2 | i093 | i094 | i095 | i096
3 | i097 | i098 | i099 | i0100
4 | i0101 | i0102 | i0103 | i0104
5 | i0105 | i0106 | i0107 | i0108
6 | i0109 | i0110 | i0111 | i0112
7 | i0113 | i0114 | i0115 | i0116
8 | i0117 | i0118 | i0119 | i0120
9 | i0121 | i0122 | i0123 | i0124
10 | i0125 | i0126 | i0127 | i0128
11 | i0129 | i0130 | i0131 | i0132
(Each cell is a segmented image; entries refer to the corresponding article image assets, Applsci 14 09785 i089–i0132.)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
