Article

Retinal Vessel Segmentation Using Math-Inspired Metaheuristic Algorithms

by
Mehmet Bahadır Çetinkaya
1,* and
Sevim Adige
2,3
1
Department of Mechatronics Engineering, Faculty of Engineering, University of Erciyes, Kayseri 38039, Türkiye
2
Department of Electronics and Automation, Vocational School of Hendek, Sakarya University of Applied Sciences, Sakarya 54300, Türkiye
3
Graduate School of Natural and Applied Sciences, Mechatronics Engineering, University of Erciyes, Kayseri 38039, Türkiye
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(10), 5693; https://doi.org/10.3390/app15105693
Submission received: 4 April 2025 / Revised: 10 May 2025 / Accepted: 13 May 2025 / Published: 20 May 2025
(This article belongs to the Section Biomedical Engineering)

Abstract

Artificial intelligence-based biomedical image processing has become an important area of research in recent decades. In this context, one of the most important problems encountered is the close contrast values between the pixels to be segmented in the image and the remaining pixels. Owing to their randomized and gradient-free global search abilities, metaheuristic algorithms are generally able to provide better performance in the segmentation of biomedical images. Math-inspired metaheuristic algorithms can be considered one of the most robust groups of algorithms, while also generally presenting non-complex structures. In this work, the recently proposed Circle Search Algorithm (CSA), Tangent Search Algorithm (TSA), Arithmetic Optimization Algorithm (AOA), Generalized Normal Distribution Optimization (GNDO), Global Optimization Method based on Clustering and Parabolic Approximation (GOBC-PA), and Sine Cosine Algorithm (SCA) were implemented for clustering and then applied to the retinal vessel segmentation task on retinal images from the DRIVE and STARE databases. Firstly, the segmentation results of each algorithm were obtained and compared with each other. Then, to compare the statistical performances of the algorithms, analyses were carried out in terms of sensitivity (Se), specificity (Sp), accuracy (Acc), standard deviation, and the Wilcoxon rank-sum test results. Finally, detailed convergence analyses were also carried out in terms of the convergence speed, mean squared error (MSE), CPU time, and number of function evaluations (NFEs) metrics.

1. Introduction

In recent years, optimization problems—especially those in the area of image processing—have become increasingly complex, leading to the requirement for more effective methods for their solution. New directions of theoretical research have appeared in the literature in accordance with this requirement, involving the improvement of existing conventional or metaheuristic algorithms, hybridizing different algorithms, and proposing new metaheuristic algorithms [1]. The results presented in the literature have also proved that, due to their non-random search characteristics, conventional gradient-based approaches cannot produce results as effective as metaheuristic algorithms.
Metaheuristic algorithms are usually classified into four main groups: evolutionary-based, swarm intelligence-based, physics-based, and human-inspired algorithms. Furthermore, according to the search strategies utilized, these algorithms can alternatively be classified into two main groups: single-solution-based (local search) and population-based (global search) algorithms [2]. Single-solution-based local search algorithms have the important disadvantages of becoming stuck in local minima and of dependence on the initial solutions. In contrast, population-based algorithms may provide higher optimization performance as a result of their global search ability, which is achieved through improving the population iteratively. In the literature, the majority of population-based algorithms have been improved in a manner inspired by swarm intelligence, physical laws, evolutionary strategies, and human behaviors. In addition to these well-established and frequently used algorithms, some novel population-based metaheuristic algorithms inspired by mathematical laws have also been proposed in recent years [1,2,3,4,5,6]. The most recently proposed math-inspired algorithms include the Circle Search Algorithm (CSA), Tangent Search Algorithm (TSA), Arithmetic Optimization Algorithm (AOA), Generalized Normal Distribution Optimization (GNDO) Algorithm, Global Optimization Method based on Clustering and Parabolic Approximation (GOBC-PA), and Sine Cosine Algorithm (SCA).
The SCA has been applied to several engineering problems in the existing literature, including the optimization of the cross-section of an aircraft wing [1], optimizing the combined economic and emission dispatch problem in power systems [7], optimizing the size and location of energy resources in radial distribution networks [8], optimizing the parameters of fractional order PID (FOPID) controller for load frequency control systems [9], designing an effective PID controller for power systems [10], designing a hybrid power generation system with minimized total annual cost and emissions [11], solving the unit commitment problem of an electric power system [12], solving the bend photonic crystal waveguides design problem [13], solving the short-term hydrothermal scheduling problem in power systems [14], tuning the FOPID/PID controller parameters for an automatic voltage regulator system [15], determining the optimal PID controller parameters for an automatic voltage regulator system [16], designing a PD-PID controller for the frequency control of a hybrid power system [17], implementation of a novel feature selection approach for text categorization [18], solving the economic power-generation scheduling (EcGS) problem for thermal units [19], implementation of a parallel SCA for three communication strategies [20], implementation of an enhanced sine cosine algorithm (ESCA) to determine the threshold for use in the segmentation of color images [21], development of an optimum photovoltaic pumping system based on monitoring the maximum power point under certain meteorological conditions [22], achievement of an optimal thermal stress distribution in symmetric composite plates with non-circular holes under a uniform heat flux [23], and the implementation of an SCA-based intelligent approach for effective underwater visible light communication [24].
The AOA has also been applied to several engineering problems in the existing literature, including the implementation of the AOA for optimization in mechanical engineering design problems [4,25,26], designing skeletal structures (e.g., 72 bar space truss, 384 bar double-layer barrel vault, and a 3 bay, 15 story steel frame) [27], the implementation of a dynamic AOA for truss optimization under natural frequency constraints [28], optimizing the shapes of vehicle components using metaheuristic algorithms to minimize fuel emissions [29], applying enhanced hybrid AOAs to real-world engineering problems (e.g., welded beam, gear train, 3 bar truss, speed reducer, multiple disk clutch brake, step-cone pulley, 25 bar truss, and 4-stage gearbox design problems) [30], the enhancement of a hybrid AOA for DC motor regulation and fluid-level sequential tank system design [31], the implementation of an AOA-based efficient text document clustering approach [32], the implementation of a deep convolutional spiking neural network optimized using the AOA for lung disease detection from chest X-ray images [33], and the design of an electric vehicle fast-charging station connected with a renewable energy source and battery energy storage system [34].
The TSA only has a restricted number of real-world applications in the existing literature, including the adjustment of the optimal forecasting interval and enhancement of the interval forecasting performance in ensemble carbon emissions forecasting systems [35], the implementation of a hybrid method (named the Aquila optimizer–tangent search algorithm; AO-TSA) for global optimization problems [36], and the implementation of a basic TSA which integrates the fitness-weighted search strategy (FWSS) and opposition-based learning (OBL) concepts for the solution of complex optimization problems [37].
The GNDO algorithm has several real-world applications in the existing literature, including parameter extraction for photovoltaic models for energy conversion [5], estimating the parameters of the triple-diode model of a photovoltaic system [38], the design of truss structures with optimal weight [39], and enabling voltage and current control in an energy storage system and adjusting the optimum gain settings of the PI controller [40].
The GOBC-PA algorithm has been applied to engineering problems such as the optimization of the parameters of an osmotic dehydration process for enhanced performance in the drying of mushroom products [41], as well as the implementation of a novel search method consisting of deterministic and stochastic approaches and testing its performance over benchmark functions in addition to one- and two-dimensional optimization problems [42].
Finally, the CSA has also been applied to several engineering problems, including tracking global maxima power under partial shading conditions in solar photovoltaic systems [43] and the implementation of a maximum-power point-tracking control system to increase the efficiency of grid-connected photovoltaic systems [44].
In the recent literature, there are also studies that have detailed the application of approaches other than metaheuristic algorithms for retinal vessel segmentation. In [45], Saeed et al. propose a preprocessing module including four stages to assess the impact of retinal vessel coherence on retinal vessel segmentation and then apply the proposed method to retinal images from the DRIVE and STARE databases. In [46], Xian et al. propose a novel hybrid algorithm to segment dual-wavelength retinal images with high accuracy and enlarge the measurement range of hemoglobin oxygen saturation in retinal vessels. Wang and Li propose a novel approach based on gray relational analysis for the segmentation of retinal images under the constraint of a limited sample size and the resulting associated uncertainties [47]. Jiang et al. propose a deep-learning-based network model, called feature selection-UNet (FS-UNet), with the aim of optimizing attention mechanisms and task-specific adaptations when analyzing complex retinal images [48], and prove the efficiency of the model using images from the DRIVE, STARE, and CHASE databases. In [49], a novel deep-learning-based network model, called plenary attention mechanism-UNet (PAM-UNet), is proposed by Wang et al. to enhance the feature extraction capabilities of existing algorithms. The performance of the model was analyzed using retinal images from the DRIVE and CHASE DB1 databases. In [50], Ramesh and Sathiamoorthy propose a hybrid model which combines metaheuristic and deep learning approaches to detect diabetic retinopathy with high accuracy. Similarly, Sau et al. propose a deep learning architecture based on a metaheuristic algorithm for retinal vessel segmentation [51].
As can be seen from this literature review, math-inspired metaheuristic algorithms have only been utilized for applications within a restricted number of research areas. Retinal image segmentation has become one of the most common application areas in the field of image processing in recent years. In this context, the segmentation of retinal images with high accuracy is extremely important for the detection of eye diseases.
The highlights of this work can be expressed as follows:
  • The math-inspired metaheuristic algorithms of CSA, TSA, AOA, GNDO, GOBC-PA, and SCA are implemented for clustering and retinal vessel segmentation.
  • The segmentation results demonstrate that the considered math-inspired metaheuristic approaches are able to produce similar or better results when compared to evolutionary-based, swarm-intelligence-based, physics-based, and human-inspired algorithms [52,53].
  • The statistical analyses indicate that the considered math-inspired algorithms are able to produce effective results despite their simple algorithmic structures.

2. Materials and Methods

In retinal image segmentation, the images should first be enhanced in terms of their contrast by applying pre-processing operations. The most commonly used pre-processing operations in the relevant literature are band selection, bottom-hat transformation, and brightness correction. Retinal images consist of Red (R), Green (G), and Blue (B) layers, and the most effective layer (i.e., that which will provide the highest clustering performance) can be determined using band selection. After the most appropriate layer is determined, the contrast of the relevant layer is further increased via bottom-hat transformation and brightness correction pre-processing. The retinal images obtained as a result of these pre-processing operations, with higher contrast, are then subjected to clustering-based segmentation through the use of metaheuristic algorithms.
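To make the pipeline concrete, the three pre-processing steps can be sketched in Python with NumPy only. This is an illustrative re-implementation, not the authors' MATLAB code; the 3 × 3 structuring element, the additive bottom-hat enhancement, and the linear brightness stretch are assumptions:

```python
import numpy as np

def shift_stack(img, k=1):
    """Collect all (2k+1) x (2k+1) neighbourhood shifts of a 2-D array."""
    p = np.pad(img, k, mode="edge")
    h, w = img.shape
    return np.stack([p[i:i + h, j:j + w]
                     for i in range(2 * k + 1) for j in range(2 * k + 1)])

def bottom_hat(img, k=1):
    """Morphological bottom-hat: closing(img) - img, with a square window."""
    dilated = shift_stack(img, k).max(axis=0)       # grey-level dilation
    closed = -shift_stack(-dilated, k).max(axis=0)  # erosion of the dilation
    return closed - img

def preprocess(rgb):
    """Band selection (green layer), bottom-hat, brightness correction."""
    g = rgb[..., 1].astype(float)                   # green channel (band selection)
    enhanced = g + bottom_hat(g)                    # emphasize dark vessel details
    lo, hi = enhanced.min(), enhanced.max()         # linear stretch to [0, 255]
    return (enhanced - lo) / max(hi - lo, 1e-12) * 255.0
```

A call such as `preprocess(image)` on an H × W × 3 array then yields the contrast-enhanced grey image that is passed to the clustering stage.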
Figure 1a and Figure 1b show healthy and diseased retinal images, respectively, taken from the Digital Retinal Images for Vessel Extraction (DRIVE) database [54]. Similarly, healthy and diseased retinal images taken from the Structured Analysis of the Retina (STARE) database [55] are shown in Figure 2a and Figure 2b, respectively.
The green layer of RGB retinal images is frequently able to produce better clustering performance due to its higher illuminance, contrast, and brightness levels. After applying band selection, the G layers of the DRIVE and STARE retinal images were obtained, as shown in Figure 3a,c and Figure 4a,c, respectively. Afterward, bottom-hat transformation and brightness correction were applied consecutively to increase the contrast of the retinal images. The enhanced retinal images are shown in Figure 3b,d for DRIVE images and in Figure 4b,d for STARE images.
In this study, field of view (FOV) mask images were also used to analyze the performance of the algorithms in detail. The FOV mask images include only the vessel pixels; namely, all other pixels except the vessel pixels were removed from the image. For both the DRIVE and STARE databases, pixel-based comparisons were carried out between the segmented images and the relevant mask images to analyze the performance of the algorithms.
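The pixel-based comparison can be expressed directly in terms of confusion-matrix counts. A minimal Python sketch (the function name and array conventions are assumptions) is:

```python
import numpy as np

def segmentation_metrics(segmented, reference, fov_mask):
    """Pixel-based Se, Sp and Acc between a segmented image and a
    reference vessel mask, restricted to the field-of-view pixels."""
    seg = segmented[fov_mask].astype(bool)
    ref = reference[fov_mask].astype(bool)
    tp = int(np.sum(seg & ref))    # vessel pixels correctly found
    tn = int(np.sum(~seg & ~ref))  # background correctly rejected
    fp = int(np.sum(seg & ~ref))   # background labelled as vessel
    fn = int(np.sum(~seg & ref))   # vessel pixels missed
    se = tp / (tp + fn)            # sensitivity
    sp = tn / (tn + fp)            # specificity
    acc = (tp + tn) / (tp + tn + fp + fn)
    return se, sp, acc
```

These are exactly the Se, Sp, and Acc statistics reported in the results section.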
Math-inspired metaheuristic algorithms provide robust approaches for optimization problems, reflecting the effectiveness of basic mathematical laws. These algorithms especially stand out due to their lower computational costs and simple algorithmic structures. In this work, the most recent math-inspired metaheuristic algorithms in the literature were applied for retinal vessel segmentation, after which their performances were compared in detail. The math-inspired metaheuristic algorithms were implemented in MATLAB R2019a. Furthermore, the mathematical and statistical analyses were also carried out using the same software.

2.1. Circle Search Algorithm

The CSA, which was proposed by Qais et al. in 2022, is an effective metaheuristic approach inspired by the unique characteristics of circles, such as their diameter, perimeter, center point, and tangent line. It may provide significantly improved performance, in terms of the exploration and exploitation phases, due to the integration of the geometric features of circles into the algorithm [3].
The detailed pseudo-code for the CSA is given as follows:
Randomly create an initial population by X_t = lb + rand · (ub − lb)
Determine the c parameter
Cycle = 1
WHILE Cycle ≤ Maximum Cycle
   Calculate the values of the variables a, w, and p
   Calculate the value of θ
   Update X_t by using X_t = X_c + (X_c − X_t) · tan(θ)
   If updated solutions are out of the boundaries
      Set the solutions equal to the boundaries
   End
   Calculate the fitness value f(X_t)
   Evaluate f(X_t) against the current best solution f(X_c)
   Update f(X_c) and X_c
   Cycle = Cycle + 1
END
In the CSA, lb and ub represent the lower and upper limit values of the search space, respectively; the rand operator constitutes a vector of values produced uniformly at random within the interval [0, 1]; and the parameter c is a constant in [0, 1] that represents a percentage of the maximum number of cycles. Furthermore, a, w, and p are variables taking values in the intervals [π, 0], [−π, 0], and [1, 0], respectively. These variables can be calculated as
a = π − π · (Cycle / Max Cycle)²
w = a · rand − a
p = 1 − 0.9 · (Cycle / Max Cycle)^0.5
Furthermore, θ represents an angle that can be used while updating X t , which is calculated using the following equation:
θ = w · rand, if Cycle > (c · Max Cycle);  θ = w · p, otherwise
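The CSA update rules above can be sketched in Python (an illustrative re-implementation; the study itself used MATLAB). The sphere objective used in the test and the choice c = 0.8 are assumptions for demonstration:

```python
import numpy as np

def csa(f, lb, ub, dim=2, pop=10, max_cycle=100, c=0.8, seed=0):
    """Circle Search Algorithm sketch following the a, w, p and theta
    update rules given above."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((pop, dim)) * (ub - lb)       # initial population
    fit = np.apply_along_axis(f, 1, X)
    Xc = X[fit.argmin()].copy()                       # current best (circle centre)
    fc = fit.min()
    for cycle in range(1, max_cycle + 1):
        a = np.pi - np.pi * (cycle / max_cycle) ** 2  # a shrinks from pi to 0
        w = a * rng.random() - a                      # w in [-pi, 0]
        p = 1.0 - 0.9 * (cycle / max_cycle) ** 0.5    # p shrinks from 1 to 0.1
        theta = w * rng.random() if cycle > c * max_cycle else w * p
        X = Xc + (Xc - X) * np.tan(theta)             # circle-tangent update
        X = np.clip(X, lb, ub)                        # set overflow to boundaries
        fit = np.apply_along_axis(f, 1, X)
        if fit.min() < fc:                            # keep the best-so-far centre
            fc = fit.min()
            Xc = X[fit.argmin()].copy()
    return Xc, fc
```

For example, `csa(lambda x: float(np.sum(x ** 2)), -5.0, 5.0)` returns a point near the origin together with its objective value.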

2.2. Tangent Search Algorithm

The TSA, which was proposed by Layeb in 2021, is a novel math-inspired metaheuristic algorithm based on two important features of the tangent function [2]. The (−∞, +∞) variation interval of the tangent function provides the algorithm with an effective exploration ability when searching the solution space. Additionally, the inherently periodic structure of the tangent function provides a balance between the exploration and exploitation processes during optimization.
The detailed pseudo-code for the TSA is given as follows:
Randomly create a uniformly distributed initial population by X(i) = lb + (ub − lb) · rand(D),  i = 1, …, T
Cycle = 1
WHILE   C y c l e M a x i m u m   C y c l e
Apply Switch procedure for each X ( i ) according to Pswitch
      If    r a n d < P s w i t c h : Intensification phase
Produce a random local walk by X_i^(t+1) = X_i^t + step · tan(θ) · (X_i^t − optS^t)
Exchange some variables of the obtained solution with the corresponding variables of the current optimal solution optS^t to produce new possible solutions
         Check the boundaries of the recently produced solutions
         Repair the overflowed solutions
      Else  r a n d P s w i t c h : Exploration phase
Apply X_i^(t+1) = X_i^t + step · tan(θ) to each variable with a probability equal to 1/D in order to expand the search capacity
      End
Apply the Escape Local Minima procedure according to a given probability value called P_esc
      If rand < P_esc: Escape local minima phase
         Randomly select one of the current solutions and apply the selection procedures to obtain X_i^new
         Replace X_i^new with a randomly selected solution having a lower fitness value
      End
Cycle = Cycle + 1
END
In the TSA, lb and ub denote the lower and upper bounds of the problem to be optimized, respectively; the rand operator produces uniformly distributed numbers in the interval [0, 1]; and D is the dimension of the problem. P_switch represents a probability value which is used to control the phase of the algorithm, and step · tan(θ) is a global step value which directs the existing solutions toward the optimal solution. In order to check the boundaries and repair the overflowed solutions in the intensification phase, the following expressions can be used:
X(X < lb) = rand · (ub − lb) + lb
X(X > ub) = rand · (ub − lb) + lb
In the escape from local minima phase, one of the following two procedures (which produce the optimal fitness value) can be applied to select the optimal solutions:
X_i^new = X_i^t + R · [optS^t − rand · (optS^t − X_i^t)]
X_i^new = X_i^t + tan(θ) · (ub − lb)
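The TSA phases above can be sketched in Python (again, an illustrative re-implementation rather than the authors' MATLAB code). The fixed step size, the θ sampling range, and the sphere objective used in the test are assumptions, not values from the original paper:

```python
import numpy as np

def tsa(f, lb, ub, dim=2, pop=10, max_cycle=100, p_switch=0.3, seed=1):
    """Tangent Search Algorithm sketch: intensification vs. exploration
    phases selected by P_switch, with the boundary repair described above."""
    rng = np.random.default_rng(seed)
    X = lb + (ub - lb) * rng.random((pop, dim))     # initial population
    fit = np.apply_along_axis(f, 1, X)
    opt = X[fit.argmin()].copy()                    # current optimal solution
    step = 0.1 * (ub - lb)                          # illustrative step size
    for _ in range(max_cycle):
        for i in range(pop):
            theta = rng.random() * np.pi / 2.5      # keep tan(theta) moderate
            if rng.random() < p_switch:             # intensification phase
                cand = X[i] + step * np.tan(theta) * (X[i] - opt)
            else:                                   # exploration phase
                cand = X[i].copy()
                mask = rng.random(dim) < 1.0 / dim  # perturb ~1/D of the variables
                cand[mask] += step * np.tan(theta)
            overflow = (cand < lb) | (cand > ub)    # repair overflowed variables
            cand[overflow] = lb + rng.random(int(np.sum(overflow))) * (ub - lb)
            fc = f(cand)
            if fc < fit[i]:                         # greedy replacement
                X[i], fit[i] = cand, fc
        opt = X[fit.argmin()].copy()
    return opt, fit.min()
```

The greedy replacement step stands in for the escape-local-minima procedure, which is omitted here for brevity.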

2.3. Arithmetic Optimization Algorithm

The AOA, which was proposed by Abualigah et al. in 2021, simulates the distribution behavior of the arithmetic operators of multiplication, division, subtraction, and addition [4]. The simple algorithm structure of the AOA includes only two control parameters, providing the ability to maintain effective optimization performance in a wide range of engineering problems. In the AOA, a population consisting of possible solutions is evaluated according to some predetermined mathematical criteria in order to reach the global solution.
The detailed pseudo-code for the AOA is given as follows:
Randomly create an initial population consisting of the positions of the solutions x_i, i = 1, 2, …, N
Cycle = 1
WHILE  C y c l e M a x i m u m   C y c l e
Calculate the fitness value of each x i solution in the population
Determine the best solution so far
Update the Math Optimizer Accelerated (MOA) value
Update Math Optimizer probability (MOP) value
   For i = 1 : N
      For j = 1 : D
         Produce random values in [0, 1] for the conditions r1, r2, and r3
         If r1 > MOA: Exploration phase
            If r2 > 0.5
               Apply the Division math operator (D "÷") and update the position of solution i
            Else
               Apply the Multiplication math operator (M "×") and update the position of solution i
            End if
         Else: Exploitation phase
            If r3 > 0.5
               Apply the Subtraction math operator (S "−") and update the position of solution i
            Else
               Apply the Addition math operator (A "+") and update the position of solution i
            End if
         End if
      End for
   End for
Cycle = Cycle + 1
END
In the AOA, M O A represents the math optimizer accelerated function, which can be calculated as follows:
MOA = Min + C_Cycle · ((Max − Min) / M_Cycle)
where Min and Max denote the minimum and maximum values of the accelerated function, respectively, and C _ C y c l e and M _ C y c l e denote the current cycle and maximum number of cycles, respectively. On the other hand, the M O P value can be calculated as
MOP = 1 − C_Cycle^(1/α) / M_Cycle^(1/α)
where α is a sensitive parameter affecting the exploitation accuracy over the cycles. In the exploration phase, the (D "÷") and (M "×") operators update the positions according to the following expressions:
x_{i,j}(C_Cycle + 1) = best(x_j) ÷ (MOP + ε) × [(ub_j − lb_j) × μ + lb_j]
x_{i,j}(C_Cycle + 1) = best(x_j) × MOP × [(ub_j − lb_j) × μ + lb_j]
where ε is a small integer, μ is a control parameter to adjust the search process, and, finally, ub_j and lb_j represent the upper and lower bounds at the jth position, respectively. On the other hand, in the exploitation phase, the update of each position can be performed by the (S "−") and (A "+") operators, as detailed below:
x_{i,j}(C_Cycle + 1) = best(x_j) − MOP × [(ub_j − lb_j) × μ + lb_j]
x_{i,j}(C_Cycle + 1) = best(x_j) + MOP × [(ub_j − lb_j) × μ + lb_j]
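The four operator updates above can be sketched in Python (an illustrative re-implementation, not the authors' MATLAB code). The settings μ = 0.5, α = 5, MOA range [0.2, 1.0], and the asymmetric test bounds are assumptions for demonstration:

```python
import numpy as np

def aoa(f, lb, ub, dim=2, pop=10, max_cycle=100, alpha=5, mu=0.5,
        moa_min=0.2, moa_max=1.0, eps=1e-12, seed=2):
    """Arithmetic Optimization Algorithm sketch using the division,
    multiplication, subtraction and addition operators given above."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((pop, dim)) * (ub - lb)
    fit = np.apply_along_axis(f, 1, X)
    best = X[fit.argmin()].copy()
    best_f = fit.min()
    for c in range(1, max_cycle + 1):
        moa = moa_min + c * (moa_max - moa_min) / max_cycle     # accelerated function
        mop = 1 - c ** (1 / alpha) / max_cycle ** (1 / alpha)   # optimizer probability
        scale = (ub - lb) * mu + lb                             # shared scaling term
        for i in range(pop):
            for j in range(dim):
                r1, r2, r3 = rng.random(3)
                if r1 > moa:                                    # exploration phase
                    X[i, j] = (best[j] / (mop + eps) * scale if r2 > 0.5
                               else best[j] * mop * scale)
                else:                                           # exploitation phase
                    X[i, j] = (best[j] - mop * scale if r3 > 0.5
                               else best[j] + mop * scale)
        X = np.clip(X, lb, ub)
        fit = np.apply_along_axis(f, 1, X)
        if fit.min() < best_f:                                  # track best solution
            best_f = fit.min()
            best = X[fit.argmin()].copy()
    return best, best_f
```

Note that only two control parameters (μ and α) shape the search, which reflects the simplicity of the AOA emphasized in the text.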

2.4. Generalized Normal Distribution Optimization Algorithm

The basic philosophy of the GNDO algorithm is to optimize the position of each individual in the population based on the generalized normal distribution curve. The GNDO algorithm has a very simple structure, consisting of only two control parameters. The GNDO algorithm performs local exploitation and global exploration phases according to the current and the mean positions of the individuals on a constructed generalized normal distribution model. This algorithm was proposed by Zhang et al. in 2020 as an effective math-inspired metaheuristic algorithm, in terms of its efficiency and accuracy [5].
Detailed pseudo-code of the GNDO algorithm is given as follows:
Randomly create an initial population by x_{i,j}^t = l_j + (u_j − l_j) · rand,  i = 1, …, N and j = 1, …, D
Calculate the fitness value of each solution and determine the best solution as x b e s t
Cycle = 1
WHILE   C y c l e M a x i m u m   C y c l e
    For  i = 1 : N
        Produce a random number ( α ) in the interval of [0, 1]
     If  α > 0.5   : Local exploitation strategy
         Select the current optimal solution x b e s t t and calculate the mean position M
         Compute the generalized mean position μ i , generalized standard variances δ i
             and penalty factor η
         Apply the local exploitation strategy to calculate v i t and x i t + 1
        Else: Global exploration strategy
          Apply the global exploration strategy
        End if
    End for
Cycle = Cycle + 1
END
In the GNDO algorithm, l_j and u_j are the lower and upper boundaries of the jth design variable, respectively; the rand operator produces random numbers in the interval [0, 1]; and D represents the dimension of the problem.
In the local exploitation strategy, the mean position (M) can be calculated as
M = (Σ_{i=1}^{N} x_i^t) / N
In addition, the values of μ i , δ i and η can be calculated as
μ_i = (1/3) · (x_i^t + x_best^t + M)
δ_i = √( (1/3) · [ (x_i^t − μ_i)² + (x_best^t − μ_i)² + (M − μ_i)² ] )
η = √(−log(λ1)) · cos(2πλ2), if a ≤ b;  η = √(−log(λ1)) · cos(2πλ2 + π), otherwise
In order to apply the local exploitation strategy, v i t and x i t + 1 can be calculated as
v_i^t = μ_i + δ_i × η,  i = 1, …, N
x_i^{t+1} = v_i^t, if f(v_i^t) < f(x_i^t);  x_i^{t+1} = x_i^t, otherwise
Finally, the global exploration strategy can be applied according to the following expressions:
v1 = x_i^t − x_{p1}^t, if f(x_i^t) < f(x_{p1}^t);  v1 = x_{p1}^t − x_i^t, otherwise
v2 = x_{p2}^t − x_{p3}^t, if f(x_{p2}^t) < f(x_{p3}^t);  v2 = x_{p3}^t − x_{p2}^t, otherwise
v_i^t = x_i^t + β · (|rand_snd| · v1) [local information sharing] + (1 − β) · (|rand_snd| · v2) [global information sharing]
x_i^{t+1} = v_i^t, if f(v_i^t) < f(x_i^t);  x_i^{t+1} = x_i^t, otherwise
where p1, p2, and p3 (p1 ≠ p2 ≠ p3 ≠ i) are random integers taken from the interval [1, N]; rand_snd represents a random number taken from a standard normal distribution; and β is an adjustment parameter with a value between 0 and 1.
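The exploitation and exploration strategies above can be sketched in Python (an illustrative re-implementation; the condition a ≤ b for η is drawn here from two fresh random numbers, and the greedy acceptance and sphere objective are assumptions):

```python
import numpy as np

def gndo(f, lb, ub, dim=2, pop=10, max_cycle=100, seed=3):
    """GNDO sketch: local exploitation via a generalized normal model,
    plus the v1/v2 global exploration step described above."""
    rng = np.random.default_rng(seed)
    X = lb + (ub - lb) * rng.random((pop, dim))
    fit = np.apply_along_axis(f, 1, X)
    for _ in range(max_cycle):
        best = X[fit.argmin()].copy()
        M = X.mean(axis=0)                                   # mean position
        for i in range(pop):
            if rng.random() > 0.5:                           # local exploitation
                mu = (X[i] + best + M) / 3.0                 # generalized mean
                delta = np.sqrt(((X[i] - mu) ** 2 + (best - mu) ** 2
                                 + (M - mu) ** 2) / 3.0)     # generalized std
                l1, l2 = rng.random(2)
                phase = 0.0 if rng.random() <= rng.random() else np.pi
                eta = np.sqrt(-np.log(l1 + 1e-12)) * np.cos(2 * np.pi * l2 + phase)
                v = mu + delta * eta
            else:                                            # global exploration
                p1, p2, p3 = rng.choice([k for k in range(pop) if k != i],
                                        3, replace=False)
                v1 = X[i] - X[p1] if fit[i] < fit[p1] else X[p1] - X[i]
                v2 = X[p2] - X[p3] if fit[p2] < fit[p3] else X[p3] - X[p2]
                beta = rng.random()
                v = (X[i] + beta * abs(rng.standard_normal()) * v1
                     + (1 - beta) * abs(rng.standard_normal()) * v2)
            v = np.clip(v, lb, ub)
            fv = f(v)
            if fv < fit[i]:                                  # keep the better solution
                X[i], fit[i] = v, fv
    return X[fit.argmin()], fit.min()
```

The two branches correspond directly to the μ/δ/η equations and to the v1/v2 information-sharing equations above.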

2.5. Global Optimization Method Based on Clustering and Parabolic Approximation

The GOBC-PA approach, which was proposed by Pençe et al. in 2016, simulates the clustering and function approximation processes in order to reach the global minimum [6]. The GOBC-PA algorithm differs from the other metaheuristic approaches primarily in terms of its crossover and mutation processes; in particular, the evaluation of each solution is followed by means of second-order polynomials and clustering instead of crossover and mutation. The updating of cluster centers via curve fitting and the usage of clustering mechanisms based on the fitness values strengthen this method, especially in terms of its convergence speed.
The detailed pseudo-code of the GOBC-PA algorithm is given as follows:
Randomly create an initial population X_0 ∈ R^(N×p)
Set the cluster size as S = round(N/3) and the maximum epoch size as M
Calculate the fitness value produced by the objective function E   ( X 0 )
For  k = 1 : M
   Determine the cluster centers C_k = [c_1 c_2 … c_S]^T
   Determine the membership matrix U_k of X_{k−1} according to E(X_{k−1}) via clustering
   Sort the cluster centers C k with ascending order in accordance with the fitness values
   Select the first ceil  ( S / 3 ) cluster centers as ( L k )
    For  r = 1 : S
      Determine the set H which represents the members of L_{k,r} taken from X_{k−1} depending on U_{k−1}
    If  M e m b e r   S i z e   o f   H > 1
     Determine the parabola coefficient matrix ( A )
     Find the coefficients of approximated parabola ( θ ^ r )
    End if
    End for
    For  i = 1 : p
     Determine the vertex V i of i t h parabola
      If a_i < 0, the ith parabola can be assumed to be concave; keep L_k(r, i)
      Else if a_i ≥ 0, the ith parabola can be assumed to be convex and, if E(L_k(r, i)) is better than E(V_i), then replace C_clust^(k,r)(t) with V_i
      Else if a_i is not a number or is infinite, then keep L_k(r, i)
     End if
    End for
    Randomly create a new population X c l u s t k around the C c l u s t k
   Randomly create new population X b e s t k around the current best two solutions
     Randomly create a new population X r a n d k
      Determine X_k as X_k = X_clust^k ∪ L_k ∪ X_best^(k−1) ∪ X_best^k ∪ X_rand^k
     Calculate the fitness value of the objective function E by using the new population X k
     Determine X k * as the best solution of   E   ( X k )
     If    E   ( X k * ) ε , X k * can be defined as the global minimum
     End if
End for
In the GOBC-PA algorithm, N and p represent the population size and the number of variables, respectively. The parabola coefficient matrix ( A ) and the coefficients of the approximated parabola ( θ ^ r ) can be calculated as
A = [ L_{k,r}²  L_{k,r}  1 ;  H²  H  1 ]
θ̂_r = (A^T · A)^(−1) · A^T · E([L_{k,r}; H]) = [a  b  c]^T
On the other hand, the vertex of the i t h parabola can be calculated as
V_i = −b_i / (2a_i)
The new population X c l u s t k can be created around the C c l u s t k using the following expression:
X_clust^k = (1/2) · σ_clust^k · rand(−1, 1) + [L_k  L_k  L_k]^T
where σ c l u s t k represents the standard deviation of C c l u s t k . Furthermore, X b e s t k can be calculated as follows:
X_best^k = ((R_max − R_min) / (2S)) · rand(6, p) + [X_best^(k−1)  X_best^(k−1)  X_best^(k−1)]^T
where R min and R max represent the lower and upper bounds of the solution space, respectively.
Finally, X r a n d k can be obtained as follows:
X_rand^k = R_min + (R_max − R_min) · rand(9N/8 − 5, p)
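The core parabolic-approximation step (coefficient matrix A, least-squares coefficients θ̂_r, and vertex V_i) can be illustrated with a short Python sketch; the sample points and the convention of keeping the vertex only when the parabola opens upward (i.e., has a minimum) are illustrative:

```python
import numpy as np

def parabola_vertex(x, y):
    """Fit y ~ a*x^2 + b*x + c by least squares, as in the A and
    theta-hat equations above, and return the vertex -b/(2a).
    Returns None when the fitted parabola has no minimum."""
    A = np.column_stack([x ** 2, x, np.ones_like(x)])  # parabola coefficient matrix
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)     # solves (A^T A)^-1 A^T y
    a, b, _c = coeffs
    if not np.isfinite(a) or a <= 0:                   # keep the old point instead
        return None
    return -b / (2 * a)                                # vertex of the fitted parabola
```

In GOBC-PA this vertex is the candidate that may replace a cluster centre, which is why the method converges quickly when the local landscape is nearly quadratic.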

2.6. Sine Cosine Algorithm

The SCA is a population-based metaheuristic algorithm inspired by the fluctuating behaviors and cyclic patterns of the sine and cosine trigonometric functions, which was proposed by Mirjalili in 2016 [1]. In the SCA, while the exploration of the search space is controlled by changing the range of the sine and cosine functions, the repositioning of a solution around another solution can also be realized according to its cyclic pattern property.
The detailed pseudo-code for the SCA is given as follows:
Randomly create an initial population consisting of a set of search agents x_{i,j}, i = 1, 2, …, N
Cycle = 1
WHILE   C y c l e M a x i m u m   C y c l e
     Calculate the fitness value of each x i search agent in the population
     Determine the best search agent so far and store it in a variable as destination agent
     Update the random parameters r 1 ,   r 2 ,   r 3 and r 4
     Update the position of each search agent in the current population
     Store the current destination point as the best search agent obtained so far
Cycle = Cycle + 1
END
In the SCA, the random parameter r 1 decides whether a solution updates its position towards the best solution ( r 1 < 1) or away from it ( r 1 > 1). It can be updated using the following equation:
r1 = a − t · (a / Max Cycle Number)
where a is a constant and t is the current cycle. Meanwhile, r 2 varies randomly in the interval of [ 0 , 2 π ] , which determines the required distance towards or away from the destination. Furthermore, r 3 varies randomly in the interval of [ 0 , 2 ] and represents a random weight for the destination, either emphasizing ( r 3 > 1 ) or de-emphasizing ( r 3 < 1 ) the effect of the destination in defining the distance. Finally, r 4 varies randomly in the interval of [ 0 , 1 ] and acts as a switch to choose between the sine and cosine functions. The position of each search agent in the current population can be updated using the following equation:
$X_{ij}^{t+1} = \begin{cases} X_{ij}^{t} + r_1 \cdot \cos(r_2) \cdot \left| r_3 \cdot P_{ij}^{t} - X_{ij}^{t} \right|, & r_4 \geq 0.5 \\ X_{ij}^{t} + r_1 \cdot \sin(r_2) \cdot \left| r_3 \cdot P_{ij}^{t} - X_{ij}^{t} \right|, & r_4 < 0.5 \end{cases}$
where $X_{ij}^{t}$ represents the position of search agent $i$ in dimension $j$ at cycle $t$ and $P_{ij}^{t}$ denotes the corresponding position of the destination (best) search agent at cycle $t$.
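The update rule above can be sketched in a few lines of Python (an illustrative sketch under our own naming and data-layout assumptions, not the authors' code):

```python
import math
import random

def sca_step(population, destination, t, max_cycles, a=2.0):
    """Apply one SCA position update to every agent, in place.

    `population` is a list of agents (lists of coordinates) and
    `destination` is the best solution P found so far.
    """
    # r1 decays linearly from a to 0, shifting the search from
    # exploration towards exploitation as the cycles progress.
    r1 = a - t * (a / max_cycles)
    for agent in population:
        for j in range(len(agent)):
            r2 = random.uniform(0.0, 2.0 * math.pi)  # movement distance
            r3 = random.uniform(0.0, 2.0)            # destination weight
            r4 = random.random()                     # sine/cosine switch
            gap = abs(r3 * destination[j] - agent[j])
            if r4 < 0.5:
                agent[j] += r1 * math.sin(r2) * gap
            else:
                agent[j] += r1 * math.cos(r2) * gap
    return population
```

After each step, the fitness of every agent would be re-evaluated and the destination replaced whenever a better agent is found.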
The values of the Population Size and Maximum Cycle Number parameters were set to 10 and 100, respectively, for each algorithm. Furthermore, the maximum and minimum pixel values were defined as $X_{\max}$ and $X_{\min}$, respectively. The remaining control parameter values used in the simulations for each algorithm are provided in Table 1. The optimal control parameter values for the CSA, TSA, AOA, GNDO, GOBC-PA, and SCA algorithms were taken from [3], [2], [4], [5], [6], and [1], respectively. For the control parameters that can take values within a certain range, simulations were carried out for different values within the relevant range, and the value at which the optimum segmentation results were obtained was defined as the optimal value. In particular, the parameter values that produced the highest segmentation performance (i.e., in terms of Se, Sp, and Acc) together with the lowest MSE values were determined to be optimal.
In this work, the metaheuristic algorithms were used to obtain the optimal cluster centers corresponding to optimal pixel values. While searching for the optimal cluster centers, the quality of each candidate set of cluster centers was evaluated using the mean squared error (MSE) function:
$MSE = \frac{1}{M} \sum_{i=1}^{M} \left( f_i - y_i \right)^2$
where $M$ is the total number of pixels, $f_i$ represents the cluster center value closest to the $i$th pixel, and $y_i$ is the value of pixel $i$. Accordingly, the fitness value of any solution $k$ can be calculated as follows:
$fit_k = \frac{1}{1 + MSE_k}$
where $MSE_k$ represents the error value produced by solution $k$.
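A minimal Python sketch of this objective (illustrative only; grayscale pixels and cluster centers are modeled as plain numbers, and the function names are ours):

```python
def mse_of_centers(centers, pixels):
    """MSE = (1/M) * sum_i (f_i - y_i)^2, where f_i is the cluster
    center closest to pixel i and y_i is the value of pixel i."""
    total = 0.0
    for y in pixels:
        f = min(centers, key=lambda c: abs(c - y))  # nearest center
        total += (f - y) ** 2
    return total / len(pixels)

def fitness(centers, pixels):
    """fit_k = 1 / (1 + MSE_k): lower error gives fitness closer to 1."""
    return 1.0 / (1.0 + mse_of_centers(centers, pixels))
```

For a two-class vessel/background segmentation, `centers` would hold two candidate gray levels; for example, `fitness([10, 200], [10, 10, 200, 200])` evaluates to 1.0 because every pixel coincides with a center.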
In order to present a fair performance comparison, analyses were carried out on both healthy and diseased retinal images taken from the DRIVE and STARE databases. The DRIVE database was established by Alonso-Montes et al. [54], and the images in this database were captured using a Canon CR5 3CCD camera under a 45° FOV with a resolution of 565 × 584 pixels. The STARE database was constructed by Hoover et al. [55] using a Topcon TRV-50 fundus camera under a 35° FOV with a resolution of 700 × 605 pixels. Each of the DRIVE and STARE databases contains 20 raw retinal images and their mask images. In both databases, half of the retinal images are healthy, while the other half present pathologies.

3. Results

In this work, recently proposed math-inspired algorithms—including the CSA, TSA, AOA, GNDO, GOBC-PA, and SCA—were implemented for clustering in order to classify the vessels from the background pixels of retinal images with high accuracy. The obtained results are presented in three groups below: segmentation performance, statistical analysis, and convergence analysis.

3.1. Segmentation Performance

The resulting retinal images from the DRIVE database after applying the CSA-, TSA-, AOA-, GNDO-, GOBC-PA-, and SCA-based segmentation processes are shown in Figure 5. It can be seen that some of the background pixels were incorrectly classified as vessel pixels. Similarly, Figure 6 shows the segmentation results for the retinal images taken from the STARE database. The results obtained for the STARE images indicate that the segmentation performance of the algorithms was similar, but slightly worse, when compared to the DRIVE images. When the results obtained for both databases are analyzed, it can be concluded that, although there were a certain number of misclassified pixels, the algorithms generally provide highly accurate segmentation results.

3.2. Statistical Analysis

Due to the non-deterministic nature of the metaheuristic algorithms, the results they produced were tested and verified in terms of the Se, Sp, Acc, and standard deviation metrics. Furthermore, the Wilcoxon rank-sum test was applied to statistically validate the obtained numerical results and to investigate the significant differences among the algorithms [56].
The Se, Sp, and Acc metrics can be defined according to the following expressions:
$Se = \frac{TP}{TP + FN}$
$Sp = \frac{TN}{TN + FP}$
$Acc = \frac{TP + TN}{TP + FN + TN + FP}$
where true positives (TP) are the correctly classified vessel pixels, false negatives (FN) are the incorrectly classified vessel pixels, true negatives (TN) are the correctly classified background pixels, and, finally, false positives (FP) are the incorrectly classified background pixels [55].
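These three definitions translate directly into code; the sketch below (a hypothetical helper, not taken from the paper) computes them from pixel-level confusion counts:

```python
def segmentation_metrics(tp, fn, tn, fp):
    """Return (Se, Sp, Acc) from pixel-level confusion counts."""
    se = tp / (tp + fn)                     # fraction of vessel pixels found
    sp = tn / (tn + fp)                     # fraction of background kept
    acc = (tp + tn) / (tp + fn + tn + fp)   # overall correct fraction
    return se, sp, acc
```

For example, with 8 of 10 vessel pixels and 85 of 90 background pixels classified correctly, Se = 0.8, Sp ≈ 0.944, and Acc = 0.93.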
Performance measures and statistical analyses were applied separately to all 40 images in the DRIVE and STARE databases in order to perform a more comprehensive and global analysis. The performances of the algorithms in terms of Se, Sp, and Acc on the DRIVE and STARE databases are given in Table 2 and Table 3, respectively, where higher Se, Sp, and Acc values correspond to better segmentation performance. From the tables, it can be seen that all of the algorithms converged to similar values in terms of Se, Sp, and Acc for almost all retinal images in the DRIVE and STARE databases.
When the mean values obtained for the DRIVE images are examined, the performance of the TSA in terms of correctly classified TP pixels seems better than that of the other algorithms due to its highest mean Se value. On the other hand, the highest mean Sp value produced by the GOBC-PA algorithm demonstrates that it is able to produce better performance in terms of correctly classified TN pixels when compared to other algorithms. Finally, the highest mean Acc value obtained for the TSA proves its superior performance in terms of the total number of image pixels that are correctly classified.
From the mean values obtained for the STARE images, it can be seen that the CSA produced the highest performance in terms of the mean Se value, which corresponds to the correctly classified TP pixels. Moreover, the highest mean Sp value obtained for the SCA shows that it produced the best results in terms of correctly classified TN pixels. Finally, the performance of the TSA in terms of correctly classified total image pixels seems better than that of the other algorithms due to its highest mean Acc value.
Another important statistical performance metric for metaheuristic algorithms is the standard deviation, which indicates the ability of an algorithm to reach similar results in each run. In particular, smaller standard deviation values demonstrate the stability and robustness of an algorithm. The standard deviation values of each algorithm obtained over 20 random runs are shown in Figure 7. It can be seen that the GOBC-PA algorithm reached the lowest standard deviation values on both the DRIVE and STARE images. It can be concluded that the GOBC-PA algorithm exhibits somewhat more stable and robust behavior when compared to the other algorithms, although the results were generally similar. It can also be seen that the performance of the TSA and AOA seems to be slightly worse than that of the other algorithms in terms of the standard deviation.
In order to prove the validity of the numerical results and to establish statistical relationships among the algorithms, the Wilcoxon rank-sum test was applied. In the statistical analyses carried out, the desired accuracy rate was defined as 95%, which corresponds to a significance level of p < 0.05. Table 4 shows the statistically significant differences (p < 0.05) observed between the algorithms. It can be expressed, for both the DRIVE and STARE databases, that the CSA, GNDO, and GOBC-PA algorithms presented significantly better performances than the other algorithms. On the other hand, it can also be expressed that the TSA for the DRIVE database and the TSA and AOA for the STARE database did not produce statistically significant results when compared to the other algorithms.
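For reference, the rank-sum statistic can be computed with the Python standard library alone; the sketch below (our own helper using the large-sample normal approximation with tie-averaged ranks, not the authors' analysis script) returns the statistic and a two-sided p-value:

```python
from statistics import NormalDist

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.

    Returns (W, p), where W is the rank sum of sample `a`.
    """
    tagged = [(v, 0) for v in a] + [(v, 1) for v in b]
    tagged.sort(key=lambda t: t[0])
    # Assign average ranks so that tied values share the same rank.
    ranks = [0.0] * len(tagged)
    i = 0
    while i < len(tagged):
        j = i
        while j < len(tagged) and tagged[j][0] == tagged[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2.0  # mean of positions i+1 .. j
        for k in range(i, j):
            ranks[k] = avg_rank
        i = j
    n1, n2 = len(a), len(b)
    w = sum(r for r, (_, grp) in zip(ranks, tagged) if grp == 0)
    mean = n1 * (n1 + n2 + 1) / 2.0
    std = (n1 * n2 * (n1 + n2 + 1) / 12.0) ** 0.5
    z = (w - mean) / std
    p = 2.0 * (1.0 - NormalDist().cdf(abs(z)))
    return w, p
```

A p-value below 0.05 then corresponds to the 95% confidence level used above.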

3.3. Convergence Analysis

Convergence speed is one of the most important metrics reflecting the performance of metaheuristic algorithms, indicating the number of cycles required for an algorithm to reach the global solution. Figure 8 demonstrates the convergence speeds obtained in terms of the mean MSE values for 20 random runs. As can be seen from the figure, the performances of the TSA and AOA in terms of convergence speed and mean squared error seem slightly worse than those of the other algorithms. On both databases, the TSA reached its global optimum at approximately cycle 40, while the AOA reached it at approximately cycle 95. In contrast, the other algorithms reached similar MSE values at close convergence rates on both the DRIVE and STARE images.
In addition to the convergence speed, the CPU time required for each algorithm to reach the global solution is another important parameter. Analyses were carried out using a computer with a 3.4 GHz Intel i7-6700 CPU, 16 GB RAM, and 64-bit Windows 10 Pro. The minimum MSE values reached by the algorithms and the related CPU times are given in Table 5 for both databases. Evaluating Figure 8 and Table 5 simultaneously, it can be seen that, although they presented a slower rate of convergence, the TSA and AOA produced better results in terms of the CPU time for both databases. Meanwhile, the GNDO and GOBC-PA algorithms produced worse results in terms of the CPU time despite their higher convergence rates.
Due to the structural differences between the metaheuristic algorithms, the computational complexity of each algorithm was also analyzed in detail. In particular, in order to compare their computational complexities, the algorithms were analyzed in terms of the Number of Function Evaluations (NFEs) metric, which indicates the number of times the objective function is evaluated during the optimization process. In addition to the NFEs for each algorithm, the Elapsed CPU time for NFEs and the Percentage of this time in the total CPU time were also calculated. The results obtained using the NFEs analyses are given in Table 6. As can be seen from the table, the relationship between the NFEs and CPU time values is approximately linear. However, by analyzing Figure 8 and Table 6 simultaneously, it can be concluded that the NFEs value does not substantially affect the minimum MSE values reached.
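The NFEs metric can be collected without modifying an algorithm by wrapping the objective function in a counter; a minimal sketch (our own illustration, not the authors' instrumentation) follows:

```python
def counting(objective):
    """Wrap an objective function so the number of function
    evaluations (NFEs) can be read off after optimization."""
    def wrapper(x):
        wrapper.nfe += 1
        return objective(x)
    wrapper.nfe = 0
    return wrapper
```

Passing the wrapped function to any of the algorithms leaves their behavior unchanged while `wrapper.nfe` accumulates the evaluation count.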
The analysis results can be summarized as follows: for both the DRIVE and STARE databases, all of the algorithms produced similar results in terms of correctly classified and misclassified pixels. The CSA stands out for its simple algorithm structure when compared to the other algorithms. Furthermore, the GNDO algorithm stands out as it does not contain any control parameters, while the GOBC-PA algorithm stands out as it contains only one control parameter. In general, all of the considered algorithms provide high segmentation performance. The Se, Sp, and Acc values obtained for the CSA, TSA, AOA, GNDO, GOBC-PA, and SCA implementations indicate that each algorithm is able to produce effective results in terms of correctly and incorrectly classified vessel and background pixels, although the performances in terms of Se seem slightly worse than the Sp- and Acc-based performances. The lowest standard deviation values produced by the GOBC-PA algorithm for both the DRIVE and STARE databases confirm that this algorithm is more stable and robust when compared to the other algorithms. On the other hand, the TSA and AOA produced the highest standard deviation values for the DRIVE and STARE databases, respectively. On the DRIVE database, the GOBC-PA algorithm reached the minimum MSE value with the highest convergence speed. However, on the STARE database, the GNDO algorithm reached the lowest error value while the GOBC-PA algorithm exhibited the best performance in terms of convergence speed. Furthermore, for both databases, the worst MSE value was obtained for the TSA, while the worst convergence speed was observed for the AOA. Finally, although it produced relatively worse results in terms of the other performance metrics, the TSA had the most effective performance in terms of CPU time.

4. Discussion

In this work, math-inspired metaheuristic algorithms—including the CSA, TSA, AOA, GNDO, GOBC-PA, and SCA—were implemented as clustering approaches for high-accuracy retinal vessel segmentation. Analyses were carried out on both healthy and diseased retinal images from the DRIVE and STARE databases. From the obtained segmentation results, it can be seen that each of the algorithms successfully distinguished the vessel and background pixels of the retinal images. The close and high Se, Sp, and Acc values produced by the algorithms prove the robustness of each algorithm in terms of their statistical performance. In addition, due to the lower standard deviation values of the CSA, GNDO, and GOBC-PA algorithms, it can be concluded that these algorithms are slightly more stable in the context of clustering-based retinal vessel segmentation. Furthermore, according to the results of the Wilcoxon rank-sum test carried out among the algorithms, it can be expressed that the CSA, GNDO, and GOBC-PA algorithms produced statistically better results compared to the TSA, AOA, and SCA. Finally, when the convergence performances of the algorithms were evaluated as a whole, it was found that, despite their higher convergence rates, the GNDO and GOBC-PA algorithms present slightly worse performance in terms of CPU time. It can additionally be expressed that the TSA presented the lowest NFEs rates in both databases compared to the other algorithms. It can consequently be expressed that the considered math-inspired metaheuristic algorithms can successfully be used in the context of biomedical image processing.
When the efficiency of math-inspired metaheuristic algorithms is compared to that of evolutionary-based, swarm-intelligence-based, and physics-based metaheuristic algorithms [52,53], it can be seen that these algorithms are able to produce similar results in terms of retinal vessel segmentation. The MSE, CPU time, and NFEs values obtained using the considered math-inspired metaheuristic algorithms reveal that these algorithms perform well in terms of convergence speed. On the other hand, the Se, Sp, Acc, standard deviation, and Wilcoxon rank-sum test results obtained demonstrate the good statistical performance and robustness of these algorithms. Consequently, it can be expressed that math-inspired metaheuristic algorithms are capable of producing similar performances to other algorithms in the considered context, despite their simpler algorithm structures and lower number of control parameters.

Author Contributions

Conceptualization, M.B.Ç.; methodology, M.B.Ç. and S.A.; software, M.B.Ç.; validation, M.B.Ç. and S.A.; formal analysis, M.B.Ç. and S.A.; investigation, M.B.Ç. and S.A.; writing—original draft preparation, M.B.Ç.; writing—review and editing, M.B.Ç. and S.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors have no relevant financial or non-financial interests to disclose and declare no conflicts of interest.

References

  1. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  2. Layeb, A. Tangent search algorithm for solving optimization problems. Neural Comput. Appl. 2022, 34, 8853–8884. [Google Scholar] [CrossRef]
  3. Qais, M.H.; Hasanien, H.M.; Turky, R.A.; Alghuwainem, S.; Tostado-Véliz, M.; Jurado, F. Circle search algorithm: A geometry-based metaheuristic optimization algorithm. Mathematics 2022, 10, 1626. [Google Scholar] [CrossRef]
  4. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  5. Zhang, Y.; Jin, Z.; Mirjalili, S. Generalized normal distribution optimization and its applications in parameter extraction of photovoltaic models. Energy Convers. Manag. 2020, 224, 113301. [Google Scholar] [CrossRef]
  6. Pence, I.; Cesmeli, M.S.; Senel, F.A.; Cetisli, B. A new unconstrained global optimization method based on clustering and parabolic approximation. Expert Syst. Appl. 2016, 55, 493–507. [Google Scholar] [CrossRef]
  7. Gonidakis, D.; Vlachos, A. A new sine cosine algorithm for economic and emission dispatch problems with price penalty factors. J. Inf. Optim. Sci. 2019, 40, 679–697. [Google Scholar] [CrossRef]
  8. Abdelsalam, A.A. Optimal distributed energy resources allocation for enriching reliability and economic benefits using sine–cosine algorithm. Technol. Econ. Smart Grids Sustain. Energy 2020, 5, 8. [Google Scholar] [CrossRef]
  9. Babaei, F.; Safari, A. SCA based fractional-order PID controller considering delayed EV aggregators. J. Oper. Autom. Power Eng. 2020, 8, 75–85. [Google Scholar] [CrossRef]
  10. Mishra, S.; Gupta, S.; Yadav, A. Design and application of controller based on sine–cosine algorithm for load frequency control of power system. In Intelligent Systems Design and Applications; Springer: Cham, Switzerland, 2020; pp. 301–311. [Google Scholar] [CrossRef]
  11. Algabalawy, M.A.; Abdelaziz, A.Y.; Mekhamer, S.F.; Aleem, S.H.A. Considerations on optimal design of hybrid power generation systems using whale and sine cosine optimization algorithms. J. Electr. Syst. Inf. Technol. 2018, 5, 312–325. [Google Scholar] [CrossRef]
  12. Bhadoria, A.; Marwaha, S.; Kamboj, V.K. An optimum forceful generation scheduling and unit commitment of thermal power system using sine cosine algorithm. Neural Comput. Appl. 2019, 32, 2785–2814. [Google Scholar] [CrossRef]
  13. Mirjalili, S.M.; Mirjalili, S.Z.; Saremi, S.; Mirjalili, S. Sine Cosine Algorithm: Theory, Literature Review, and Application in Designing Bend Photonic Crystal Waveguides. In Nature-Inspired Optimizers; Springer: Cham, Switzerland, 2020; pp. 201–217. [Google Scholar]
  14. Das, S.; Bhattacharya, A.; Chakraborty, A.K. Solution of short-term hydrothermal scheduling using sine cosine algorithm. Soft Comput. 2018, 22, 6409–6427. [Google Scholar] [CrossRef]
  15. Bhookya, J.; Jatoth, R.K. Optimal FOPID/PID controller parameters tuning for the AVR system based on sine-cosine-algorithm. Evol. Intell. 2019, 12, 725–733. [Google Scholar] [CrossRef]
  16. Hekimoğlu, B. Sine–cosine algorithm-based optimization for automatic voltage regulator system. Trans. Inst. Meas. Control 2019, 41, 1761–1771. [Google Scholar] [CrossRef]
  17. Gorripotu, T.S.; Ramana, P.; Sahu, R.K.; Panda, S. Sine cosine optimization based proportional derivative-proportional integral derivative controller for frequency control of hybrid power system. In Computational Intelligence in Data Mining; Springer: Singapore, 2020; pp. 789–797. [Google Scholar] [CrossRef]
  18. Belazzoug, M.; Touahria, M.; Nouioua, F.; Brahimi, M. An improved sine cosine algorithm to select features for text categorization. J. King Saud Univ. Comput. Inf. Sci. 2020, 32, 454–464. [Google Scholar] [CrossRef]
  19. Kaur, G.; Dhillon, J.S. Economic power generation scheduling exploiting hill-climbed sine–cosine algorithm. Appl. Soft Comput. 2021, 111, 107690. [Google Scholar] [CrossRef]
  20. Fan, F.; Chu, S.C.; Pan, J.S.; Yang, Q.; Zhao, H. Parallel sine cosine algorithm for the dynamic deployment in wireless sensor networks. J. Internet Technol. 2021, 22, 499–512. [Google Scholar] [CrossRef]
  21. Mookiah, S.; Parasuraman, K.; Chandar, S.K. Color image segmentation based on improved sine cosine optimization algorithm. Soft Comput. 2022, 26, 13193–13203. [Google Scholar] [CrossRef]
  22. Karmouni, H.; Chouiekh, M.; Motahhir, S.; Qjidaa, H.; Jamil, M.O.; Sayyouri, M. Optimization and implementation of a photovoltaic pumping system using the sine–cosine algorithm. Eng. Appl. Artif. Intell. 2022, 114, 105104. [Google Scholar] [CrossRef]
  23. Jafari, M.; Chaleshtari, M.H.B.; Khoramishad, H.; Altenbach, H. Minimization of thermal stress in perforated composite plate using metaheuristic algorithms WOA, SCA and GA. Compos. Struct. 2023, 304, 116403. [Google Scholar] [CrossRef]
  24. Chen, D.; Fan, K.; Wang, J.; Lu, H.; Jin, J.; Liu, C. Two-dimensional power allocation scheme for NOMA-based underwater visible light communication systems. Appl. Opt. 2023, 62, 211–216. [Google Scholar] [CrossRef] [PubMed]
  25. Agushaka, J.O.; Ezugwu, A.E. Advanced arithmetic optimization algorithm for solving mechanical engineering design problems. PLoS ONE 2021, 16, e0255703. [Google Scholar] [CrossRef]
  26. Abualigah, L.; Diabat, A. Improved multi-core arithmetic optimization algorithm-based ensemble mutation for multidisciplinary applications. Intell. Manuf. 2022, 34, 1833–1874. [Google Scholar] [CrossRef]
  27. Kaveh, A.; Hamedani, K.B. Improved arithmetic optimization algorithm and its application to discrete structural optimization. Structures 2022, 35, 748–764. [Google Scholar] [CrossRef]
  28. Khodadadi, N.; Snasel, V.; Mirjalili, S. Dynamic arithmetic optimization algorithm for truss optimization under natural frequency constraints. IEEE Access 2022, 10, 16188–16208. [Google Scholar] [CrossRef]
  29. Gürses, D.; Bureerat, S.; Sait, S.M.; Yıldız, A.R. Comparison of the arithmetic optimization algorithm, the slime mold optimization algorithm, the marine predators algorithm, the salp swarm algorithm for real-world engineering applications. Mater. Test. 2021, 63, 448–452. [Google Scholar] [CrossRef]
  30. Hu, G.; Zhong, J.; Du, B.; Wei, G. An enhanced hybrid arithmetic optimization algorithm for engineering applications. Comput. Methods Appl. Mech. Eng. 2022, 394, 114901. [Google Scholar] [CrossRef]
  31. Issa, M. Enhanced arithmetic optimization algorithm for parameter estimation of PID controller. Arab. J. Sci. Eng. 2023, 48, 2191–2205. [Google Scholar] [CrossRef]
  32. Abualigah, L.; Almotairi, K.H.; Al-qaness, M.A.; Ewees, A.A.; Yousri, D.; Abd Elaziz, M.; Nadimi-Shahraki, M.H. Efficient text document clustering approach using multi-search Arithmetic Optimization Algorithm. Knowl.-Based Syst. 2022, 248, 108833. [Google Scholar] [CrossRef]
  33. Rajagopal, R.; Karthick, R.; Meenalochini, P.; Kalaichelvi, T. Deep Convolutional Spiking Neural Network optimized with Arithmetic optimization algorithm for lung disease detection using chest X-ray images. Biomed. Signal Process. Control 2023, 79, 104197. [Google Scholar] [CrossRef]
  34. Antarasee, P.; Premrudeepreechacharn, S.; Siritaratiwat, A.; Khunkitti, S. Optimal Design of Electric Vehicle Fast-Charging Station’s Structure Using Metaheuristic Algorithms. Sustainability 2022, 15, 771. [Google Scholar] [CrossRef]
  35. Liu, Z.; Jiang, P.; Wang, J.; Zhang, L. Ensemble system for short term carbon dioxide emissions forecasting based on multi-objective tangent search algorithm. J. Environ. Manag. 2022, 302, 113951. [Google Scholar] [CrossRef]
  36. Akyol, S. A new hybrid method based on Aquila optimizer and tangent search algorithm for global optimization. J. Ambient. Intell. Humaniz. Comput. 2023, 14, 8045–8065. [Google Scholar] [CrossRef] [PubMed]
  37. Pachung, P.; Bansal, J.C. An improved tangent search algorithm. MethodsX 2022, 9, 101839. [Google Scholar] [CrossRef] [PubMed]
  38. Abdel-Basset, M.; Mohamed, R.; El-Fergany, A.; Abouhawwash, M.; Askar, S.S. Parameters identification of PV triple-diode model using improved generalized normal distribution algorithm. Mathematics 2021, 9, 995. [Google Scholar] [CrossRef]
  39. Khodadadi, N.; Mirjalili, S. Truss optimization with natural frequency constraints using generalized normal distribution optimization. Appl. Intell. 2022, 52, 10384–10397. [Google Scholar] [CrossRef]
  40. Chankaya, M.; Hussain, I.; Ahmad, A.; Malik, H.; García Márquez, F.P. Generalized normal distribution algorithm-based control of 3-phase 4-wire grid-tied PV-hybrid energy storage system. Energies 2021, 14, 4355. [Google Scholar] [CrossRef]
  41. Pençe, İ.; Çeşmeli, M.Ş.; Kovacı, R. Determination of the Osmotic Dehydration Parameters of Mushrooms using Constrained Optimization. Sci. J. Mehmet Akif Ersoy Univ. 2019, 2, 77–83. [Google Scholar]
  42. Pençe, İ.; Çeşmeli, M.Ş.; Kovacı, R. A New Stochastic Search Method for Filled Function. El-Cezeri J. Sci. Eng. 2020, 7, 111–123. [Google Scholar] [CrossRef]
  43. Kishore, D.J.K.; Mohamed, M.R.; Sudhakar, K.; Peddakapu, K. Application of circle search algorithm for solar PV maximum power point tracking under complex partial shading conditions. Appl. Soft Comput. 2024, 165, 112030. [Google Scholar] [CrossRef]
  44. Ghazi, G.A.; Ammar Hasanien, H.M.; Ko, W.; Lee, S.M.; Turky, R.; Veliz, M.T.; Jurado, F. Circle search algorithm-based super twisting sliding mode control for MPPT of different commercial PV modules. IEEE Access 2024, 12, 33109–33128. [Google Scholar] [CrossRef]
  45. Saeed, A.; Soomro, T.A.; Jandan, N.A.; Ali, A.; Irfan, M.; Rahman, S.; Aldhabaan, W.A.; Khairallah, A.S.; Abuallut, I. Impact of retinal vessel image coherence on retinal blood vessel segmentation. Electronics 2023, 12, 396. [Google Scholar] [CrossRef]
  46. Xian, Y.; Zhao, G.; Wang, C.; Chen, X.; Dai, Y. A novel hybrid retinal blood vessel segmentation algorithm for enlarging the measuring range of dual-wavelength retinal oximetry. Photonics 2023, 10, 722. [Google Scholar] [CrossRef]
  47. Wang, Y.; Li, H. A novel single-sample retinal vessel segmentation method based on grey relational analysis. Sensors 2024, 24, 4326. [Google Scholar] [CrossRef]
  48. Jiang, L.; Li, W.; Xiong, Z.; Yuan, G.; Huang, C.; Xu, W.; Zhou, L.; Qu, C.; Wang, Z.; Tong, Y. Retinal vessel segmentation based on self-attention feature selection. Electronics 2024, 13, 3514. [Google Scholar] [CrossRef]
  49. Wang, Y.; Wu, S.; Jia, J. PAM-UNet: Enhanced retinal vessel segmentation using a novel plenary attention mechanism. Appl. Sci. 2024, 14, 5382. [Google Scholar] [CrossRef]
  50. Ramesh, R.; Sathiamoorthy, S. Diabetic retinopathy classification using improved metaheuristics with deep residual network on fundus imaging. Multimed. Tools Appl. 2024, 1–27. [Google Scholar] [CrossRef]
  51. Sau, P.C.; Gupta, M.; Bansal, A. Optimized ResUNet++-enabled blood vessel segmentation for retinal fundus image based on hybrid meta-heuristic improvement. Int. J. Image Graph. 2024, 24, 2450033. [Google Scholar] [CrossRef]
  52. Çetinkaya, M.B.; Duran, H. Performance comparison of most recently proposed evolutionary, swarm intelligence, and physics-based metaheuristic algorithms for retinal vessel segmentation. Math. Probl. Eng. 2022, 2022, 4639208. [Google Scholar] [CrossRef]
  53. Çetinkaya, M.B.; Duran, H. A detailed and comparative work for retinal vessel segmentation based on the most effective heuristic approaches. Biomed. Eng. Biomed. Tech. 2021, 66, 181–200. [Google Scholar] [CrossRef]
  54. Alonso-Montes, C.; Vilariño, D.L.; Dudek, P.; Penedo, M.G. Fast retinal vessel tree extraction: A pixel parallel approach. Int. J. Circuit Theory Appl. 2008, 36, 641–651. [Google Scholar] [CrossRef]
  55. Hoover, A.D.; Kouznetsova, V.; Goldbaum, M. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans. Med. Imaging 2000, 19, 203–210. [Google Scholar] [CrossRef] [PubMed]
  56. Arcuri, A.; Briand, L. A hitchhiker’s guide to statistical tests for assessing randomized algorithms in software engineering. Softw. Test. Verif. Reliab. 2014, 24, 219–250. [Google Scholar] [CrossRef]
Figure 1. Retinal images from the DRIVE database: (a) healthy retinal image; (b) diseased retinal image.
Figure 2. Retinal images from the STARE database: (a) healthy retinal image; (b) diseased retinal image.
Figure 3. Enhanced retinal images from the DRIVE database: (a,c) are the G layers obtained as a result of band selection; (b,d) are the corresponding retinal images obtained as a result of the bottom-hat filtering and brightness correction.
Figure 4. Enhanced retinal images from the STARE database: (a,c) are the G layers obtained as a result of band selection; (b,d) are the corresponding retinal images obtained as a result of the bottom-hat filtering and brightness correction.
Figure 5. Results after applying CSA-, TSA-, AOA-, GNDO-, GOBC-PA-, and SCA-based segmentation to the images in Figure 1a,b.
Figure 6. Results after applying CSA-, TSA-, AOA-, GNDO-, GOBC-PA-, and SCA-based segmentation to the images in Figure 2a,b.
Figure 7. Standard deviation values obtained for the algorithms.
Figure 8. Convergence speeds of the algorithms. (a) Diseased retinal image taken from the DRIVE database (see Figure 1b); (b) healthy retinal image taken from STARE database (see Figure 2a).
Table 1. The control parameter values for each algorithm.
Algorithm | Control Parameters
CSA
  • Constant parameter, $c = 0.8$
  • Variable parameter, $w \in [\pi, 0]$
TSA
  • Switching parameter, $P_{switch} = 0.3$
  • Probability of the escape procedure, $P_{esc} = 0.8$
AOA
  • Sensitive parameter, $\alpha = 5$
  • Search process adjustment parameter, $\mu = 0.5$
GOBC-PA
  • Cluster size, $S = 20$
SCA
  • Random parameter, $r_2 \in [0, 2\pi]$
  • Random parameter, $r_3 \in [0, 2]$
  • Switching parameter, $r_4 \in [0, 1]$
GNDO
  • No additional control parameters
Table 2. Se, Sp, and Acc values obtained for 20 retinal images from the DRIVE database.
SensitivitySpecificityAccuracy
ImageCSATSAAOA GNDOGOBC-PA SCACSATSAAOA GNDOGOBC-PA SCACSATSAAOA GNDOGOBC-PASCA
10.83750.87450.83750.83750.83750.83750.97490.97060.97490.97490.97490.97490.96440.96400.96440.96440.96440.9644
20.91020.91020.93870.93870.93870.93870.96980.96980.96550.96550.96550.96550.96500.96500.96350.96350.96350.9635
30.90920.90920.74360.67150.67150.67150.94870.94870.96940.97560.97560.97560.94700.94700.95090.94520.94520.9452
40.94640.92780.94640.94640.94640.94640.96150.96480.96150.96150.96150.96150.96060.96260.96060.96060.96060.9606
50.78670.88420.88420.78670.78670.78670.97770.97300.97300.97770.97770.97770.96290.96740.96740.96290.96290.9629
60.79310.86310.69630.79310.69630.69630.96610.97610.97180.96610.97180.97180.95260.97260.94500.95260.94500.9450
70.73800.79110.73800.79110.66900.73800.97740.97400.97740.97400.98080.97740.96100.96310.96100.96310.95570.9610
80.45760.66900.55860.55860.45760.45760.98690.97920.98330.98330.98690.98690.92320.95730.94420.94420.92320.9232
90.69730.69730.69730.58890.58890.69730.97230.97230.97230.97730.97730.97230.95050.95050.95050.93750.93750.9505
100.89940.86270.86270.80890.80890.80890.97050.97420.97420.97800.97800.97800.96710.96810.96810.96730.96730.9673
110.95910.92640.87950.87950.87950.87950.95580.96190.96820.96820.96820.96820.95590.96010.96290.96290.96290.9629
120.66640.66640.76630.54880.54880.66640.98000.98000.97550.98420.98420.98000.95070.95070.95980.93150.93150.9507
130.89560.92730.89560.84610.84610.89560.95470.94980.95470.95990.95990.95470.95080.94850.95080.95130.95130.9508
140.71490.93090.71490.61560.61560.71490.98230.96320.98230.98570.98570.98230.96040.96180.96040.94850.94850.9604
150.78160.78160.78160.72570.72570.72570.97140.97140.97140.97460.97460.97460.95980.95980.95710.95710.95710.9571
160.87760.94930.87760.87760.87760.92520.97340.96550.97340.97340.97340.96930.96680.96460.96680.96680.96680.9667
170.66540.74880.74880.66540.57180.66540.98070.97700.97700.98070.98450.98070.95400.96100.96100.95400.94120.9540
180.80250.87270.80250.80250.80250.80250.97100.96620.97100.97100.97100.97100.95730.95970.95730.95730.95730.9573
190.88100.92990.92990.88100.88100.88100.96590.96070.96070.96590.96590.96590.95930.95860.95860.95930.95930.9593
200.69610.88700.80300.69610.58820.69610.97140.95910.96500.97140.97700.97140.94500.95470.94500.94500.92910.9450
Mean0.79580.85050.80510.76300.73690.77160.97060.96790.97110.97340.97470.97300.95570.95990.95780.95470.95150.9554
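The scores in Tables 2 and 3 are the standard pixel-wise measures Se = TP/(TP + FN), Sp = TN/(TN + FP), and Acc = (TP + TN)/N, computed between each segmented mask and its ground truth. A minimal NumPy sketch (the function name is ours, not from the paper's code):

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray):
    """Pixel-wise sensitivity, specificity, and accuracy for binary masks.

    pred, truth: arrays of the same shape; nonzero = vessel pixel.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)        # vessel pixels correctly found
    tn = np.sum(~pred & ~truth)      # background pixels correctly kept
    fp = np.sum(pred & ~truth)       # background labelled as vessel
    fn = np.sum(~pred & truth)       # vessel pixels missed
    se = tp / (tp + fn)              # sensitivity (recall on vessels)
    sp = tn / (tn + fp)              # specificity (recall on background)
    acc = (tp + tn) / pred.size      # overall pixel accuracy
    return se, sp, acc
```

Because retinal images are dominated by background pixels, Sp and Acc are typically high for every method, and Se is the measure that separates the algorithms most clearly.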
Table 3. Se, Sp, and Acc values obtained for 20 retinal images from the STARE database.

      |               Sensitivity                 |               Specificity                 |                Accuracy
Image | CSA    TSA    AOA    GNDO   GOBC-PA SCA   | CSA    TSA    AOA    GNDO   GOBC-PA SCA   | CSA    TSA    AOA    GNDO   GOBC-PA SCA
1     | 0.5165 0.5668 0.5165 0.5165 0.5165 0.5165 | 0.9747 0.9700 0.9747 0.9747 0.9747 0.9747 | 0.9298 0.9374 0.9298 0.9298 0.9298 0.9298
2     | 0.5319 0.6246 0.4807 0.4807 0.4807 0.4807 | 0.9729 0.9656 0.9767 0.9767 0.9767 0.9767 | 0.9411 0.9488 0.9331 0.9331 0.9331 0.9331
3     | 0.7501 0.6057 0.4885 0.4885 0.4885 0.4885 | 0.9605 0.9696 0.9755 0.9755 0.9755 0.9755 | 0.9500 0.9414 0.9225 0.9225 0.9225 0.9225
4     | 0.4191 0.5752 0.5752 0.4191 0.4191 0.2740 | 0.9625 0.9523 0.9523 0.9625 0.9625 0.9735 | 0.8924 0.9244 0.9244 0.8924 0.8924 0.8033
5     | 0.7685 0.7261 0.7487 0.7261 0.7261 0.7038 | 0.9561 0.9617 0.9588 0.9617 0.9617 0.9649 | 0.9453 0.9454 0.9456 0.9454 0.9454 0.9451
6     | 0.7269 0.7741 0.7528 0.7269 0.7269 0.7269 | 0.9664 0.9602 0.9633 0.9664 0.9664 0.9664 | 0.9497 0.9495 0.9500 0.9497 0.9497 0.9497
7     | 0.8059 0.7076 0.7300 0.7076 0.7076 0.7300 | 0.9624 0.9740 0.9718 0.9740 0.9740 0.9718 | 0.9536 0.9526 0.9537 0.9526 0.9526 0.9537
8     | 0.7304 0.7546 0.6654 0.6654 0.6654 0.7012 | 0.9723 0.9686 0.9790 0.9790 0.9790 0.9757 | 0.9560 0.9557 0.9527 0.9527 0.9527 0.9551
9     | 0.7328 0.7604 0.7328 0.7604 0.7604 0.7843 | 0.9762 0.9738 0.9762 0.9738 0.9738 0.9712 | 0.9565 0.9578 0.9565 0.9578 0.9578 0.9583
10    | 0.5982 0.6457 0.5982 0.5982 0.5982 0.5982 | 0.9718 0.9673 0.9718 0.9718 0.9718 0.9718 | 0.9355 0.9406 0.9355 0.9355 0.9355 0.9355
11    | 0.7971 0.7005 0.7701 0.7396 0.7396 0.7396 | 0.9705 0.9796 0.9734 0.9765 0.9765 0.9765 | 0.9612 0.9589 0.9613 0.9609 0.9609 0.9609
12    | 0.8294 0.8080 0.7899 0.7899 0.7899 0.7899 | 0.9504 0.9578 0.9630 0.9630 0.9630 0.9630 | 0.9443 0.9487 0.9511 0.9511 0.9511 0.9511
13    | 0.6632 0.6261 0.6465 0.6632 0.6632 0.6465 | 0.9630 0.9691 0.9659 0.9630 0.9630 0.9659 | 0.9421 0.9404 0.9416 0.9421 0.9421 0.9416
14    | 0.6804 0.7087 0.6504 0.6804 0.6504 0.6804 | 0.9756 0.9728 0.9783 0.9756 0.9783 0.9756 | 0.9535 0.9549 0.9513 0.9535 0.9513 0.9535
15    | 0.7380 0.6943 0.5867 0.5867 0.5867 0.5867 | 0.9667 0.9705 0.9777 0.9777 0.9777 0.9777 | 0.9527 0.9511 0.9403 0.9403 0.9403 0.9403
16    | 0.7128 0.7128 0.7128 0.6490 0.6490 0.6490 | 0.9686 0.9686 0.9686 0.9745 0.9745 0.9745 | 0.9499 0.9499 0.9499 0.9454 0.9454 0.9454
17    | 0.7600 0.7280 0.7600 0.7280 0.7280 0.6944 | 0.9727 0.9755 0.9727 0.9755 0.9755 0.9782 | 0.9596 0.9586 0.9596 0.9586 0.9586 0.9568
18    | 0.8184 0.6601 0.5614 0.6601 0.6601 0.6601 | 0.9452 0.9563 0.9619 0.9563 0.9563 0.9563 | 0.9418 0.9411 0.9334 0.9411 0.9411 0.9411
19    | 0.5350 0.5350 0.5350 0.5350 0.4334 0.4334 | 0.9399 0.9399 0.9399 0.9399 0.9456 0.9456 | 0.9131 0.9131 0.9131 0.9131 0.8954 0.8954
20    | 0.5916 0.6589 0.5916 0.5916 0.5916 0.5916 | 0.9595 0.9521 0.9595 0.9595 0.9595 0.9595 | 0.9265 0.9319 0.9265 0.9265 0.9265 0.9265
Mean  | 0.6853 0.6787 0.6447 0.6356 0.6291 0.6238 | 0.9644 0.9653 0.9681 0.9689 0.9693 0.9697 | 0.9427 0.9451 0.9416 0.9402 0.9392 0.9349
Table 4. Wilcoxon rank-sum test results obtained for the algorithms (p < 0.05).

DRIVE (Figure 1b)
Algorithm | Better than (p-value)
TSA       | -
AOA       | TSA (3.50 × 10⁻³)
SCA       | TSA (2.4621 × 10⁻⁴)
CSA       | TSA (2.2337 × 10⁻⁵), AOA (4.1790 × 10⁻⁴), SCA (2.2337 × 10⁻⁵)
GNDO      | TSA (1.8307 × 10⁻⁵), AOA (3.6315 × 10⁻⁴), SCA (1.8307 × 10⁻⁵)
GOBC-PA   | TSA (1.0269 × 10⁻⁵), AOA (2.4155 × 10⁻⁴), SCA (1.0269 × 10⁻⁵)

STARE (Figure 2a)
Algorithm | Better than (p-value)
TSA       | -
AOA       | -
SCA       | TSA (1.2335 × 10⁻⁴), AOA (1.70 × 10⁻³)
CSA       | TSA (2.2337 × 10⁻⁵), AOA (4.1790 × 10⁻⁴), SCA (2.2337 × 10⁻⁵)
GNDO      | TSA (1.4191 × 10⁻⁵), AOA (3.0345 × 10⁻⁴), SCA (1.4191 × 10⁻⁵)
GOBC-PA   | TSA (1.0269 × 10⁻⁵), AOA (2.4155 × 10⁻⁴), SCA (1.0269 × 10⁻⁵)
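The entries in Table 4 come from pairwise Wilcoxon rank-sum tests at the 0.05 level, where "X better than Y" indicates that X's error distribution over independent runs is significantly lower. Such a comparison can be sketched with SciPy's `ranksums`; the per-run MSE samples below are synthetic placeholders, not the paper's run data:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Placeholder per-run MSE samples for two algorithms (30 independent runs each);
# means loosely follow Table 5's scale, purely for illustration.
mse_a = rng.normal(0.656, 0.002, size=30)   # "algorithm A" runs
mse_b = rng.normal(0.708, 0.004, size=30)   # "algorithm B" runs

stat, p_value = ranksums(mse_a, mse_b)      # two-sided rank-sum test
if p_value < 0.05 and np.median(mse_a) < np.median(mse_b):
    print(f"algorithm A significantly better (p = {p_value:.3g})")
```

The rank-sum test is distribution-free, which suits metaheuristic run results: their error samples are generally not normally distributed, so a t-test's assumptions would not hold.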
Table 5. Performance comparison of the CSA, TSA, AOA, GNDO, GOBC-PA, and SCA implementations on the DRIVE and STARE databases.

Algorithm | Metric       | DRIVE (Figure 1b) | STARE (Figure 2a)
CSA       | Minimum MSE  | 0.6631            | 0.5953
          | CPU time (s) | 2.8103            | 3.4689
TSA       | Minimum MSE  | 0.7077            | 0.6189
          | CPU time (s) | 0.4152            | 0.4869
AOA       | Minimum MSE  | 0.6803            | 0.6138
          | CPU time (s) | 2.6651            | 3.4248
GNDO      | Minimum MSE  | 0.6576            | 0.5918
          | CPU time (s) | 5.2565            | 6.6776
GOBC-PA   | Minimum MSE  | 0.6562            | 0.5920
          | CPU time (s) | 6.4216            | 8.3765
SCA       | Minimum MSE  | 0.6687            | 0.5996
          | CPU time (s) | 2.6980            | 3.4260
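CPU-time figures such as those in Table 5 are typically obtained by wrapping one complete segmentation run in a process-time clock; a minimal pattern (the `run_segmentation` function here is a stand-in workload, not the paper's optimizer):

```python
import time

def run_segmentation() -> float:
    """Stand-in workload representing one clustering-based segmentation run."""
    s = 0.0
    for i in range(100_000):
        s += i ** 0.5
    return s

t0 = time.process_time()          # CPU time: excludes sleep and I/O wait
best = run_segmentation()
cpu_seconds = time.process_time() - t0
print(f"CPU time: {cpu_seconds:.4f} s")
```

Using `time.process_time()` rather than wall-clock time keeps the measurement insensitive to other processes running on the benchmark machine, which matters when comparing algorithms whose runtimes differ by an order of magnitude (e.g., TSA vs. GOBC-PA in Table 5).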
Table 6. Performance comparison in terms of the NFEs convergence analysis.

                         Elapsed time for NFEs (seconds)
Algorithm | Total NFEs | DRIVE (Figure 1b)                   | STARE (Figure 2a)
CSA       | 1010       | 2.1720 s (77.28% of total CPU time) | 2.6960 s (77.72% of total CPU time)
TSA       | 110        | 0.2820 s (67.92% of total CPU time) | 0.3440 s (70.64% of total CPU time)
AOA       | 1010       | 2.0740 s (77.82% of total CPU time) | 2.6720 s (78.02% of total CPU time)
GNDO      | 2000       | 3.9220 s (74.61% of total CPU time) | 5.1320 s (76.85% of total CPU time)
GOBC-PA   | 2710       | 5.0150 s (78.09% of total CPU time) | 6.5470 s (78.16% of total CPU time)
SCA       | 1010       | 2.0750 s (76.91% of total CPU time) | 2.6120 s (77.20% of total CPU time)
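Each percentage in Table 6 is simply the time spent on objective-function evaluations divided by the run's total CPU time from Table 5. For example, for CSA on the DRIVE image:

```python
# NFE time share for CSA on the DRIVE image (values from Tables 5 and 6)
nfe_time = 2.1720          # seconds spent in objective-function evaluations
total_cpu_time = 2.8103    # total CPU time of the run
share = 100 * nfe_time / total_cpu_time
print(f"{share:.1f}% of total CPU time")   # ≈ 77.3% (Table 6 lists 77.28%)
```

That the share stays near 70–78% for every algorithm indicates the objective-function evaluations, not the algorithms' own update rules, dominate the cost, so reducing the total NFEs (as TSA does) is the most direct route to faster runs.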
Çetinkaya, M.B.; Adige, S. Retinal Vessel Segmentation Using Math-Inspired Metaheuristic Algorithms. Appl. Sci. 2025, 15, 5693. https://doi.org/10.3390/app15105693