1. Introduction
Ocular abnormalities arise in many eye diseases, such as retinal vascular disorders and diabetic retinopathy (DR). These diseases are characterized by geometric changes in the blood vessels of the eye [1,2]. Among eye complications, serious diseases such as DR are the leading causes of blindness, especially in the working-age population [3,4]. According to World Health Organization estimates, the number of people with diabetes increased from 108 million in 1980 to 422 million in 2014 [5,6], and prevalence is rising faster in low- and middle-income countries than in high-income countries. Diagnosing vascular changes and tracking their progression makes early treatment feasible and offers an inexpensive treatment alternative. As a result, reconstructing a distinct vessel network from retinal images helps either quantify the severity of the disease or evaluate the impact of routine eye treatment [6]. The analysis is performed on a retinal image [7,8], captured with digital fundus photography using an optical technique known as fundoscopy. The fundus camera works in several modes, of which the two standard ones are angiography mode and color fundus mode. Angiography mode, known as fundus fluorescein angiography (FFA), is a standard method of capturing retinal images; however, FFA is invasive and is not recommended by doctors or medical experts because the injected fluorescein dye can cause complications for patients. Color fundus mode produces the color retinal image; it is non-invasive but suffers from varying contrast and noise, as shown in Figure 1.
A fundus camera is a sophisticated lens system that captures a magnified view of the whole retina, including the optic disc, macula, and posterior pole. The primary goal of examining these images is to spot any abnormalities or alterations. Even for experienced physicians, however, this manual procedure takes time [7]. Because human observers need 1–2 days to provide their feedback, any delay in results delays treatment, causes loss of examination follow-up, and leads to communication errors between clinical staff and the patient [12,13].
Manual segmentation of retinal blood vessels is performed by a qualified ophthalmologist, who separates the vessels from their background for subsequent clinical evaluation [14,15,16]. However, manual segmentation is time-consuming and error-prone. Computerized segmentation methods have made good progress using image processing, computer vision, machine learning, and pattern recognition [17,18]. The main task of a computerized method is to take a color fundus photograph as input and produce a segmented binary image with realistic clinical potential. Effective approaches can improve accuracy and sensitivity while reducing the time spent manually analyzing retinal images, making automated analysis of eye diseases a viable tool for large-scale screening [19,20].
The network of retinal vessels is known as the vascular network; it consists of arteries and veins.
Figure 2 shows how the vessels resemble trees with roots and branches. The vessels have a tubular shape with widths and orientations that change progressively. Because of these variations, the vessels have low and variable contrast, making them difficult to see. Pre-processing steps are therefore necessary to enhance the retinal vessels and make them coherent before segmentation. The main challenges of the segmentation process are [7]:
The presence of the central light reflex of the vessel.
Uneven background illumination.
False vessel detections near the edge of the optic disc.
Thin vessels with little contrast.
Bifurcations, crossing areas, and the fusion of closely parallel vessels.
The emergence of pathologies, as shown in Figure 2, such as microaneurysms (MA), cotton-wool patches, light and dark lesions, and exudates.
This research work aims to analyze the effect of pre-processing steps on the accurate segmentation of retinal blood vessels. To study this effect, we apply the pre-processing stages to existing techniques and the post-processing steps to our proposed module. Our pre-processing steps for the retinal fundus image comprise non-uniform background removal, contrast enhancement (morphological techniques versus homomorphic filtering), and Principal Component Analysis (PCA) to obtain a well-contrasted grayscale image. The post-processing steps comprise different filters and a double-threshold technique to achieve a well-segmented image.
Low and varying contrast and uneven illumination are handled by increasing the contrast level of each channel of the retinal color fundus image and then converting to grayscale to obtain a well-contrasted image; the detailed process is explained in the methodology section. The first step of the proposed method eliminates background noise and irregular lighting (uneven illumination). We tested two approaches, one based on morphological techniques and the other on homomorphic filtering, and selected the better one by comparing contrast levels through histograms. The second step produces a well-contrasted grayscale image in which tiny vessels are visible and noise is suppressed; we used the traditional PCA technique to achieve this. Finally, our post-processing module produces the well-segmented image; it is based on making the vessels coherent with a second-order detector and a diffusion filter, followed by a binary double-threshold method. This article presents four main contributions:
Implementation of new pre-processing steps that can be combined with commonly used post-processing methods to give improved performance.
Contrast analysis for the observation of small vessels, which leads to improved segmentation and helps diagnose the severity of disease.
The pre-processing steps improve the performance of existing methods based on conventional techniques.
The pre-processing steps may also improve the learning process of methods based on machine learning techniques.
2. Related Work
The distribution of blood vessels in the fundus of the retinal image is multidirectional, which makes them difficult to extract accurately. Several filter-based methods have been presented in recent years to increase the visibility of retinal blood vessels and obtain accurate segmentation [21]. Lathen et al. [22] implemented an improved local phase-based filter for proper vessel enhancement; their method is an intensity filtering method for the segmentation of retinal vessels.
Many methods have been implemented for the detection of retinal vessels [5,23]. They divide into two classes: supervised and unsupervised retinal vessel detection methods [24]. Supervised methods require user interaction and labeled samples to train vessel and non-vessel pixel classifiers [25]. The most widely used classifiers are Gaussian Mixture Models (GMM) [20,26], the K-Nearest Neighbors classifier [25], Artificial Neural Network (ANN) classifiers [27], and Support Vector Machine (SVM) classifiers [28,29]. Unsupervised methods do not require user interaction; they rely on imaging techniques or mathematical modeling to classify vessel and non-vessel pixels directly from the image, without training [24,30,31]. A notable approach was proposed by Yin et al. [32], who classify retinal blood vessels by pattern matching against a retinal vascular network model of the fundus image. Their method can separate vessel pixels from non-vessel pixels using the proposed vascular network model, but it gives false detections on pathological images. Learning approaches have been proposed [33,34] to improve this method, but they lack validation with retinal vessel analysis, and small vessels are not segmented.
Mendonca et al. [35] implemented an unsupervised method based on a difference-of-offset-Gaussians (DoOG) filter threshold and multiscale morphological reconstruction. Still, their method did not select the tiny vessels and gave numerous false detections of vessel pixels. The multiscale retinal vessel algorithm [36] was improved in [37] by a detailed analysis of retinal image intensities, relating vessel pixels to particular pixels recorded during image acquisition; the authors calculated a diameter-dependent equalization factor based on multiscale information. However, their method gave false vessel pixel detections, especially on images with a central light reflex. Later, Al-Diri et al. [31] implemented a retinal vessel segmentation module that extracts a vessel profile by combining vessel segmentation and width measurement based on the ribbon-of-twins active contour model. This model contains a tramline filter used to locate an initial region of actual vessel pixels by segmenting the centerline pixels. A segment growth algorithm then converts the pixel map of the tramline filter into a set of segments, each containing a series of profiles. Finally, a junction resolution algorithm segments the crossings and joins the junctions to produce the segmented vessel images. Another notable method was proposed by Azzopardi et al. [38] for the automatic segmentation of retinal blood vessels from fundus images, based on the COSFIRE filter. The COSFIRE filter suppresses irregular illumination, but many vessels remain undetected. The common problems of all of these methods are missed tiny vessels and false pixel detections, caused by low and varying contrast, the central light reflex, and noise. Our pre-processing module addresses these issues. Our proposed method is unsupervised; still, the pre-processing steps can also be applied to supervised methods to improve training effectiveness.
Achieving the required threshold levels has always been a problem in these unsupervised methods. It is a particular problem for models based on active contours, because they require mathematically precise modeling and optimization techniques tuned to each image's properties to obtain well-segmented images. This article offers pre-processing and post-processing steps that address each problem until a well-segmented vessel image is obtained. The performance of the segmented vessel images is assessed visually and statistically.
3. Material and Methods
Segmentation of retinal blood vessels for diagnosing eye disease remains a challenge due to low contrast, uneven illumination, and noise. Many automatic segmentation methods have been implemented, but the precise detection of tiny vessels still needs to be improved. We propose a method to solve these problems, illustrated in Figure 3. It includes a pre-processing module for obtaining an improved image and a post-processing module for obtaining a well-segmented image. Each module is explained in detail below.
3.1. Pre-Processing Module
Our pre-processing module contains different steps to obtain a well-contrasted image, and the process is shown in
Figure 3. Each step of the pre-processing module is explained below.
3.1.1. Processing Retinal Color Fundus Image
The retinal color fundus images are processed as input to our pre-processing module to achieve a much-enhanced image. Retinal fundus images, which make up most retinal image databases, are captured using a special camera called a fundus camera, mainly used in hospitals. A fundus image has three channels, Red, Green, and Blue (RGB), and each channel has its own imaging properties, as shown in Figure 4. The red channel carries most of the luminance and contains noise [19,39]; the green channel contains fewer noise pixels and gives better contrast; the blue channel has more noise and shadow. Our main goal is to process these color channels and obtain a well-contrasted grayscale output image for further processing by the detectors used for small-vessel observation. We use grayscale because it takes less processing time; color images take more processing time and can also lose detail in medical images, since processing color adds unnecessary information and increases the amount of data handled by any segmentation or classification method. The next step is the removal of uneven illumination from the retinal fundus image.
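As a rough sanity check of these channel properties, the per-channel contrast can be compared numerically. The snippet below is a minimal illustration that uses the standard deviation as a contrast proxy; the paper itself judges the channels visually and by histogram:

```python
import numpy as np

def channel_contrast(rgb):
    """Per-channel contrast of a fundus image, measured here as the
    standard deviation of each RGB channel (a simple proxy only)."""
    return {name: float(rgb[..., i].std())
            for i, name in enumerate(("red", "green", "blue"))}
```

On typical fundus images this ranking agrees with the visual observation that the green channel is the best contrasted of the three.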
3.1.2. Uneven Illumination Removal or Background Homogenization
The removal of uneven illumination, or background homogenization, is performed using image processing techniques. We tested morphological operations and homomorphic filtering, and selected the better technique based on image visualization and histogram comparison.
Morphological Operation: Each channel of the retinal (RGB) image is treated using morphological techniques to eliminate noise and cope with uneven illumination. It is observed from Figure 5 that each RGB channel contains noise and uneven illumination, which impacts blood vessel observation. The suggested morphological approach uses bottom-hat and top-hat operations, applied to each RGB channel to see how background noise affects the retinal blood vessels in color fundus images. Because the intensity levels of the blood vessels are significantly lower than the background intensity level, the variation between these intensity levels causes the uneven illumination and noise problems; to observe the retinal blood vessels, it is critical to eliminate the background illumination. The morphological bottom-hat operation improves the image background and adds information to the image while lowering the noise level on the retinal blood vessels, making them visible. Equation (1) shows the mathematical form of the bottom-hat operation, where • denotes the closing operation and b the structuring element:

B_hat(f) = (f • b) − f.      (1)

The top-hat operation increases image contrast and controls the changing contrast of the retinal blood vessels when applied to each RGB channel. Its mathematical representation is defined in Equation (2), where ∘ denotes the opening operation:

T_hat(f) = f − (f ∘ b).      (2)

Uneven illumination and the noise problem are addressed by subtracting the top-hat image from the bottom-hat image. A more improved vessel image is obtained, with controlled uneven illumination and noise level. The vessels appear more observable, and the output of the morphological operation for each channel is shown in Figure 5. However, a single imaging technique cannot be judged the best enhancement technique on its own; we therefore also tested homomorphic filtering and then selected the better operation by comparing the two. Homomorphic filtering is discussed in the next paragraph.
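The bottom-hat/top-hat enhancement described above can be sketched as follows. The structuring-element size is an assumption for illustration, since the paper does not state the value used:

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def enhance_channel(channel: np.ndarray, size: int = 15) -> np.ndarray:
    """Top-hat / bottom-hat enhancement of one fundus channel.

    The structuring-element size (15x15 here) is an assumption; the
    paper does not report the value it uses.
    """
    channel = channel.astype(float)
    top_hat = channel - grey_opening(channel, size=(size, size))      # bright details
    bottom_hat = grey_closing(channel, size=(size, size)) - channel   # dark details (vessels)
    # The paper subtracts the top-hat image from the bottom-hat image.
    return bottom_hat - top_hat
```

On a synthetic bright background with a dark line, the output responds strongly on the line and is near zero on the flat background, which is the intended vessel-enhancing behavior.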
Homomorphic filtering: Homomorphic filtering is an imaging method primarily used to separate the illumination and reflectance components of an image. An appropriately illuminated image is required to overcome the uneven illumination of fundus retinal images. Each image contains two components: the illumination component and the reflectance component. The illumination component is the amount of light incident on the scene; it is the essential component for overcoming the problems of low, varying contrast and noise. The reflectance component is the light reflected from the scene. The mathematical representation of an image f(x, y) at pixel location (x, y), in terms of the illumination component i(x, y) and the reflectance component r(x, y), is shown in Equation (3):

f(x, y) = i(x, y) r(x, y).      (3)

Homomorphic filtering uses transformation functions to convert the image from the spatial domain to the frequency domain; this conversion is based on the Fourier transform. A logarithmic function is applied to the basic homomorphic relation in Equation (3), so the product of illumination and reflectance becomes the sum of their logarithms, as shown in Equations (4) and (5):

z(x, y) = ln f(x, y),      (4)
z(x, y) = ln i(x, y) + ln r(x, y).      (5)

Applying the Fourier transform gives Equations (6) and (7), where Z(u, v), F_i(u, v), and F_r(u, v) are the Fourier transforms of z(x, y), ln i(x, y), and ln r(x, y), respectively:

Z(u, v) = F{z(x, y)},      (6)
Z(u, v) = F_i(u, v) + F_r(u, v).      (7)
In homomorphic filtering, the Fourier-transformed image is processed by a high-pass filter, and the filtered illumination and reflectance components are recovered with the inverse Fourier transform. The output of the homomorphic filtering process for each channel of the retinal fundus image is shown in Figure 6.
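A minimal sketch of homomorphic filtering with a Gaussian high-emphasis filter is given below. The filter gains and cutoff are illustrative values, not the paper's parameters:

```python
import numpy as np

def homomorphic_filter(channel, gamma_low=0.5, gamma_high=1.5, cutoff=30.0):
    """Sketch of homomorphic filtering with a Gaussian high-emphasis filter.

    gamma_low, gamma_high, and cutoff are illustrative assumptions;
    the paper does not report the filter parameters it uses.
    """
    img = channel.astype(float) + 1.0          # avoid log(0)
    z = np.log(img)                            # product -> sum (Eqs. 4-5)
    Z = np.fft.fftshift(np.fft.fft2(z))        # frequency domain (Eqs. 6-7)
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    d2 = u[:, None] ** 2 + v[None, :] ** 2
    # High-emphasis filter: attenuates low frequencies (illumination)
    # and boosts high frequencies (reflectance, i.e. vessel detail).
    H = gamma_low + (gamma_high - gamma_low) * (1 - np.exp(-d2 / (2 * cutoff ** 2)))
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(H * Z)))
    return np.exp(filtered) - 1.0              # back to the intensity domain
```

Because the illumination varies slowly across the image, it lives at low frequencies and is attenuated by gamma_low, while vessel detail at high frequencies is amplified by gamma_high.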
It is observed that the morphological images give more detail than the homomorphic filtering output images, as seen clearly in Figure 5 and Figure 6. This is further validated by the histograms: Figure 6 compares the histogram of the green channel after morphological operations with that of the green channel after homomorphic filtering. Homomorphic filtering gives a very smooth image that loses vessel detail and blurs the image background. The morphological image has more pixel variation and noise, but shows the retinal blood vessels more clearly than the homomorphic filter output. We therefore select the retinal channels obtained through morphological operations for further processing via principal component analysis to obtain a well-contrasted grayscale image.
3.1.3. Converting a Colour (RGB) Image to a Single Greyscale Image
A grayscale image is mainly used to examine the image's details. Observing characteristics in medical images is vital, particularly for the retinal vasculature, where observations are crucial for tracking the progression of eye disease. Background normalization of the RGB retinal channels reduced noise and handled the problem of uneven illumination; converting the RGB channels to a single grayscale image yields still more promising results. Most research studies employ only the green channel of retinal images for post-processing, but the green channel also exhibits changes in contrast, so our objective is to convert all three RGB channels into a single grayscale channel. We used PCA in our recent work to obtain the grayscale image, as shown in Figure 7. The PCA transformation rotates the intensity values of the color space onto three orthogonal axes to produce a more appropriate grayscale image. Color-to-gray conversion is performed by combining the three previously processed channels after their respective non-uniform background removal, and a well-contrasted grayscale image is generated for the post-processing module. The mathematical representation of the conversion from color to grayscale is explained in detail by Soomro et al. [7]. It is worth noting that PCA produced a considerably more differentiated image of the vessels, and its histogram spans a far wider range of intensity levels than that of the uniform green-channel image. Compared to the green-channel histogram of the morphological processing illustrated in Figure 8b, the PCA histogram is more spread out and reflects more intensity levels (Figure 9).
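The PCA-based color-to-gray conversion can be sketched as a projection of the pixels onto the first principal component of the 3×3 channel covariance. This is a minimal illustration; the exact formulation of Soomro et al. [7] may differ:

```python
import numpy as np

def pca_grayscale(rgb):
    """Project an RGB image onto its first principal component.

    A minimal sketch of PCA-based color-to-gray conversion; the
    normalization to [0, 1] is an illustrative choice.
    """
    h, w, _ = rgb.shape
    pixels = rgb.reshape(-1, 3).astype(float)
    pixels -= pixels.mean(axis=0)              # center each channel
    cov = np.cov(pixels, rowvar=False)         # 3x3 channel covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    first_pc = eigvecs[:, np.argmax(eigvals)]  # direction of maximum variance
    gray = pixels @ first_pc
    # Rescale to [0, 1] for display and further processing.
    gray = (gray - gray.min()) / (gray.max() - gray.min() + 1e-12)
    return gray.reshape(h, w)
```

Because the projection direction maximizes variance, the resulting grayscale image spreads the intensities more widely than any single channel, which matches the wider histogram observed for the PCA image.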
3.2. Post-Processing Module
The post-processing module performs the segmentation of the retinal vessels. Our main goal for post-processing is to analyze the impact of our pre-processing module and observe the accuracy of vessel segmentation. The module contains three steps: the first concerns the coherence of the vessels using a second-order detector; the second improves the coherent vessels; the third segments the retinal blood vessels using image reconstruction techniques. Each step is explained in detail below.
3.2.1. Second-Order Gaussian Detector for Coherence of Vessels
The retinal blood vessels in fundus images have geometric shapes, and the pattern of such a geometric shape is known as a ridge structure. Applying a second-order derivative oriented filter in a specific direction is the easiest way to decrease noise and enhance the ridge structure of the retinal blood vessels to be segmented. The second-order derivative oriented filter has three parameters: length (L), width (σ), and orientation (θ). The length (L) is taken as a multiple of the width to maintain the elongation of the filter. The width (σ) of the filtering process is chosen from a fixed set of values, and the method is validated until the best-normalized image is obtained. The best-normalized image over the parameters of length, width, and orientation is accepted; such an image is known as the coherence image of the retinal vessels. The generalized Gaussian function used for the coherence image with respect to these parameters is

g(u) = exp(−u² / (2σ²)).

It contains two independent parameters, σ and L, and the oriented filter is obtained by taking the second derivative with respect to u, giving

g''(u) = ((u² − σ²) / σ⁴) exp(−u² / (2σ²)).
Scale factors are used to obtain a normalized image. In this process, the maximum response at each pixel over all of the different lengths, widths, and orientations is taken, which results in a well-normalized output image, the initial coherent vessel image. The result of this process is shown in Figure 10.
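The oriented second-derivative-of-Gaussian filter bank and the per-pixel maximum over widths and orientations can be sketched as follows. All parameter values (the σ set, the elongation, the number of angles, and the kernel size) are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import convolve

def second_order_gaussian_kernel(sigma, elongation, theta, half_size):
    """Oriented second-derivative-of-Gaussian kernel (a sketch).

    sigma: width; length = elongation * sigma; theta: orientation
    in radians. Parameter values are illustrative assumptions.
    """
    ax = np.arange(-half_size, half_size + 1, dtype=float)
    x, y = np.meshgrid(ax, ax)
    # Rotate coordinates: u runs across the vessel, v along it.
    u = x * np.cos(theta) + y * np.sin(theta)
    v = -x * np.sin(theta) + y * np.cos(theta)
    length = elongation * sigma
    # Second derivative w.r.t. u, tapered along v by the elongated Gaussian.
    g2 = ((u ** 2 - sigma ** 2) / sigma ** 4) * np.exp(-u ** 2 / (2 * sigma ** 2))
    taper = np.exp(-v ** 2 / (2 * length ** 2))
    kernel = g2 * taper
    return kernel - kernel.mean()              # zero response on flat regions

def vessel_coherence(image, sigmas=(1.5, 2.0), n_angles=12, elongation=3.0):
    """Maximum filter response per pixel over widths and orientations."""
    response = np.full(image.shape, -np.inf)
    for sigma in sigmas:
        for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
            k = second_order_gaussian_kernel(sigma, elongation, theta, 9)
            response = np.maximum(response, convolve(image.astype(float), k))
    return response
```

Dark tubular structures produce a strong positive response at the orientation matching the vessel, while flat background regions respond near zero because the kernel is zero-mean.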
3.2.2. Final Coherent Vessels: Anisotropic Oriented Diffusion Filter
A more coherent vessel image is required because the initial coherent vessel image contains noise and shows the tiny vessels only faintly. We used the normalized anisotropic diffusion filter scheme [40], whose workflow includes the following steps:
Calculate the second-moment matrix for every vessel pixel.
Build a diffusion matrix for each vessel pixel from its second-moment matrix.
Calculate the intensity change for every vessel pixel according to the diffusion equation ∂I/∂t = ∇ · (D∇I).
Update the image with the difference formula derived from this equation; the resulting coherent vessel image is shown in Figure 11.
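For illustration, the iterative diffusion update I ← I + Δt · div(c ∇I) can be sketched with a simplified Perona–Malik variant. Note that this isotropic sketch is not the oriented, second-moment-matrix-driven scheme of [40]; it only shows the shape of the iterative update:

```python
import numpy as np

def perona_malik(image, n_iter=20, kappa=15.0, dt=0.2):
    """Simplified edge-preserving diffusion (Perona-Malik).

    A sketch only: the paper uses an anisotropic *oriented* diffusion
    scheme [40] driven by the second-moment matrix; this isotropic
    variant just illustrates the iterative update I <- I + dt*div(c grad I).
    """
    img = image.astype(float).copy()
    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbours.
        dn = np.roll(img, 1, axis=0) - img
        ds = np.roll(img, -1, axis=0) - img
        de = np.roll(img, -1, axis=1) - img
        dw = np.roll(img, 1, axis=1) - img
        # Conductance: small across strong edges, close to 1 in flat regions.
        c = lambda d: np.exp(-(d / kappa) ** 2)
        img += dt * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return img
```

The conductance term is what makes the diffusion edge-preserving: noise inside a region is smoothed away, while strong vessel edges (large gradients) conduct little and are retained.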
3.2.3. Segmented Image
The final coherent vessel images still contain noisy pixels that make it difficult to analyze small vessels and their connectivity. We used a double-threshold method based on a morphological reconstruction operation. Morphological reconstruction creates the final binary image from the combination of a marker image and a mask image. Let A (the mask image) and B (the marker image) be two binary images on the same domain D, with B ⊆ A; the reconstruction of A from B, R_A(B), is used to give a detailed binary image. The marker and mask binary images used in the reconstruction are derived from the histogram: the histogram used to obtain them is shown in Figure 12, and the resulting mask and marker images are shown in Figure 13. Both are obtained using simple statistics. The marker image is created by thresholding at the image's mean value minus 0.9 times the standard deviation, whereas the mask image is obtained by thresholding at the image's mean value, based on the histogram. Morphological reconstruction using the marker and the mask then produces a segmented image of the retinal vessels. The marker image contains less noise than the mask image, and the primary purpose of scaling the standard deviation is to reduce the background noise; we obtained the best marker image with a factor of 0.9. Tiny vessels are detected by reducing false pixels in the background of the marker image. Scaling the standard deviation also preserves the maximum number of edge pixels, based on the histograms of the marker and the mask used in the image reconstruction technique, which yields more edge pixels and a better-segmented vessel image. However, the reconstructed image still contains isolated noise pixels that appear as false vessels. We eliminated these noisy pixels with a simple image processing step: after the segmented vessel image is created, connected regions smaller than 50 pixels are removed.
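The double-threshold reconstruction and the small-component removal can be sketched as below. The sign convention is an assumption (vessels are taken as the bright structures of the coherence image, so the strict marker threshold lies above the loose mask threshold); the paper's exact thresholds may differ:

```python
import numpy as np
from scipy.ndimage import binary_dilation, label

def reconstruct(marker, mask):
    """Binary morphological reconstruction of `mask` from `marker`
    by iterated geodesic dilation."""
    current = marker & mask
    while True:
        grown = binary_dilation(current) & mask
        if np.array_equal(grown, current):
            return current
        current = grown

def double_threshold_segment(image, std_factor=0.9, min_area=50):
    """Double-threshold segmentation sketch.

    Assumes vessels are *bright* in the coherence image, so the strict
    (marker) threshold is mean + std_factor*std and the loose (mask)
    threshold is the mean; the paper's sign convention may differ.
    """
    mean, std = image.mean(), image.std()
    mask = image > mean                        # loose threshold (mask)
    marker = image > mean + std_factor * std   # strict, low-noise threshold (marker)
    vessels = reconstruct(marker, mask)
    # Remove isolated components smaller than min_area pixels.
    labels, _ = label(vessels)
    counts = np.bincount(labels.ravel())
    keep = counts >= min_area
    keep[0] = False                            # never keep the background label
    return keep[labels]
```

Reconstruction keeps only those mask regions that contain at least one confident marker pixel, so weak vessel segments connected to strong ones survive while isolated noise is dropped.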
3.3. Overall Algorithm
The retinal blood vessel segmentation method is made up of our proposed pre-processing and post-processing components:
The first step processes the retinal color image: it splits the image into its three channels (RGB) and treats each channel as a grayscale image.
The second step removes the uneven illumination from each channel. Morphological operations and homomorphic filtering are tested to deal with the irregular illumination; morphological operations gave better output images than homomorphic filtering.
The third step obtains the grayscale image: we use the PCA approach to convert the processed RGB channels of the retinal fundus image into a single grayscale image.
The fourth step normalizes the vessels, especially the tiny vessels, which is essential for increasing vessel sensitivity. The second-order detector is used to normalize the vessels; because the vessels still show varying intensity and broken ridges, this problem is addressed with anisotropic oriented diffusion filtering.
The fifth step combines double thresholding with morphological image reconstruction to achieve the segmented image of the vasculature.
3.4. Database and Measuring Parameters
Databases: We used the two most commonly used publicly available databases, Digital Retinal Images for Vessel Extraction (DRIVE) [25] and Structured Analysis of the Retina (STARE) [13], to validate our proposed method. The DRIVE database includes two sets of images, test and training, together with their mask images and ground truth images. Images in the DRIVE database have a resolution of 565 × 584 pixels. About 25% of the images in DRIVE contain pathologies, which makes it a challenging database for testing any retinal segmentation algorithm. The STARE database contains 20 images, 50% of which show different anomalies, making it one of the most challenging databases for testing retinal segmentation algorithms. The images in the STARE database have a resolution of 700 × 605 pixels and include mask and ground truth images. The main advantage of these databases is that they provide ground truth images for validation and have been used by many researchers, which allows us to compare our retinal segmentation method with existing methods.
Measuring Parameters: To assess the effectiveness of our suggested retinal segmentation approach, we employed the most widely used measurement parameters: accuracy, sensitivity, and specificity. Sensitivity and specificity report the true and false pixel detections of vessels and non-vessels, respectively, while accuracy summarizes all pixels of the segmented image. The main objective of this research work is to detect true vessel pixels and to analyze the impact of the pre-processing module on our post-processing module and on other existing methods.
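The three measures can be computed from the binary segmentation and the ground truth as follows. These are the standard definitions; field-of-view masking, as usually applied on DRIVE and STARE, is omitted for brevity:

```python
import numpy as np

def segmentation_metrics(predicted, ground_truth):
    """Accuracy, sensitivity, and specificity from binary vessel maps.

    Standard pixel-wise definitions; restriction to the field-of-view
    mask (as used with DRIVE/STARE) is omitted here for brevity.
    """
    p = predicted.astype(bool).ravel()
    g = ground_truth.astype(bool).ravel()
    tp = np.sum(p & g)            # vessel pixels correctly detected
    tn = np.sum(~p & ~g)          # background pixels correctly rejected
    fp = np.sum(p & ~g)           # background labeled as vessel
    fn = np.sum(~p & g)           # vessels missed
    accuracy = (tp + tn) / p.size
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return accuracy, sensitivity, specificity
```

Because vessel pixels are a small minority of the image, accuracy alone can look high even when many tiny vessels are missed; sensitivity is the measure that exposes missed vessel pixels.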