Extraction of Blood Vessels in Fundus Images of Retina through Hybrid Segmentation Approach

A hybrid segmentation algorithm is proposed in this paper to extract the blood vessels from the fundus image of the retina. A fundus camera captures the posterior surface of the eye, and the captured images are used to diagnose diseases such as diabetic retinopathy, retinoblastoma, and retinal haemorrhage. Segmentation or extraction of blood vessels is highly required, since the analysis of vessels is crucial for diagnosis, treatment planning, and evaluation of clinical outcomes in ophthalmology. The literature review shows that no single segmentation algorithm is suitable for images of different eye-related diseases and that the degradation of the vessels differs from patient to patient. If the blood vessels are extracted from the fundus images, the diagnosis process becomes easier. Hence, this paper aims to frame a hybrid segmentation algorithm exclusively for the extraction of blood vessels from the fundus image. The proposed algorithm combines morphological operations, the bottom-hat transform, the multi-scale vessel enhancement (MSVE) algorithm, and image fusion. After execution of the proposed segmentation algorithm, an area-based morphological operator is applied to highlight the blood vessels. To validate the proposed algorithm, the results are compared with the ground truth of the High-Resolution Fundus (HRF) image dataset. The comparison shows that the proposed algorithm segments the blood vessels more accurately than the existing algorithms.


Introduction
Cataract, uncorrected refractive error, glaucoma, age-related macular degeneration, diabetic retinopathy, corneal opacity, trachoma, and other conditions are responsible for vision impairment. Among these, uncorrected refractive error, cataract, and glaucoma are the major causes of blindness. Statistics from the World Health Organization (WHO) state that 81% of the people who are blind are above 50 years of age. It is estimated that the number of people who are blind will increase from 38.5 million in 2020 to 115 million in 2050 [1]. Diabetic retinopathy is an emerging major cause of blindness. The WHO report on diabetes states that diabetes will be the seventh major cause of death in 2030, and 2.6% of reported blindness is attributed to diabetic retinopathy. The reported percentage may be minuscule, but the same report states that the number of people with diabetes has increased by a factor of 3.5, from 118 million in 1980 to 424 million in 2014 [2]. These statistics indicate that diabetic retinopathy cases will increase drastically in the near future. Because the availability of expert ophthalmologists specializing in retinal disorders is not on par with the forecast of the disease, an automated system is required. The availability of sophisticated imaging and computing systems enables computer-aided diagnosis (CAD), whereas manual analysis is a time-consuming process that involves extensive training and skill sets. The initial step in a CAD system for ophthalmic disorders involves the automatic segmentation of blood vessels and the identification of the optic disk. Many algorithms have been proposed by researchers for segmenting the blood vessels, which are discussed in the forthcoming section. This paper proposes a hybrid segmentation approach to extract the blood vessels from the fundus image. The workflow of the proposed methodology is given in Figure 1.

Related Works
The segmentation and analysis of blood vessels through image processing is required in diversified fields of medicine. Many researchers have contributed efficient algorithms for extraction and analysis. Some significant contributions in the field of ophthalmology are given in this section.
Segmentation of blood vessels can be done by two different methods: pixel-based methods and tracing/tracking-based methods. In pixel-based methods, every pixel is processed to reveal whether it is a vessel or a background pixel. Pixel-based methods use thresholding, morphological operations, kernels for filtering, and pattern recognition. Pattern recognition-based methods use classifiers and clustering algorithms for segmenting blood vessels from the fundus image.
Soares et al. have proposed a supervised classification using the two-dimensional (2D) Gabor wavelet [3]. Ricci et al. have used line operators and a Support Vector Machine (SVM) for segmenting blood vessels [4]. Moment invariants-based features and a 7-D gray-level-based feature were used to train the neural network by Marin et al. [5]. Tolias et al. have proposed a method based on fuzzy C-means clustering for segmenting the blood vessels [6]. Niemeijer et al. proposed a k-Nearest Neighbour (kNN) based classifier for segmenting blood vessels [7]. Salem et al. have used a novel, partially supervised algorithm (RACAL) for segmenting the blood vessels [8].
Kernels are filters that are used on images for identifying the pixels of interest. The most common kernels are the edge filters that are used for finding the edges in the images. Prominent kernels used for edge detection are the Roberts, Sobel, Prewitt, and Canny operators. Apart from this, kernels of a specific type can be customized for an application to identify the edges. Chaudhuri et al. have proposed a kernel-based matched filtering mechanism for blood vessel segmentation [9]; it uses 12 different templates that are generated by rotating the actual template by 15 degrees. Al-Rawi et al. proposed an improved matched filtering mechanism based on Chaudhuri et al.'s matched filter [10]. Cinsdikici et al. have proposed an algorithm that uses matched filtering with ant colony optimization [11].
Thresholding of the fundus image is another method used for segmenting blood vessels. Thresholding can be global, local, or adaptive. Adaptive thresholding is mostly used for segmentation, as it gives better results. Hoover et al. proposed a piecewise threshold probing of the matched filter response for segmenting the blood vessels [14]. Jiang et al. proposed adaptive thresholding based on a multi-threshold probing scheme [15]. Reza et al. proposed an automatic tracing algorithm for detecting the optic disc and exudates using fixed and variable thresholds [16]. They have also proposed a quadtree-based blood vessel detection algorithm using the RGB (Red-Green-Blue) color components of fundus images [17].
Morphological operators are quite handy in segmenting the object of interest using mathematical operations. Many morphological operators are defined for image processing; the most commonly used operations are dilation, erosion, closing, and opening [18]. These operators are applied mainly to binary images; however, they can also be applied to grayscale images. Zana et al. employed a morphology-based method with cross-curvature evaluation for segmenting vasculature from the medical image [19]. Heneghan et al. combined morphological operations with the second-order derivative operator to locate both the primary and secondary vessels [20]. Yang et al. employed a combination of a fuzzy clustering algorithm and morphological operators [21]. Mehrotra et al. employed a morphological operator for highlighting the blood vessels and later applied the Kohonen Clustering Network to segment them [22]. Miri et al. used the Fast Discrete Curvelet Transform (FDCT) for image contrast enhancement, followed by morphological operations for extracting the blood vessels [23]. Bharkad used the top hat, a morphological operator, with three different structuring elements [24]. Yavuz et al. enhanced the retinal image using Gabor, Frangi, and Gaussian filters, followed by the top hat transform and a clustering mechanism for segmenting the blood vessels [25].
Employing tracking- or tracing-based methods, the retinal vasculature can be segmented. Most tracking algorithms need a seed point to trace the vasculature, and the success of the algorithm depends on the seed point. Gao et al. modelled the gray-level distribution using the Gaussian function and tracked the vessels to segment them [26]. Liu et al. employed an adaptive tracking algorithm in a three-stage recursive procedure [27]. Delibasis et al. proposed a tracking algorithm that uses a geometric model and automatically seeks vessel bifurcations without user intervention [28]. Vlachos et al. employed a procedure that starts with a small group of pixels selected by a brightness rule and stops when the cross-sectional profile becomes invalid [29]. Sheng et al. have proposed the Minimum Spanning Superpixel Tree (MSST) detector for segmenting retinal blood vessels [30]; the MSST uses geometric structure, texture, and spatial information in a superpixel graph.
Deformable models are also used for segmenting vasculature. Espona et al. have used an active contour that incorporates blood vessel topological properties [31]. Al-Diri et al. proposed a contour-based model that uses two pairs of active contours for segmenting blood vessels [32]; in this method, a generalized morphological order operator is used to identify the approximate centre lines of the vessels. Palomera-Pérez et al. proposed a parallel implementation based on multiscale feature extraction and the region growing algorithm [33]. Zhao et al. proposed a segmentation process based on the level set and region growing methods [34]. Initially, adaptive histogram equalization and the Gabor wavelet transform are used for enhancing the blood vessels; after preprocessing, the level set and region growing methods are applied independently, and post-processing is done to obtain the final result. Instead of an active contour, the graph cut technique with a Markov Random Field was used by Salazar et al. for segmenting blood vessels and the optic disk [35]. Zhao et al. have proposed an infinite active contour model that uses the Lebesgue measure of the γ-neighbourhood for infinite perimeter regularization [36]; this method also takes advantage of region information, such as the combination of intensity information and a local phase-based enhancement map. Gao et al. proposed an automated segmentation approach for extracting the retinal vessels using a U-shaped fully convolutional neural network, called the U-net; the authors used a Gaussian matched filter for preprocessing the retinal fundus images [37]. Li et al. framed a supervised vascular segmentation approach for retinal fundus images using multi-scale convolutional neural networks, and also used a label processing approach to achieve better segmentation accuracy [38]. Dasgupta et al. formulated the retinal vessel segmentation task as a multi-label inference task, which combines a convolutional neural network and structured prediction [39].
From the literature survey, it is found that pattern- and morphology-based methods are predominantly used for segmenting blood vessels. Pattern-based methods consume more time for classifying the blood vessels. Morphology-based methods are easier to compute, but they require other filters to achieve high accuracy while segmenting blood vessels, and these filters depend on the type of morphological operator used. Hence, an attempt is made to develop a hybrid segmentation approach using morphological operators, MSVE, and image fusion.

Datasets
Different databases are available for fundus images of the eye. The most commonly used datasets are DRIVE and STARE. These two datasets have low-resolution images with no proper tagging for diabetic retinopathy, glaucoma, and healthy eye images. The High-Resolution Fundus (HRF) image dataset gives a good collection of images in all of these categories [13]. The high resolution of the images in the HRF dataset enables a better understanding of the image for the segmentation process. The dataset has a total of 45 images with a resolution of 3504 × 2336, and ground truth for all 45 images is also available. This dataset is chosen because of the quality of the images and their proper categorization. The DRIVE [40] and CHASE [41] datasets are used to check the effectiveness of the proposed algorithm.
This paper is organized as follows: Section 2 presents the materials and methods used in the research work; Section 3 presents the proposed methodology; Section 4 elicits the results and discussion; and, Section 5 concludes the proposed research work and the scope for the future work.

Materials and Methods
The proposed approach for blood vessel segmentation is framed using image enhancement techniques, morphological operations, adaptive thresholding, color code transformation, and image fusion. It also includes an algorithm proposed by Frangi et al., which enhances the image and helps in identifying the tubular structure of the blood vessels [42]. These algorithms are explained in this section.

Contrast Stretching
Contrast stretching is one of the image enhancement techniques, where the quality of the image is improved by the even distribution of intensity values [43]. The original fundus image in RGB (Red-Green-Blue) code is given as input. In the contrast stretching process, the intensities of the pixels are scaled to a global maximum and a global minimum such that the contrast of the image is distributed uniformly. Contrast stretching is achieved through Equation (1):

I_out(x, y) = (I_in(x, y) − c) × (b − a) / (d − c) + a,    (1)

where a = 0, b = 255, and c and d are the non-zero minimum and maximum intensity values of the input image. Initially, the stretching process is implemented for a single band (R/G/B) image of size m × n. Subsequently, the procedure is repeated for the remaining two bands, and the final image is generated from the enhanced individual bands of the original image.
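As an illustration, a minimal NumPy sketch of band-by-band contrast stretching under the definitions above (function names are our own, and the plain channel minimum is used rather than the non-zero minimum, which is a simplification):

```python
import numpy as np

def stretch_channel(channel, a=0, b=255):
    """Map the channel's intensity range [c, d] linearly onto [a, b]."""
    c, d = int(channel.min()), int(channel.max())
    if d == c:                       # flat band: nothing to stretch
        return np.full_like(channel, a)
    out = (channel.astype(np.float64) - c) * (b - a) / (d - c) + a
    return out.astype(np.uint8)

def contrast_stretch_rgb(image):
    """Apply the stretch independently to each of the R, G, B bands."""
    return np.dstack([stretch_channel(image[..., k]) for k in range(3)])
```

After stretching, each band spans the full [0, 255] range, which evens out the intensity distribution as described above.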

Multi-Scale Vessel Enhancement (MSVE)
The geometric structure in an image can be captured using the Hessian matrix. To interpret the Hessian matrix, properties can be extracted from it. The determinant is one such property that can be used to understand a matrix; however, the determinant of the Hessian matrix alone cannot help in inferring the geometric structure that is inherently held in it, so additional information is required. Similar to the determinant, the eigenvalues (λi) of a matrix also help in inferring the hidden information in the matrix. Hence, the eigenvalues of the Hessian matrix are computed, which helps in inferring the geometric information that is held inherently in the Hessian matrix. Based on the following conditions, the geometric information inherently present in the Hessian matrix can be understood:

• if both |λ1| and |λ2| are large, the pixel belongs to a blob-like structure;
• if |λ1| is small and |λ2| is large, the pixel belongs to a tubular (vessel-like) structure; and,
• if both |λ1| and |λ2| are close to zero, then no conclusion can be drawn.
This is used to define a parameter called vesselness, which can be used to identify the blood vessels in the image. Vesselness is calculated using Equation (2), as proposed by Frangi et al. [42]. Once the vessels are identified, they can be enhanced, which aids in the proper segmentation of blood vessels.

V(s) = 0, if λ2 > 0; otherwise V(s) = exp(−R_B² / (2β²)) · (1 − exp(−S² / (2c²))),    (2)

where β = 0.5, which controls the sensitivity of the line filter; s is a scale value within a certain range; c depends on the grayscale range; S = √(λ1² + λ2²) is the second-order structureness; and R_B = λ1/λ2 is the blobness measure in two dimensions (2D), which accounts for the eccentricity of the second-order ellipse. λ1 and λ2 are the eigenvalues of the Hessian matrix given in Equation (3):

H = [ I_xx  I_xy ; I_xy  I_yy ],    (3)

where I_xx, I_xy, and I_yy are the second-order partial derivatives of the image intensity. In this research work, this algorithm is used for enhancing the blood vessels in the fundus image of the eye.
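A single-scale sketch of this vesselness computation is given below, assuming the Hessian entries are estimated with Gaussian derivatives at scale s (a standard choice for the Frangi filter); the function name and parameter defaults are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_2d(image, s=2.0, beta=0.5, c=15.0):
    """Single-scale Frangi-style vesselness (cf. Equation (2)).

    Hessian entries (Equation (3)) are estimated with Gaussian
    derivatives at scale s and normalised by s**2, so that bright
    tubular structures of width ~s give a high response.
    """
    img = image.astype(np.float64)
    Hrr = gaussian_filter(img, s, order=(2, 0)) * s**2  # d2/d(row)2
    Hcc = gaussian_filter(img, s, order=(0, 2)) * s**2  # d2/d(col)2
    Hrc = gaussian_filter(img, s, order=(1, 1)) * s**2  # mixed term
    # eigenvalues of the symmetric 2x2 Hessian at every pixel
    mean = (Hrr + Hcc) / 2.0
    root = np.sqrt(((Hrr - Hcc) / 2.0) ** 2 + Hrc**2)
    e1, e2 = mean + root, mean - root
    # order so that |lambda1| <= |lambda2|
    swap = np.abs(e1) > np.abs(e2)
    lam1 = np.where(swap, e2, e1)
    lam2 = np.where(swap, e1, e2)
    Rb = lam1 / (lam2 + 1e-12)        # blobness measure R_B
    S2 = lam1**2 + lam2**2            # structureness S^2
    v = np.exp(-Rb**2 / (2 * beta**2)) * (1.0 - np.exp(-S2 / (2 * c**2)))
    return np.where(lam2 < 0, v, 0.0)  # bright-on-dark ridges only
```

The multi-scale enhancement used later (scales 15 and 50) would evaluate this response at each scale before fusing the results.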

Bottom Hat Transform
The bottom-hat transform is defined as the difference between the closing of the original image and the original image.The closing of an image is the collection of background parts of an image that fit a particular structuring element.
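A minimal sketch of this definition, using SciPy's grayscale closing (the structuring-element size is a configurable flat square here; the specific 25 × 25 elements appear later in the proposed procedure):

```python
import numpy as np
from scipy.ndimage import grey_closing

def bottom_hat(image, size=25):
    """Bottom-hat transform: closing of the image minus the image itself.

    Dark, thin structures (such as vessels on a brighter background)
    become bright peaks in the output; regions the structuring element
    can cover without change are suppressed to zero.
    """
    img = np.asarray(image, dtype=np.float64)
    closed = grey_closing(img, size=(size, size))  # flat size x size SE
    return closed - img
```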

Area Based Filtering
In this process [44], the area of all non-zero pixels in an image is estimated by accumulating the areas of each pixel in the image. The area of each pixel is found by examining its two-by-two neighbourhood. Each area is represented based on the following six different patterns:
• if the pattern has NO non-zero pixels, then area = 0;
• if the pattern has ONE non-zero pixel, then area = 1/4;
• if the pattern has TWO adjacent non-zero pixels, then area = 1/2;
• if the pattern has TWO diagonal non-zero pixels, then area = 3/4;
• if the pattern has THREE non-zero pixels, then area = 7/8; and,
• if the pattern has all FOUR non-zero pixels, then area = 1.
The fundus image of the eye is given as input for this process, and based on these rules, the area of each region is estimated. The regions of the image above the threshold of 60 are selected for further processing. The threshold value is fixed experimentally.
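The accumulation rule can be sketched as follows. This is an approximate, vectorised rendering of the pattern-weighting scheme (border pixels are covered only where a full 2 × 2 neighbourhood exists); the function name is our own:

```python
import numpy as np

def bwarea(binary):
    """Estimate the area of non-zero pixels from overlapping 2x2 blocks.

    Per-block weights: 0 set pixels -> 0, 1 -> 1/4, 2 adjacent -> 1/2,
    2 diagonal -> 3/4, 3 -> 7/8, all 4 -> 1.
    """
    b = (np.asarray(binary) != 0).astype(np.int64)
    tl, tr = b[:-1, :-1], b[:-1, 1:]   # top-left, top-right of each block
    bl, br = b[1:, :-1], b[1:, 1:]     # bottom-left, bottom-right
    count = tl + tr + bl + br
    diag = (count == 2) & (tl == br) & (tr == bl)  # two diagonal pixels
    area = np.select(
        [count == 1, diag, count == 2, count == 3, count == 4],
        [0.25, 0.75, 0.5, 0.875, 1.0],
        default=0.0,
    )
    return float(area.sum())
```

An isolated interior pixel lies in four 2 × 2 blocks, each weighted 1/4, so it contributes an area of 1, matching the intuition that one set pixel covers unit area.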

Binarization Using Adaptive Thresholding
Thresholding is the process of separating the foreground and background pixels by using a threshold value [43]. All pixel intensities above the threshold are set to 1, and pixel intensities below the threshold are set to 0. Conventionally, the threshold value is fixed globally for all intensities. Adaptive thresholding is a process that accepts a grayscale or color image as input and outputs a binary image. Unlike conventional thresholding, the threshold value is calculated for each pixel, and the pixel intensity is set to 0 or 1 based on the calculated threshold.
In adaptive thresholding, the local threshold value is calculated by examining the intensity values of the local neighbourhood of each pixel, as shown in Figure 5.
The mean value of the neighbourhood pixels is taken as the local threshold of the pixel x(i,j). Instead of the mean statistic, the median value can also be used as the local threshold; however, using the median as a local threshold slows the computation. Hence, in this research work, the mean statistic is used as the local threshold to binarize the image.
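A sketch of mean-based adaptive thresholding, assuming a square averaging window (the window size here is an illustrative choice, not a value from the paper):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_threshold(gray, window=15):
    """Binarize an image using the local mean as the per-pixel threshold."""
    g = np.asarray(gray, dtype=np.float64)
    local_mean = uniform_filter(g, size=window)  # mean over window x window
    return (g > local_mean).astype(np.uint8)     # 1 = foreground, 0 = background
```

Because the threshold adapts to each neighbourhood, thin bright structures stand out even when the global illumination varies across the fundus image.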


Fusion
When two images with different details are available, it is possible to generate a single image that holds the details of both. This can be done through a fusion process. In this work, fusion is achieved by overlaying one image on the other. This ensures that the details of both images are preserved and exhibited by the new image achieved through fusion.
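For binary vessel maps, this overlay reduces to a pixel-wise OR, sketched below (assuming both inputs are binary masks of the same size; the function name is our own):

```python
import numpy as np

def fuse(mask_a, mask_b):
    """Overlay two binary vessel maps: a pixel is a vessel if it is a
    vessel in either input, so the details of both images are kept."""
    a = np.asarray(mask_a) != 0
    b = np.asarray(mask_b) != 0
    return (a | b).astype(np.uint8)
```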

Proposed Methodology
This section presents the proposed hybrid segmentation approach for extracting blood vessels from the fundus image of the eye. The method is hybridized using the techniques discussed in Section 2, such that it can segment the blood vessels from the fundus image of the retina. The existing vessel enhancement algorithm [42] is hybridized with pre-processing and post-processing approaches to achieve better segmentation of blood vessels. As a significant contribution, mask generation is proposed in this hybrid segmentation approach, which helps in achieving accurate segmentation of vessel-like structures. The workflow of the proposed methodology is given in Figure 1.

Hybrid Segmentation Approach
The proposed hybrid segmentation approach has three phases: (1) image acquisition and preprocessing, (2) mask generation for optic papilla removal, and (3) vessel enhancement and fusion. Figures 2-4 depict the flow of these three phases. The following considerations shape the proposed blood vessel segmentation procedure. When a smaller scale value is used to extract the blood vessels, it also segments vessel-like structures. To avoid this, area filtering can be applied at a threshold of 65, which is fixed experimentally. Area filtering is a morphological operation that can be used only on bi-level images; accordingly, the binarization process is done before area filtering is applied to the image with thinner vessels obtained after MSVE with the smaller scale.


When MSVE is applied with a scale value of 15, only the boundary of the thick vessels is found. The pixels inside the major vessels are also part of the thick vessels, so they should also be white in color. To ensure that the inner pixels of the thick vessels are white, filling algorithms can be used. However, due to discontinuities in the boundary, the vessels may not be filled properly. Hence, the fusion of MSVE-applied images with different scale values avoids having to trace the vessel path to fill the gaps in the boundary.


Accordingly, the MSVE algorithm is used with a larger scale value, which ensures that the inner regions are white, but misses the thinner vessels. To overcome this problem, MSVE is applied with scales of 50 and 15, which, when fused, provide an accurate result.
Pseudocode for Phase 1 (image acquisition and preprocessing) of the proposed hybrid segmentation approach is given as follows:

/* Phase 1: Image acquisition and Preprocessing */
I := readImage();
globalMin := 0;
globalMax := 255; // maximum gray level of the image
localMin := minimum(I(x,y)); // minimum non-zero intensity value of the input image
localMax := maximum(I(x,y)); // maximum intensity value of the input image
for x := 0 to m
  for y := 0 to n
    I(x,y) := ((I(x,y) - localMin) * (globalMax - globalMin) / (localMax - localMin)) + globalMin;

The proposed segmentation algorithm is implemented and tested with the HRF [13], DRIVE [40], and ChaseDB [41] datasets. The results of the implementation are shown in Figures 7-9, which were obtained using the proposed method. The proposed algorithm is evaluated using the metrics Sensitivity, Specificity, and Accuracy with respect to the gold standard. Figure 7a, Figure 8a, and Figure 9a are the fundus images of glaucoma, diabetic retinopathy, and a healthy eye, respectively. Figure 7b, Figure 8b, and Figure 9b are the masks generated by the above-said procedure for accurate segmentation. Figure 7c, Figure 8c, and Figure 9c are the images of segmented blood vessels obtained using the proposed method.

Results and Discussion
The proposed method has been tested on the HRF, DRIVE, and CHASE databases, which provide ground truth data. The effectiveness of the proposed segmentation algorithm is measured using the parameters Sensitivity (SE), Specificity (SP), and Accuracy (ACC), derived from the contingency table (Table 1). Sensitivity (SE) measures the proportion of positives (vessel pixels) that are correctly identified. Specificity (SP) measures the proportion of negatives (background pixels) that are correctly identified. Accuracy (ACC) is the proportion of true results, both True Positives (TP) and True Negatives (TN), among the total number of examined pixels. These measures are calculated using Equations (4) to (6):

SE = TP / (TP + FN),    (4)
SP = TN / (TN + FP),    (5)
ACC = (TP + TN) / (TP + TN + FP + FN),    (6)

where TP, TN, FP, and FN denote the numbers of True Positive, True Negative, False Positive, and False Negative pixels, respectively.
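Equations (4) to (6) can be computed directly from a predicted binary mask and its ground truth, as in this sketch (function name is our own):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Sensitivity, specificity, and accuracy of a binary segmentation
    against its ground truth (Equations (4)-(6))."""
    p = np.asarray(pred, dtype=bool)
    t = np.asarray(truth, dtype=bool)
    tp = np.sum(p & t)    # vessel pixels correctly detected
    tn = np.sum(~p & ~t)  # background pixels correctly rejected
    fp = np.sum(p & ~t)   # background wrongly marked as vessel
    fn = np.sum(~p & t)   # vessel pixels missed
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    acc = (tp + tn) / (tp + tn + fp + fn)
    return se, sp, acc
```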
Segmentation process may result in under segmentation, over segmentation, or accurate segmentation.

•
When sensitivity is low and specificity is high, the vessels are under segmented, i.e., the vessels are not properly identified.

•
When sensitivity is high and specificity is low, the vessels are over segmented, i.e., non-vessel regions are also identified as vessels.

•
When both the sensitivity and specificity are high, the vessels are segmented properly.
Theoretically, the values of sensitivity and specificity are preferred to be 100%; the closer the sensitivity and specificity of the segmentation process are to 100%, the better the algorithm. Sensitivity and specificity quantify the ability of the method to detect blood vessels (foreground) and background, respectively. Accuracy gives the overall measure of the segmentation done by the proposed method against the ground truth data. The averages of SE, SP, and ACC are compared with the results of Odstrcilik et al. [13], and the results are tabulated in Table 2. To check the effectiveness of the proposed algorithm, it is also tested with the DRIVE and CHASE datasets; these results are tabulated in Table 3. From Tables 2 and 3, it is inferred that:
• For the HRF image dataset, the proposed segmentation approach outperforms the state-of-the-art technique, as tabulated in Table 2.
Diabetic retinopathy images normally have more artefacts, so it becomes difficult to identify the blood vessels in the fundus image. There is a marginal increase in the sensitivity for diabetic retinopathy and glaucoma images, while the improvement for healthy images is high.

•
For images in the DRIVE dataset, the proposed approach underperforms because the resolution of the images is much lower than that of the images in the HRF dataset, the aspect ratio of the images varies, and the dataset has two manually segmented ground truths for each image; the average values are presented in Table 3. For both manually segmented ground truths, fixed threshold values are used.

•
For the CHASE dataset, the proposed segmentation algorithm underperforms in terms of sensitivity, is consistent in terms of specificity, and outperforms in terms of accuracy.

•
In the research work of Zhang et al., the authors used different threshold values for different datasets [42], whereas the proposed segmentation approach uses a fixed threshold value for all datasets.
When the proposed algorithm is implemented, the varying aspect ratio and low resolution are the limitations found in the images of the DRIVE dataset, and these limitations affect precise segmentation. The sensitivity for diabetic retinopathy images of the HRF dataset is highly affected by artefacts, i.e., a high red component. The removal of the red component can remove the artefacts, but unfortunately, some pixels of the blood vessels are also lost in this process. To improve the performance for diabetic retinopathy images, it is imperative to remove the artefacts without losing the pixels of the blood vessels. The mask generation must be enhanced so that it can remove the artefacts without compromising the pixels of the blood vessels.

Conclusions
A hybrid segmentation approach with a novel mask generation scheme is proposed to extract the retinal vasculature from fundus images of the eye. The proposed method is evaluated on the HRF, CHASE, and DRIVE datasets, and the results are compared with the existing results for the same databases and with existing state-of-the-art methods. The proposed approach outperforms the existing methods for the high-resolution fundus images of the retina (HRF dataset), achieves better accuracy for images in the CHASE dataset, and underperforms for the images in the DRIVE dataset. A large drop in sensitivity is recorded when the proposed algorithm is tested with the DRIVE database, because of its low-resolution images and varying aspect ratio when compared to the HRF dataset. The existing vessel enhancement algorithm is hybridized with pre-processing and post-processing approaches to achieve precise and fully automated segmentation. In the proposed segmentation approach, a mask is generated to remove the artefacts in the fundus image of the retina. A global threshold value is fixed on an experimental basis, such that the algorithm can be used to segment the vessel structures from the fundus image of any dataset. Additionally, the proposed segmentation approach is tested with real-time data acquired from Sankara Nethralaya, Chennai, and the results are found to be promising. The efficacy of the proposed algorithm is found to be better in terms of the classification parameters. Further, the algorithm can be enhanced with artefact removal approaches that lead to better segmentation of retinal structures.


Figure 1. Workflow of the proposed methodology.

Procedure: Bottom hat transform
Input: Original image (A) of size m × n, structuring element (S) of size s × s
Output: Transformed image
Step 1: Read the input image A of size m × n.
Step 2: Initialize the structuring element S, a square matrix with all zeros or all ones.
Step 3: Apply morphological closing to the original image with the structuring element created in Step 2: closingImage := A • S, where the closing (•) is the combination of dilation and erosion, i.e., A • S = (A ⊕ S) ⊖ S; ⊕ denotes dilation and ⊖ denotes erosion.
Step 4: Apply the bottom hat filter by subtracting the original image from the closed image: bottomHat := closingImage − A.
Step 5: Display the transformed image.


Figure 5. Neighbourhood of a pixel x(i,j).

Procedure: Proposed blood vessel segmentation
Input: RGB fundus image (I) of size m × n
Output: Segmented image with blood vessels (segmentedImage)
Phase 1: IMAGE ACQUISITION & PREPROCESSING
Step 1: Read the input image I of size m × n.
Step 2: Enhance the contrast of the input image I.
Phase 2: MASK GENERATION FOR OPTIC PAPILLA REMOVAL
Step 3: Generate structuring element s1_Zeros of size (25 × 25) with all zeros.
Step 4: Apply the Bottom Hat transform to I with s1_Zeros as the structuring element.
Step 5: Assign the red and blue channels of bottomHatZeros to zero.
Step 6: Threshold the green channel (greenBottomHatZeros) with the value of 20 to generate the mask. (The threshold values used in this procedure are selected by maximizing the average sensitivity and specificity of the segmentation results over different datasets. As there is a wide difference between the numbers of true positive and false positive pixels, the accuracy constraint is not considered for parameter optimization; hence, the focus is on sensitivity and specificity.)
Step 7: Generate another structuring element s2_Ones of size (25 × 25) with all ones.
Step 8: Apply the Bottom Hat transform to I with s2_Ones as the structuring element.
Step 9: Assign the red and blue channels of bottomHatOnes to zero.
Step 10: Enhance the contrast of the green channel of bottomHatOnes.
Step 11: Remove the pixels below the intensity value of 60 in greenBottomHatOnes.
Step 12: Initialize a new matrix (newGreenImage) with zeros, of the size of the image (m × n).
Step 13: Apply the mask greenBottomHatZeros to greenBottomHatOnes and store the resultant image in newGreenImage.
Phase 3: VESSEL ENHANCEMENT & FUSION
Step 14: Apply the multi-scale vessel enhancement algorithm to newGreenImage with scale values of 50 (enhancedVessel1) and 15 (enhancedVessel2).
Step 15: Binarize the enhancedVessel2 image using adaptive thresholding.
Step 16: Filter the closed areas of the binarized enhancedVessel2 using morphological operations.
Step 17: Fuse enhancedVessel1 and the area-filtered, binarized enhancedVessel2.
Step 18: Convert the fused image into a binary image and display the resultant segmented image.

In Step 11, the threshold value is fixed as 60 based on the histogram of the input image, as shown in Figure 6, whereas in Zhang et al., the threshold value is experimentally fixed for different datasets to achieve better segmentation [42].

Figure 9. Input & Output of the algorithm-Healthy: (a) Fundus image of Healthy eye, (b) Generated Mask, and (c) Segmented blood vessels.

Table 2. Results of the proposed segmentation approach for the High-Resolution Fundus image (HRF) dataset.