Article

Statistical Edge Detection and Circular Hough Transform for Optic Disk Localization

Halil Murat Ünver, Yunus Kökver, Elvan Duman and Osman Ayhan Erdem

1 Department of Computer Engineering, Kırıkkale University, 71451 Kırıkkale, Turkey
2 Department of Computer Technologies, Elmadağ Vocational School, Ankara University, 06780 Ankara, Turkey
3 Department of Computer Engineering, Technology Faculty, Gazi University, 06500 Ankara, Turkey
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(2), 350; https://doi.org/10.3390/app9020350
Submission received: 8 December 2018 / Revised: 11 January 2019 / Accepted: 13 January 2019 / Published: 21 January 2019
(This article belongs to the Special Issue Data Analytics in Smart Healthcare)

Abstract

Accurate and efficient localization of the optic disk (OD) in retinal images is an essential process for the diagnosis of retinal diseases, such as diabetic retinopathy, papilledema, and glaucoma, in automatic retinal analysis systems. This paper presents an effective and robust framework for automatic detection of the OD. The framework begins with the process of elimination of the pixels below the average brightness level of the retinal images. Next, a method based on the modified robust rank order was used for edge detection. Finally, the circular Hough transform (CHT) was performed on the obtained retinal images for OD localization. Three public datasets were used to evaluate the performance of the proposed method. The optic disks were successfully located with the success rates of 100%, 96.92%, and 98.88% for the DRIVE, DIARETDB0, and DIARETDB1 datasets, respectively.

1. Introduction

Biomedical image analysis has gained importance and is attracting increasing attention from researchers. With the development of image analysis techniques, biomedical image analysis facilitates feature extraction in retinal images and the diagnosis of such retinal diseases as diabetic retinopathy, papilledema, and glaucoma.
The optic disk (OD) is a bright area of the retina through which the blood vessels pass and which contains no light-sensing cells. The diameter of the OD is approximately 80–100 pixels in a retinal image of average resolution [1], and the OD is an important landmark for the detection of the fovea and other retinal anatomic structures [2,3].
The detection of retinal blood vessels provides preliminary knowledge for the classification and grading of glaucoma and diabetic retinopathy. Other fundus features, such as the fixed distance between the center of the macula and OD, are also used as an indicator for estimating the region of the macula [4,5]. The OD is used as a starting point for retinal vessel tracking methods as well [4].
A perusal of the literature shows that the OD is typically the starting point, whether the aim is to detect blood vessels in retinal images, to detect the fovea and other anatomical structures, or to diagnose a retinal disease. Therefore, localization of the OD is the most basic and preliminary step in the automatic analysis of retinal images and in the detection of retinal diseases [6].
Anatomical structures on the retina—the optic disk, blood vessels, and fovea/macula regions—are shown in Figure 1 [7].

1.1. Optic Disk Detection Literature

Many studies related to the detection of the OD have been proposed in the literature. These studies mainly address either localization of the OD, i.e., detecting the position of the OD in the retinal image, or segmentation of the OD, i.e., extracting the actual region of the OD.
OD detection methods fall into three categories: morphology-based methods, template-based methods, and deformable model-based methods.
Morphology-based approaches exploit the brightness and shape properties of the OD, so the boundaries of the OD can be detected with morphological operators. The shape of the OD was detected, with some errors, by morphological filtering techniques and the watershed transformation algorithm [8]. An adaptive morphological method was proposed to determine the OD and its boundary (rim); it was applied to the public DRIVE and DIARETDB1 datasets and compared with the success rates of other methods [9]. Dai et al. [10] offered a new method for automatically segmenting the OD in fundus images based on a variational model with multiple energies. First, a sparse coding-based method was designed in which the initial boundary curve is estimated by the circular Hough transform to determine the OD center. Then, OD segmentation was treated as an energy minimization problem, and a variational model combining three energy terms was proposed to drive the curve to the OD boundary.
Within the second category, template-based approaches generally exploit the shape of the OD, for example, its circular or elliptical form. A template-matching algorithm was applied to OD segmentation in one study [11]. In another study carried out in 2004 [12], principal component analysis (PCA) and a modified active shape model (ASM) were used for model-based OD detection. The Sobel edge-detection algorithm and Hough transform were used to detect the OD and its center [13]. Zou et al. [14] proposed a framework based on image intensity and the parabolic arrangement of the retinal blood vessels to obtain the OD position; when the OD location cannot be detected from image intensity, OD localization is derived from the arrangement of the retinal blood vessels. The approximate location of the OD can be detected by finding the intersection of the thickest vessels in retinal fundus images [15]. Pereira et al. [16] proposed an isotropic diffusion-based method that uses an ant colony algorithm for OD localization. Kamble et al. [17] used a one-dimensional scanned intensity profile analysis for rapid and accurate localization of the OD and fovea; the method effectively uses both time- and frequency-domain information for OD localization with high accuracy rates. Sarathi et al. [18] proposed a methodology for localizing the OD center based on the inpainting of vessels around the OD; after detection of the OD center, an adaptive threshold-based region growing technique was applied to the obtained points.
Deformable model-based approaches exploit the specific characteristics of the OD. Harangi et al. [19] proposed a model based on the combination of probability models to detect the OD and its boundaries, and increased the accuracy of the method using axiomatic and Bayesian approximations. Al-Bander et al. [20] designed and trained a deep multiscale sequential convolutional neural network for simultaneous localization of the OD and fovea; in this deep learning method, evaluated on public databases, the OD and fovea are detected accurately and quickly. Abed et al. [21] focused on swarm intelligence techniques and also proposed a novel preprocessing approach called background subtraction-based optic disk detection (BSODD) for effective and fast OD localization; five swarm intelligence algorithms were compared, and according to the experimental results, the best performance was obtained with the FireFly algorithm. Li et al. [22] improved OD segmentation performance by learning a series of supervised descent directions between the coordinates of the OD boundary and the surrounding visual appearance, and evaluated the method on six datasets. A supervised gradient vector flow snake (SGVF snake) method was used for OD localization by Hsiao et al. [23]; the results show that the SGVF snake algorithm is capable of OD localization with high success rates.

1.2. Proposed Method

In the proposed method, green channel extraction is first performed on the retinal image in order to detect the OD location effectively. The green channel was chosen because it offers the best contrast between the background and the anatomical structures and carries more information about the vessels than the other channels [24].
Then, contrast-limited adaptive histogram equalization (CLAHE) was used to increase the clarity of the images. Based on the rule that the OD region is the brightest region of the retinal image, the average brightness level of each image was calculated and the pixels below the average value were eliminated from the image. As is known, there are differences between the brightness of the images due to the settings of the fundus cameras. Due to the different brightness level of each image, the threshold value was adjusted dynamically for each image rather than defining a constant threshold value.
Next, a modified robust rank order-based edge-detection method was applied [25]; this method had not previously been applied to the OD localization problem and performs better than conventional edge-detection methods on noisy images. After obtaining the edges of the image, the circular Hough transform was performed, since the OD has a circular structure.
Figure 2 illustrates the steps of the proposed method.
The main issues addressed in the proposed procedure are:
(a) The average brightness level of each image was calculated, and the pixels below the average value were eliminated from the image. Due to the different brightness levels of each image, the threshold value was adjusted dynamically.
(b) Dust accumulates in the lens of the fundus camera due to lack of maintenance, and this causes noise in the retinal images and reduces the performance of OD localization methods. Therefore, a statistical edge-detection framework was applied to retinal images, and successful results were obtained in this work. The edge detection performance of the applied method has been proven to be higher in noisy images than other conventional edge-detection methods [25,26].
This paper is organized as follows: In Section 2, the methods and algorithms for OD detection are presented in detail. Experimental results are explained in Section 3. Section 3 also gives a discussion about the experimental results. Finally, a conclusion is given in Section 4.

2. Materials and Methods

In this paper, a new procedure is presented to detect the OD properly. The proposed algorithm consists of four steps: (1) increasing the significance of the image in a preprocessing step, (2) eliminating the pixels below the average brightness level of the image, (3) implementing the modified robust rank order method for edge detection, and (4) detecting the circular OD via the circular Hough transform.
In the first phase, green channel extraction was performed on the retinal images to enhance the image contrast. Then, the clarity of the image was increased by using contrast-limited adaptive histogram equalization (CLAHE).
In the second phase, the average brightness level of each image was calculated. The pixels that were below the average brightness level of the image were eliminated because the OD region is the brightest region of the retinal image [14]. Furthermore, due to the specific brightness level of each image, a dynamic calculation of the average is more accurate for OD detection.
In the third phase of the algorithm, the modified robust rank order-based edge-detection algorithm was applied to the image, in which the non-bright pixels were eliminated.
In the last phase, the circular Hough transform was applied to the edge-extracted image to locate the OD. Because the OD has a circular structure with a diameter of about 80–100 pixels in a typical retinal image [1], the circular Hough transform is well suited to locating it. A compact sketch of how the four phases compose is given below.
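The following is a minimal sketch of the four-phase pipeline in Python with OpenCV (the authors' experiments ran in MATLAB; see Section 3). Here cv2.Canny stands in for the statistical edge detector of Section 2.4 and cv2.HoughCircles for the circular Hough transform of Section 2.5, purely to show how the phases compose; the file name, Canny thresholds, and Hough parameters are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

img = cv2.imread("retina.jpg")              # hypothetical fundus image (BGR)

green = img[:, :, 1]                        # phase 1: green channel
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(green)               # phase 1: CLAHE

mean_level = enhanced.mean()                # phase 2: dynamic threshold
bright = enhanced.copy()
bright[bright < mean_level] = 255           # eliminate below-average pixels

edges = cv2.Canny(bright, 50, 150)          # phase 3: edge detection (stand-in)

# Phase 4: radii of 40-50 px correspond to the 80-100 px OD diameter
# mentioned above; suitable values depend on image resolution.
circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                           param1=100, param2=20, minRadius=40, maxRadius=50)
if circles is not None:
    a, b, r = np.round(circles[0, 0]).astype(int)
    print(f"OD center: ({a}, {b}), radius: {r}")
```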

2.1. Retinal Datasets

The proposed procedure was tested on fundus images obtained from three widespread public datasets.
1. The DRIVE (Digital Retinal Images for Vessel Extraction) dataset [27] was established to allow comparative studies on the segmentation of blood vessels in retinal images. Its 40 retinal images were randomly selected from a diabetic retinopathy screening program in the Netherlands covering 400 subjects between 25 and 90 years of age. The images were captured using a Canon CR5 non-mydriatic 3CCD camera with a forty-five-degree field of view (FOV) and saved in JPEG format. The size of each image is 584 × 565 pixels. Diabetic retinopathy was detected in 7 images, while 33 images showed no signs of the disease.
2. The DIARETDB0 (Standard Diabetic Retinopathy Database Calibration level 0) dataset [28] was established to evaluate the success of automatic diabetic retinopathy detection and to compare the performance of the developed methods. The dataset consists of 130 color fundus images, of which 20 are normal and 110 include symptoms of diabetic retinopathy. The fundus images were captured with a fifty-degree FOV. The dataset is titled "calibration level 0 fundus images", and the size of each image is 1152 × 1500 pixels.
3. The DIARETDB1 (Standard Diabetic Retinopathy Database Calibration level 1) dataset [29] is a public dataset for evaluating the performance of automatic diabetic retinopathy detection methods. The dataset consists of 89 color fundus images, of which 84 include mild non-proliferative signs (microaneurysms, Ma) of diabetic retinopathy and the remaining 5 contain no sign of the disease. The fundus images were captured with a Nikon F5 fundus camera with a fifty-degree FOV. The dataset is titled "calibration level 1 fundus images", and the size of each image is 1152 × 1500 pixels.
The detailed specifications for the DRIVE, DIARETDB0, and DIARETDB1 datasets are given in Table 1.

2.2. Preprocessing for Image Contrast Enhancement

The preprocessing step is of vital importance for contrast enhancement and for easier OD localization.

2.2.1. Green Channel Extraction

A color retinal fundus image consists of a combination of red, green, and blue color channels (RGB). Among these channels, the green channel is the most successful in separating the OD and blood vessels in the foreground from the background of the image, giving the highest contrast values [30,31]. For this reason, the preprocessing of the images started with green channel extraction. The green channel extractions of different retinal images are shown in Figure 3 [32].

2.2.2. Contrast-Limited Adaptive Histogram Equalization (CLAHE)

Contrast-limited adaptive histogram equalization, frequently used for image enhancement, is based on histogram equalization, in which the intensity distribution of an image is normalized to expand its dynamic range and produce an output image with a uniform density distribution. Adaptive histogram equalization modifies this process: the image is divided into a grid of non-overlapping rectangular regions of nearly equal size, standard histogram equalization is applied to each region, and the regions are then combined by bilinear interpolation, with each pixel mapped using its four nearest neighboring regions, to obtain an optimized whole image. However, adaptive histogram equalization performs poorly when an image contains noise, so contrast enhancement must be limited in homogeneous regions; CLAHE was developed to overcome this difficulty. In the literature, contrast-limited adaptive histogram equalization has given successful results on medical images [33,34,35].
Figure 4 shows original color retinal images and the corresponding enhanced fundus images obtained by using CLAHE, for a healthy image and a diseased image.
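As an illustration of this step, a minimal OpenCV sketch follows, assuming the green channel has already been extracted; tileGridSize corresponds to the rectangular grid of subregions described above, and clipLimit caps the per-tile contrast amplification to suppress noise in homogeneous regions. The file name and parameter values are illustrative, not taken from the paper.

```python
import cv2

green = cv2.imread("fundus.jpg")[:, :, 1]   # green channel (OpenCV uses BGR order)

# Grid of subregions + contrast limit; tiles are equalized individually
# and then blended by bilinear interpolation.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(green)

cv2.imwrite("fundus_clahe.png", enhanced)
```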

2.3. Calculation of the Average Brightness Level of Images

The OD region has the maximum intensity in a retinal image, so examining dark areas is a waste of computation. For this reason, the average brightness level was calculated in order to reduce calculation time. The average brightness value was taken as a threshold, and the pixels below the threshold value were eliminated from the image. The threshold value and the number of eliminated pixels are specific to each fundus image.
The calculation of the average brightness level is given by the following equation:
\[
\text{Threshold Level} = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \mathrm{Img}(i, j)
\]
M and N represent the number of rows and columns of the image, respectively. To calculate the average brightness value, the grayscale intensity values of all pixels, which range from 0 to 255, are summed and divided by the total number of pixels in the image (M × N). A block diagram for this calculation is given in Figure 5.
Elimination of the pixels below the specified threshold value from the retinal image was applied with the formula:
\[
\mathrm{Img}(i, j) =
\begin{cases}
255, & \text{if } \mathrm{Img}(i, j) < \text{Threshold Level} \\
\mathrm{Img}(i, j), & \text{otherwise}
\end{cases}
\]
An image resulting from the elimination of the pixels below the threshold from the retinal image is shown in Figure 6.
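A minimal NumPy sketch of this step is given below, assuming a grayscale image array with values in 0–255. The threshold is the image's own mean, so it adapts to each fundus photograph, and the eliminated pixels are set to 255 as in the formula above.

```python
import numpy as np

def eliminate_dark_pixels(img: np.ndarray) -> np.ndarray:
    """Set pixels below the image's average brightness to 255."""
    threshold_level = img.mean()          # (1 / (M*N)) * sum of all pixel values
    out = img.copy()
    out[out < threshold_level] = 255      # eliminate below-average pixels
    return out
```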

2.4. Implementing the Modified Robust Rank Order Test-Based Edge-Detection Algorithm

A subimage of size r × r is chosen around each pixel. r should be odd and selected appropriately for the image and its size. In this study, r = 5, as shown in Figure 7: a value of r smaller than 5 makes the subimage sensitive to noise, while a larger r increases the time complexity and blurs edge variations. Excluding the target pixel, the window is divided into two regions, $X = (X_1, X_2, \ldots, X_m)$ and $Y = (Y_1, Y_2, \ldots, Y_n)$, containing $N = m + n$ pixels in total. $X_i$ and $Y_i$ are pixels of the two groups shown in Figure 7, where white pixels represent the X region and blue pixels represent the Y region of the mask. The model is built as:
\[
A_i =
\begin{cases}
X_i, & X_i \in X \\
Y_i, & Y_i \in Y
\end{cases}
\]
The null hypothesis and alternative hypothesis are set as:
\[
H_0: X \geq Y
\]
\[
H_1: X < Y
\]
The term "null hypothesis" was first defined and used by Ronald Fisher, a British-born statistician and geneticist [36]. A rank-order based statistical test is used as a good alternative to the Wilcoxon test. Figure 7 shows 8 distinct edge scenarios representing 2 different colors. The test to be performed will be applied for each scenario and will be considered as the edge for the evaluated pixel if any of the scenarios are appropriate for the criteria selected for edge detection. From this point on, it will be explained through scenario (a) shown in Figure 7, and the same procedures are performed for the other scenarios.
To evaluate $H_0$ against $H_1$, the following quantities are computed. For each $X_i \in X$, the number of pixels in $Y$ with values lower than $X_i$ is counted; this count is denoted by $U(Y, X_i)$. The average of the $U(Y, X_i)$ is then calculated as
\[
\bar{U}(Y, X) = \frac{1}{m} \sum_{i=1}^{m} U(Y, X_i)
\]
The same calculation is performed for the pixels $Y_i \in Y$: $U(X, Y_i)$ is the number of pixels in $X$ with values lower than $Y_i$, and the average of the $U(X, Y_i)$ is calculated as
\[
\bar{U}(X, Y) = \frac{1}{n} \sum_{i=1}^{n} U(X, Y_i)
\]
Next, the homogeneity indices are defined as:
\[
V_X = \sum_{i=1}^{m} \left[ U(Y, X_i) - \bar{U}(Y, X) \right]^2
\]
\[
V_Y = \sum_{i=1}^{n} \left[ U(X, Y_i) - \bar{U}(X, Y) \right]^2
\]
After obtaining the test parameters and the homogeneity indices, the test statistic is built as:
\[
U = \left| \frac{m \, \bar{U}(Y, X) - n \, \bar{U}(X, Y)}{2 \sqrt{V_X + V_Y + \bar{U}(Y, X) \, \bar{U}(X, Y)}} \right|
\]
\[
U_{\text{selected}} = \max_{1 \leq i \leq 8} U_i
\]
The U value is calculated eight times, once for each edge scenario. If any U value exceeds the threshold u, $H_0$ is rejected and the pixel is labeled an edge pixel.
The result of applying the modified robust rank order edge-detection algorithm is shown in Figure 8.
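A minimal NumPy sketch of the test for a single pixel follows. It assumes one family of window partitions standing in for the eight scenarios of Figure 7 (since the statistic is symmetric in X and Y, each partition covers two directional scenarios), and an illustrative cut-off value for u; neither the exact masks nor the threshold are taken from the paper.

```python
import numpy as np

def rank_order_statistic(x, y):
    """Robust rank-order statistic U for two pixel samples x and y."""
    m, n = len(x), len(y)
    # U(Y, X_i): number of Y values lower than each X_i, and vice versa.
    u_yx = np.array([np.sum(y < xi) for xi in x], dtype=float)
    u_xy = np.array([np.sum(x < yi) for yi in y], dtype=float)
    u_yx_bar, u_xy_bar = u_yx.mean(), u_xy.mean()
    # Homogeneity indices V_X and V_Y.
    v_x = np.sum((u_yx - u_yx_bar) ** 2)
    v_y = np.sum((u_xy - u_xy_bar) ** 2)
    num = abs(m * u_yx_bar - n * u_xy_bar)
    denom = 2.0 * np.sqrt(v_x + v_y + u_yx_bar * u_xy_bar)
    if denom == 0.0:                      # perfectly separated samples
        return np.inf if num > 0 else 0.0
    return num / denom

def is_edge_pixel(window, u_threshold=1.7):
    """Test a 5x5 neighborhood (center excluded) against several window
    partitions; label the pixel an edge if any statistic exceeds u."""
    flat = window.astype(float).ravel()
    keep = np.ones(25, dtype=bool)
    keep[12] = False                      # exclude the target (center) pixel
    rows, cols = np.indices((5, 5))
    partitions = [cols < 2, rows < 2, rows < cols, rows + cols < 4]
    for part in partitions:
        sel = part.ravel()
        x, y = flat[keep & sel], flat[keep & ~sel]
        if rank_order_statistic(x, y) > u_threshold:
            return True
    return False

# A window with a clear vertical step should be flagged as an edge.
w = np.hstack([np.zeros((5, 2)), np.full((5, 3), 200.0)])
print(is_edge_pixel(w))                   # True
```

Sliding this test over every pixel of the thresholded image yields a binary edge map like the one in Figure 8.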

2.5. Circular Hough Transform

The OD center can be detected in retinal images by the circular Hough transform. The use of the Hough transform to detect lines and curves, including circles, was introduced by Duda and Hart [37]. The Hough transform basically works by letting edge points vote for the possible geometric shapes on which they may lie [38]. It can be defined as the transformation of a point in Cartesian space into a parameter space defined by the shape of the object of interest. For circular forms, the following equation is considered:
\[
r^2 = (x - a)^2 + (y - b)^2
\]
Here, r represents the radius of the circle, and a and b represent the abscissa and ordinate of its center, respectively [39].
A result of the circular Hough transform applied to the retinal image after the edge-detection process is given in Figure 9.
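The voting logic of the transform can be written directly from the circle equation above. The following is a minimal accumulator sketch, assuming a binary edge map and candidate radii of 40–50 pixels derived from the 80–100 pixel OD diameter; it is illustrative and far slower than optimized library implementations.

```python
import numpy as np

def circular_hough(edges: np.ndarray, radii=range(40, 51), n_angles=180):
    """Each edge point (x, y) votes for every center (a, b) with
    r^2 = (x - a)^2 + (y - b)^2; the best-supported (a, b, r) wins."""
    h, w = edges.shape
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    ys, xs = np.nonzero(edges)
    for ri, r in enumerate(radii):
        a = np.round(xs[:, None] - r * np.cos(thetas)).astype(int)
        b = np.round(ys[:, None] - r * np.sin(thetas)).astype(int)
        ok = (a >= 0) & (a < w) & (b >= 0) & (b < h)
        np.add.at(acc[ri], (b[ok], a[ok]), 1)   # cast votes for centers
    ri, cy, cx = np.unravel_index(acc.argmax(), acc.shape)
    return cx, cy, list(radii)[ri]              # OD center (x, y) and radius
```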

3. Experimental Results and Discussion

In this work, a method developed for OD localization is proposed. The method was tested on the DRIVE, DIARETDB0, and DIARETDB1 public datasets. The detailed specifications of these databases are given in Table 1.
All experiments and observations for the proposed method were carried out on a computer with the following specifications:
  • Operating system: Windows 10, 64-bit
  • Processor: Intel(R) Core(TM) i5-2430 CPU @2.40 GHz
  • Memory: 8.00 GB RAM
  • Computing environment: MATLAB R2016a
The OD center localization performance is evaluated by comparing it with ground-truth OD centers. According to Hoover and Goldbaum’s study [40], a distance of up to 60 pixels between the automatically detected OD center and manually detected OD center is acceptable. The proposed method for OD localization was measured and evaluated in terms of the accuracy rate and mean absolute distance of the algorithm. The accuracy rate is calculated separately for each dataset. The mean absolute distance is defined as the difference in pixels between the automatically detected and manually detected OD centers. The accuracy rate and mean absolute distance are calculated by the following equations:
\[
\text{Accuracy} = \frac{C_0}{N_0}
\]
$N_0$ represents the total number of retinal images in a dataset, and $C_0$ indicates the number of images in which the OD center is correctly detected.
\[
\text{Distance} = \frac{1}{N_0} \sum_{i=1}^{N_0} \frac{\left| M_c^{\,i}(x) - A_c^{\,i}(x) \right| + \left| M_c^{\,i}(y) - A_c^{\,i}(y) \right|}{2}
\]
The distance is the difference between the automatically detected and the manually detected OD centers. $M_c(x, y)$ and $A_c(x, y)$ are the manually and automatically determined OD center points, respectively [41].
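A short sketch of both metrics is given below, assuming arrays of manually and automatically detected centers as (x, y) pairs, and assuming the 60-pixel criterion of Hoover and Goldbaum is a Euclidean distance.

```python
import numpy as np

def evaluate(manual, auto, tol=60.0):
    """Accuracy (C0/N0 under a 60-pixel tolerance) and mean absolute distance."""
    manual = np.asarray(manual, dtype=float)
    auto = np.asarray(auto, dtype=float)
    diff = manual - auto
    euclid = np.hypot(diff[:, 0], diff[:, 1])     # per-image center distance
    accuracy = float(np.mean(euclid <= tol))      # C0 / N0
    mad = float(np.mean(np.abs(diff).sum(axis=1) / 2.0))
    return accuracy, mad
```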
The accuracy of the proposed method was 100%, 96.92%, and 98.88% for the DRIVE, DIARETDB0, and DIARETDB1 datasets, respectively. The average absolute distance was approximately 10 pixels for the DRIVE and DIARETDB0 datasets and about 12 pixels for the DIARETDB1 dataset.
A comparative analysis of the results and average absolute distance values of the proposed method and the state-of-the-art models in the literature is given in Table 2.
Our method was compared with nine studies from the literature, and the results are shown in Table 2. For the DRIVE dataset, the OD was correctly detected in all images by every compared study except three: Ahmed and Amin [42] misidentified one image, Zhu et al. [13] failed on four images, and Sinha and Babu [47] also could not detect the OD in all DRIVE images. The proposed method correctly identified the OD regions of all images in the DRIVE dataset.
For the DIARETDB0 dataset, OD localization performance was 96.92% in the study of Bharkad [41], Mahfouz and Fahmy [46] achieved 98.5% accuracy on the same dataset, and Sinha and Babu [47] performed OD localization with a success rate of 96.9%. The proposed approach correctly identified the OD location in 126 of the 130 images.
For the DIARETDB1 dataset, Sinha and Babu [47] successfully performed the OD localization in all retinal images; the proposed method and the study by Bharkad [41] followed the study of Sinha and Babu [47] with a 98.88% success rate. In the proposed study, only one image was not detected correctly, and the remaining 88 images were successfully detected.
The average absolute distance values of the proposed study were 10.07, 10.54, and 12.36 pixels for the DRIVE, DIARETDB0, and DIARETDB1 datasets, respectively. By this measure, the proposed method is superior to the other studies discussed in the literature for the DIARETDB0 and DIARETDB1 datasets; for the DRIVE dataset, it ranks second after the method proposed by Bharkad [41].
The proposed method was observed to detect the OD in retinal images effectively and successfully despite the presence of lesions and disease signs. Examination of the images on which the method failed showed that their OD regions were not significantly brighter than the other regions; this failure mode follows from the method's underlying hypothesis that the OD region is the brightest region of the retinal image.

4. Conclusions

In this study, a methodology for the automatic localization of the OD based on statistical edge detection and the circular Hough transform was described. After the standard preprocessing steps, the average brightness value of each image was calculated, and the pixels below this dynamically determined threshold were eliminated from the image. Then, the statistical edge-detection framework was applied to the retinal images in order to avoid performance degradation due to noise. Finally, OD localization was performed by applying the circular Hough transform to the images that had undergone the edge-extraction process. Previous studies [25,26] have shown that the robust rank order-based statistical edge-detection method is robust to variations in noise and performs better than conventional edge-detection methods for all tested noise distributions. Therefore, compared with the approaches listed in Table 2, the proposed procedure for OD detection has the advantage of being applicable to images contaminated with noise.
The results show that the proposed method is able to locate the OD accurately in three public databases. According to experiments, the accuracy of the method was 100%, 96.92%, and 98.88% for the DRIVE, DIARETDB0, and DIARETDB1 databases, respectively.
A limitation of the proposed method is that, in some cases, such as when the OD region is not significantly brighter than the other regions, the circular Hough transform may fail to detect the OD.
In future studies, we plan to overcome this limitation by using a hybrid framework that incorporates heuristic methods.

Author Contributions

The manuscript was written by Y.K. under the supervision of H.M.Ü. The modeling, analysis, and software process was executed by Y.K. and E.D. Technical support was provided by O.A.E., and H.M.Ü. helped in the review.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kaur, J.; Sinha, D.H. Automated localisation of optic disc and macula from fundus images. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 2012, 2, 242–249. [Google Scholar]
  2. Niemeijer, M.; Abràmoff, M.D.; Van Ginneken, B. Fast detection of the optic disc and fovea in color fundus photographs. Med. Image Anal. 2009, 13, 859–870. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Zheng, S.; Chen, J.; Pan, L.; Guo, J.; Yu, L. A novel method of macula fovea and optic disk automatic detection for retinal images. J. Electron. Inf. Technol. 2014, 36, 2586–2592. [Google Scholar]
  4. Gagnon, L.; Lalonde, M.; Beaulieu, M.; Boucher, M.-C. Procedure to detect anatomical structures in optical fundus images. In Proceedings of the Medical Imaging 2001: Image Processing, San Diego, CA, USA, 17–22 February 2001; pp. 1218–1226. [Google Scholar]
  5. Sinthanayothin, C.; Boyce, J.F.; Cook, H.L.; Williamson, T.H. Automated localisation of the optic disc, fovea, and retinal blood vessels from digital colour fundus images. Br. J. Ophthalmol. 1999, 83, 902–910. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Cheng, J.; Liu, J.; Xu, Y.; Yin, F.; Wong, D.W.K.; Tan, N.-M.; Tao, D.; Cheng, C.-Y.; Aung, T.; Wong, T.Y. Superpixel classification based optic disc and optic cup segmentation for glaucoma screening. IEEE Trans. Med. Imaging 2013, 32, 1019–1032. [Google Scholar] [CrossRef] [PubMed]
  7. Basit, A.; Egerton, S. Bio-medical imaging: Localization of main structures in retinal fundus images. In IOP Conference Series: Materials Science and Engineering; IOP Publishing Ltd.: Bristol, UK, 2013; p. 012009. [Google Scholar]
  8. Walter, T.; Klein, J.-C.; Massin, P.; Erginay, A.A. Contribution of image processing to the diagnosis of diabetic retinopathy-detection of exudates in color fundus images of the human retina. IEEE Trans. Med. Imaging 2002, 21, 1236–1243. [Google Scholar] [CrossRef] [PubMed]
  9. Welfer, D.; Scharcanski, J.; Marinho, D.R. A morphologic two-stage approach for automated optic disk detection in color eye fundus images. Pattern Recognit. Lett. 2013, 34, 476–485. [Google Scholar] [CrossRef]
  10. Dai, B.; Wu, X.; Bu, W. Optic disc segmentation based on variational model with multiple energies. Pattern Recognit. 2017, 64, 226–235. [Google Scholar] [CrossRef]
  11. Lowell, J.; Hunter, A.; Steel, D.; Basu, A.; Ryder, R.; Fletcher, E.; Kennedy, L. Optic nerve head segmentation. IEEE Trans. Med. Imaging 2004, 23, 256–264. [Google Scholar] [CrossRef]
  12. Li, H.; Chutatape, O. Automated feature extraction in color retinal images by a model based approach. IEEE Trans. Biomed. Eng. 2004, 51, 246–254. [Google Scholar] [CrossRef]
  13. Zhu, X.; Rangayyan, R.M.; Ells, A.L. Detection of the optic nerve head in fundus images of the retina using the hough transform for circles. J. Digit. Imaging 2010, 23, 332–341. [Google Scholar] [CrossRef] [PubMed]
  14. Zou, B.; Chen, C.; Zhu, C.; Duan, X.; Chen, Z. Classified optic disc localization algorithm based on verification model. Comput. Graph. 2018, 70, 281–287. [Google Scholar] [CrossRef]
  15. Ravishankar, S.; Jain, A.; Mittal, A. Automated feature extraction for early detection of diabetic retinopathy in fundus images. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009), Miami, FL, USA, 20–25 June 2009; pp. 210–217. [Google Scholar]
  16. Pereira, C.; Gonçalves, L.; Ferreira, M. Optic disc detection in color fundus images using ant colony optimization. Med. Biol. Eng. Comput. 2013, 51, 295–303. [Google Scholar] [CrossRef] [PubMed]
  17. Kamble, R.; Kokare, M.; Deshmukh, G.; Hussin, F.A.; Mériaudeau, F. Localization of optic disc and fovea in retinal images using intensity based line scanning analysis. Comput. Biol. Med. 2017, 87, 382–396. [Google Scholar] [CrossRef] [PubMed]
  18. Sarathi, M.P.; Dutta, M.K.; Singh, A.; Travieso, C.M. Blood vessel inpainting based technique for efficient localization and segmentation of optic disc in digital fundus images. Biomed. Signal Process. Control 2016, 25, 108–117. [Google Scholar] [CrossRef]
  19. Harangi, B.; Hajdu, A. Detection of the optic disc in fundus images by combining probability models. Comput. Biol. Med. 2015, 65, 10–24. [Google Scholar] [CrossRef] [PubMed]
  20. Al-Bander, B.; Al-Nuaimy, W.; Williams, B.M.; Zheng, Y. Multiscale sequential convolutional neural networks for simultaneous detection of fovea and optic disc. Biomed. Signal Process. Control 2018, 40, 91–101. [Google Scholar] [CrossRef]
  21. Abed, S.E.; Al-Roomi, S.A.; Al-Shayeji, M. Effective optic disc detection method based on swarm intelligence techniques and novel pre-processing steps. Appl. Soft Comput. 2016, 49, 146–163. [Google Scholar] [CrossRef]
  22. Li, A.; Niu, Z.; Cheng, J.; Yin, F.; Wong, D.W.K.; Yan, S.; Liu, J. Learning supervised descent directions for optic disc segmentation. Neurocomputing 2018, 275, 350–357. [Google Scholar] [CrossRef]
  23. Hsiao, H.-K.; Liu, C.-C.; Yu, C.-Y.; Kuo, S.-W.; Yu, S.-S. A novel optic disc detection scheme on retinal images. Expert Syst. Appl. 2012, 39, 10600–10606. [Google Scholar] [CrossRef]
  24. Şevik, U. Retinal Image Quality Assessment and Detection of Diabetic Retinopathy Disease. Ph.D. Thesis, Karadeniz Technical University, Trabzon, Turkey, 2014. [Google Scholar]
  25. Duman, E.; Erdem, O.A. A statistical edge detection framework for noisy images. In Proceedings of the 2018 26th Signal Processing and Communications Applications Conference (SIU), Izmir, Turkey, 2–5 May 2018. [Google Scholar]
  26. Lim, D.H. Robust edge detection in noisy images. Comput. Stat. Data Anal. 2006, 50, 803–812. [Google Scholar] [CrossRef]
  27. Staal, J.; Abràmoff, M.D.; Niemeijer, M.; Viergever, M.A.; Van Ginneken, B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 2004, 23, 501–509. [Google Scholar] [CrossRef] [PubMed]
  28. Kauppi, T.; Kalesnykiene, V.; Kamarainen, J.-K.; Lensu, L.; Sorri, I.; Uusitalo, H.; Kälviäinen, H.; Pietilä, J. DIARETDB0: Evaluation Database and Methodology for Diabetic Retinopathy Algorithms; Machine Vision and Pattern Recognition Research Group, Lappeenranta University of Technology: Lappeenranta, Finland, 2006; Volume 73. [Google Scholar]
  29. Kälviäinen, R.; Uusitalo, H. DIARETDB1 diabetic retinopathy database and evaluation protocol. Med. Image Understand. Anal. 2007, 2007, 61. [Google Scholar]
  30. Miri, M.S.; Mahloojifar, A. Retinal image analysis using curvelet transform and multistructure elements morphology by reconstruction. IEEE Trans. Biomed. Eng. 2011, 58, 1183–1192. [Google Scholar] [CrossRef] [PubMed]
  31. Perfetti, R.; Ricci, E.; Casali, D.; Costantini, G. Cellular neural networks with virtual template expansion for retinal vessel segmentation. IEEE Trans. Circuits Syst. II 2007, 54, 141–145. [Google Scholar] [CrossRef]
  32. Patwari, M.B.; Manza, R.R.; Rajput, Y.M.; Saswade, M.; Deshpande, N. Automatic detection of retinal venous beading and tortuosity by using image processing techniques. Int. J. Comput. Appl. 2014, 0975–8887, 27–32. [Google Scholar]
  33. Burçin, K.; Nabiyev, V.V. Dijital Mamografi Görüntülerinin Kontrast Sınırlı Adaptif Histogram Eşitleme ile Iyileştirilmesi. In Proceedings of the VII. Ulusal Tıp Bilişimi Kongresi, Gazimağusa, KKTC, 14–17 October 2010. [Google Scholar]
  34. Göreke, V.; Uzunhisarcıklı, E.; Güven, A. Gri Seviyeli Eşoluşum Matrisleri Kullanılarak Sayısal Mamogram Görüntüsünden Doku Özniteliklerinin Çıkarılması ve Yapay Sinir Ağı ile Kitle Tespiti. In Proceedings of the Tıp Teknolojileri Ulusal Kongresi-TıpTekno’14, Nevşehir, Turkey, 25–27 September 2014. [Google Scholar]
  35. Reza, A.M. Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement. J. VLSI Signal Process. Syst. Signal Image Video Technol. 2004, 38, 35–44. [Google Scholar] [CrossRef]
  36. Fisher, R.A. The Design of Experiments; Oliver and Boyd: Edinburgh, UK, 1937. [Google Scholar]
  37. Duda, R.O.; Hart, P.E. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM 1972, 15, 11–15. [Google Scholar] [CrossRef] [Green Version]
  38. Duman, E.; Kökver, Y.; Ünver, H.M.; Erdem, O.A. Automatic landmark detection through circular hough transform in cephalometric X-rays. In Proceedings of the 2017 10th International Conference on Electrical and Electronics Engineering (ELECO), Bursa, Turkey, 30 November–2 December 2017; pp. 583–587. [Google Scholar]
  39. Alioua, N.; Amine, A.; Rziza, M.; Aboutajdine, D. Eye state analysis using iris detection based on Circular Hough Transform. In Proceedings of the 2011 International Conference on Multimedia Computing and Systems (ICMCS), Ouarzazate, Morocco, 7–9 April 2011; pp. 1–5. [Google Scholar]
  40. Hoover, A.; Goldbaum, M. Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels. IEEE Trans. Med. Imaging 2003, 22, 951–958. [Google Scholar] [CrossRef] [Green Version]
  41. Bharkad, S. Automatic segmentation of optic disk in retinal images. Biomed. Signal Process. Control 2017, 31, 483–498. [Google Scholar] [CrossRef]
  42. Ahmed, M.I.; Amin, M.A. High speed detection of optical disc in retinal fundus image. Signal Image Video Process. 2015, 9, 77–85. [Google Scholar] [CrossRef]
  43. Youssif, A.A.; Ghalwash, A.Z.; Ghoneim, A.A. Optic disc detection from normalized digital fundus images by means of a vessels’ direction matched filter. IEEE Trans. Med. Imaging 2008, 27, 11–18. [Google Scholar] [CrossRef] [PubMed]
  44. Rangayyan, R.M.; Zhu, X.; Ayres, F.J.; Ells, A.L. Detection of the optic nerve head in fundus images of the retina with Gabor filters and phase portrait analysis. J. Digit. Imaging 2010, 23, 438–453. [Google Scholar] [CrossRef]
  45. Dehghani, A.; Moghaddam, H.A.; Moin, M.-S. Optic disc localization in retinal images using histogram matching. EURASIP J. Image Video Process. 2012, 2012, 19. [Google Scholar] [CrossRef] [Green Version]
  46. Mahfouz, A.E.; Fahmy, A.S. Fast localization of the optic disc using projection of image features. IEEE Trans. Image Process. 2010, 19, 3285–3289. [Google Scholar] [CrossRef] [PubMed]
  47. Sinha, N.; Babu, R.V. Optic disk localization using L 1 minimization. In Proceedings of the 2012 19th IEEE International Conference on Image Processing (ICIP), Orlando, FL, USA, 30 September–3 October 2012; pp. 2829–2832. [Google Scholar]
Figure 1. Anatomical structures on the retina.
Figure 2. Main steps of the proposed method. CLAHE: contrast-limited adaptive histogram equalization.
Figure 3. Retinal fundus images and their green channel images.
Figure 4. (a) Original healthy retinal image from the DRIVE dataset; (b) Enhancement of image (a) using CLAHE; (c) Original diseased retinal image from the DIARETDB0 dataset; (d) Enhancement of image (c) using CLAHE.
Figure 5. Block diagram for the calculation of the average brightness level.
Figure 6. Images consisting of pixels remaining above the specified threshold. (a) is a healthy retinal image; (b) is a diseased retinal image.
Figure 7. Eight different edge scenarios, where white pixels represent the X region and blue pixels represent the Y region in the mask.
Figure 8. Implementing the modified robust rank order edge-detection algorithm. (a) is a healthy retinal image; (b) is a diseased retinal image.
Figure 9. Circular Hough transform applied after edge detection. (a) is a healthy retinal image; (b) is a diseased retinal image.
Table 1. The specifications of the public datasets used in this work.

Database     Normal Images   Diseased Images   Total
DRIVE        33              7                 40
DIARETDB0    20              110               130
DIARETDB1    5               84                89
Table 2. A comparative analysis of the results of the proposed method and other methods in the literature.

Method                    Dataset      Number of Images   Correct Classification   Accuracy (%)   Distance
Pereira et al. [16]       DRIVE        40                 40                       100            -
                          DIARETDB1    89                 83                       93.25          -
Ahmed and Amin [42]       DRIVE        40                 39                       97.5           -
                          DIARETDB1    89                 86                       96.5           -
Youssif et al. [43]       DRIVE        40                 40                       100            17
Rangayyan et al. [44]     DRIVE        40                 40                       100            23.2
Dehghani et al. [45]      DRIVE        40                 40                       100            15.9
Zhu et al. [13]           DRIVE        40                 36                       90             18
Bharkad [41]              DRIVE        40                 40                       100            9.12
                          DIARETDB0    130                126                      96.92          11.83
                          DIARETDB1    89                 88                       98.88          13.00
Mahfouz and Fahmy [46]    DRIVE        40                 40                       100            -
                          DIARETDB0    130                128                      98.5           -
                          DIARETDB1    89                 87                       97.8           -
Sinha and Babu [47]       DRIVE        40                 38                       95             -
                          DIARETDB0    130                126                      96.9           -
                          DIARETDB1    89                 89                       100            -
Proposed Method           DRIVE        40                 40                       100            10.07
                          DIARETDB0    130                126                      96.92          10.54
                          DIARETDB1    89                 88                       98.88          12.36
