
Accurate Optic Disc and Cup Segmentation from Retinal Images Using a Multi-Feature Based Approach for Glaucoma Assessment

Yuan Gao, Xiaosheng Yu, Chengdong Wu, Wei Zhou, Xiaonan Wang and Yaoming Zhuang
1 College of Information Science and Engineering, Northeastern University, Shenyang 110819, China
2 Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110819, China
3 School of Computing, Shenyang Aerospace University, Shenyang 110136, China
4 College of Basic Medical Science, China Medical University, Shenyang 110122, China
* Authors to whom correspondence should be addressed.
Symmetry 2019, 11(10), 1267; https://doi.org/10.3390/sym11101267
Submission received: 12 September 2019 / Revised: 28 September 2019 / Accepted: 5 October 2019 / Published: 10 October 2019

Abstract
Accurate optic disc (OD) and optic cup (OC) segmentation plays a critical role in automatic glaucoma diagnosis. In this paper, we present an automatic segmentation technique for the OD and the OC for glaucoma assessment. First, a robust adaptive approach for initializing the level set is designed to increase the performance of contour evolution. Then, to handle the complex OD appearance affected by intensity inhomogeneity, pathological changes, and vessel occlusion, a novel model that integrates ample information describing the OD with the effective local intensity clustering (LIC) model is presented. For OC segmentation, to overcome the challenge posed by the OC’s complex anatomical location, a novel preprocessing method based on structure prior information between the OD and the OC is designed to guide contour evolution within an effective region. A novel implicit region based on a modified data term is then presented, which uses a richer form of local image clustering information at each point of interest gathered over a multiple-channel feature image space, to enhance robustness against the variations found in and around the OC region. The presented models symmetrically integrate the information at each point of a single-channel image across a multiple-channel feature image space, and thus correlate with the concept of symmetry. The proposed models are tested on the publicly available DRISHTI-GS database, and the experimental results demonstrate that they outperform state-of-the-art methods.

1. Introduction

Glaucoma is a degenerative optic neuropathy that progressively damages the optic nerve head, causing deterioration in vision and quality of life [1]. Glaucoma accounts for approximately 12.3% of blindness worldwide, making it the second leading cause of blindness in modern times, and it is expected to affect approximately 80 million people by the year 2020 [2]. Because glaucoma, caused by progressive degeneration of optic nerve fibers, is usually asymptomatic in its early stages, patients often do not perceive the illness until obvious visual loss occurs at a later stage. Furthermore, lost vision cannot be regained. Therefore, early detection and timely therapy for glaucoma are considered the most effective ways to retard the progression of visual damage.
There are three principal methods for glaucoma detection: air-puff intraocular pressure (IOP) measurement, the visual field test, and optic nerve head (ONH) assessment. However, IOP measurement alone is not accurate enough to diagnose glaucoma, and the visual field test requires specialized equipment that exists only in sophisticated medical centers; hence, both are inappropriate for early glaucoma detection. ONH assessment using retinal fundus images, performed by professionals, is the most reliable method. However, manual assessment is laborious, expensive, and highly subjective compared with automatic optic disc assessment. Therefore, automatic ONH assessment is an economical mode for estimating early-stage glaucoma and has been widely used in recent years [3]. In retinal fundus images, the ONH consists of a bright yellowish area named the optic disc (OD), a central zone of maximum color contrast called the optic cup (OC), and the peripheral area between the OD and OC boundaries called the neuroretinal rim, as shown in Figure 1. Considering the characteristics of the OD, two strategies can be used for automatic ONH assessment. One is to distinguish healthy from glaucomatous fundus images using image features [4]; its main limitation is the difficulty of selecting suitable image features and classifiers. The other is to evaluate the ONH with clinical indicators, for example, the vertical cup-to-disc ratio (CDR) [5], the ISNT rule [6], and notching. Among these clinical indicators, the CDR is well accepted and generally applied in real applications. Although each of these indicators has a unique calculation model, they share the need for accurate OD and OC boundary information to detect glaucoma effectively.
A series of approaches has been developed for segmenting the OD and the OC in retinal images, with more methods addressing the OD [7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31] and fewer the OC [30,31,32,33,34], owing to the OC’s complex anatomical structure relative to the OD, the gradual variation in color intensity between the neuroretinal rim and the OC, and its interlacement with blood vessels (Figure 1).
Segmenting the OD in real applications is challenging mainly because of the complex OD appearance. Many researchers have therefore attempted to improve OD segmentation performance; this research can be classified into template-matching-based [7,8,9,10,11], morphology-based [12,13,14], active shape model (ASM)-based [15,16,17], classification-based [18,19,20,21,22,23], and active contour model (ACM)-based [24,25,26,27,28,29,30,31] approaches. Pinz et al. [7], H. Li et al. [8], A. Bhuiyan et al. [9], A. Aquino et al. [10], and S. Roychowdhury et al. [11] have all proposed template-based matching approaches, which consider prior information about the OD shape and use circle templates or the Hough transform technique to segment the OD. However, they generally fail to extract the complete OD boundary because the real OD shape is not a regular circle or ellipse. Morphology-based methods have been researched by A.W. Reza et al. [12], D. Welfer et al. [13], and R. Srivastava et al. [14] for extracting the OD boundary. They model the brightness of the OD for segmentation, but intensity inhomogeneity and obscuration by bridging retinal vessels often degrade their performance. H. Li et al. [15] and F. Yin et al. [16,17] represented the OD shape using statistical approaches, which accurately describe the normal OD shape but may fail on objects with variations and irregular boundaries. M.D. Abràmoff et al. [18], M.K. Dutta et al. [19], N.M. Tan et al. [20], E. Saeed et al. [21], and W. Zhou et al. [22,23] extracted the OD boundary using pixel-level or superpixel-level characteristics derived from the retinal image information. However, these methods are usually affected by sample size and are time consuming when a large number of fundus images is employed.
Compared with the above-mentioned approaches, ACM-based methods fully exploit the profound mathematical properties of complex topology with effective level-set-based numerical schemes, prior knowledge (e.g., anatomical structure, intensity distribution), and properties of the OD (e.g., edges, features, statistics) to achieve higher accuracy than the other four categories for OD segmentation. Hence, there is an emerging trend to make use of ACMs in OD segmentation tasks [24]. Mendels et al. [25] processed the image by local minima detection and morphological filtering; the gradient vector flow (GVF) is then employed to drive a manually initialized contour toward the whole OD boundary. Although the method evolves the contour near the OD, it can yield an imprecise OD boundary owing to the high variation at vessel locations. Tang et al. [26] introduced a circularity constraint into the traditional Chan–Vese (C-V) model, restricting the segmented region to a specific shape starting from a random initial curve. The method can handle some influence of bright lesions, but it fails to describe contours of complex topology and to extract an accurate OD boundary, because the OD is not a regular circle or ellipse according to clinical research. Wong et al. [27] modified the conventional level-set approach to segment the OD using a fixed-size initial contour, followed by smoothing the contour with an ellipse fit. It performs well in capturing a large range of image variations; however, it cannot handle OD regions of irregular shape caused by pathological variations and changes in view. Yu et al. [28] evolved a deformable contour using the local edge vector, and the true OD boundary is obtained by incorporating area information based on a fixed-size initial contour. It can quickly extract an OD boundary; nevertheless, an onerous preprocessing step is necessary before the deformable model can be applied. Esmaeili et al. [29] presented a new deformable model to segment the OD with an empirically selected initial contour. Compared with the other methods, it displays better convergence and more outstanding computational efficiency, but it is greatly influenced by intensity inhomogeneity produced by illumination variations or imperfections of the imaging devices. Joshi et al. [30] used a region-based active contour model to extract the OD contour. Although the model handles local gradient variations in the retinal image well, it cannot distinguish bright lesions that resemble the OD. Thakur et al. [31] proposed a level set adaptively regularized kernel-based intuitionistic fuzzy c-means (LARKIFCM) method, which combines the level set with a clustering approach. The method segments the OD more effectively by taking the positive features of the combined approaches; however, it ignores the imperfection of single-feature information and cannot describe the complex OD appearance well. These methods show potential for handling a range of image variations. Nevertheless, they struggle to simultaneously deal with the complex OD appearance affected by intensity inhomogeneities, blood vessel occlusions, ill-defined boundaries, irregular shape, and anomalies such as peripapillary atrophy (PPA), as shown in Figure 2.
To address this challenge, this paper presents a new approach for OD segmentation that integrates ample information describing the OD into the effective LIC model.
Compared with OD segmentation, fewer OC segmentation methods [30,31,32,33,34] have been presented for color retinal images, because OC segmentation is further challenged by the cup’s complex anatomical location relative to the OD, the gradual variation in color intensity between the neuroretinal rim and the OC, and the density of blood vessels covering parts of the OC, as shown in Figure 1. In [32], a novel approach for segmenting the OC was designed by considering vessel bends at the OC boundary. First, the initial OC boundary is extracted by a level set to generate image patches; then, edge detection and the wavelet transform are used to obtain features that identify likely vessel edges according to a statistical rule; finally, the cup boundary is obtained from vessel bends, i.e., vessel pixel points exhibiting a direction change. However, this approach is sensitive to the blood vessel extraction, especially to inter-image variations, is highly reliant on the initial OC contour, and cannot achieve an accurate segmentation result when vessel kinks are absent in the relevant regions. The approach presented in [33] thresholds the green color plane to derive cup pixels, and the cup boundary is then obtained by ellipse fitting. Joshi et al. [30] obtained a set of candidate pixels near the OC edge via thresholding and then fitted an ellipse to these pixels to determine the OC borderline. Mittapalli et al. [34] proposed a novel approach in which an efficient clustering-based thresholding algorithm acquires useful information about the OC, which is subsequently refined using the symmetrical properties of the OC; the final OC borderline is obtained by ellipse fitting. Thakur et al. [31] presented a hybrid method for segmenting the OC, in which the result of a novel clustering method guides the level set evolution that extracts the OC boundary. These methods can extract boundary information within the OD; however, they are not suitable for large inter-image intensity variations caused by physiological differences among patients. In this paper, to handle the above issues and provide robustness against the variations found in and around the OC area, we present a new preprocessing approach and a novel implicit region based on a modified data term that integrates the local image clustering information around each point of interest, formed by features such as intensity and color from multiple image channels.
The rest of the paper is organized as follows. The presented adaptive contour initialization is introduced in Section 2.1. The extended local intensity clustering (LICE) model, which integrates ample information describing the OD, is given in Section 2.2. In Section 2.3, a novel implicit region based on the data term modified by a novel preprocessing approach is built. Section 3 presents the experimental results and their comprehensive analysis. Section 4 describes the conclusions and future work.

2. Method

2.1. Rough Boundary Extraction

Considering that active contour models (ACMs) are sensitive to the initialization of the contour, an imprecise initial contour will reduce ACM performance [35,36,37]. To handle these disadvantages and effectively segment the OD, a robust adaptive approach is proposed to initialize the level set, as detailed below.
Our previous work [38], a saliency detection approach combined with feature extraction, is applied to locate the OD in the retinal image (Figure 3a). We crop a region of interest (ROI) patch (Figure 3b) around the OD location. Considering that the many blood vessels covering the retinal image affect segmentation performance, we process each channel image using morphological opening followed by closing; Figure 3c shows the vessel removal result. According to extensive experimental verification, the basic structuring element is set as a disk with a radius of 15 pixels to eliminate the vessels. Compared with the surrounding retinal areas, the OD area usually presents a brighter pallor; hence, the OD can be considered a salient object occupying one of the densest sites of the intensity distribution in retinal fundus images. Based on this observation, mean-shift [39], an efficient approach that classifies automatically, is used to detect the OD boundary: each pixel is associated with a feature point, and the produced segmentation can be seen as the modes of the density of feature points estimated with a Gaussian kernel [39]. The major OD region classified from the vessel-removed image (Figure 3c) is displayed in Figure 3d. To detect the boundary of the classification result (Figure 3d), we convert the color classification image to grayscale (Figure 3e) and apply the Canny detector to obtain the object edges; Figure 3f shows the corresponding boundary map, which is marked on the ROI image in Figure 3g. Finally, since the OD is approximately circular or oval [1], the OD edge (Figure 3h) is effectively fitted by the circular Hough transform (CHT) based on the votes for circle centers accumulated from the edge information in Figure 3f. The resulting rough OD boundary is displayed in Figure 3i.
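A minimal sketch of this rough OD extraction pipeline is given below. The sequence (vessel removal with a disk of radius 15, mean-shift classification, Canny edge detection, circular Hough fitting) follows the text; the library choices (scikit-image, and OpenCV's pyrMeanShiftFiltering as a stand-in for the mean-shift of [39]) and all other parameter values are our assumptions, not the authors' implementation.

```python
import numpy as np
import cv2
from skimage import color, feature, morphology
from skimage.transform import hough_circle, hough_circle_peaks

def rough_od_boundary(roi):
    """roi: RGB ROI patch as a float image in [0, 1]."""
    # Vessel removal: morphological opening followed by closing on each
    # channel with a disk structuring element of radius 15 (per the paper).
    selem = morphology.disk(15)
    vessel_free = np.stack(
        [morphology.closing(morphology.opening(roi[..., c], selem), selem)
         for c in range(3)], axis=-1)

    # Mean-shift classification of the vessel-free image [39]; the spatial
    # and range bandwidths sp/sr are illustrative values.
    ms = cv2.pyrMeanShiftFiltering(
        (vessel_free * 255).astype(np.uint8), sp=20, sr=30)

    # Grayscale conversion followed by Canny edge detection.
    edges = feature.canny(color.rgb2gray(ms), sigma=2.0)

    # Circular Hough transform: vote for circle centers over a plausible
    # range of OD radii (the range is chosen for illustration only).
    radii = np.arange(40, 120, 4)
    accum = hough_circle(edges, radii)
    _, cx, cy, r = hough_circle_peaks(accum, radii, total_num_peaks=1)
    return vessel_free, (cx[0], cy[0], r[0])   # rough OD circle
```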
Compared to the OD, extraction of the OC boundary is more challenging due to the OC’s complex anatomical location relative to the OD, the gradual variation in color intensity between the neuroretinal rim and the OC, and the density of blood vessels covering parts of the OC. In this paper, we again use unsupervised learning to obtain the OC edge; the main difference from the OD contour initialization is the use of a thresholding method. Considering that the OC is the highest-intensity region inside the OD, a highest-intensity-based thresholding method is appropriate for obtaining the initial OC contour. First, the OD region extracted by the above method (Figure 3i) is retained in the vessel-removed red channel image, and the values of the other regions are set to 0 to exclude unnecessary influence, as shown in Figure 3j. Then, thresholding [40] is applied to this image (Figure 3j), with the threshold set as 10% of the highest intensity value, and the obtained binary image is shown in Figure 3k. The corresponding edge map (Figure 3l), calculated by the Canny detector, is marked on the ROI image (Figure 3m). After that, because the OC shape is expected to be approximately circular or oval within the OD, a Hough transform is applied to the estimated OC boundary (Figure 3l), which helps to achieve a more complete and smoother OC boundary, as shown in Figure 3n. Finally, the initial contour of the OC is extracted (Figure 3o).
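A sketch of the OC contour initialization, continuing from the OD result above, follows. Interpreting the "10% of the highest intensity value" threshold as keeping pixels within 10% of the maximum (i.e., ≥ 0.9·max) is our assumption, as are the radius range and Canny parameter.

```python
def rough_oc_boundary(red_vessel_free, od_circle):
    """red_vessel_free: vessel-removed red channel; od_circle: (cx, cy, r)."""
    cx, cy, r = od_circle
    yy, xx = np.mgrid[:red_vessel_free.shape[0], :red_vessel_free.shape[1]]
    od_mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2

    # Retain only the OD region; zero out everything else (Figure 3j).
    masked = np.where(od_mask, red_vessel_free, 0.0)

    # Highest-intensity thresholding (Figure 3k), then Canny edges (3l).
    binary = masked >= 0.9 * masked.max()
    edges = feature.canny(binary.astype(float), sigma=1.0)

    # Circular Hough fit for a more complete, smoother OC boundary (3n).
    radii = np.arange(15, 60, 2)
    accum = hough_circle(edges, radii)
    _, ocx, ocy, ocr = hough_circle_peaks(accum, radii, total_num_peaks=1)
    return ocx[0], ocy[0], ocr[0]
```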

2.2. Accurate Boundary Curve Extraction of the OD

The severity of glaucoma changes along with changes of the OD and the OC. To effectively study the course of the disease and the results of treatment, accurate boundary information of the OD and the OC is required. Nevertheless, retinal images used for segmenting the OD and the OC are generally affected by intensity inhomogeneity caused by illumination variations or imperfections of the imaging devices. To handle this issue, we introduce an efficient model, local intensity clustering (LIC) [35]. This model defines a local clustering criterion for the intensity in a neighborhood of each point, and it describes a partition of the image domain and a bias field that accounts for the intensity inhomogeneity. The following objective function is minimized to evolve the contour in the cropped image:
E_{LIC}(\phi, c_j, b) = \nu L(\phi) + \mu R(\phi) + \varepsilon(\phi, c_j, b) \quad (1)

L(\phi) = \int_{\Omega} \left| \nabla H(\phi) \right| dy

R(\phi) = \int_{\Omega} \frac{1}{2} \left( \left| \nabla \phi \right| - 1 \right)^{2} dy

\varepsilon(\phi, c_j, b) = \int_{\Omega} \sum_{j=1}^{n} e_j(y) \, M_j(\phi(y)) \, dy

where

e_j(y) = \int_{\Omega} k(x - y) \left| I(y) - b(x) \, c_j \right|^{2} dx

M_1(\phi) = H(\phi), \quad M_2(\phi) = 1 - H(\phi)
where ∇ is the gradient operator; H is the Heaviside function; L(ϕ) is the length term computing the length of the zero level contour; ϕ is a level set function; R(ϕ) is the regularization term; ℝ denotes the set of real numbers, and the pixel coordinates satisfy x, y ∈ Ω ⊂ ℝ²; {Ω_j}, j = 1, …, N, form a partition of the image domain Ω; Ω_j is the domain of the j-th object; N is the total number of objects in the image; ε(ϕ, c_j, b) is the data term; b(x) is a bias field; I is the pixel value; c_j is a constant approximating the intensity in region Ω_j; k is a truncated Gaussian function [35]; and M_j(ϕ(y)) is the membership function of the region Ω_j.
Apart from intensity inhomogeneity, the complex OD appearance is also affected by blood vessel occlusions, ill-defined boundaries, and anomalies (e.g., PPA), making OD segmentation more challenging. To overcome the many-sided distractions present in the complex OD appearance, the data term of the LIC model is extended by integrating ample local image clustering information consisting of features such as color and intensity, which complement each other in adequately describing the OD. In this paper, the original red color plane, the vessel-free red color plane, and the value channel of the vessel-free HSV color space constitute a three-element feature vector (d = 3) integrated by the extended data term to enrich the information at an image point x. I_i denotes the i-th feature of the image on Ω, with i = 1, …, d. The extended data term of the LIC model for the vector case is defined as follows:
\varepsilon_{M\_D}(\phi, c_j^i, b^i) = \int_{\Omega} \sum_{j=1}^{n} e_j(y) \, M_j(\phi(y)) \, dy \quad (2)

where

e_j(y) = \frac{1}{d} \sum_{i=1}^{d} \int_{\Omega} k(x - y) \left| I_i(y) - b_i(x) \, c_{ji} \right|^{2} dx

M_1(\phi) = H(\phi), \quad M_2(\phi) = 1 - H(\phi)
Considering that the OD is brighter than the other areas of the image, the number of objects n is set to 2; c_1^i = (c_11, c_12, …, c_1d), c_2^i = (c_21, c_22, …, c_2d), and b^i = (b_1, b_2, …, b_d) are three constant vectors, and d is set to 3. c_1^i and c_2^i express the true signal of the OD and of the background, respectively, for the i-th feature space. b^i is the bias field function for the i-th feature space. b_i(x) c_{ji} can be regarded as the spatially varying mean of each subregion Ω_j ∩ O_x for the i-th feature space. We minimize the energy function (2) to obtain the optimal values of c_1^i, c_2^i, and b^i. The extended LIC model (LICE), integrating (1) and (2) into a unified framework, is expressed as:
E_{LICE}(\phi, c_j^i, b^i) = \nu L(\phi) + \mu R(\phi) + \varepsilon_{M\_D}(\phi, c_j^i, b^i) \quad (3)
where ν and μ are the weight parameters of the length term and the regularization term, respectively. The proposed objective function comprises three terms, each with a unique role, which together enable the model to effectively segment the complex OD in retinal fundus images. Because the evolving contour frequently develops drastic protrusions and depressions, the first term L(ϕ) penalizes the arc length of the zero level of ϕ. To avoid the re-initialization procedure during contour evolution, the second term R(ϕ) regularizes the zero level of ϕ. To cope with the many-sided distractions caused by the complex OD appearance, the third term ε_{M_D} integrates multiple feature information of the OD to precisely locate the boundary.
We solve the presented LICE model of Equation (3) by the standard gradient descent method [35], minimizing with respect to each variable alternately, because it is difficult to minimize E_{LICE}(ϕ, c_j^i, b^i) with respect to ϕ, c_1^i, c_2^i, and b^i simultaneously. For a fixed ϕ, the functional (3) is minimized with respect to c_1^i, c_2^i, and b^i (i = 1, 2, 3) as follows:
c_1^i = \frac{\int_{\Omega} (k \ast b_i)(y) \, I_i(y) \, H(\phi(y)) \, dy}{\int_{\Omega} (k \ast b_i^2)(y) \, H(\phi(y)) \, dy} \quad (4)

c_2^i = \frac{\int_{\Omega} (k \ast b_i)(y) \, I_i(y) \left[ 1 - H(\phi(y)) \right] dy}{\int_{\Omega} (k \ast b_i^2)(y) \left[ 1 - H(\phi(y)) \right] dy} \quad (5)

b_i(x) = \frac{\left[ k \ast \left( I_i \, H(\phi) \right) \right](x) \, c_1^i + \left[ k \ast \left( I_i \, (1 - H(\phi)) \right) \right](x) \, c_2^i}{\left[ k \ast H(\phi) \right](x) \, (c_1^i)^2 + \left[ k \ast (1 - H(\phi)) \right](x) \, (c_2^i)^2} \quad (6)
Keeping c_1^i, c_2^i, and b_i fixed, the energy functional (3) is minimized with respect to ϕ by solving the corresponding gradient flow:
\frac{\partial \phi}{\partial t} = -\delta(\phi)\left(e_1 - e_2\right) + \nu \, \delta(\phi) \, \mathrm{div}\!\left( \frac{\nabla \phi}{\left| \nabla \phi \right|} \right) + \mu \left( \nabla^2 \phi - \mathrm{div}\!\left( \frac{\nabla \phi}{\left| \nabla \phi \right|} \right) \right) \quad (7)
where δ is the univariate Dirac function and t is the time variable of the evolution.
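A minimal numerical sketch of this update, covering Equations (2) and (7), is shown below. It assumes a standard smoothed Heaviside/Dirac pair and uses a Gaussian filter as a stand-in for the truncated Gaussian kernel k applied by convolution; all parameter values and function names are illustrative, not the authors'. The clustering force expands e_j(y) of Equation (2) into three convolutions per channel, since |I_i(y) − b_i(x)c_ji|² integrates to I_i²(k∗1) − 2 I_i c_ji (k∗b_i) + c_ji² (k∗b_i²).

```python
# Continues from the earlier sketch (numpy imported as np).
from scipy.ndimage import gaussian_filter, laplace

EPS = 1.0      # smoothing width of the Heaviside/Dirac pair
SIGMA = 4.0    # scale of the Gaussian standing in for the truncated kernel k

def heaviside(phi):
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / EPS))

def dirac(phi):
    return (EPS / np.pi) / (EPS ** 2 + phi ** 2)

def conv_k(img):
    return gaussian_filter(img, SIGMA)           # stands in for k * img

def curvature(phi):
    gy, gx = np.gradient(phi)                    # axis 0 = y, axis 1 = x
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-10
    div_y, _ = np.gradient(gy / norm)
    _, div_x = np.gradient(gx / norm)
    return div_x + div_y                         # div(grad(phi)/|grad(phi)|)

def clustering_force(feats, b, c1, c2):
    # e_j(y) of Eq. (2), averaged over the d feature channels.
    d = len(feats)
    e1 = sum(I ** 2 * conv_k(np.ones_like(I)) - 2 * I * c1[i] * conv_k(b[i])
             + c1[i] ** 2 * conv_k(b[i] ** 2) for i, I in enumerate(feats)) / d
    e2 = sum(I ** 2 * conv_k(np.ones_like(I)) - 2 * I * c2[i] * conv_k(b[i])
             + c2[i] ** 2 * conv_k(b[i] ** 2) for i, I in enumerate(feats)) / d
    return e1, e2

def lice_step(phi, e1, e2, nu=1.0, mu=1.0, dt=0.1):
    d = dirac(phi)
    dphi = (-d * (e1 - e2)                           # data force, Eq. (7)
            + nu * d * curvature(phi)                # length term
            + mu * (laplace(phi) - curvature(phi)))  # regularization term
    return phi + dt * dphi
```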

2.3. Accurate Boundary Curve Extraction of the OC

Compared with OD segmentation, extraction of the OC boundary is more challenging because the OC’s complex anatomical location relative to the OD prevents standard ACM-based methods from segmenting it directly. Since the complete bright region (the optic nerve head), consisting of both the OD and the OC, contrasts with the background, neither a two-phase ACM (representing two objects) nor a four-phase ACM (representing four objects) can directly describe the OC region. To handle this issue, a novel preprocessing method that introduces a structure prior information constraint into the contour evolution is presented to integrally describe the OC. In this paper, the evolution of the OC contour is restricted to the OD region by our preprocessing, where the OD region is the region in which the level set function ϕ is greater than zero (ϕ > 0). The data term of the LIC model modified by our preprocessing method is described as:
\varepsilon_{P}(\varphi, c_j, b) = \int_{\Omega} \sum_{j=3}^{p} e_j(y) \, M_j(\varphi(y)) \, dy \quad (8)

where

e_j(y) = \int_{\Omega} k(x - y) \left| I(y) - b(x) \, c_j \right|^{2} dx

M_3(\varphi) = H(\phi) \, H(\varphi), \quad M_4(\varphi) = H(\phi) \left( 1 - H(\varphi) \right)
where p is set to 4, since the OD contains two distinct regions in retinal fundus images, namely the OC and the neuroretinal rim; ϕ denotes the level set function of the OD; φ denotes the level set function of the OC; b(x) denotes the bias field function; c_j is a constant approximating the j-th object within the OD; and M_3 and M_4 express the intrinsic anatomical structure of the OC. These two membership functions describe, respectively, the inner and outer regions of the OC within the OD, which was segmented by the approach of Section 2.2.
The smooth region transition between the OC and the neuroretinal rim, together with the covering blood vessels, makes OC segmentation altogether more difficult. Therefore, the novel implicit region based on the modified data term presented above, using a richer form of local image clustering information at each point of interest gathered over a multiple-channel feature image space, is proposed to enhance robustness against the variations found in and around the OC region. The multiple-channel feature images (m = 6) are taken, respectively, from the original red color plane, the vessel-free green color plane, and the value channel of the vessel-free HSV color space to construct the multi-element vector. The novel implicit region based on the modified data term can be described as follows:
\varepsilon_{P\_M\_C}(\varphi, c_j^i, b^i) = \int_{\Omega} \sum_{j=3}^{p} e_j(y) \, M_j(\varphi(y)) \, dy \quad (9)

where

e_j(y) = \frac{1}{m} \sum_{i=4}^{m} \int_{\Omega} k(x - y) \left| I_i(y) - b_i(x) \, c_{ji} \right|^{2} dx

M_3(\varphi) = H(\phi) \, H(\varphi), \quad M_4(\varphi) = H(\phi) \left( 1 - H(\varphi) \right)
where m is set to 6 because the three-element vector occupies indices i = 4, 5, and 6 of the overall feature set; I_i denotes the i-th feature space. c_3^i = (c_34, c_35, …, c_3m), c_4^i = (c_44, c_45, …, c_4m), and b^i = (b_4, b_5, …, b_m) are constant vectors. b^i is the bias field function for the i-th feature space inside the OD. c_3^i and c_4^i express the true signal of the OC and of the neuroretinal rim, respectively, for the i-th feature space. b_i(x) c_{ji} can be regarded as the spatially varying mean of each subregion Ω_j ∩ O_x for the i-th feature space inside the OD. We minimize the energy function (9) to obtain the optimal values of c_3^i, c_4^i, and b^i.
The modified extended LIC model (MLICE), which integrates (1) and (9) into a unified framework, can be expressed as:
E_{MLICE}(\varphi, c_j^i, b^i) = \lambda L(\varphi) + \gamma R(\varphi) + \varepsilon_{P\_M\_C}(\varphi, c_j^i, b^i) \quad (10)
where λ and γ are weighting constants.
The method of minimizing the proposed energy functional (Equation (10)) is similar to that of the OD segmentation. After a series of calculations, the following solutions are obtained:
c_3^i = \frac{\int_{\Omega} (k \ast b_i)(y) \, I_i(y) \, H(\phi(y)) \, H(\varphi(y)) \, dy}{\int_{\Omega} (k \ast b_i^2)(y) \, H(\phi(y)) \, H(\varphi(y)) \, dy} \quad (11)

c_4^i = \frac{\int_{\Omega} (k \ast b_i)(y) \, I_i(y) \, H(\phi(y)) \left[ 1 - H(\varphi(y)) \right] dy}{\int_{\Omega} (k \ast b_i^2)(y) \, H(\phi(y)) \left[ 1 - H(\varphi(y)) \right] dy} \quad (12)

b_i(x) = \frac{\left[ k \ast \left( I_i \, H(\phi) \, H(\varphi) \right) \right](x) \, c_3^i + \left[ k \ast \left( I_i \, H(\phi) \, (1 - H(\varphi)) \right) \right](x) \, c_4^i}{\left[ k \ast \left( H(\phi) \, H(\varphi) \right) \right](x) \, (c_3^i)^2 + \left[ k \ast \left( H(\phi) \, (1 - H(\varphi)) \right) \right](x) \, (c_4^i)^2} \quad (13)
\frac{\partial \varphi}{\partial t} = -H(\phi) \, \delta(\varphi)\left(e_3 - e_4\right) + \lambda \, \delta(\varphi) \, \mathrm{div}\!\left( \frac{\nabla \varphi}{\left| \nabla \varphi \right|} \right) + \gamma \left( \nabla^2 \varphi - \mathrm{div}\!\left( \frac{\nabla \varphi}{\left| \nabla \varphi \right|} \right) \right) \quad (14)
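A sketch of one OC evolution step of Equation (14) is given below, reusing the heaviside/dirac/curvature helpers and scipy's laplace from the OD sketch above. The key point it illustrates is that the data force is gated by H(ϕ), so the OC contour can only move inside the segmented OD; parameter names and values are ours.

```python
def mlice_step(psi, phi_od, e3, e4, lam=1.0, gam=1.0, dt=0.1):
    d = dirac(psi)
    dpsi = (-heaviside(phi_od) * d * (e3 - e4)        # masked data force
            + lam * d * curvature(psi)                # length term
            + gam * (laplace(psi) - curvature(psi)))  # regularization term
    return psi + dt * dpsi
```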
The main steps of the OD and the OC segmentation are summarized as follows (a runnable sketch of this loop is given after the list):
• Initialization: Input the set of multi-channel feature images, comprising the original red channel image, the vessel-free red channel image, the vessel-free green channel image, and the value channel image of the vessel-free HSV color space; set b_i(x) = 1 for i = 1, 2, 3, 4, 5, 6. Set the level set functions ϕ_l = ϕ_0 and φ_q = φ_0, where ϕ_0 and φ_0 are the initial level set functions of the OD and the OC obtained in Section 2.1, and l and q are the iteration counters.
• Update c_j^i and b^i, j = 1, 2, i = 1, 2, 3, using (4)–(6).
• Evolve the level set function according to (7). If ϕ satisfies the stationary condition, stop; otherwise, set l = l + 1 and return to Step 2.
• Input the level set function ϕ of the OD obtained in Step 3.
• Update c_j^i and b^i, j = 3, 4, i = 4, 5, 6, using (11)–(13).
• Evolve the level set function according to (14). If φ_q satisfies the stationary condition, stop; otherwise, set q = q + 1 and return to Step 5.
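The sketch below strings the six steps together, reusing clustering_force, lice_step, mlice_step, and heaviside from the earlier sketches. update_c_b, standing in for the closed-form updates of Equations (4)–(6) and (11)–(13), is a hypothetical helper not shown here, and the stopping tolerance is our choice of stationary condition.

```python
def segment_od_oc(feats_od, feats_oc, phi0, psi0, n_iter=200, tol=1e-3):
    phi = phi0.copy()
    for _ in range(n_iter):                      # Steps 2-3: OD evolution
        c1, c2, b = update_c_b(feats_od, heaviside(phi))      # Eqs. (4)-(6)
        e1, e2 = clustering_force(feats_od, b, c1, c2)
        phi_new = lice_step(phi, e1, e2)
        if np.max(np.abs(phi_new - phi)) < tol:  # stationary condition
            phi = phi_new
            break
        phi = phi_new

    psi = psi0.copy()
    for _ in range(n_iter):                      # Steps 5-6: OC inside OD
        c3, c4, b = update_c_b(feats_oc, heaviside(phi) * heaviside(psi))
        e3, e4 = clustering_force(feats_oc, b, c3, c4)
        psi_new = mlice_step(psi, phi, e3, e4)
        if np.max(np.abs(psi_new - psi)) < tol:
            psi = psi_new
            break
        psi = psi_new
    return phi, psi   # zero level sets give the OD and OC boundaries
```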

3. Experimental Results

3.1. Database

To demonstrate the effectiveness of the presented approach, we evaluate the segmentation results on the publicly available DRISHTI-GS retinal fundus image dataset [41]. DRISHTI-GS comprises 31 normal images and 70 glaucomatous images, all acquired with a 30-degree field of view at a resolution of 2896 × 1944. Four glaucoma experts marked the OD and the OC in each image. To compensate for inter-observer variation, the majority-voting ground truth (GT), indicating agreement among at least three experts [41], is regarded as the final GT for qualitatively assessing the presented approaches. In our experiments, the initial contours for segmenting the OD and the OC are obtained as described in Section 2.1, and the segmentation methods under evaluation are computed on the vessel-free image.

3.2. Evaluation Measures

To verify the effectiveness of the two presented segmentation approaches for the OD and the OC, seven other methods are compared with ours. For OD segmentation, four methods are implemented, namely the Hough transform [42], GVF [43], the C-V model [30], and LARKIFCM [31]. Similarly, for OC segmentation, thresholding [30], ellipse fitting [33], clustering [34], and LARKIFCM [31] are employed.
To further quantitatively evaluate the comprehensive performance of the segmentation approaches, several performance measures are used: the detected area, the detected boundary, and the CDR error estimation.
We quantitatively assess the performance of a method based on the detected area with respect to the GT using the traditional F-score, the harmonic mean of precision and recall. The precision and recall values are defined as:
\mathrm{precision} = \frac{tp}{tp + fp}, \quad \mathrm{recall} = \frac{tp}{tp + fn} \quad (15)
where tp, fp, and fn are the numbers of true positives, false positives, and false negatives, respectively, computed from the overlap between the obtained segmentation area and the GT area.
Then, the single performance measure, the F-score (F), is expressed as:
F = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} \quad (16)
The F-score ranges between 0 and 1; a better-performing method obtains a higher F-score.
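A worked implementation of Equations (15)–(16) on binary masks follows; the function name and mask representation are our choices.

```python
import numpy as np

def f_score(seg, gt):
    seg, gt = seg.astype(bool), gt.astype(bool)
    tp = np.logical_and(seg, gt).sum()    # overlap with the GT area
    fp = np.logical_and(seg, ~gt).sum()   # segmented but not in the GT
    fn = np.logical_and(~seg, gt).sum()   # in the GT but missed
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```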
A quantitative analysis based on the segmented boundary is also used to evaluate a method with respect to the GT; it is expressed as the distance D between C_g and C_o, the boundaries of the GT and of the segmentation method, respectively:
D = \frac{1}{k} \sum_{\theta=1}^{k} \sqrt{ \left( d_g^{\theta} \right)^{2} - \left( d_o^{\theta} \right)^{2} } \quad (17)
where k is the total number of angular samples. Following [44], we set k to 4 and assign the angular directions 0°, 90°, 180°, and 270°. d_g^θ and d_o^θ are the distances from the centroid of the curve C_g to the points on C_g and C_o, respectively, in the angular direction θ. Ideally, the value of D approaches zero.
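A sketch of this boundary distance with k = 4 angular samples is given below. Taking the radial distance as the largest centroid-to-region distance inside a narrow angular wedge, and guarding the square root with abs() for noisy boundaries, are both our implementation choices; it assumes numpy as np.

```python
def boundary_distance(gt_mask, seg_mask, k=4, tol_deg=5.0):
    cy, cx = np.argwhere(gt_mask).mean(axis=0)   # centroid of the GT region

    def radial(mask, theta):
        ys, xs = np.nonzero(mask)
        ang = np.arctan2(ys - cy, xs - cx)
        # select pixels whose direction lies within the wedge around theta
        wedge = np.abs(np.angle(np.exp(1j * (ang - theta)))) < np.deg2rad(tol_deg)
        r = np.hypot(ys[wedge] - cy, xs[wedge] - cx)
        return r.max() if r.size else 0.0

    thetas = [i * 2.0 * np.pi / k for i in range(k)]   # 0, 90, 180, 270 deg
    return np.mean([np.sqrt(abs(radial(gt_mask, t) ** 2
                                - radial(seg_mask, t) ** 2))
                    for t in thetas])
```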
Third, the CDR, which depends on the diameters of the OD and the OC in the vertical direction, is an important parameter for glaucoma assessment. The CDR error estimation (Error) is the metric applied to analyze the performance of a segmentation method; it is defined as:
\mathrm{Error} = \left| CDR_G - CDR_T \right| \quad (18)
where CDR_G is computed from the GT and CDR_T is computed from the automatic method.
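A minimal sketch of the vertical CDR and its error (Equation (18)) on binary OD/OC masks follows; the helper names and the row-extent definition of the vertical diameter are ours.

```python
def vertical_diameter(mask):
    rows = np.nonzero(mask.any(axis=1))[0]   # rows touched by the region
    return rows.max() - rows.min() + 1

def cdr_error(od_seg, oc_seg, od_gt, oc_gt):
    cdr_t = vertical_diameter(oc_seg) / vertical_diameter(od_seg)
    cdr_g = vertical_diameter(oc_gt) / vertical_diameter(od_gt)
    return abs(cdr_g - cdr_t)                # Equation (18)
```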

3.3. Segmentation Results with Different Initial Contour

We compared the presented initial contour with other initial contours for segmenting the OD and the OC; the segmentation results achieved with the different initial contours are displayed in Figure 4. The presented adaptive initial contour has several advantages. First, since most ACM-based methods are sensitive to contour initialization, the adaptive initial contour, being closer to the GT of the OD and the OC than the other initial contours, drives the contour evolution more accurately. Second, the designed method for extracting the initial contour classifies automatically, without a preset number of classes, based on kernel density estimation, yielding highly efficient outcomes. Furthermore, the proposed segmentation approach using the adaptive initial contour is more robust to interference generated by the complex OD and OC appearance, mainly owing to the combined advantages of the adaptive initial contour and the multiple-channel features incorporated into our model.
According to Table 1, the methods obtain the best segmentation results with the adaptive initial contour: the average F-score and the average boundary distance for the OD reach 0.947 and 8.885, respectively, and for the OC 0.826 and 21.980, respectively.

3.4. Optic Disc Segmentation Results

The segmentation results shown in Figure 5, obtained with the Hough transform [42], GVF [43], the C-V model [30], and LARKIFCM [31], are used to assess the performance of the presented method. The expert-marked boundaries are drawn with white lines and the method results with blue lines. The first column lists the original clipped images, and the contours in the second column are the adaptive initial contours achieved by the proposed approach of Section 2. The images in the first two rows of Figure 5 depict a challenging situation for boundary extraction, namely the irregular shape of the OD. The presented approach extracts more precise and robust boundaries than the other methods, because it is unrestricted by a regular-shape constraint and exploits sufficient information about the OD. The images in the third, fourth, and fifth rows contain PPA with high gradient variations. Because of their sensitivity to local gradient variations, the Hough transform and GVF produce inaccurate boundaries. The C-V model overcomes this problem, but it is influenced by the PPA because of the subtle difference between the average intensities of the segmented foreground and background areas. The LARKIFCM method overcomes most of the influence of the bright PPA regions by exploiting the positive features of its combined approaches, but it is misled in regions where single-feature information, which is insufficient, plays the decisive role. Compared with the others, the OD boundary extracted by the presented method matches the GT closely, because the ample information describing the OD is integrated into the effective LIC model, which is robust to the many-sided distractions.
To further assess the presented approach, Table 2 reports the quantitative results in terms of the F-score and the boundary-based distance measure. The presented method obtains the highest F-score and the lowest average boundary distance among all the approaches, confirming its superiority.

3.5. Optic Cup Segmentation Results

The performance of our model in extracting the OC boundary is evaluated through comparison with the threshold-based approach [30], the ellipse fitting approach [33], the clustering-based approach [34], and LARKIFCM [31]. Figure 6 presents representative segmentation results obtained by these methods. As seen from Figure 6, the OC boundary obtained by the presented model deviates only slightly from the GT on both the nasal and temporal sides. The other approaches, in contrast, suffer a significant loss of segmentation accuracy when dense blood vessels are present on the nasal side, which leads to a large deviation of the detected OC boundary from the GT on the temporal side. The proposed model is thus superior to the other methods and closer to the GT, for the following reasons: (1) the introduced LIC model, based on the estimated bias field used for intensity inhomogeneity correction, handles the intensity inhomogeneity and thereby enhances the discrimination between the OC and the non-OC; (2) the novel preprocessing method based on the structure prior information of the OD and the OC is integrated into the LIC model to guide the contour evolution within an effective area, excluding the negative effect of non-objects; (3) the novel implicit region based on the modified data term, which uses a richer form of local image clustering information at each point of interest gathered over the multiple-channel feature image space, enhances robustness against the variations found in and around the OC region. Table 3 quantitatively analyzes the OC segmentation results in terms of the F-score and the boundary-based distance. Our model obtains lower false positives (fp) and false negatives (fn) and the highest F-score compared with the other methods. Furthermore, the lowest average boundary distance shows that the boundary segmented by our model is closest to the GT. In summary, the proposed model significantly improves segmentation performance in contrast to the other approaches.

3.6. Glaucoma Assessment

Based on the OD and OC boundary information produced by the method, the CDR measured with the diameters in the vertical direction (v-CDR) can be derived, which is a common measurement for glaucoma assessment. However, the OC may be oriented at different angles, and the v-CDR measures precision only in the vertical direction, making it an incomplete measure. Therefore, the cup-to-disc area ratio (a-CDR), derived simultaneously in all directions, is also employed to assess the overall segmentation accuracy. Table 4 displays the mean error μ_Error and the standard deviation of the error σ_Error in estimating the v-CDR and the a-CDR over all of the images. As seen from Table 4, the mean/standard deviation of the v-CDR error are 0.158/0.104 for the normal cases and 0.095/0.084 for the glaucoma cases, and the mean/standard deviation of the a-CDR error are 0.172/0.127 for the normal cases and 0.101/0.091 for the glaucoma cases. The results show that μ_Error and σ_Error are smaller on glaucoma images than on normal images. Hence, the presented model has a high sensitivity for glaucoma detection.
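As a minimal illustration of why the a-CDR aggregates all directions at once, it reduces to a ratio of pixel areas of the binary masks (the helper name is ours):

```python
def a_cdr(oc_mask, od_mask):
    # cup-to-disc area ratio: pixel counts aggregate the boundary in
    # every direction simultaneously, unlike the vertical-only v-CDR
    return oc_mask.sum() / od_mask.sum()
```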

4. Conclusions and Future Work

This paper proposed a solution for glaucoma assessment in terms of different estimation parameters. To meet the requirement for accurate OD and OC boundary information, we presented an adaptive method for extracting the initial contour and two novel models for contour evolution. Considering that active contour models are sensitive to contour initialization, the robust adaptive approach for initializing the level set was designed to increase the performance of contour evolution. For extracting a precise OD boundary, the LICE model was proposed: the introduced LIC model, based on the estimated bias field used for intensity inhomogeneity correction, deals with the commonly occurring intensity inhomogeneity, and ample information describing the OD was integrated into the effective LIC model to deal with the complex OD appearance affected by pathological changes and vessel occlusion. Meanwhile, the novel MLICE segmentation model was presented for the OC. First, the novel preprocessing method based on the structure prior information of the OD and the OC was used to modify the data term of the LIC model so as to guide the contour evolution within an effective region, excluding the negative effect of non-objects. Second, the novel implicit region based on the modified data term, which uses a richer form of local image clustering information at each point of interest gathered over a multiple-channel feature image space, was presented; it provides robustness against the variations found in and around the OC region. The performance of the two novel models for segmenting the OD and the OC was assessed on the publicly available DRISHTI-GS database; according to Table 2 and Table 3, the presented models outperform the other state-of-the-art approaches. Although the proposed models achieve outstanding segmentation performance, boundary estimation errors occur mostly in regions lacking prior knowledge cues such as shape. Enabling our models to make fuller use of such prior knowledge when extracting the OD and OC boundaries will be investigated in our future work.

Author Contributions

Y.G. and C.W. proposed the idea of the paper; Y.G., X.Y. and W.Z. designed and performed the experiments; C.W., X.W. and Y.Z. analyzed the data; X.Y. and W.Z. contributed analysis tools; all authors contributed to writing and editing the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant nos. 61701101, 61973093, U1713216, 61901098, 61971118, the National Key Robot Project under Grant nos. 2017YFB1301103, the Fundamental Research Fund for the Central Universities of China N172603001, N181602014, N172604004, N172604003, and N172604002, Liaoning Provincial Department of Science and Technology Natural Fund Guidance Project under Grant no. 2019-ZD-0234, Scientific Research Fund Project of Liaoning Provincial Department of Education under Grant no. JYT19040, Shenyang Research Fund Grant no. 18-013-0-15.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Iester, M.; Garway-Heath, D.; Lemij, H. Optic Nerve Head and Retinal Nerve Fibre Analysis; European Glaucoma Society: Editrice Dogma, Italy, 2005; pp. 121–149.
  2. Quigley, H.; Broman, A. The number of people with glaucoma worldwide in 2010 and 2020. Br. J. Ophthalmol. 2006, 90, 262–267.
  3. Cheng, J.; Liu, J.; Xu, Y.; Yin, F.; Wong, D.; Tan, N. Superpixel classification based optic disc and optic cup segmentation for glaucoma screening. IEEE Trans. Med. Imaging 2013, 32, 1019.
  4. Bock, R.; Meier, J.; Nyúl, L.; Michelson, G. Glaucoma risk index: Automated glaucoma detection from color fundus images. Med. Image Anal. 2010, 14, 471–481.
  5. Damms, T.; Dannheim, F. Sensitivity and specificity of optic disc parameters in chronic glaucoma. Investig. Ophthalmol. Vis. Sci. 1993, 34, 2246–2250.
  6. Harizman, N.; Oliveira, C.; Chiang, A.; Tello, C.; Marmor, M.; Ritch, R.; Liebmann, J. The ISNT rule and differentiation of normal from glaucomatous eyes. Arch. Ophthalmol. 2006, 124, 1579–1583.
  7. Pinz, A.; Bernogger, S.; Datlinger, P.; Kruger, A. Mapping the human retina. IEEE Trans. Med. Imaging 1998, 17, 606–619.
  8. Li, H.; Chutatape, O. Automated feature extraction in color retinal images by a model based approach. IEEE Trans. Biomed. Eng. 2004, 51, 246–254.
  9. Bhuiyan, A.; Kawasaki, R.; Wong, T.; Kotagiri, R. A new and efficient method for automatic optic disc detection using geometrical features. In Proceedings of the World Congress on Medical Physics and Biomedical Engineering, Munich, Germany, 7–12 September 2009; pp. 1131–1134.
  10. Aquino, A.; Gegúndez-Arias, M.; Marín, D. Detecting the optic disc boundary in digital fundus images using morphological, edge detection, and feature extraction techniques. IEEE Trans. Med. Imaging 2010, 29, 1860–1869.
  11. Roychowdhury, S.; Koozekanani, D.; Kuchinka, S.; Parhi, K. Optic disc boundary and vessel origin segmentation of fundus images. IEEE J. Biomed. Health Inform. 2017, 20, 1562–1574.
  12. Reza, A.; Eswaran, C.; Hati, S. Automatic tracing of optic disc and exudates from color fundus images using fixed and variable thresholds. J. Med. Syst. 2009, 33, 73–80.
  13. Welfer, D.; Scharcanski, J.; Marinho, D. A morphologic two-stage approach for automated optic disk detection in color eye fundus images. Pattern Recognit. Lett. 2013, 34, 476–485.
  14. Srivastava, R.; Cheng, J.; Wong, D.; Liu, J. Using deep learning for robustness to parapapillary atrophy in optic disc segmentation. In Proceedings of the 2015 IEEE 12th International Symposium on Biomedical Imaging, New York, NY, USA, 16–19 April 2015; pp. 768–771.
  15. Li, H.; Chutatape, O. Boundary detection of optic disk by a modified ASM method. Pattern Recognit. 2003, 36, 2093–2104.
  16. Yin, F.; Liu, J.; Ong, S.; Sun, Y.; Wong, D.; Tan, N.; Cheung, C.; Baskaran, M.; Aung, T.; Wong, T. Model-based optic nerve head segmentation on retinal fundus images. In Proceedings of the International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011; pp. 2626–2629.
  17. Yin, F. Automated segmentation of optic disc and optic cup in fundus images for glaucoma diagnosis. In Proceedings of the 2012 25th IEEE International Symposium on Computer-Based Medical Systems (CBMS), Rome, Italy, 20–22 June 2012; pp. 1–6.
  18. Abràmoff, M.; Alward, W.; Greenlee, E.; Shuba, L.; Kim, C.; Fingert, J. Automated segmentation of the optic disc from stereo color photographs using physiologically plausible features. Investig. Ophthalmol. Vis. Sci. 2007, 48, 1665–1673.
  19. Dutta, M.; Mourya, A.; Singh, A.; Parthasarathi, M.; Burget, R.; Riha, K. Glaucoma detection by segmenting the super pixels from fundus colour retinal images. In Proceedings of the 2014 International Conference on Medical Imaging, m-Health and Emerging Communication Systems (Med Com), Greater Noida, India, 7–8 November 2014; pp. 86–90.
  20. Tan, N.; Xu, Y.; Goh, W.; Liu, J. Robust multi-scale superpixel classification for optic cup localization. Comput. Med. Imaging Graph. 2015, 40, 182–193.
  21. Saeed, E.; Szymkowski, M.; Saeed, K.; Mariak, Z. An Approach to Automatic Hard Exudate Detection in Retina Color Images by a Telemedicine System Based on the d-Eye Sensor and Image Processing Algorithms. Sensors 2019, 19, 695.
  22. Zhou, W.; Wu, C.; Chen, D.; Wang, Z.; Yi, Y.; Du, W. Automatic microaneurysm detection using the sparse principal component analysis based unsupervised classification method. IEEE Access 2017, 5, 2169–3536.
  23. Zhou, W.; Wu, C.; Yi, Y.; Du, W. Automatic detection of exudates in digital color fundus images using superpixel multifeatured classification. IEEE Access 2017, 5, 17077–17088.
  24. Mary, M.; Rajsingh, E.; Jacob, J. An empirical study on optic disc segmentation using an active contour model. Biomed. Signal Process. Control 2015, 18, 19–29.
  25. Mendels, F. Identification of the optic disk boundary in retinal images using active contours. In Proceedings of the Irish Machine Vision and Image Processing Conference (IMVIP), Dublin, Ireland, 8–9 September 1999; pp. 103–115.
  26. Tang, Y.; Li, X.; von Freyberg, A.; Goch, G. Automatic segmentation of the papilla in a fundus image based on the C-V model and a shape restraint. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR’06), Hong Kong, China, 20–24 August 2006; pp. 183–186.
  27. Wong, D.; Liu, J.; Lim, J. Level-set based automatic cup-to-disc ratio determination using retinal fundus images in ARGALI. In Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 20–25 August 2008; pp. 2266–2269.
  28. Yu, H.; Barriga, E.; Agurto, C.; Echegaray, S.; Pattichis, M.; Bauman, W.; Soliz, P. Fast localization and segmentation of optic disk in retinal images using directional matched filtering and level sets. IEEE Trans. Inf. Technol. Biomed. 2012, 16, 644–657.
  29. Esmaeili, M.; Rabbani, H.; Dehnavi, A. Automatic optic disk boundary extraction by the use of curvelet transform and deformable variational level set model. Pattern Recognit. 2012, 45, 2832–2842.
  30. Joshi, G.; Sivaswamy, J.; Karan, K.; Krishnadas, R. Optic disk and cup boundary detection using regional information. In Proceedings of the 2010 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Rotterdam, The Netherlands, 14–17 April 2010; pp. 948–951.
  31. Thakur, N.; Juneja, M. Optic disc and optic cup segmentation from retinal images using hybrid approach. Expert Syst. Appl. 2019, 127, 308–322.
  32. Wong, D.; Liu, J.; Lim, J.; Li, H.; Jia, X.; Yin, F.; Wong, T. Automated detection of kinks from blood vessels for optic cup segmentation in retinal images. Proc. SPIE Med. Imaging 2009, 7260, 72601J.
  33. Liu, J.; Wong, D.; Lim, J.; Li, H.; Tan, N.; Wong, T. Argali—An automatic cup-to-disc ratio measurement system for glaucoma detection and analysis framework. Proc. SPIE Med. Imaging 2009, 7260, 72603K.
  34. Mittapalli, P.; Kande, G. Segmentation of optic disk and optic cup from digital fundus images for the assessment of glaucoma. Biomed. Signal Process. Control 2016, 24, 34–46.
  35. Li, C.; Huang, R.; Ding, Z.; Gatenby, C.; Metaxas, C.; Gore, J. A level set method for image segmentation in the presence of intensity inhomogeneities with application to MRI. IEEE Trans. Image Process. 2011, 20, 2007–2016.
  36. Wang, L.; Li, C.; Sun, Q.; Xia, D.; Kao, C. Active contours driven by local and global intensity fitting energy with application to brain MR image segmentation. Comput. Med. Imaging Graph. 2009, 33, 520–531.
  37. Li, C.; Kao, C.; Gore, J.; Ding, Z. Implicit active contours driven by local binary fitting energy. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–7.
  38. Zhou, W.; Wu, C.; Gao, Y.; Yu, X. Automatic optic disc boundary extraction based on saliency object detection and modified local intensity clustering model in retinal images. IEICE Trans. Fundam. 2017, E100-A, 2069–2072.
  39. Comaniciu, D.; Meer, P. A robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 603–619.
  40. Liu, J.; Wong, D.; Lim, J.; Jia, X. Optic cup and disk extraction from retinal fundus images for determination of cup-to-disc ratio. In Proceedings of the IEEE Conference on Industrial Electronics and Applications, Singapore, 3–5 June 2008; pp. 1828–1832.
  41. Sivaswamy, J.; Krishnadas, S.; Joshi, G.; Jain, M.; Tabish, A. Drishti-GS: Retinal image dataset for optic nerve head (ONH) segmentation. In Proceedings of the 2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI), Beijing, China, 29 April–2 May 2014; pp. 53–56.
  42. Chrástek, R.; Wolf, M.; Donath, K.; Michelson, G.; Niemann, H. Optic disc segmentation in retinal images. In Bildverarbeitung für die Medizin; Springer-Verlag: Berlin, Germany, 2002; pp. 263–266.
  43. Osareh, A.; Mirmehdi, M.; Thomas, B.; Markham, R. Colour morphology and snakes for optic disc localization. In Proceedings of the 6th Medical Image Understanding and Analysis Conference, Portsmouth, UK, 22–23 July 2002; pp. 21–24.
  44. Joshi, G.; Sivaswamy, J.; Krishnadas, S. Optic disk and cup segmentation from monocular color retinal images for glaucoma assessment. IEEE Trans. Med. Imaging 2011, 30, 1192–1205.
Figure 1. Major structures of the optic disc (OD). White line: the OD boundary, black line: the optic cup (OC) boundary, and the area between the white line and the black line is the neuroretinal rim.
Figure 2. Intensity inhomogeneity challenges in the OD and the OC segmentation.
Figure 3. Contour initialization. (a) Original retinal image, (b) clipped area of interest (ROI) around the optic disc (OD), (c) vessels eliminated, (d) mean-shift result, (e) convert to the grayscale, (f) canny detection results, (g) marking image of the OD, (h) circular Hough transform result for the OD, (i) rough boundary extraction result for the OD, (j) retained result for the OD region, (k) the binary image of (j) using the thresholding, (l) the edge image of (k), (m) marking image of the optic cup (OC), (n) circular Hough transform result for the OC, (o) rough boundary extraction result for the OC.
Figure 4. Segmentation comparisons with different initial contours, showing segmentation results based on the adaptive initial contour and on manual initial contours drawn outside the OD or OC, inside the OD or OC, or intersecting the OD or OC. The ground truths of the OD and the OC are marked with white and black lines, respectively.
Figure 5. OD segmentation results. First column: original clipped images; second column: initial contour; third column: Hough transform results [42]; fourth column: gradient vector flow (GVF) model results [43], fifth column: Chan-Vese (C-V) model results [30]; sixth column: level set adaptively regularized kernel-based intuitionistic fuzzy c means (LARKIFCM) results [31]; seventh column: ours. White line: ground truth (GT); blue line: detected result by an approach.
Figure 6. OC segmentation results. First column: original color image; second column: initial contour; third column: threshold-based [30]; fourth column: ellipse fitting [33]; fifth column: spatially weighted fuzzy c means (SWFCM) clustering [34]; sixth column: LARKIFCM [31], seventh column: Ours. Black line: GT; green line: detected results obtained by an approach.
Table 1. OD and OC segmentation results for different initial contours.

| Methods | F-Score for OD (Average) | Boundary-Based Distance for OD (Average) | F-Score for OC (Average) | Boundary-Based Distance for OC (Average) |
|---|---|---|---|---|
| Contour intersecting the OD or OC | 0.942 | 9.521 | 0.813 | 23.213 |
| Contour within the OD or OC | 0.944 | 9.281 | 0.815 | 23.152 |
| Contour outside the OD or OC | 0.945 | 9.162 | 0.818 | 22.965 |
| Adaptive contour | 0.947 | 8.885 | 0.826 | 21.980 |
Table 2. OD segmentation results.

| Methods | F-Score (Average) | Boundary-Based Distance (Average) |
|---|---|---|
| Hough [42] | 0.836 | 43.126 |
| GVF [43] | 0.862 | 39.561 |
| C-V [30] | 0.881 | 27.578 |
| LARKIFCM [31] | 0.940 | 9.882 |
| Ours | 0.947 | 8.885 |
Table 3. OC segmentation results.

| Method | F-Score (Average) | Boundary-Based Distance (Average) |
|---|---|---|
| Thresholding [30] | 0.616 | 51.347 |
| Ellipse Fitting [33] | 0.651 | 48.799 |
| SWFCM Clustering [34] | 0.765 | 27.376 |
| LARKIFCM [31] | 0.811 | 23.335 |
| Ours | 0.826 | 21.980 |
Table 4. Error estimation in v-CDR and a-CDR (mean: μError / standard deviation: σError).

| Retinal Image | v-CDR μError | v-CDR σError | a-CDR μError | a-CDR σError |
|---|---|---|---|---|
| Normal Images (31) | 0.158 | 0.104 | 0.172 | 0.127 |
| Glaucoma Images (70) | 0.095 | 0.084 | 0.101 | 0.091 |
| Total Images (101) | 0.110 | 0.099 | 0.118 | 0.119 |
