Article

Detection of Early Signs of Diabetic Retinopathy Based on Textural and Morphological Information in Fundus Images

1 Institute of Research and Innovation in Bioengineering, I3B, Universitat Politècnica de València, 46022 Valencia, Spain
2 Departamento de Comunicaciones, ITEAM Research Institute, Universitat Politècnica de València, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
Sensors 2020, 20(4), 1005; https://doi.org/10.3390/s20041005
Submission received: 16 December 2019 / Revised: 30 January 2020 / Accepted: 10 February 2020 / Published: 13 February 2020
(This article belongs to the Special Issue Artificial Intelligence in Medical Sensors)

Abstract

The number of blind people in the world is estimated to exceed 40 million by 2025. It is therefore necessary to develop novel algorithms, based on fundus image descriptors, that allow the automatic classification of retinal tissue into healthy and pathological at early stages. In this paper, we focus on one of the most common pathologies in current society: diabetic retinopathy. The proposed method avoids the need for lesion segmentation or candidate map generation before the classification stage. Local binary patterns and granulometric profiles are computed locally to extract texture and morphological information from retinal images. Different combinations of this information feed classification algorithms to optimally discriminate bright and dark lesions from healthy tissue. Through several experiments, the ability of the proposed system to identify diabetic retinopathy signs is validated using different public databases with a large degree of variability and without image exclusion.

1. Introduction

At least 2.2 billion people around the world have a vision impairment, of whom at least 1 billion have an impairment that could have been prevented or has yet to be addressed [1]. Eye conditions such as cataracts, trachoma and refractive error are the main focus of eye care strategies. The combination of a growing and ageing population will significantly increase the total number of people with eye conditions and vision impairment (the number of blind people in the world is estimated to exceed 40 million by 2025). Other common eye conditions reported by the World Health Organization include myopia (near-sightedness), often detected late where eye care services are poorly integrated, and diabetic retinopathy (increasing numbers of people are living with diabetes, particularly Type 2, and nearly all people with diabetes will have some form of retinopathy in their lifetimes) [1]. For this reason, the early detection of diabetic retinopathy is essential to preserve vision. The first signs of diabetic retinopathy can be noticed using fundus photographs acquired by means of a retinal camera.
Diabetes occurs when the pancreas does not secrete enough insulin or the body is unable to process it properly. Diabetes affects the circulatory system, and therefore the retina. When fluid leaks from blood vessels into the retina, the retina is damaged; this medical condition is called diabetic retinopathy (DR) [2,3].
Systems for the automatic screening of diabetic retinopathy through fundus imaging are based on the detection of specific anomalous patterns or lesions [4], e.g., microaneurysms (small saccular dilations of capillaries that appear as round, dark red spots with sharp edges against the retinal background), exudates (deposits of lipids and proteins in the retina that produce bright, yellowish-white lesions with prominent and irregular edges), cotton wool spots (accumulations of axoplasmic material within the nerve fiber layer that appear as fluffy white patches on the retina) and hemorrhages (wide accumulations of blood in the retina). In this paper, we focus on the detection of all these lesions because they are the earliest visible signs of the disease. The goal is to distinguish healthy from pathological areas of the image, considering both bright and dark retinal damage and proposing a unique feature vector able to encode the relevant information in the two cases. We show an example of the most common DR signs in Figure 1.
Depending on its evolution over time, diabetic retinopathy is classified as Non-Proliferative or Proliferative [5]. Non-Proliferative is the earliest stage of the pathology and is characterized by deposits of extra fluid and small amounts of blood leaking from the vessels into the retina, exudates and microaneurysms. It usually progresses to the Proliferative stage because many blood vessels in the retina become closed, hindering proper blood flow. The retina responds to this by growing new abnormal blood vessels (a process called neovascularization).
Routine eye checks and good diabetes control can protect people’s vision from this condition. The action plan presented by the WHO [6] reflects the need for eye care services to become an integral part of health systems and establishes screening campaigns as a key method for detecting retinal pathologies at an early stage. However, these proposals require a large workload from experts trained in the identification of anomalous retinal patterns. This fact, along with the growth of the population at risk, makes these proposals economically unfeasible. Moreover, it must be pointed out that this type of retinal disease diagnosis is highly influenced by the inherent subjectivity of each expert. These arguments suggest the need for developing automatic screening systems that can be used in primary health care, reducing specialists’ working time and providing the location and quantification of the pathological retinal tissue caused by retinal diseases.
Automatic screening systems based on digital imaging should be supported by high-resolution acquisition devices to guarantee the success of the diagnosis in difficult cases. Algorithms able to extract the relevant information from fundus images have become increasingly important for automatic screening and for providing clinicians with support in the diagnosis phase.
There are two machine learning paradigms for tackling the problem: the traditional classification approach, where the input is a feature vector obtained from the fundus images [7,8,9], and the deep learning approach (in particular, using convolutional neural networks) [10,11,12]. While the second approach usually achieves better classification results, most of these methods do not provide understandable interpretations of the relevant features of the different pathological signs in the retina, so their clinical usefulness is questionable until more research effort is devoted to interpreting the high-level features extracted by the convolutional blocks of a CNN. It is unlikely that a black-box classification system will be accepted by any ophthalmologist in the real world, no matter how good the results are in previous experiments. Another important issue is the limited amount of labeled fundus images available, whereas deep learning strategies require large annotated datasets. Some recent works are trying to overcome these limitations by creating synthetic data [13,14,15].
The approach presented in this paper belongs to the traditional classification paradigm. In that case, the most common procedure is to segment the lesions using different methods [16,17,18]. These approaches present a high false-positive rate at the pixel level. This fact is the main motivation of the new perspective proposed in this work, in which the characterization of healthy and damaged retinal areas is studied by applying image descriptors locally, avoiding the segmentation step. Our approach differs from previous works based on feature extraction and classification. In [19], a previous segmentation of exudates is required to perform the feature extraction. The most common procedure is to extract features from a lesion candidate map generated by different techniques such as mathematical morphology [16,20,21], background subtraction [22], clustering [23] or using banks of filters and applying a low adaptive threshold [24,25], among others. To the best of the authors’ knowledge, the work proposed by Quellec et al. [26] is the only previous attempt that bears some resemblance to the strategy proposed in this paper. The main difference is that they use a local analysis without overlapping based on wavelet features; an area under the receiver operating characteristic curve of 0.761 was reported on the e-ophtha public database. In our paper, robust descriptors extract texture, shape and roughness features from the visual information of retinal images. This methodology does not require the previous segmentation of retinal lesions or the generation of candidate maps, avoiding the error sources and computational cost of the segmentation process.

2. Materials

Two public fundus databases were used to validate the methods proposed throughout this paper.
  • E-OPHTHA [27] is a database of fundus images especially designed for diabetic retinopathy screening. This public database is divided into two subsets depending on the lesion type: exudates and microaneurysms. These lesions are manually annotated by experts and the ground truth is provided. In this paper, we use the exudates subset (E-OPHTHA_EX), which is composed of 47 pathological images (Pathological_EX) and 35 images with no lesion (Healthy_EX). All the retinal images were acquired with the same field of view angle (40°), and they present different spatial resolutions: 13 images with 1440 × 960 pixels, 2 with 1504 × 1000, 9 with 2048 × 1360 and 23 with the highest resolution, 2544 × 1696.
  • DIARETDB1 public database [28] consists of 89 color fundus images, of which 84 contain at least mild non-proliferative signs of diabetic retinopathy and five are considered normal by medical experts. In particular, 41 fundus images show bright lesions (exudates) and 45 contain dark lesions (microaneurysms or hemorrhages). The angle of vision is 50° and the resolution is 1500 × 1152 pixels. The ground truth used in this paper is the one proposed by the authors of the database (when 3 out of 4 experts label a pixel as pathological, it is considered an exudate pixel).

2.1. Data Conditioning

The original fundus images must be preprocessed to take into account three issues that can influence the performance of the classifiers: image resolution, color and the presence of blood vessels. A code sketch of the whole pipeline follows the list below.
  • Resolution normalization
    The E-OPHTHA_EX database is composed of images with four different resolutions. The images are resized to the dimensions of the smallest image (1440 × 960) after a local maximum filter is applied in order to preserve the small bright lesions [29] (complementarily, a minimum filter is used to preserve microaneurysms).
  • Color normalization
    In fundus images, the green component of the RGB representation shows the maximum contrast between lesions and background. The red channel is often saturated and has low contrast, and the blue channel is very noisy and suffers from a poor dynamic range. For these reasons, the green component is commonly used to segment the lesions [16,20,30], and it is used in this work. However, the mean green value differs significantly between images, even within the same database, due to incorrect white balance. To normalize the images, we apply a color transformation with the aim of increasing their color homogeneity [31].
  • Inpainting blood vessels
    Blood vessels cover a high percentage of the fundus image, hindering the automatic detection of important structures such as the optic disc, optic cup and macula, among others. Vessels are also considered noise or artefacts that hamper the segmentation of different lesions, such as exudates, microaneurysms and drusen, as well as the classification of pathologies based on background textures. Retinal vessel segmentation techniques aim to separate the retinal vasculature from the fundus image background and from the aforementioned retinal anatomical structures, such as the optic disc, macula and abnormal lesions. See [32] for a review of retinal vessel segmentation algorithms, and [33] for a review of optic disc segmentation for glaucoma detection.
    A possible procedure to avoid blood vessels is to consider these structures as missing pixels and try to restore them using the background. Image inpainting is a technique for restoring missing or damaged areas in a digital image [34]. Inpainting methods assume that pixels in the known and unknown areas of the image have the same statistical properties or geometrical structures. Different algorithms exist in the literature [35]; we use a sparse-based inpainting method with spread neighborhoods specifically designed to inpaint blood vessels in fundus images [36].
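To make the pipeline concrete, the following Python sketch chains the three steps under loose assumptions: the vessel mask is assumed to be produced by one of the segmentation methods reviewed in [32], and OpenCV’s cv2.inpaint (Telea’s algorithm) is substituted for the sparse-based inpainting of [36]; the chromatic normalization of [31] is omitted.

```python
# Sketch of the Section 2.1 preprocessing pipeline (assumptions noted above).
import cv2
import numpy as np
from scipy.ndimage import maximum_filter

def preprocess(rgb, vessel_mask, target_size=(1440, 960)):
    """rgb: HxWx3 uint8 fundus image; vessel_mask: HxW binary vessel map."""
    h, w = rgb.shape[:2]
    factor = max(w / target_size[0], h / target_size[1])
    if factor > 1:
        # A local maximum filter preserves small bright lesions when
        # downsampling (a minimum filter would preserve dark lesions) [29].
        k = int(np.ceil(factor))
        rgb = maximum_filter(rgb, size=(k, k, 1))
        rgb = cv2.resize(rgb, target_size, interpolation=cv2.INTER_AREA)
        vessel_mask = cv2.resize(vessel_mask.astype(np.uint8), target_size,
                                 interpolation=cv2.INTER_NEAREST)
    # The green channel offers the highest lesion/background contrast.
    green = rgb[:, :, 1]
    # Treat vessel pixels as missing and restore them from the background.
    return cv2.inpaint(green, vessel_mask.astype(np.uint8), 5,
                       cv2.INPAINT_TELEA)
```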

2.2. Image Patches: Input Features and Class Labels

The lesions induced by diabetic retinopathy present different sizes according to the stage of the disease. In the feature extraction stage of the proposed methodology, descriptors are calculated locally: the image is divided into patches using a sliding window of size N_w × N_w with an overlap of (Δx, Δy), and image descriptors are computed for each patch. For example, we show the case N_w = 64 with 50% overlap (Δx = Δy = N_w/2) in Figure 2a. This procedure can be related to the way in which a human inspects and analyzes an image. From an image-processing point of view, the sliding window strategy is equivalent to a 2D grid or dense sampling of the image, in which the samples correspond to the central pixel of the window. Figure 2 represents the whole procedure and the equivalent dense image sampling.
It is important to note that a patch must contain at least one pixel belonging to retinal texture to be processed; in other words, patches lying entirely outside the field of view are discarded. In addition, patches containing optic disc pixels, obtained by the method proposed in [37], are not considered in the process (Figure 3).
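A minimal sketch of this dense patch sampling, assuming NumPy arrays for the green-channel image and for hypothetical field-of-view and optic-disc masks:

```python
import numpy as np

def extract_patches(img, fov_mask, od_mask, Nw=64):
    """Yield (row, col, patch) for every valid sliding-window position."""
    step = Nw // 2                       # 50% overlap: (dx, dy) = (Nw/2, Nw/2)
    H, W = img.shape
    for r in range(0, H - Nw + 1, step):
        for c in range(0, W - Nw + 1, step):
            fov = fov_mask[r:r + Nw, c:c + Nw]
            od = od_mask[r:r + Nw, c:c + Nw]
            # Keep the patch only if it contains at least one retinal pixel
            # and no optic-disc pixels.
            if fov.any() and not od.any():
                yield r, c, img[r:r + Nw, c:c + Nw]
```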

3. Methods

In this section, we present the proposed image descriptors (features) and the classifiers used to distinguish between healthy and pathological retinal tissue. The features are computed from the patches obtained as described in the previous section. We explore different descriptors: local binary patterns to encode the texture of pathological patches and hierarchical morphological operators to encode granularity. These features are used as the input to the classifiers.

3.1. Local Binary Pattern Variance

Local Binary Patterns (LBP) is a powerful feature for texture classification [38]. LBP assigns a label to each pixel taking into account its neighborhood, which is defined by a radius R and a number of points P:
$$ LBP_{P,R}(i,j) = \sum_{p=0}^{P-1} s(g_p - g_c) \cdot 2^p, \qquad s(x) = \begin{cases} 1 & \text{if } x \geq 0 \\ 0 & \text{if } x < 0 \end{cases} $$
where P represents the number of samples on the symmetric circular neighborhood of radius R, g_c is the gray value of pixel (i, j) and g_p the gray value of each neighbor. The final LBP_{P,R} label is obtained by converting the binary string into a decimal value. The LBP_{P,R} texture operator describes the occurrence of specific patterns in the neighborhood of each pixel in a P-dimensional histogram.
The LBP technique has been previously used in fundus images, mostly for the segmentation of retinal vessels [39,40] and the automatic identification of ophthalmic diseases [9,19,41]. We use the rotation-invariant uniform LBP implementation LBP^{riu2}_{P,R} [42]. Using this LBP variant, with a level of uniformity of U = 2 and P = 8, ten different texture labels can be generated depending on the binary string computed in the comparison between a pixel and its neighborhood.
To boost the performance of LBP and to obtain a texture descriptor invariant against gray-scale shifts, the complementary measure VAR_{P,R} is also computed and combined with LBP^{riu2}_{P,R}, obtaining the final feature vector LBPV_{P,R} as follows:
$$ VAR_{P,R}(i,j) = \frac{1}{P} \sum_{p=0}^{P-1} (g_p - \mu)^2, \qquad \mu = \frac{1}{P} \sum_{p=0}^{P-1} g_p $$
$$ LBPV_{P,R}(k) = \sum_{i=1}^{M_1} \sum_{j=1}^{M_2} w(LBP_{P,R}(i,j), k), \qquad k \in [0, K] $$
where:
$$ w(LBP_{P,R}(i,j), k) = \begin{cases} VAR_{P,R}(i,j), & LBP_{P,R}(i,j) = k \\ 0, & \text{otherwise} \end{cases} $$
where K is the maximal LBP label. In Figure 4 we show a graphical example of how this feature vector is obtained for a given patch.
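As a sketch of how this descriptor could be computed in practice, scikit-image provides both ingredients: method='uniform' implements the rotation-invariant uniform mapping (riu2, P + 2 labels) and method='var' the local variance; the paper’s own implementation may differ in details.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbpv_histogram(patch, P=8, R=1):
    """Variance-weighted LBP histogram (Equations (2)-(4)) for one patch."""
    lbp = local_binary_pattern(patch, P, R, method='uniform')  # labels 0..P+1
    var = local_binary_pattern(patch, P, R, method='var')      # VAR_{P,R}
    var = np.nan_to_num(var)            # guard against undefined variances
    K = P + 2                           # number of riu2 labels (10 for P = 8)
    hist = np.array([var[lbp == k].sum() for k in range(K)])
    return hist / (hist.sum() + 1e-12)  # normalized LBPV feature vector
```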

3.2. Granulometric Profile

One of the most interesting techniques based on mathematical morphology is granulometry [43,44]. In [45] this technique is used with the aim of detecting neovascularization, i.e., the abnormal formation of blood vessels in the retina due to the lack of oxygen.
Given a gray-level image f ∈ F(E, T), its dilation (erosion) by a flat structuring element B is introduced as the dilation (erosion) of each level set X_t(f) by B:
$$ \delta_B(f)(x) = \sup\{ t_l : x \in \delta_B(X_{t_l}(f)) \}, \quad t_l \in T $$
$$ \epsilon_B(f)(x) = \inf\{ t_l : x \in \epsilon_B(X_{t_l}(f)) \}, \quad t_l \in T $$
The two elementary operations of gray-level erosion and dilation can be composed to yield a new set of gray-level operators with desirable feature extraction properties, which are given by the gray-level opening:
$$ \gamma_B(f)(x) = (f \circ B)(x) = \delta_B(\epsilon_B(f))(x) $$
and the gray-level closing:
$$ \varphi_B(f)(x) = (f \bullet B)(x) = \epsilon_B(\delta_B(f))(x) $$
Making use of the previously explained operators, a shape descriptor can be defined. Let us first consider an opening γ_i(f), applied to an image f with a structuring element (SE) of size i. The opening can be computed as an erosion followed by a dilation. When the opening is computed on the image with an SE of increasing size (λ), we obtain a morphological opening pyramid (or granulometry profile), which can be formalized as:
$$ \Pi_\gamma(f) = \{ \Pi_\gamma^\lambda : \Pi_\gamma^\lambda = \gamma_\lambda(f), \ \lambda \in [0, \ldots, n_{max}] \} $$
where n_max represents the maximum size of the structuring element.
By duality, a closing φ_i(f) is computed as a dilation of the original image f with an SE of size i followed by an erosion. In the same way, a morphological closing pyramid is an anti-granulometry profile and can be computed on the image by performing repeated closings with an SE of increasing size (λ), defined as:
$$ \Pi_\varphi(f) = \{ \Pi_\varphi^\lambda : \Pi_\varphi^\lambda = \varphi_\lambda(f), \ \lambda \in [0, \ldots, n_{max}] \} $$
In Figure 5 and Figure 6 we show the different levels of the pyramids Π_γ and Π_φ for n = 0, 2, 4, …, 22 for a particular fundus image, respectively.
Let m(f) be the Lebesgue measure of a discrete image f: m(f) is the area of f in the binary case (number of pixels) and the volume in the gray-scale case (sum of pixel values). Making use of the morphological pyramids established above, a shape descriptor can be defined. The granulometry curve, or pattern spectrum of f with respect to Γ, is defined as the following (normalized) mapping:
$$ PS_\Gamma(f, n) = PS(f, n) = \frac{m(\Pi_\gamma^n(f)) - m(\Pi_\gamma^{n+1}(f))}{m(f)}, \quad n \geq 0 $$
The pattern spectrum PS_Γ(f, n) (also called the size density of f) maps each size n to a measure of the bright image structures of that size: the loss of bright image structures between two successive openings. It is a probability density function (a histogram) in which a large impulse at a given scale indicates the presence of many image structures at that scale.
By duality, the concept of pattern spectrum extends to the anti-granulometry curve PS_Φ(f), obtained from the closings φ_n(f) stacked in the morphological pyramid Π_φ:
$$ PS_\Phi(f, n) = PS(f, -n) = \frac{m(\Pi_\varphi^n(f)) - m(\Pi_\varphi^{n-1}(f))}{m(f)}, \quad n > 0 $$
This spectrum characterizes the size of dark image structures. Both the granulometry and anti-granulometry curves can be appended into a unique curve, with closings versus size on the left (negative) side and openings versus size on the right (positive) side of the diagram; i.e.:
$$ PS(f, n) = \begin{cases} PS_\Phi(f, -n), & n < 0 \\ 0, & n = 0 \\ PS_\Gamma(f, n), & n > 0 \end{cases} $$
From the morphological pyramids, a local description of the shape and size of the retinal texture is performed by computing the pattern spectrum of squared patches. In particular, ⋄PS_Γ(f, n) and ⋄PS_Φ(f, n) are locally computed for each patch of the green channel of the fundus images (extracted according to the explanation in Section 2.2), following Equations (10) and (11), respectively. In addition, both descriptors are combined into the curve ⋄PS(f, n) according to Equation (12). Please note that the ⋄ symbol denotes the locality of the descriptor. In Figure 7a,c, the local extraction of the granulometry and anti-granulometry curves from the morphological pyramids Π_γ and Π_φ is represented. The ⋄PS_Γ(f, n) and ⋄PS_Φ(f, n) descriptors extracted from the patch marked in red in a fundus image are reported in Figure 7b,d. In addition, the whole pattern spectrum ⋄PS(f, n) is shown in Figure 8.
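A sketch of the local pattern spectrum for the isotropic case, assuming scikit-image morphology with a disk-shaped SE and the step and maximum size quoted later (s = 2, n_max = 22); the angular granulometries would use line-shaped SEs instead.

```python
import numpy as np
from skimage.morphology import opening, closing, disk

def pattern_spectrum(patch, n_max=22, step=2):
    """Combined granulometry/anti-granulometry curve (Equations (10)-(12))."""
    sizes = range(0, n_max + 1, step)
    vol_open = [opening(patch, disk(n)).sum() if n else int(patch.sum())
                for n in sizes]          # m(Pi_gamma^n(f)), decreasing
    vol_close = [closing(patch, disk(n)).sum() if n else int(patch.sum())
                 for n in sizes]         # m(Pi_phi^n(f)), increasing
    m_f = float(patch.sum()) + 1e-12
    ps_gamma = -np.diff(vol_open) / m_f  # bright structures lost per level
    ps_phi = np.diff(vol_close) / m_f    # dark structures filled per level
    # Closings on the negative side, openings on the positive side (Eq. 12).
    return np.concatenate([ps_phi[::-1], [0.0], ps_gamma])
```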

3.3. Classifiers

We tested three different classifiers: random forests (RF), support vector machines (SVM) and Gaussian processes for classification (GPCs). RF was chosen as the most widely used tree-structured predictor in the literature for classification problems. One of its major strengths is the low computational time required in the training stage, while its principal disadvantage is that RF usually suffers from overfitting when using high-dimensional feature vectors. To face this problem, a kernel-based method robust against overfitting was additionally selected, i.e., SVM with a Radial Basis Function (RBF) kernel. Finally, to approach the problem under study from another point of view, a probabilistic classification algorithm was also included in the experimental stage, i.e., GPCs. A sketch instantiating the three classifiers follows the list below.
  • Random Forests are a combination of tree-structured predictors { h(x, Θ_k), k = 1, …, K } such that each tree depends on the training set and on the values of a random vector Θ_k, sampled independently of the past random vectors { Θ_1, …, Θ_{k−1} } and with the same distribution for all trees in the forest [46]. To predict a new instance, it is pushed down the tree; the label assigned to the instance is the one corresponding to the terminal node it reaches. This process is repeated for the K predictors or trees, and the final classification label is obtained by majority voting of the trees.
  • Support Vector Machine (SVM) builds a separating hyperplane in the input space maximizing the distance, or margin, with respect to the support vectors of the different classes [47,48]. The input data is usually kernelized in order to handle classes that are not linearly separable. We use the radial basis function (RBF) kernel K(x_i, x_j) = exp(−γ ‖x_i − x_j‖²), γ > 0. Classification experiments using a linear kernel are also performed to establish a baseline method for comparison purposes.
  • Gaussian processes for classification. Random forests and SVM are examples of discriminative classifiers (they look for the decision boundary between the classes). A generative classifier obtains a probabilistic model of the classes that is used for future classification purposes maximizing the posterior probability. As an example of a probabilistic classification method, we use Gaussian processes for classification purposes (GPCs) [49].
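As an illustration only, the three families could be instantiated with scikit-learn as follows; the hyperparameters shown are placeholders rather than the values tuned in the paper.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

classifiers = {
    "RF": RandomForestClassifier(n_estimators=100),
    "SVM-RBF": SVC(kernel="rbf", gamma="scale", probability=True),
    "SVM-linear": SVC(kernel="linear", probability=True),   # baseline
    "GPC": GaussianProcessClassifier(kernel=1.0 * RBF(1.0)),
}
```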

3.4. Hand-Driven Learning Procedure

The 47 pathological images of the E-OPHTHA exudates database were divided into K = 5 partitions. External cross-validation, in which each partition is left out in turn, allowed us to carry out a fair validation of the proposed method. During preprocessing, the images are resized so that all of them have the same angle of view, and the color transformation and image inpainting procedures are carried out. After preprocessing, the green component is extracted, the patches are obtained and the descriptors for each patch are calculated. The size of the patches is 64 × 64 pixels and the displacement between patches is (Δx, Δy) = (32, 32) pixels.
Since pathological areas represent only small regions of the whole retinal image (less than one percent of the total number of pixels in most cases), the patch extraction process and corresponding feature vector extraction result in an imbalanced dataset. Training a classifier with an imbalanced dataset can produce overfitting to the majority class (“healthy” in our case) [50]. To avoid this problem, we proceed as follows. Let us assume that the numbers of healthy and pathological samples are M and N, respectively, where M >> N. The set of all healthy samples is randomly permuted and partitioned into T = round(M/N) subsets with the same cardinality as the number of pathological samples. A committee of T classifiers is then learned, with training sets formed by joining all pathological training samples and each partition of healthy training samples. In the test stage, testing samples are evaluated by each of the T models, and soft majority voting over the output probabilities is applied as the final criterion. If the obtained probability is higher than a given threshold δ (typically δ = 0.5), the patch is assigned to the class “pathological”. An overview of the whole process can be observed in Figure 9; a sketch of the committee procedure is shown below. In our case, T = 9 classifiers compose the decision committee that classifies each testing instance.
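A sketch of this balanced-committee procedure, assuming NumPy feature matrices and a hypothetical make_classifier factory that returns any scikit-learn estimator exposing predict_proba:

```python
import numpy as np

def train_committee(X_healthy, X_patho, make_classifier, seed=0):
    """Split the majority class into T = round(M/N) parts; one model each."""
    rng = np.random.default_rng(seed)
    M, N = len(X_healthy), len(X_patho)
    T = int(round(M / N))
    committee = []
    for part in np.array_split(rng.permutation(M), T):
        X = np.concatenate([X_healthy[part], X_patho])
        y = np.concatenate([np.zeros(len(part)), np.ones(N)])  # 1 = pathological
        committee.append(make_classifier().fit(X, y))
    return committee

def predict_committee(committee, X, delta=0.5):
    """Soft majority voting: average probabilities, then threshold at delta."""
    p = np.mean([clf.predict_proba(X)[:, 1] for clf in committee], axis=0)
    return (p > delta).astype(int), p
```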

4. Results

4.1. Performance Measures

Common metrics in the field of machine learning are used to evaluate the classification performance in each experiment.
Accuracy is determined by the number of correct predictions divided by the total number of cases. It shows how correctly a diagnostic test identifies and excludes a given condition.
$$ Accuracy = \frac{TP + TN}{TP + TN + FP + FN} $$
where True Positives (TP) and True Negatives (TN) are the correctly predicted values while False Positives (FP) and False Negatives (FN) are the mistakes.
Sensitivity or Recall and Specificity measure the proportion of positive and negative cases which are correctly identified as such, respectively.
$$ Sensitivity = \frac{TP}{TP + FN} $$
$$ Specificity = \frac{TN}{TN + FP} $$
The receiver operating characteristic (ROC) curve is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. The ROC curve is created by plotting the Sensitivity against 1 − Specificity at various threshold settings. The area enclosed by this curve is known as the area under the ROC curve (AUC), and it is a widely used way to measure predictive accuracy [51]. This measure allows fair comparisons to be established when there is a strong imbalance between classes [52]. Based on [53], a diagnostic test in the medical field with an AUC value of 0.5 suggests no discrimination. If the AUC value ranges from 0.7 to 0.8, the test is considered acceptable; if it lies in the interval 0.8–0.9, it is considered excellent; and more than 0.9 is considered outstanding.
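For reference, the three metrics above plus AUC can be computed from true labels and predicted probabilities with scikit-learn, as in this sketch:

```python
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate(y_true, p, delta=0.5):
    """Accuracy, sensitivity, specificity and AUC from probabilities p."""
    tn, fp, fn, tp = confusion_matrix(y_true, p > delta).ravel()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "auc": roc_auc_score(y_true, p),   # threshold-independent
    }
```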

4.2. Analysis of Descriptors

In a first set of tests, the ability of the morphological descriptors to discriminate bright lesions was assessed. For this purpose, the E-OPHTHA exudates database was used and four morphological pyramids were computed from the green component of each preprocessed image: two pyramids of openings, using an isotropic (Π_γ^B) and an angular (Π_γ^L) structuring element, and analogously two pyramids of closings (Π_φ^B, Π_φ^L). These morphological pyramids were calculated using an increasing SE size defined by a step (s = 2) and a maximum value (n_max = 22). These parameters were optimized making use of the ground truth of the training image dataset. Please note that the angular granulometry was computed in the directions 0°, 45°, 90° and 135°. From the morphological pyramids, a local description of the shape and size of the retinal texture was performed by computing the pattern spectrum of squared patches (⋄PS), as explained previously (see Figure 7). The classification results obtained for the linear-SVM classifier with different configurations of the feature vector are reported in Table 1. For simplicity in the notation, the ⋄PS symbol has been omitted. Please note that feature vectors composed of more than one operator are generated by concatenating their pattern spectra.
As can be interpreted from the results, when the feature vector is composed of a single pattern spectrum, isotropic granulometries detect bright lesions better than directional ones. When a combination of two granulometric patterns is used to feed the SVM classifier, an improvement in classification performance can be observed, especially when the granulometry and anti-granulometry curves using an isotropic SE are collated into a unique feature vector. The best description of the bright lesions taking into account only local size and shape information is given when all morphological descriptors are used (γ^B γ^L φ^B φ^L). Accuracy and specificity values higher than 76%, a sensitivity of 66% and an AUC value around 0.8 are reported as the best classification results in Table 1. These results suggest the need for extracting another kind of information able to support and strengthen the performance of the morphological descriptors.
We then tested the ability of the LBPV feature, based on local binary patterns, to capture the texture information of healthy and pathological tissues. The parameters are: radius R = 1 and number of points P = 8, i.e., the dimension of the LBP feature vector is 10 (P + 2). Several tests combining shape and texture descriptors were carried out, and the obtained results can be read in Table 2.
As we can see, texture information is key to discriminating between regions containing exudates and non-damaged areas. Moreover, the best classification results are obtained when texture and shape/size information are combined. Improvements of around 10% in all the evaluation metrics are registered in comparison with the previous test, in which only morphological information was taken into account.

4.3. Performance of Different Classifiers

The third experiment aims to analyze if results differ when using different classifiers for the most discriminative feature vector (texture and morphological information).
The K-fold procedure and the random permutation for balancing the data (see the procedure represented in Figure 9) were performed using a fixed seed to allow fair comparisons among the different classification methods. Table 3 shows the results related to the classification performance of each classifier evaluated on the five external folds of the E-OPHTHA_EX database.
As can be observed in that table, different machine learning algorithms maximize specific evaluation metrics, making an overall analysis of their behavior difficult. AUC values provide information about the global performance of each classifier because the ROC curve is calculated by evaluating a sweep of decision thresholds ranging from zero to one (see Figure 10). However, in the medical field, sensitivity and specificity are key parameters for measuring the goodness of the proposed methods. For this reason, from the ROC curve of each fold, an optimal decision threshold δ for each ML method was obtained according to the best trade-off between sensitivity and specificity. The same procedure used to optimize the meta-parameters involved in the feature extraction and classification stages was used to optimize the threshold δ; in particular, it was optimized on the validation dataset of each partition of the external K-fold cross-validation. The obtained results can be observed in Table 4.
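A sketch of this threshold search on the validation fold; Youden’s J statistic (sensitivity + specificity − 1) is used here as one concrete realization of the “best trade-off” criterion, since the paper does not fix a formula.

```python
import numpy as np
from sklearn.metrics import roc_curve

def optimal_threshold(y_val, p_val):
    """Return the delta maximizing sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(y_val, p_val)
    return thresholds[np.argmax(tpr - fpr)]   # Youden's J statistic
```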
To evaluate the robustness of the proposed approach, our bright-lesion detection model trained on the E-OPHTHA_EX database was used to predict the 47 images containing exudates of the DIARETDB1 database. This test allowed us to perform a thorough validation of the presented approach and to establish an exhaustive comparison with state-of-the-art works on automatic exudate detection. Table 5 shows the results obtained using a decision threshold δ = 0.5, while Table 6 shows the results maximizing the trade-off between sensitivity and specificity by searching for the optimal decision threshold δ for each classification method.
Table 7 compares the exudate detection results obtained in this paper with other works on the same problem, where PPV (positive predictive value) measures the proportion of correct positive predictions with respect to the total number of positive predictions determined by the classification system. Results must be analyzed with care, especially when only some metrics are included in some papers. The importance of the trade-off between sensitivity and specificity is well known in the development of computer-aided diagnostic systems. In fact, it is preferable to register a higher sensitivity than specificity because the consequences of diagnosing a pathological patient as healthy can be very damaging. As Table 7 shows, the proposed methodology registers the best trade-off between the aforementioned figures of merit. The sensitivity values of Walter et al. [16], Welfer et al. [17] and Ghafourian and Pourreza [18] are outperformed by the sensitivity reported by our system. The only work whose exudate detection results rival ours is Sopharak et al. [21]. However, it is important to highlight that the authors of Sopharak et al. [21] proposed a set of optimally adjusted morphological operators to be used exclusively for exudate detection. In contrast, our method can identify exudates, microaneurysms and hemorrhages using the same feature vector (i.e., encoding textural and morphological information).
We show a visual example of the result of the classification process in Figure 11. Two images correspond to the E-OPHTHA exudates database and two to the DIARETDB1 database (note that the patches for the DIARETDB1 images are smaller since they have higher resolution). Red patches are detected pathological patches; green ones are healthy patches misclassified as pathological, and blue the pathological patches that were not detected. The rest of the patches not shown in the figure correspond to healthy patches labeled correctly as healthy ones.

4.4. Generalization Ability of the Proposed Feature Vector: Microaneurysms and Hemorrhages Detection

In the same way, the proposed feature vector and the classification methods studied in the previous section were used to discriminate between healthy and dark-damaged areas (i.e., patches containing hemorrhages and/or microaneurysms). The material employed to train and validate these models was the 45 images of DIARETDB1 containing dark lesions, due to the lack of enough dark lesions in the E-OPHTHA database. Table 8 reports the results obtained using a decision threshold δ = 0.5, while Table 9 shows the results maximizing the trade-off between sensitivity and specificity by searching for the optimal decision threshold δ for each classification method.
The optimal configuration of the system for dark-lesion detection (i.e., the same feature vector and classification algorithm as for bright-lesion detection) is also compared with some state-of-the-art works, in which the detection/segmentation of these lesions is performed by using classical techniques such as filtering and mathematical morphology. Table 10 summarizes the dark-lesion detection results achieved by the proposed method and by other works of the literature.
The proposed method presents comparable results with respect to the representative methods involved in the comparison. In Table 10, we can see that our method achieved better results (in terms of the sensitivity-specificity trade-off) than Rocha et al. [54] and Ashraf et al. [56], while the works presented in Roychowdhury et al. [55] and Junior and Welfer [57] outperform our system. In the case of Junior and Welfer [57], the presented method is only able to detect dark signs of diabetic retinopathy, i.e., an ad hoc algorithm to identify microaneurysms and hemorrhages was developed. On the contrary, in this work, a generic feature vector and classification algorithm have been proposed for describing both kinds of retinal lesions: bright and dark. The work proposed in Roychowdhury et al. [55] needs a candidate map generation step before classification. This kind of approach presents a high false-positive rate at the pixel level, which is the main motivation of our patch-based analysis.

5. Conclusions

We have presented a classification procedure to identify pathological patches containing exudates, microaneurysms and hemorrhages in retinal fundus images. The discriminative texture between the healthy and pathological areas of the image is encoded in the proposed feature vector. We have seen that the combination of morphological (granulometry) and texture descriptors improves the results with respect to using only one of them. To ensure the reliability of the results, we used cross-validation and tested different families of classifiers.
We have also seen that the classifier obtained with a particular database can be extended to other databases. This fact provides our detection system with a high level of robustness and simplicity. In addition, the local analysis on which the image description and classification stages are based not only makes the segmentation or candidate generation stage unnecessary, but also provides an accurate location of the damaged retinal area, as we have seen in Figure 11.
The proposed methodology will be the basis of high-level computer-aided diagnostic algorithms under development. In particular, the output of the models presented here, along with additional information, will be employed in the identification of the different DR stages (i.e., early, mild, moderate and severe) as well as in the classification between non-proliferative and proliferative DR.

Author Contributions

Conceptualization, A.C., J.I. and V.N.; Data curation, A.C.; Formal analysis, A.C. and V.N.; Funding acquisition, V.N.; Investigation, A.C.; Methodology, A.C., J.I. and V.N.; Software, A.C. and V.N.; Supervision, J.I. and V.N.; Validation, J.I. and V.N.; Writing – original draft, A.C. and J.I.; Writing – review & editing, A.C., J.I. and V.N. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been partially supported by the Spanish Ministry of Economy and Competitiveness through project DPI2016-77869 and GVA through project PROMETEO/2019/109.

Acknowledgments

The Titan V used for this research was donated by the NVIDIA Corporation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. World Health Organization (WHO). World Report on Vision. Technical Report, 2019. Available online: https://www.who.int/publications-detail/world-report-on-vision (accessed on 25 August 2019).
  2. Fong, D.S.; Aiello, L.; Gardner, T.W.; King, G.L.; Blankenship, G.; Cavallerano, J.D.; Ferris, F.L.; Klein, R. Retinopathy in diabetes. Diabetes Care 2004, 27, s84–s87. [Google Scholar] [CrossRef] [Green Version]
  3. Cogan, D.G.; Toussaint, D.; Kuwabara, T. Retinal vascular patterns: IV. Diabetic retinopathy. Arch. Ophthalmol. 1961, 66, 366–378. [Google Scholar] [CrossRef]
  4. Wilkinson, C.; Ferris, F.L.; Klein, R.E.; Lee, P.P.; Agardh, C.D.; Davis, M.; Dills, D.; Kampik, A.; Pararajasegaram, R.; Verdaguer, J.T. Proposed international clinical diabetic retinopathy and diabetic macular edema disease severity scales. Ophthalmology 2003, 110, 1677–1682. [Google Scholar] [CrossRef]
  5. Preferred Practice Pattern Guidelines. Diabetic Retinopathy; American Academy of Ophthalmology: San Francisco, CA, USA, 2016; Available online: www.aao.org/ppp (accessed on 20 August 2019).
  6. World Health Organization (WHO). Universal Eye Health: A Global Action Plan 2014–2019. Technical Report. Available online: https://www.who.int/blindness/actionplan/en/ (accessed on 25 August 2019).
  7. Salamat, N.; Missen, M.M.S.; Rashid, A. Diabetic retinopathy techniques in retinal images: A review. Artif. Intell. Med. 2019, 97, 168–188. [Google Scholar] [CrossRef]
  8. Qureshi, I.; Ma, J.; Shaheed, K. A Hybrid Proposed Fundus Image Enhancement Framework for Diabetic Retinopathy. Algorithms 2019, 12, 14. [Google Scholar] [CrossRef] [Green Version]
  9. Morales, S.; Engan, K.; Naranjo, V.; Colomer, A. Retinal Disease Screening Through Local Binary Patterns. IEEE J. Biomed. Health Inf. 2017, 21, 184–192. [Google Scholar] [CrossRef] [PubMed]
  10. Asiri, N.; Hussain, M.; Adel, F.A.; Alzaidi, N. Deep learning based computer-aided diagnosis systems for diabetic retinopathy: A survey. Artif. Intell. Med. 2019, 99, 101701. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  11. Gulshan, V.; Peng, L.; Coram, M.; Stumpe, M.C.; Wu, D.; Narayanaswamy, A.; Venugopalan, S.; Widner, K.; Madams, T.; Cuadros, J.; et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016, 316, 2402–2410. [Google Scholar] [CrossRef] [PubMed]
  12. Prentašić, P.; Lončarić, S. Detection of exudates in fundus photographs using deep neural networks and anatomical landmark detection fusion. Comput. Meth. Programs Biomed. 2016, 137, 281–292. [Google Scholar] [CrossRef]
  13. Costa, P.; Galdran, A.; Meyer, M.I.; Niemeijer, M.; Abràmoff, M.; Mendonça, A.M.; Campilho, A. End-to-end Adversarial Retinal Image Synthesis. IEEE Trans. Med. Imaging 2017, 37, 781–791. [Google Scholar] [CrossRef]
  14. De la Torre, J.; Valls, A.; Puig, D. A deep learning interpretable classifier for diabetic retinopathy disease grading. Neurocomputing 2019. [Google Scholar] [CrossRef] [Green Version]
  15. Diaz-Pinto, A.; Colomer, A.; Naranjo, V.; Morales, S.; Xu, Y.; Frangi, A.F. Retinal Image Synthesis and Semi-Supervised Learning for Glaucoma Assessment. IEEE Trans. Med. Imaging 2019, 38, 2211–2218. [Google Scholar] [CrossRef] [PubMed]
  16. Walter, T.; Klein, J.C.; Massin, P.; Erginay, A. A Contribution of Image Processing to the Diagnosis of Diabetic Retinopathy - Detection of Exudates in Color Fundus Images of the Human Retina. IEEE Trans. Med. Imaging 2002, 21, 1236–1243. [Google Scholar] [CrossRef]
  17. Welfer, D.; Scharcanski, J.; Marinho, D.R. A coarse-to-fine strategy for automatically detecting exudates in color eye fundus images. Comput. Med. Imaging Graph. 2010, 34, 228–235. [Google Scholar] [CrossRef] [PubMed]
  18. Ghafourian, M.; Pourreza, H. Localization of Hard Exudates in Retinal Fundus Image by Mathematical Morphology Operations. In Proceedings of the 2nd International eConference on Computer and Knowledge Engineering, Mashhad, Iran, 18–19 October 2012; pp. 185–189. [Google Scholar]
  19. Mookiah, M.R.K.; Acharya, U.R.; Martis, R.J.; Chua, C.K.; Lim, C.M.; Ng, E.Y.K.; Laude, A. Evolutionary algorithm based classifier parameter tuning for automatic diabetic retinopathy grading: A hybrid feature extraction approach. Knowl. Based Syst. 2013, 39, 9–22. [Google Scholar] [CrossRef]
  20. Zhang, X.; Thibault, G.; Decencière, E.; Marcotegui, B.; Laÿ, B.; Danno, R.; Cazuguel, G.; Quellec, G.; Lamard, M.; Massin, P.; et al. Exudate detection in color retinal images for mass screening of diabetic retinopathy. Med. Image Anal. 2014, 18, 1026–1043. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  21. Sopharak, A.; Uyyanonvara, B.; Barman, S.; Williamson, T.H. Automatic detection of diabetic retinopathy exudates from non-dilated retinal images using mathematical morphology methods. Comput. Med. Imaging Graph. 2008, 32, 720–727. [Google Scholar] [CrossRef]
  22. Giancardo, L.; Meriaudeau, F.; Karnowski, T.P.; Li, Y.; Garg, S.; Tobin, K.W.; Chaum, E. Exudate-based diabetic macular edema detection in fundus images using publicly available datasets. Med. Image Anal. 2012, 16, 216–226. [Google Scholar] [CrossRef]
  23. Amel, F.; Mohammed, M.; Abdelhafid, B. Improvement of the Hard Exudates Detection Method Used For Computer- Aided Diagnosis of Diabetic Retinopathy. Int. J. Image Graph. Signal Process. 2012, 4, 19–27. [Google Scholar] [CrossRef] [Green Version]
  24. Akram, M.U.; Khalid, S.; Tariq, A.; Khan, S.A.; Azam, F. Detection and classification of retinal lesions for grading of diabetic retinopathy. Comput. Biol. Med. 2014, 45, 161–171. [Google Scholar] [CrossRef]
  25. Akram, M.U.; Tariq, A.; Khan, S.A.; Javed, M.Y. Automated detection of exudates and macula for grading of diabetic macular edema. Comput. Meth. Programs Biomed. 2014, 114, 141–152. [Google Scholar] [CrossRef]
  26. Quellec, G.; Lamard, M.; Abràmoff, M.D.; Decencière, E.; Lay, B.; Erginay, A.; Cochener, B.; Cazuguel, G. A multiple-instance learning framework for diabetic retinopathy screening. Med. Image Anal. 2012, 16, 1228–1240. [Google Scholar] [CrossRef] [PubMed]
  27. Decencière, E.; Cazuguel, G.; Zhang, X.; Thibault, G.; Klein, J.C.; Meyer, F.; Marcotegui, B.; Quellec, G.; Lamard, M.; Danno, R.; et al. TeleOphta: Machine learning and image processing methods for teleophthalmology. IRBM 2013, 34, 196–203. [Google Scholar] [CrossRef]
  28. Kauppi, T.; Kalesnykiene, V.; Kamarainen, J.K.; Sorri, L.L.I.; Raninen, A.; Voutilainen, R.; Uusitalo, H.; Kalviainen, H.; Pietilä, J. DIARETDB1 diabetic retinopathy database and evaluation protocol. Proc. Med. Image Underst. Anal. 2007, 1, 61–65. [Google Scholar]
  29. Zhang, X.; Thibault, G.; Decencière, E.; Quellec, G.; Cazuguel, G.; Erginay, A.; Massin, P.; Chabouis, A. Spatial normalization of eye fundus images. In Proceedings of the 9th IEEE International Symposium on Biomedical Imaging, Barcelone, Spain, 2–5 May 2012. [Google Scholar]
  30. Abràmoff, M.D.; Folk, J.C.; Han, D.P.; Walker, J.D.; Williams, D.F.; Russell, S.R.; Massin, P.; Cochener, B.; Gain, P.; Tang, L.; et al. Automated analysis of retinal images for detection of referable diabetic retinopathy. JAMA Ophthalmol. 2013, 131, 351–357. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  31. Colomer, A.; Naranjo, V.; Angulo, J. Colour normalization of fundus images based on geometric transformations applied to their chromatic histogram. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 3135–3139. [Google Scholar]
  32. Almotiri, J.; Elleithy, K.; Elleithy, A. Retinal Vessels Segmentation Techniques and Algorithms: A Survey. Appl. Sci. 2018, 8, 155. [Google Scholar] [CrossRef] [Green Version]
  33. Thakur, N.; Juneja, M. Survey on segmentation and classification approaches of optic cup and optic disc for diagnosis of glaucoma. Biomed. Signal Process. Control 2018, 42, 162–189. [Google Scholar] [CrossRef]
  34. Bertalmio, M.; Sapiro, G.; Caselles, V.; Ballester, C. Image Inpainting. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques; ACM Press/Addison-Wesley Publishing Co.: New York, NY, USA, 2000; pp. 417–424. [Google Scholar] [CrossRef]
  35. Qureshi, M.A.; Deriche, M.; Beghdadi, A.; Amin, A. A critical survey of state-of-the-art image inpainting quality assessment metrics. J. Vis. Commun. Image Represent. 2017, 49, 177–191. [Google Scholar] [CrossRef]
  36. Colomer, A.; Naranjo, V.; Engan, K.; Skretting, K. Assessment of sparse-based inpainting for retinal vessel removal. Signal Process. Image Commun. 2017, 59, 73–82. [Google Scholar] [CrossRef]
  37. Morales, S.; Naranjo, V.; Angulo, J.; Alcaniz, M. Automatic Detection of Optic Disc Based on PCA and Mathematical Morphology. IEEE Trans. Med. Imaging 2013, 32, 786–796. [Google Scholar] [CrossRef]
  38. Ojala, T.; Pietikäinen, M.; Harwood, D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 1996, 29, 51–59. [Google Scholar] [CrossRef]
  39. Moin, M.S.; Tavakoli, H.R.; Broumandnia, A. A new retinal vessel segmentation method using preprocessed Gabor and local binary patterns. In Proceedings of the 2010 6th Iranian Conference on Machine Vision and Image Processing, Isfahan, Iran, 27–28 October 2010; pp. 1–6. [Google Scholar]
  40. Hatami, N.; Goldbaum, M. Automatic identification of retinal arteries and veins in fundus images using local binary patterns. arXiv 2016, arXiv:1605.00763. [Google Scholar]
  41. Garnier, M.; Hurtut, T.; Tahar, H.B.; Cheriet, F. Automatic multiresolution age-related macular degeneration detection from fundus images. In Proceedings of the Medical Imaging 2014: Computer-Aided Diagnosis. International Society for Optics and Photonics, San Diego, CA, USA, 18 March 2014; pp. 9035–9042. [Google Scholar]
  42. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
  43. Matheron, G. Elements Pour Une Théorie Des Milieux Poreux; Masson: Paris, France, 1967. [Google Scholar]
  44. Serra, J. Image Analysis and Mathematical Morphology; Academic Press, Inc.: Orlando, FL, USA, 1983. [Google Scholar]
  45. Agurto, C.; Yu, H.; Murray, V.; Pattichis, M.S.; Barriga, S.; Bauman, W.; Soliz, P. Detection of neovascularization in the optic disc using an AM-FM representation, granulometry, and vessel segmentation. In Proceedings of the 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Diego, CA, USA, 28 August–1 September 2012; pp. 4946–4949. [Google Scholar]
  46. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  47. Scholkopf, B.; Smola, A.J. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond; MIT Press: Cambridge, MA, USA, 2001. [Google Scholar]
  48. Chang, C.C.; Lin, C.J. LIBSVM: A Library for Support Vector Machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 1–27. [Google Scholar] [CrossRef]
  49. Bishop, C. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2010. [Google Scholar]
  50. Tapia, S.L.; Molina, R.; Blanca, N.P.d.l. Detection and localization of objects in Passive Millimeter Wave Images. In Proceedings of the 2016 24th European Signal Processing Conference (EUSIPCO), Budapest, Hungary, 29 August–2 September 2016; pp. 2101–2105. [Google Scholar] [CrossRef]
  51. Huang, J.; Ling, C.X. Using AUC and accuracy in evaluating learning algorithms. IEEE Trans. Knowl. Data Eng. 2005, 17, 299–310. [Google Scholar] [CrossRef] [Green Version]
  52. Prati, R.C.; Batista, G.E.; Monard, M.C. A Survey on Graphical Methods for Classification Predictive Performance Evaluation. IEEE Trans. Knowl. Data Eng. 2011, 23, 1601–1618. [Google Scholar] [CrossRef]
  53. Mandrekar, J.N. Receiver Operating Characteristic Curve in Diagnostic Test Assessment. J. Thorac. Oncol. 2010, 5, 1315–1316. [Google Scholar] [CrossRef] [Green Version]
  54. Rocha, A.; Carvalho, T.; Jelinek, H.F.; Goldenstein, S.; Wainer, J. Points of Interest and Visual Dictionaries for Automatic Retinal Lesion Detection. IEEE Trans. Biomed. Eng. 2012, 59, 2244–2253. [Google Scholar] [CrossRef]
  55. Roychowdhury, S.; Koozekanani, D.D.; Parhi, K.K. Screening fundus images for diabetic retinopathy. In Proceedings of the 2012 Conference Record of the Forty Sixth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), Pacific Grove, CA, USA, 4–7 November 2012; pp. 1641–1645. [Google Scholar]
  56. Ashraf, M.N.; Habib, Z.; Hussain, M. Texture Feature Analysis of Digital Fundus Images for Early Detection of Diabetic Retinopathy. In Proceedings of the 2014 11th International Conference on Computer Graphics, Imaging and Visualization, Singapore, 6–8 August 2014; pp. 57–62. [Google Scholar]
  57. Junior, S.B.; Welfer, D. Automatic Detection of Microaneurysms and Hemorrhages in Color Eye Fundus Images. Int. J. Comput. Sci. Inf. Technol. 2013, 5, 21–37. [Google Scholar] [CrossRef]
Figure 1. Example of exudates, microaneurysms and hemorrhages in fundus images.
Figure 2. Illustration of the local analysis in fundus images: (a) visual representation of how the sliding window of dimensions N_w × N_w traverses a fundus image and (b) the resulting image grid composed of the centers of the sliding window at each position. This process is equivalent to dense sampling. For the representation, N_w = 64 and (Δx, Δy) = (32, 32) were used.
Figure 3. (a) Binary mask used to exclude the patches containing optic disk pixels and the patches located out of the field of view and (b) center of the patches composing the final grid in which image descriptors will be applied.
Figure 4. Local texture analysis. (a) Original fundus image, (b) LBP image and (c) VAR image with a patch highlighted in red and (d) the LBPV normalized histogram computed for the red patch.
Figure 5. Pyramid of openings computed for a fundus image using an isotropic structuring element. It is composed of 12 images (n = 0, 2, 4, …, 22). (a) Original image, i.e., γ_0 is the identity mapping; (b–k) are the results of applying the opening operator with increasing size (according to s = 2) and (l) is the last image of the pyramid, corresponding to the operation γ_{n_max}.
Figure 6. Pyramid of closings computed from a fundus image using an isotropic structuring element. It is composed of 12 images (n = 0, 2, 4, …, 22). (a) Original image, i.e., φ_0 is the identity mapping; (b–k) are the results of applying the closing operator with increasing size (according to s = 2) and (l) is the last image of the pyramid, corresponding to the operation φ_{n_max}.
Figure 7. (a,c) represent the local extraction of the granulometry and anti-granulometry curves from the morphological pyramids Π_γ and Π_φ. (b,d) show the pattern spectrum computed for the patch marked in red in each case.
Figure 8. A global pattern spectrum extracted by the combination of the granulometric and anti-granulometric profiles.
Figure 9. Process of creating the machine learning models using a generic dataset. Green and red samples are used in the creation of the model while blue instances refer to the samples used in the testing stage.
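As an illustrative sketch of this model-building stage, the snippet below trains the four classifiers compared in the tables with scikit-learn on synthetic, class-imbalanced data; the hyper-parameters and the image-wise train/test protocol of the paper are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for the real patch descriptors (LBPV + pattern spectra).
X, y = make_classification(n_samples=600, n_features=30,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

classifiers = {
    "Random Forests": RandomForestClassifier(n_estimators=100, random_state=0),
    "Linear-SVM": make_pipeline(StandardScaler(),
                                SVC(kernel="linear", probability=True)),
    "RBF-SVM": make_pipeline(StandardScaler(),
                             SVC(kernel="rbf", probability=True)),
    "Gaussian Processes": GaussianProcessClassifier(),
}
for name, clf in classifiers.items():
    proba = clf.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]  # lesion probability
    print(f"{name}: AUC = {roc_auc_score(y_te, proba):.3f}")
```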
Figure 10. ROC curves for the different tests.
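Curves like those in Figure 10 can be drawn from per-patch lesion probabilities; in the usage comment, `y_te` and `proba` are assumed to come from a model such as those in the previous sketch.

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score, roc_curve

def plot_roc(y_true, scores, label="test"):
    """Plot one ROC curve (sensitivity vs. 1 - specificity) with its AUC."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    plt.plot(fpr, tpr,
             label=f"{label}: AUC = {roc_auc_score(y_true, scores):.3f}")
    plt.plot([0, 1], [0, 1], "k--")          # chance level
    plt.xlabel("1 - Specificity (FPR)")
    plt.ylabel("Sensitivity (TPR)")
    plt.legend()

# e.g., reusing the held-out predictions from the previous sketch:
# plot_roc(y_te, proba); plt.show()
```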
Figure 11. Automatic lesion detection in four retinal images using the best feature-vector configuration and Gaussian processes for classification. (a,b) Two representative images (DS000U30.jpg and DS0009NU.jpg) from the E-OPHTHA exudates database and (c,d) two representative images (image014.png and image016.png) from the DIARETDB1 database. Red squares indicate true positives, green squares false positives and blue squares false negative detections.
Table 1. AUC, accuracy, sensitivity and specificity of exudate detection on the E-OPHTHA exudates database for different morphological input vectors to the SVM classifier.
| Features | Accuracy | Sensitivity | Specificity | AUC |
|---|---|---|---|---|
| γ_B | 0.6405 ± 0.0572 | 0.6137 ± 0.1378 | 0.6489 ± 0.0813 | 0.6701 ± 0.0617 |
| γ_L | 0.6684 ± 0.0353 | 0.6009 ± 0.1449 | 0.6759 ± 0.0521 | 0.6907 ± 0.0601 |
| φ_B | 0.7014 ± 0.0247 | 0.6396 ± 0.1420 | 0.7090 ± 0.0426 | 0.7337 ± 0.0777 |
| φ_L | 0.5794 ± 0.0588 | 0.5105 ± 0.0887 | 0.5914 ± 0.0823 | 0.5783 ± 0.0366 |
| γ_B φ_B | 0.7554 ± 0.0161 | 0.6592 ± 0.1179 | 0.7698 ± 0.0292 | 0.7877 ± 0.0524 |
| γ_B φ_L | 0.6550 ± 0.0528 | 0.6223 ± 0.1243 | 0.6636 ± 0.0745 | 0.6946 ± 0.0546 |
| γ_B γ_L | 0.7321 ± 0.0208 | 0.6394 ± 0.0839 | 0.7465 ± 0.0271 | 0.7543 ± 0.0461 |
| γ_L φ_B | 0.7112 ± 0.0262 | 0.6446 ± 0.1344 | 0.7194 ± 0.0451 | 0.7482 ± 0.0684 |
| γ_L φ_L | 0.6685 ± 0.0320 | 0.5935 ± 0.1363 | 0.6774 ± 0.0499 | 0.6882 ± 0.0564 |
| φ_B φ_L | 0.7014 ± 0.0256 | 0.6346 ± 0.1376 | 0.7099 ± 0.0432 | 0.7349 ± 0.0749 |
| γ_B γ_L φ_B φ_L | 0.7620 ± 0.0165 | 0.6648 ± 0.1124 | 0.7762 ± 0.0300 | 0.7924 ± 0.0493 |
Table 2. AUC, accuracy, sensitivity and specificity of exudate detection on the E-OPHTHA exudates database for input vectors to the SVM classifier that combine texture and shape descriptors.
| Features | Accuracy | Sensitivity | Specificity | AUC |
|---|---|---|---|---|
| LBPV | 0.8205 ± 0.0320 | 0.7389 ± 0.1034 | 0.8331 ± 0.0534 | 0.8684 ± 0.0435 |
| LBPV-φ_B | 0.8369 ± 0.0336 | 0.7603 ± 0.0897 | 0.8481 ± 0.0519 | 0.8834 ± 0.0397 |
| LBPV-γ_B φ_B | 0.8445 ± 0.0276 | 0.7657 ± 0.0894 | 0.8561 ± 0.0449 | 0.8872 ± 0.0384 |
| LBPV-γ_B γ_L | 0.8447 ± 0.0228 | 0.7496 ± 0.1079 | 0.8591 ± 0.0432 | 0.8803 ± 0.0409 |
| LBPV-γ_B γ_L φ_B φ_L | 0.8533 ± 0.0245 | 0.7721 ± 0.0857 | 0.8651 ± 0.0399 | 0.8948 ± 0.0351 |
Table 3. AUC, accuracy, sensitivity and specificity of bright-lesion detection on the E-OPHTHA exudates database for each classification method, using a decision threshold δ = 0.5.
| Classifier | Accuracy | Sensitivity | Specificity | AUC |
|---|---|---|---|---|
| Random Forests | 0.9508 ± 0.0084 | 0.4785 ± 0.1015 | 0.9921 ± 0.0043 | 0.9256 ± 0.0173 |
| Linear-SVM | 0.8533 ± 0.0245 | 0.7721 ± 0.0857 | 0.8651 ± 0.0399 | 0.8948 ± 0.0351 |
| RBF-SVM | 0.8796 ± 0.0229 | 0.8118 ± 0.0618 | 0.8851 ± 0.0296 | 0.9240 ± 0.0161 |
| Gaussian Processes | 0.8762 ± 0.0206 | 0.8348 ± 0.0650 | 0.8795 ± 0.0266 | 0.9353 ± 0.0174 |
Table 4. Accuracy, sensitivity, specificity and decision threshold δ related to bright-lesion detection on the E-OPHTHA exudates database. Results were optimized for the best sensitivity/specificity trade-off.
| Classifier | Accuracy | Sensitivity | Specificity | δ |
|---|---|---|---|---|
| Random Forests | 0.8410 ± 0.0181 | 0.8418 ± 0.0178 | 0.8411 ± 0.0181 | 0.8999 ± 0.0214 |
| Linear-SVM | 0.8242 ± 0.0279 | 0.8243 ± 0.0284 | 0.8242 ± 0.0278 | 0.5771 ± 0.0958 |
| RBF-SVM | 0.8529 ± 0.0190 | 0.8531 ± 0.0193 | 0.8529 ± 0.0190 | 0.5835 ± 0.1012 |
| Gaussian Processes | 0.8581 ± 0.0221 | 0.8579 ± 0.0221 | 0.8579 ± 0.0222 | 0.5348 ± 0.0747 |
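A plausible reading of "optimized for the best sensitivity/specificity trade-off" is the ROC operating point where sensitivity and specificity are closest to equal; the sketch below selects δ under that assumption, which is not necessarily the authors' exact procedure.

```python
import numpy as np
from sklearn.metrics import roc_curve

def tradeoff_threshold(y_true, scores):
    """Decision threshold delta at the ROC operating point where
    sensitivity and specificity are (nearly) equal."""
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    sens, spec = tpr, 1.0 - fpr
    i = np.argmin(np.abs(sens - spec))  # closest-to-equal operating point
    return thresholds[i], sens[i], spec[i]

# e.g., with predictions from an earlier sketch:
# delta, sens, spec = tradeoff_threshold(y_te, proba)
```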
Table 5. AUC, accuracy, sensitivity and specificity of bright-lesion detection on the DIARETDB1 database for each classification method, using a decision threshold δ = 0.5.
| Classifier | Accuracy | Sensitivity | Specificity | AUC |
|---|---|---|---|---|
| Random Forests | 0.9370 ± 0.0155 | 0.5096 ± 0.0478 | 0.9889 ± 0.0056 | 0.8852 ± 0.0314 |
| Linear-SVM | 0.8702 ± 0.0217 | 0.7396 ± 0.0528 | 0.8849 ± 0.0260 | 0.8879 ± 0.0301 |
| RBF-SVM | 0.8834 ± 0.0311 | 0.7509 ± 0.0489 | 0.8903 ± 0.0211 | 0.8901 ± 0.0291 |
| Gaussian Processes | 0.8703 ± 0.0189 | 0.7705 ± 0.0500 | 0.8814 ± 0.0224 | 0.8971 ± 0.0286 |
Table 6. Accuracy, sensitivity, specificity and decision threshold δ related to bright-lesion detection on the DIARETDB1 database. Results were optimized for the best sensitivity/specificity trade-off.
| Classifier | Accuracy | Sensitivity | Specificity | δ |
|---|---|---|---|---|
| Random Forests | 0.8037 ± 0.0319 | 0.8027 ± 0.0324 | 0.8038 ± 0.0319 | 0.9006 ± 0.0146 |
| Linear-SVM | 0.8112 ± 0.0316 | 0.8107 ± 0.0315 | 0.8112 ± 0.0316 | 0.6207 ± 0.0440 |
| RBF-SVM | 0.8166 ± 0.0322 | 0.8154 ± 0.0321 | 0.8155 ± 0.0322 | 0.6334 ± 0.0401 |
| Gaussian Processes | 0.8184 ± 0.0324 | 0.8183 ± 0.0324 | 0.8184 ± 0.0324 | 0.6023 ± 0.0518 |
Table 7. Comparison of exudate detection methods on the 47 retinal images with exudates of the DIARETDB1 database.
| Methods | Sensitivity | Specificity | PPV |
|---|---|---|---|
| Sopharak et al. [21] | 0.8482 | 0.9931 | 0.2548 |
| Walter et al. [16] | 0.6600 | 0.9864 | 0.1945 |
| Welfer et al. [17] | 0.7048 | 0.9884 | 0.2132 |
| Ghafourian and Pourreza [18] | 0.7828 | – | – |
| Proposed method | 0.8184 ± 0.0324 | 0.8183 ± 0.0324 | 0.4373 ± 0.1374 |
Table 8. AUC, accuracy, sensitivity and specificity of dark-lesion detection (microaneurysms and hemorrhages) on the DIARETDB1 database for each classification method studied in this work, using a decision threshold δ = 0.5.
| Classifier | Accuracy | Sensitivity | Specificity | AUC |
|---|---|---|---|---|
| Random Forests | 0.9016 ± 0.0311 | 0.1718 ± 0.0800 | 0.9818 ± 0.0035 | 0.8150 ± 0.0356 |
| Linear-SVM | 0.7404 ± 0.0134 | 0.6969 ± 0.0865 | 0.7462 ± 0.0245 | 0.7975 ± 0.0367 |
| RBF-SVM | 0.7603 ± 0.0223 | 0.7299 ± 0.0821 | 0.7576 ± 0.0311 | 0.8209 ± 0.0344 |
| Gaussian Processes | 0.7612 ± 0.0244 | 0.7489 ± 0.0844 | 0.7630 ± 0.0350 | 0.8344 ± 0.0330 |
Table 9. Accuracy, sensitivity, specificity and decision threshold δ related to dark-lesion detection (microaneurysms and hemorrhages) on the DIARETDB1 database. Results were optimized for the best sensitivity/specificity trade-off.
| Classifier | Accuracy | Sensitivity | Specificity | δ |
|---|---|---|---|---|
| Random Forests | 0.7397 ± 0.0328 | 0.7391 ± 0.0320 | 0.7398 ± 0.0329 | 0.8645 ± 0.0329 |
| Linear-SVM | 0.7262 ± 0.0329 | 0.7261 ± 0.0324 | 0.7262 ± 0.0330 | 0.5170 ± 0.0591 |
| RBF-SVM | 0.7498 ± 0.0277 | 0.7375 ± 0.0301 | 0.7374 ± 0.0302 | 0.5554 ± 0.0601 |
| Gaussian Processes | 0.7562 ± 0.0290 | 0.7561 ± 0.0289 | 0.7562 ± 0.0290 | 0.5036 ± 0.0712 |
Table 10. Comparison of dark-lesion detection methods on the 45 retinal images with microaneurysms or hemorrhages of the DIARETDB1 database.
| Methods | Sensitivity | Specificity | AUC |
|---|---|---|---|
| Rocha et al. [54] | 0.9000 | 0.6000 | 0.7640 |
| Roychowdhury et al. [55] | 0.7550 | 0.9373 | 0.8263 |
| Ashraf et al. [56] | 0.6763 | 0.6278 | 0.6500 |
| Junior and Welfer [57] | 0.8769 | 0.9244 | – |
| Proposed method | 0.7561 ± 0.0301 | 0.7562 ± 0.0290 | 0.8344 ± 0.0330 |
