Article

An Approach to Automatic Hard Exudate Detection in Retina Color Images by a Telemedicine System Based on the d-Eye Sensor and Image Processing Algorithms

Emil Saeed 1, Maciej Szymkowski 2,*, Khalid Saeed 2 and Zofia Mariak 1

1 Department of Ophthalmology, Faculty of Medicine, Medical University of Bialystok, 24A Curie-Sklodowskiej Street, 15-276 Bialystok, Poland
2 Faculty of Computer Science, Bialystok University of Technology, 45A Wiejska Street, 15-351 Bialystok, Poland
* Author to whom correspondence should be addressed.
Sensors 2019, 19(3), 695; https://doi.org/10.3390/s19030695
Submission received: 14 December 2018 / Revised: 31 January 2019 / Accepted: 5 February 2019 / Published: 8 February 2019
(This article belongs to the Section Intelligent Sensors)

Abstract

Hard exudates are among the most characteristic and dangerous signs of diabetic retinopathy. They can be marked during routine ophthalmological examination and seen in color fundus photographs (i.e., taken with a fundus camera). The purpose of this paper is to introduce an algorithm that can extract pathological changes (i.e., hard exudates) in diabetic retinopathy. This was a retrospective, nonrandomized study. A total of 100 photos from 67 patients (39 females and 28 males, aged between 50 and 64) were included in the analysis: 50 sick and 50 normal eyes. Using classical image processing methods, such as binarization and median filtration, on data read from the d-Eye sensor, the system automatically diagnosed small lesions of diabetic retinopathy with an accuracy of 98%. Moreover, the algorithm returns correct classification decisions for both high quality images and low quality samples. Furthermore, we consider taking retina photos using mobile phones rather than fundus cameras, which is more practical. The paper presents this innovative approach, describes the algorithm, and reports the results.

1. Introduction

Diabetic retinopathy is one of the commonest reasons for blind registration in the world. Four hundred million people worldwide have diabetes, and this number is expected to rise further by 2035; about 80% of patients have retinopathy after 20 years of the disease [1,2]. It is still a challenge for ophthalmologists, as diabetic retinopathy should be diagnosed before it becomes symptomatic. Screening examinations are very important, but people often do not see their ophthalmologist until they notice decreased visual acuity. Diabetic retinopathy is a vascular disease. The development of microaneurysms (focal widenings of the retinal vessels) in the capillary network allows plasma and lipids to leak out of the vessels into the retina [1]. These lipid deposits are called hard exudates. They appear as yellow, shiny flecks in the macular area. Diabetes is the main cause of hard exudates, but they may also be caused by retinal vein occlusion, neuroretinitis, or radiation-induced retinal vasculopathy. Plasma leakage results in retinal oedema, and vision may be significantly reduced. Other lesions may also appear in diabetic retinopathy, such as white cotton-wool spots (a result of vessel occlusion) or haemorrhages [3]. The earlier and more precisely the disease is diagnosed, the more accurate and successful the first steps taken to deal with the patient's state will be. Detecting the disease in its early stages and providing proper treatment gives a chance of stopping its progression and improving the patient's visual acuity [4]. The authors' algorithm could succeed in identifying such cases. Taking a retinal photo with a phone camera may reveal small lesions and retinal changes and suggest whether the patient requires referral to an ophthalmologist. Treatment depends on visual acuity and the extent of the retinal abnormalities; in some cases, it is enough to observe the patient rather than treat them.
In the literature, different approaches to retina color image processing and hard exudate extraction can be found. These solutions fall into two groups: the first based on image processing and analysis, and the second using artificial intelligence. However, despite the multiple diversified approaches connected with retina color image processing, little research with high accuracy results has been done in this particular field. The authors took into consideration only approaches published between 2007 and 2018. The Scopus, Web of Science, Springer Link, and Nature databases were searched with the keywords: retina, hard exudates, pathological changes, ophthalmologic system, telemedicine, and automatic diagnosis.
The main question concerns the current state of the art, namely which techniques are used in currently published algorithms. Moreover, the authors would like to present a few representative examples of both groups of algorithms mentioned above.
We first surveyed different approaches to retina color image processing and analysis. The most interesting was [5], which presented an algorithm to remove the optic disk and vascular pattern from a retina color image before hard exudate detection. The main disadvantages of the proposed method were its time-consuming nature and the complexity of its algorithm.
The authors in [6] proposed an interesting approach to retina color image processing for human recognition. In this work, a fully automated system was presented. The basic algorithm in this approach is the scale invariant feature transform (SIFT), used to make retina images invariant to scale and rotation. Moreover, the authors proposed a novel preprocessing method based on the improved circular Gabor filter (ICGF). Their approach was fast and highly accurate for retina color image processing applied to user recognition. However, contrary to our approach, it did not deal with pathological changes in the course of human identification.
The second group of algorithms is connected directly with hard exudate detection. The first analyzed paper was based on the Haar wavelet [7]. The algorithm consists of seven steps, in which the authors performed simple operations on the image (like conversion to grayscale or color normalization) as well as more advanced ones (like Haar wavelet decomposition and reconstruction). The results presented in [7] show that this approach achieved no more than 22.48% accuracy. Hence, the main disadvantage of this approach was definitely its low accuracy level.
Another interesting algorithm is presented in [8]. The authors proposed an approach to hard exudate detection based on morphological feature extraction. Their algorithm consists of 10 steps, each of which can be classified as a typical, basic image processing method. Beyond these steps, the authors needed to implement another algorithm by which the optic disk and bright structures were removed. The main disadvantages of this approach were its high complexity and its low accuracy on images with different levels of brightness.
An approach using deep learning was presented in [9], in which the authors created a deep neural network for hard exudate detection, implemented with the TensorFlow framework. This solution was time-consuming and did not detect small hard exudates in their initial stage.
The literature review has shown that there are multiple techniques used for retina color image processing. We observed simple algorithms based on filtering or morphological operations. There are also approaches that use more advanced techniques, like Haar wavelet decomposition, support vector machines [10], and the discrete cosine transform [11]. Moreover, during the initial stage of the research, it was observed that most approaches are based on simple image processing methods [12,13,14]. There are also different representatives of the second, soft computing-based group of solutions. Multiple techniques were used to detect hard exudates; most are connected with neural networks [15,16], genetic algorithms [17], machine learning [18,19], and deep learning [20,21]. They present interesting ideas for hard exudate detection with artificial intelligence methods. However, all of these algorithms are time-consuming, with the disadvantage that small hard exudates in their initial stage are not detected.
Another question concerns currently presented approaches to telemedicine in ophthalmology. These solutions can be divided into two groups: the first uses intelligent techniques and is fully automated, while the second needs stationary devices.
The most interesting representatives of the first group are [22,23], in which smart algorithms for hard exudate detection were introduced. The second group is represented by [24], whose authors proposed a new approach to telemedicine in ophthalmology in which a high-resolution fundus camera placed in a specialized van is used. When the image is obtained, it is sent to an ophthalmologist over an internet connection.
The authors also found state-of-the-art reviews in the literature [25,26]. In [26], a comparison between different approaches to smartphone usage in telemedicine was presented, and the results of different solutions for automatic diabetic retinopathy diagnosis were compared. Neither of these approaches reaches an accuracy higher than 95%.
In this paper, we present our own robust, high accuracy (98%) algorithm by which hard exudates can be detected with simple and fast methods. Moreover, a description of the fully automated telemedicine system based on the d-Eye sensor is presented, and experiments with low quality images are conducted. The results and discussion are described in Section 3 and Section 4, respectively.

2. Materials and Methods

2.1. Proposed Telemedicine System

Telemedicine is one of the fastest developing branches of science, combining information technology and medicine. An interesting description of this idea was presented in [27]. We have also contributed to it: our approach is based on the novel d-Eye sensor and a fully automated image processing system for pathological change detection in retina color images.
The general architecture of the proposed smart system is presented in Figure 1. It can be divided into two main parts: the first is connected with the end user (patient), and the second consists of the image processing, feature extraction, and classification algorithms. The software was implemented in the Java programming language (cloud part) and Swift (smartphone part). Moreover, the application was deployed on two cloud platforms: Google Cloud and Microsoft Azure. The authors tested these two popular services, and both achieved results in similar times. The cloud part was created as a set of REST (Representational State Transfer) services that are called by the application after the user obtains their retina photo. Each procedure is called using standard HTTP methods (GET, POST, PUT, DELETE). However, to use the services implemented in the cloud, the user has to authenticate themselves. This is done on the basis of third-party identity providers (Google, Microsoft, GitHub) and a JSON Web Token obtained after successful authentication on the external pages.
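To make the client–cloud exchange concrete, the sketch below shows how such an authenticated image upload could look from a plain Java client. The host, endpoint path, file name, and token source are illustrative assumptions, not the authors' published API.

```java
// Hypothetical sketch of uploading an (already encrypted) retina image to the
// cloud service over REST with a JWT bearer token; endpoint and names assumed.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class RetinaUploadClient {
    public static void main(String[] args) throws Exception {
        // Token previously obtained from a third-party identity provider.
        String jwt = System.getenv("OPHTHALMIC_JWT");
        HttpClient client = HttpClient.newHttpClient();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example-ophthalmic-cloud/api/v1/retina-images")) // assumed URL
                .header("Authorization", "Bearer " + jwt)
                .header("Content-Type", "application/octet-stream")
                .POST(HttpRequest.BodyPublishers.ofFile(Path.of("retina.enc")))
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Diagnosis service replied: " + response.statusCode() + " " + response.body());
    }
}
```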
The customer-facing components are the smartphone and the d-Eye sensor [28]. The crucial point of the approach is the d-Eye, a smartphone-based retinal screening system. This sensor can be easily attached to a smartphone, turning the mobile phone into a real ophthalmic camera. Once attached, it is easy to conduct routine eye examinations and retinal screening, and the user obtains a high resolution retina color image. The d-Eye sensor is composed of two parts: the bumper and the d-Eye lens. This smart sensor uses the camera and the light source provided by the smartphone: the light is redirected from the flash and projected coaxially to the lens, which allows the retina image to be captured. Moreover, the d-Eye lens eliminates corneal glare, a common problem in standard ophthalmoscopes. The d-Eye sensor works properly when placed about 1 cm in front of the human eye. Currently, the manufacturer has only prepared overlays for the Apple iPhone (from iPhone 5 to iPhone 7), although versions for other smartphones are in development.
When a retina color image is obtained by the user, the image processing and analysis procedure starts. At the beginning, the image is sent over a remote connection to the cloud-based ophthalmic system; before this, the data is encrypted using public key cryptography. The next step is image processing, fully described in Section 2.2 of this article. When image processing is finished, an additional procedure dealing with retina vein removal is carried out, and its results are applied to the image preprocessed in the first stage. Another step removes the optic disk, after which the processing stage is finished. As the last step, classification based on image analysis is run; it decides whether the image contains pathological changes or not. Classification is also described in Section 2.2.
The proposed system is also ready to use with stationary devices like the Kowa VX-10 Fundus Camera. In this case, the physician takes photos with the device and sends them via a web portal to the processing system; the diagnosis result is then sent back to their smartphone. In Section 3, we present the results of the proposed approach on images from different sensing devices, handheld and stationary (i.e., the Digital Eye Center Microclear Handheld Ophthalmic Camera HNF and the Kowa VX-10 Fundus Camera). The devices used during the research are presented in Figure 2, and their parameters in Table 1. All devices were used because the authors wanted to apply their solution to a variety of images (with different resolutions and qualities). The results of these experiments are described in Section 3 of this document.

2.2. Methodology

Our approach consists of a few algorithms. The first preprocesses the image; with this step, we obtain an image without superfluous elements. The second extracts the retina vascular pattern, and the next step removes the vascular pattern and the optic disk from the image obtained after the first stage. Finally, the classification part provides information on whether the eye contains pathological changes or not. The activity diagram of the approach is presented in Figure 3.

2.2.1. Image Preprocessing

The image preprocessing part draws mainly on our experience from previous research [29,30,31]. The whole algorithm was implemented in the Java programming language and adapted to cloud-based solutions. The block diagram of this algorithm is presented in Figure 4.
The first step is to load the image. It is then converted to grayscale using the green channel. In the literature, we found that this channel is normally used in medical image processing [32,33,34]. Our experiments have also shown that it gives the best results; that is, the image retains the largest number of the details we are looking for. A visual comparison between different grayscale conversion methods is shown in Figure 5.
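As an illustration, a minimal Java sketch of this green-channel conversion is given below; the class and method names are ours, not the authors'.

```java
// Keep only the green component of each pixel as the grayscale value.
import java.awt.image.BufferedImage;

public final class GreenChannel {
    public static BufferedImage toGrayscale(BufferedImage src) {
        BufferedImage gray = new BufferedImage(src.getWidth(), src.getHeight(),
                BufferedImage.TYPE_BYTE_GRAY);
        for (int y = 0; y < src.getHeight(); y++) {
            for (int x = 0; x < src.getWidth(); x++) {
                int g = (src.getRGB(x, y) >> 8) & 0xFF; // green component of the packed ARGB value
                gray.getRaster().setSample(x, y, 0, g); // store it directly as the gray level
            }
        }
        return gray;
    }
}
```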
The second stage of the preprocessing part is histogram stretching, which is used to increase the contrast of the image [35,36]. This operation makes the pixel values cover all possible gray levels (from 0 to 255). The new value for each grayscale level was calculated as in (1).
$$S'_k = \frac{S_k - k_{\min}}{k_{\max} - k_{\min}} \cdot Z_k \qquad (1)$$

where $S'_k$ is the new value for the k-th level, $S_k$ is the original value for the k-th level, $k_{\min}$ and $k_{\max}$ are the minimum and maximum levels, respectively, in the original histogram, whilst $Z_k$ is the number of new possible levels. In this case, $Z_k = 256$ because we use all values from 0 to 255. The result obtained with this operation is presented in Figure 6.
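A direct Java rendering of equation (1) could look as follows. We scale by 255 rather than 256 so that results stay within the 8-bit range, a small implementation choice of our own.

```java
// Stretch pixel values so they span the full 0..255 range, per equation (1).
public final class HistogramStretch {
    public static int[] stretch(int[] pixels) {
        int min = 255, max = 0;
        for (int p : pixels) { min = Math.min(min, p); max = Math.max(max, p); }
        if (max == min) return pixels.clone(); // flat image: nothing to stretch
        int[] out = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            out[i] = (pixels[i] - min) * 255 / (max - min); // (S_k - k_min)/(k_max - k_min) * 255
        }
        return out;
    }
}
```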
The next step is connected with noise removal from the image. We deal with this task using a median filtering operation. The result is presented in Figure 7.
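A minimal sketch of such a median filter is shown below; the 3 × 3 window size is an assumption, as the paper does not state the window used.

```java
// 3x3 median filter: each output pixel is the median of its nine neighbours.
import java.util.Arrays;

public final class MedianFilter {
    public static int[][] filter(int[][] img) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][w];
        int[] window = new int[9];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int n = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++) {
                        int yy = Math.min(Math.max(y + dy, 0), h - 1); // clamp at image borders
                        int xx = Math.min(Math.max(x + dx, 0), w - 1);
                        window[n++] = img[yy][xx];
                    }
                Arrays.sort(window);
                out[y][x] = window[4]; // median of the nine values
            }
        }
        return out;
    }
}
```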
The last step of the image preprocessing stage is gamma correction. Retina color images can be represented at different levels of brightness (from bright red to deep blood red), which requires us to unify the color of the image. This was done with the gamma correction method described in [37]. The gamma parameter value was calculated as in (2), and each pixel was multiplied by the gamma value. The result of this step is presented in Figure 8.
$$\gamma = \frac{0.3}{\log_{10} X} \qquad (2)$$

where $X$ is the mean pixel value calculated over all pixels in the image.
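The sketch below implements our reading of equation (2), deriving gamma from the mean brightness and scaling each pixel by it, as the text describes; the exact form of (2) is our reconstruction, not a verified formula.

```java
// Gamma step as we read equation (2): gamma derived from mean brightness X,
// each pixel then scaled by gamma (per the paper's description).
public final class GammaCorrection {
    public static int[] apply(int[] pixels) {
        double mean = 0;
        for (int p : pixels) mean += p;
        mean /= pixels.length;                 // X: mean pixel value
        double gamma = 0.3 / Math.log10(mean); // equation (2) as reconstructed; assumes mean > 1
        int[] out = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            out[i] = (int) Math.min(255, Math.round(pixels[i] * gamma));
        }
        return out;
    }
}
```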
This step ends the image preprocessing stage. The image is now passed to the next module, which extracts the vascular pattern; the following stage concerns vascular pattern and optic disk removal.

2.2.2. Retina Vascular Pattern Extraction

The second module of the proposed approach extracts the retina vascular pattern. During discussions with ophthalmologists, it was pointed out that hard exudates cannot be observed on vascular patterns. It follows that information about the vascular pattern is not useful for detecting pathological changes, meaning we can remove it from the image. The block diagram of the proposed algorithm for retina vascular pattern extraction is presented in Figure 9.
The first step is once again conversion of the image to grayscale on the basis of the green channel. As presented in Section 2.2.1, this channel is used because it guarantees the most precise results and preserves the highest number of details in the image.
The next step, noise removal, is also done with median filtering, which is an efficient way to get precise results. The original image and the images obtained after both of these steps are presented in Figure 10.
The third and the fourth steps are connected with image enhancement before vessel segmentation. It is done on the basis of histogram equalization (which allows enhancement of the image contrast) and brightness correction. Images obtained after both of these stages are presented in Figure 11.
The main aim of this algorithm is vascular pattern extraction. The next procedure segments the vessels from the retina image using a Gaussian matched filter [38]. All twelve masks were used to detect vessels in the retina image. The result is presented in Figure 12.
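For illustration, a compact sketch of such a matched filter is given below: twelve kernels rotated in 15° steps, with the strongest response kept per pixel. The kernel size, sigma, and segment length are illustrative assumptions, and the kernel is made zero-mean, a common normalization that the paper does not state.

```java
// Gaussian matched filter in the spirit of [38]: a dark-line Gaussian profile
// rotated over 12 orientations; the maximum zero-mean response is kept.
public final class MatchedFilter {
    public static double[][] respond(int[][] img) {
        int h = img.length, w = img[0].length, half = 7;
        double sigma = 2.0, halfLen = 4.5;            // assumed kernel parameters
        double[][] best = new double[h][w];
        for (int k = 0; k < 12; k++) {                // 12 orientations in 15-degree steps
            double theta = Math.toRadians(15 * k);
            double c = Math.cos(theta), s = Math.sin(theta);
            double[][] ker = new double[2 * half + 1][2 * half + 1];
            double sum = 0; int count = 0;
            for (int dy = -half; dy <= half; dy++)
                for (int dx = -half; dx <= half; dx++) {
                    double u = dx * c + dy * s;       // coordinate across the vessel
                    double v = -dx * s + dy * c;      // coordinate along the vessel
                    if (Math.abs(v) > halfLen) continue;
                    double g = -Math.exp(-u * u / (2 * sigma * sigma)); // dark-line profile
                    ker[dy + half][dx + half] = g;
                    sum += g; count++;
                }
            double mean = sum / count;
            for (int dy = -half; dy <= half; dy++)    // make the kernel zero-mean
                for (int dx = -half; dx <= half; dx++)
                    if (ker[dy + half][dx + half] != 0) ker[dy + half][dx + half] -= mean;
            for (int y = half; y < h - half; y++)
                for (int x = half; x < w - half; x++) {
                    double r = 0;
                    for (int dy = -half; dy <= half; dy++)
                        for (int dx = -half; dx <= half; dx++)
                            r += ker[dy + half][dx + half] * img[y + dy][x + dx];
                    best[y][x] = Math.max(best[y][x], r); // keep the strongest orientation
                }
        }
        return best;
    }
}
```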
To obtain a proper vascular pattern, a local entropy binarization algorithm was used to produce white veins on a black background. However, this caused a few small elements that do not belong to retinal veins to appear in the image alongside the marked veins. These additional elements are removed on the basis of their length: if they are too short (the length in pixels is selected arbitrarily), they are deleted, as we assume such elements are not parts of real veins. The results of both of these steps are presented in Figure 13.
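A possible implementation of this clean-up, measuring component size in pixels via a flood fill, is sketched below; the 4-connectivity and threshold handling are our assumptions.

```java
// Erase white components smaller than minPixels; they are assumed not to be veins.
import java.util.ArrayDeque;

public final class SmallObjectRemoval {
    public static void removeShort(boolean[][] mask, int minPixels) {
        int h = mask.length, w = mask[0].length;
        boolean[][] seen = new boolean[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                if (!mask[y][x] || seen[y][x]) continue;
                // Collect one 4-connected white component with an explicit stack.
                ArrayDeque<int[]> stack = new ArrayDeque<>();
                ArrayDeque<int[]> component = new ArrayDeque<>();
                stack.push(new int[]{y, x});
                seen[y][x] = true;
                while (!stack.isEmpty()) {
                    int[] p = stack.pop();
                    component.push(p);
                    int[][] nbrs = {{p[0]-1,p[1]},{p[0]+1,p[1]},{p[0],p[1]-1},{p[0],p[1]+1}};
                    for (int[] n : nbrs)
                        if (n[0] >= 0 && n[0] < h && n[1] >= 0 && n[1] < w
                                && mask[n[0]][n[1]] && !seen[n[0]][n[1]]) {
                            seen[n[0]][n[1]] = true;
                            stack.push(n);
                        }
                }
                if (component.size() < minPixels)      // too short to be a vein
                    for (int[] p : component) mask[p[0]][p[1]] = false;
            }
    }
}
```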
With this step, the vascular pattern of the processed retina is obtained. In the form presented in Figure 13b, it is applied to the image obtained after the preprocessing stage (Figure 8b). Both of these images are used in the next module.

2.2.3. Removal of Retina Vascular Pattern and Optic Disk

This module is responsible for preparing the image for final classification. The first step of the algorithm is to remove the vascular pattern extracted in the second module from the final image of the first module. Removal means that all pixels belonging to vessels are marked in black. The image obtained after vascular pattern removal is presented in Figure 14.
The following stage is image binarization, through which pathological changes can be extracted. The authors tested a few different automatic binarization methods, and the experiments showed that the best results were obtained with the binarization threshold set to 13. We observed that the pathological changes were shown, as well as the optic disk. The results after binarization are presented in Figure 15. The last step of this procedure is optic disk removal: in the ophthalmologist's opinion, hard exudates cannot occur on the optic disk, hence it can be removed.
The optic disk is removed on the basis of data entered by the user regarding whether the image shows the left or right eye (these differ in the position of the optic disk in the retina color image). In our program, the user (patient) sets this information at the beginning of the diagnostic procedure. Before removal, we calculate the image variance of the last binarized sample, which shows which parts of the image are characterized by the greatest variability. This image is presented in Figure 16.
After the image variance calculation, we can remove the optic disk. We look for the first white pixel from the side (left or right eye) selected by the user. When this pixel is found, we remove an 80 × 80 square [39] by marking all of its pixels in black. The image after optic disk removal is presented in Figure 17.
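The sketch below shows one way to realize this step; the column-wise scan order and the centring of the 80 × 80 square on the first white pixel are our reading of the description, not a confirmed detail.

```java
// Scan from the user-declared side for the first white pixel and blank an
// 80x80 square around it, following the idea of [39].
public final class OpticDiskRemoval {
    public static void remove(boolean[][] mask, boolean scanFromLeft) {
        int h = mask.length, w = mask[0].length;
        for (int i = 0; i < w; i++) {
            int x = scanFromLeft ? i : w - 1 - i;   // start from the relevant side
            for (int y = 0; y < h; y++) {
                if (!mask[y][x]) continue;
                // Blank an 80x80 square centred on the first white pixel found.
                for (int yy = Math.max(0, y - 40); yy < Math.min(h, y + 40); yy++)
                    for (int xx = Math.max(0, x - 40); xx < Math.min(w, x + 40); xx++)
                        mask[yy][xx] = false;
                return;
            }
        }
    }
}
```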

2.2.4. Classification

The last part of our approach is the classification. First, the image obtained from the third module is applied to the original retina color image, and all changes are marked in blue. The resulting image is presented in Figure 18. The classification rule is simple: if blue points are visible in the retina color image after optic disk removal, we conclude that the image contains pathological changes; otherwise (no blue points in the retina color image), the retina is healthy.
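The decision rule itself reduces to a presence test over the change mask, as in the sketch below (names are ours).

```java
// Final decision: any surviving (blue-marked) pixel means pathological changes.
public final class Classifier {
    public static boolean hasPathologicalChanges(boolean[][] changeMask) {
        for (boolean[] row : changeMask)
            for (boolean p : row)
                if (p) return true; // at least one blue point -> pathological changes
        return false;               // no blue points -> healthy retina
    }
}
```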

3. Results

The first experiment was conducted on a database consisting of 100 samples (50 healthy retinas and 50 with pathological changes), all obtained with the Kowa VX-10. The database was created from samples of 67 patients (some are represented by two samples taken at different times: the first at the beginning of treatment and the second after some time had passed). All samples were acquired during medical examinations at Białystok University Clinical Hospital. Here, we used the whole presented method, including the classification part. As mentioned in Section 2, the decision was made on the basis of the blue points observed in the retina color image after applying the results of our approach. The experiment showed that the proposed approach had an accuracy level of 98%; only two healthy images were evaluated as images with pathological changes.
In the first experiment, the authors answered the following question: does the proposed system correctly classify images with pathological changes? It was necessary to measure two parameters: the false acceptance rate (FAR) and the false rejection rate (FRR). On the basis of the conducted experiment, the FAR was 2%. False acceptance means recognizing a retina color image as a sample with pathological changes when it in fact represents a healthy eye; that is, the system incorrectly recognizes pathological lesions in healthy eyes. False rejection is the situation in which a retina color image contains pathological changes but is recognized as a healthy eye. The FRR for our algorithm was equal to 0%.
The first author, an ophthalmologist, pointed out that a higher FAR is preferable to a higher FRR, because patients with diabetes can never have too many eye fundus examinations, especially when the duration of their disease is long. The duration of diabetes mellitus is the most important risk factor for the development of diabetic retinopathy.
The second experiment checked whether our algorithm could be used for different images: ones with high resolution and quality, and ones obtained by devices with worse parameters. Our database of images from the lower quality devices contained 60 samples (50 healthy retinas and 10 with pathological changes). Thirty healthy samples were obtained with the Digital Eye Center Microclear Handheld Ophthalmic Camera HNF, whilst the rest of the low quality images were acquired with the d-Eye sensor. All samples were acquired during medical examinations at Białystok University Clinical Hospital. The comparison was done to check whether it is possible to observe pathological changes in retina images from worse devices. A sample image is presented in Figure 19.
The results showed that, in the case of retina images from devices with lower precision, none of the healthy pictures was classified as a sample with pathological changes, and all retinas with diabetic retinopathy were classified as having pathological changes. This experiment confirmed that our solution can also be used with lower quality images, and no additional adjustment of the proposed approach was needed.
It should also be pointed out that all samples (high quality and low quality) were obtained in a clinical setting. This allowed us to obtain retina images taken with the highest precision by an experienced ophthalmologist. High quality samples were taken by the device with the highest possible resolution (Kowa VX-10) whilst low quality images were obtained with the other two devices.
The summary of the results obtained for both experiments is shown in Table 2.
The authors calculated values for the FAR and FRR parameters as well as for sensitivity and specificity. The statistical classification functions were calculated as in (3) and (4). The obtained results are presented in Table 3.
$$\mathrm{Sensitivity} = \frac{|\text{True Positive samples}|}{|\text{True Positive samples}| + |\text{False Negative samples}|} \qquad (3)$$

$$\mathrm{Specificity} = \frac{|\text{True Negative samples}|}{|\text{True Negative samples}| + |\text{False Positive samples}|} \qquad (4)$$
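As a quick check, the short program below recomputes the metrics of (3) and (4) from the confusion counts of the high quality experiment (values reproduced from Table 2) and reproduces the figures in Table 3.

```java
// Sensitivity, specificity, and accuracy from the Table 2 confusion counts.
public final class Metrics {
    public static void main(String[] args) {
        int tp = 50, fn = 0, tn = 48, fp = 2; // high quality experiment (Table 2)
        double sensitivity = tp / (double) (tp + fn);
        double specificity = tn / (double) (tn + fp);
        double accuracy = (tp + tn) / (double) (tp + tn + fp + fn);
        System.out.printf("Sensitivity %.0f%%, specificity %.0f%%, accuracy %.0f%%%n",
                100 * sensitivity, 100 * specificity, 100 * accuracy);
        // Prints: Sensitivity 100%, specificity 96%, accuracy 98%
    }
}
```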

4. Discussion

The purpose of the study was to create and implement an algorithm that helps diabetic patients examine their eye fundus. Our method can extract pathological lesions such as hard exudates in diabetic retinopathy and separate such cases from healthy eyes. The algorithm was applied to color fundus retina photos from patients of the Medical University Clinical Hospital. We focus on diabetes mellitus, as it is one of the commonest and most dangerous diseases. Little research has been done by other researchers so far in this particular field.
During our research we were mainly looking for the answers to four questions. These were:
Would it be possible to create a fast and precise algorithm for hard exudate detection in retina color images?
Can we implement an algorithm that works on high quality images as well as low quality samples without precision reduction?
Should we expect higher false acceptance rate (FAR) than false rejection rate (FRR) in the case of pathological change detection in retina color images?
Can we use simple overlay on the smartphone for retina color image acquisition, and is it possible to implement a fully automated diagnostic system?
The conducted experiments provided answers to all of these questions. The authors had various meetings with experienced ophthalmologists and asked whether it is better to detect some false changes in healthy retina images than to miss small changes in images with pathological lesions. The answer was that the solution should rather detect some false changes in healthy samples, because it is better to inform the patient about possible pathological changes than to miss them and report that the eye is healthy.
Regarding the first question, it has to be pointed out that our solution has 98% accuracy: on a 100-element database, only two healthy images were classified as samples with pathological changes. Moreover, we measured the time needed to obtain the results. The first measurement covered only the algorithm itself (without smartphone and web communication); the decision was presented after 6 s. This test was run on a personal computer with one Intel Core i7 CPU, 16 GB RAM, and a 256 GB SSD. The second measurement covered the whole system, taking into consideration communication between the smartphone, the d-Eye sensor, and the cloud-based program. As mentioned in the description of the system architecture, the sensor is an overlay that uses the smartphone flash (so no delay is observed here), but the smartphone program calls REST services with standard HTTP methods and the data is encrypted before being sent over the network (so some delays can be noted here). Moreover, the user has to authenticate themselves before they can use the cloud-based software. The examination results were obtained after 1 min and 15 s. We concluded that the web connection, data encryption, and communication procedures account for the additional time required to finish the procedure.
The second question was resolved by another experiment. As mentioned in Section 3, we also used low quality images for pathological change recognition, with both healthy retina samples and diabetic retinopathy images. The results showed that our solution can be used on low quality images without any additional adjustment; the authors did not observe any reduction in the algorithm's accuracy.
The last question was in fact the most crucial and basic one. The authors implemented a fully automated pathological change detection algorithm, the whole cloud-based application, and the program for the iOS operating system. The proposed solution is easy to use with the d-Eye sensor and can be a real help for anyone who would like to check the state of their health, not just those suffering from diabetes.
Another experiment that was done by the authors was a comparison with other solutions. We took into consideration the algorithms’ accuracies, sensitivities, and specificities. The results of our experiment have shown that the approach proposed in this article is competitive in comparison with the other solutions described in various research papers. A comparison of the proposed solution with others is presented in Table 4.
In Table 5, the authors present a summary of the techniques used in this paper, and others to which they are compared.
To sum up this section, the proposed cloud-based solution can be used by everyone because it is simple, and its usage allows the user to visit the ophthalmologist with initial examination results. Patients do not need to wait for examinations; they can do them on their own. Moreover, in comparison with the other researchers' solutions, the proposed approach has a higher accuracy level, and no images with pathological changes were mistakenly rejected.

5. Conclusions

The proposed algorithm detected hard exudates with an accuracy of 98%, which is more precise than known solutions. This may help patients with diabetic retinopathy get to an ophthalmologist before symptoms occur. Moreover, the solution is characterized by a high accuracy level and a short operation time to obtain precise examination results.
The approach presented in this paper was implemented in a real development environment and was tested for accuracy and data safety. We used different encryption algorithms to make the data safer.
In the future, the authors would like to create a fully automated eye diagnostic system that can be used for the detection of many different diseases; we will take into consideration diseases other than diabetic retinopathy.
The authors’ current work is to improve the proposed algorithm with soft computing methods for accuracy results even higher than 98%. Moreover, we are working on implementing the hardware of our own sensor device for retina color image acquisition.

Author Contributions

E.S. proposed the subject and the medical part of the manuscript, discussed the achieved results, and suggested amendments; M.S. wrote the algorithm, cloud-based system, and technical part of the manuscript; K.S. reviewed and supervised the manuscript technical part and the implementation of the system and algorithm; Z.M. supervised the manuscript medical part.

Funding

This work was partially supported by grant N/ST/MN/17/001/1157 from Medical University of Białystok and by grant S/WI/3/2018 from Białystok University of Technology, and funded with resources for research by the Ministry of Science and Higher Education in Poland.

Acknowledgments

The authors would like to express their sincere thanks to Medical University of Białystok, Department of Ophthalmology. No work could be done without their generous help in sharing their data to process as well as their expertise.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. James, B.; Chew, C.; Bron, A. Lecture Notes. Ophthalmology 2007, 1, 172–173. [Google Scholar]
  2. Klein, B.E. Overview of epidemiologic studies of diabetic retinopathy. Ophthalmic Epidemiol. 2007, 14, 179–183. [Google Scholar] [CrossRef] [PubMed]
  3. Chen, Y.; Li, Y.; Yan, Y.; Shen, X. Diabetic macular morphology changes may occur in the early stage of diabetes. BMC Ophthalmol. 2016, 16. [Google Scholar] [CrossRef] [PubMed]
  4. Sasaki, M.; Kawasaki, R.; Noonan, J.E.; Wong, T.Y.; Lamourex, E.; Wang, J.J. Quantitative measurement of hard exudates in patients with diabetes and their associations with serum lipid levels. Invest. Ophthalmol. Vis. Sci. 2013, 54, 5544–5550. [Google Scholar] [CrossRef] [PubMed]
  5. Marupally, A.G.; Vupparaboina, K.K.; Peguda, H.K.; Richhariya, A.; Jana, S.; Chhablani, J. Semi-automated quantification of hard exudates in colour fundus photographs diagnosed with diabetic retinopathy. BMC Ophthalmol. 2017, 17, 172. [Google Scholar] [CrossRef] [PubMed]
  6. Meng, X.; Yin, Y.; Yang, G.; Xi, X. Retinal Identification Based on an Improved Circular Gabor Filter and Scale Invariant Feature Transform. Sensors 2013, 13, 9248–9266. [Google Scholar] [CrossRef] [PubMed]
  7. Rokade, P.; Manza, R. Automatic Detection of Hard Exudates in Retinal Images Using Haar Wavelet Transform. Int. J. Appl. Innov. Eng. Manage. 2015, 4, 402–410. [Google Scholar]
  8. Joshi, S.; Karlue, P.T. Detection of Hard Exudates Based on Morphological Feature Extraction. Biomed. Pharmacol. J. 2018, 11, 215–225. [Google Scholar] [CrossRef]
  9. Benzamin, A.; Chakraborty, C. Deep Learning for Hard Exudates Detection. Available online: https://arxiv.org/ftp/arxiv/papers/1808/1808.03656.pdf (accessed on 21 November 2018).
  10. Mansour, R.F.; Abdelrahim, E.M.; Al-Johani, A.S. Identification of Diabetic Retinal Exudates in Digital Color Images Using Support Vector Machine. J. Intell. Learn. Syst. Appl. 2013, 5, 135–142. [Google Scholar] [CrossRef]
  11. Rodriguez, L.; Serrano, G. Exudates and Blood Vessel Segmentation in Eye Fundus Images Using the Fourier and Cosine Discrete Transforms. Computación y Sistemas 2016, 20, 697–708. [Google Scholar]
  12. Kekre, H.; Sarode, T.; Parkar, T. Hybrid Approach for Detection of Hard Exudates. Int. J. Adv. Comput. Sci. Appl. 2013, 4, 250–255. [Google Scholar]
  13. Eadgahi, M.G.F.; Pourreza, H. Localization of Hard Exudates in Retinal Fundus Image by Mathematical Morphology Operations. J. Theor. Phys. Cryptography 2012, 1, 185–189. [Google Scholar]
  14. Partovi, M.; Rasta, S.H.; Javadzadeh, A. Automatic detection of retinal exudates in fundus images of diabetic retinopathy patients. J. Anal. Res. Clin. Med. 2016, 4, 104–109. [Google Scholar] [CrossRef]
  15. Garcia, M.; Sanchez, C.I.; Lopez, M.I.; Abasolo, D.; Hornero, R. Neural network based detection of hard exudates in retinal images. Comput. Methods Programs Biomed. 2009, 93, 9–19. [Google Scholar] [CrossRef] [PubMed]
  16. Wang, X.; Lu, Y.; Wang, Y.; Chen, W. Diabetic Retinopathy Stage Classification Using Convolutional Neural Networks. In Proceedings of the 2018 IEEE International Conference on Information Reuse and Integration (IR), Salt Lake City, UT, USA, 6–9 July 2018. [Google Scholar] [CrossRef]
  17. Jestin, V.K.; Anitha, J.; Hemanth, D.J. Genetic Algorithm for Retinal Image Analysis. Int. J. Comput. Appl. Technol. 2011, 2, 48–52. [Google Scholar]
  18. Kanagasingam, Y.; Xiao, D.; Vignarajan, J.; Preetham, A.; Tay-Kearney, M.; Mehrotra, A. Evaluation of Artificial Intelligence-Based Grading of Diabetic Retinopathy in Primary Care. JAMA Netw. Open 2018, 1, e182665. [Google Scholar] [CrossRef]
  19. Akila, T.; Kavitha, G. Detection and Classification of Hard Exudates in Human Retinal Fundus Images Using Clustering and Random Forest Methods. Int. J. Emerging Technol. Adv. Eng. 2014, 4, 24–29. [Google Scholar]
  20. Benzamin, A.; Chakraborty, C. Detection of Hard Exudates in Retinal Fundus Images Using Deep Learning. In Proceedings of the 2018 IEEE International Conference on System, Computation, Automation and Networking (ICSCAN) Proceedings 2018, Pondicherry, India, 6–7 July 2018. [Google Scholar] [CrossRef]
  21. Khojasteh, P.; Aliahmad, B.; Kumar, D.K. Fundus images analysis using deep features for detection of exudates, hemorrhages and microaneurysms. BMC Ophthalmol. 2018, 18, 288. [Google Scholar] [CrossRef]
  22. Kanagasingam, Y.; Bhuiyan, A.; Abramoff, M.D.; Smith, R.T.; Goldschmidt, L.; Wong, T.Y. Progress on retinal image analysis for age related macular degeneration. Prog. Retinal Eye Res. 2013. [Google Scholar] [CrossRef]
  23. Jin, K.; Lu, H.; Su, Z.; Cheng, C.; Ye, J.; Qian, D. Telemedicine screening of retinal diseases with a handheld portable non-mydriatic fundus camera. BMC Ophthalmol. 2017, 17, 89. [Google Scholar] [CrossRef]
  24. Prathiba, V.; Rema, M. Teleophthalmology: A Model for Eye Care Delivery in Rural and Underserved Areas of India. Int. J. Family Med. 2011. [Google Scholar] [CrossRef] [PubMed]
  25. Park, C.H.; Rahimy, E.; Shahlaee, A.; Federman, J.L. Telemedicine in Ophthalmology. Retina Today 2017, 4, 55–58. [Google Scholar]
  26. Mohammadpour, M.; Heidari, Z.; Mirghorbani, M.; Hashemi, H. Smartphones, tele-ophthalmology, and VISION 2020. Int. J. Ophthalmol. 2017, 10, 1909–1918. [Google Scholar] [PubMed]
  27. Telemedicine. Available online: http://www.who.int/goe/publications/goe_telemedicine_2010.pdf (accessed on 28 November 2018).
  28. d-Eye Opthalmic Sensor. Available online: https://www.d-eyecare.com/ (accessed on 28 November 2018).
  29. Szymkowski, M.; Saeed, E. A Novel Approach of Retinal Disorder Diagnosing Using Optical Coherence Tomography Scanners. In Transactions on Computational Science XXXI; Springer: Berlin/Heidelberg, Germany, 2018; Volume 31, pp. 31–40. [Google Scholar]
  30. Szymkowski, M.; Saeed, E.; Saeed, K. Retina Tomography and Optical Coherence Tomography in Eye Diagnostic System. In Advanced Computing and Systems for Security; Chaki, R., Cortesi, A., Saeed, K., Chaki, N., Eds.; Springer: Patna, India, 2018; pp. 31–42. [Google Scholar]
  31. Misztal, K.; Spurek, P.; Saeed, E.; Saeed, K.; Tabor, J. Cross Entropy Clustering Approach to Iris Segmentation for Biometrics Purpose. Schedae Inform. 2015, 24, 31–40. [Google Scholar]
  32. Xu, L.; Luo, S. A novel method for blood vessels detection from retinal images. BioMed. Eng. Online 2010, 9, 9–14. [Google Scholar] [CrossRef]
  33. Raja Sundhara Siva, D.; Vasuki, S. Automatic Detection of Blood Vessels in Retinal Images for Diabetic Retinopathy Diagnosis. Comput. Math. Methods Med. 2015, 2015, 12. [Google Scholar] [CrossRef]
  34. Zhang, J.; Cui, Y.; Jiang, W.; Wang, L. Blood Vessels Segmentation of Retinal Images Based on Neural Network. In Proceedings of the ICIG—8th International Conference on Image and Graphics, Tianjin, China, 13–16 August 2015; pp. 11–17. [Google Scholar]
  35. Saleh, D.; Eswaran, C.; Mueen, A. An Automated Blood Vessel Segmentation Algorithm Using Histogram Equalization and Automatic Threshold Selection. J. Digital Imaging 2011, 24, 564–572. [Google Scholar] [CrossRef]
  36. Operations on Histograms. Available online: https://www.tutorialspoint.com/dip/histogram_stretching.htm (accessed on 28 November 2018).
  37. Babakhani, P.; Zarei, P. Automatic gamma correction based on average of brightness. ACSIJ Adv. Comput. Sci. Int. J. 2015, 4, 156–159. [Google Scholar]
  38. Cruz-Aceves, I.; Cervantes-Sanchez, F.; Hernandez-Aguirre, A.; Perez-Rodriguez, R.; Ochoa-Zezzatti, A. A novel Gaussian matched filter based on entropy minimization for automatic segmentation of coronary angiograms. Comput. Electr. Eng. 2016, 53, 263–275. [Google Scholar] [CrossRef]
  39. Sinthanayouthin, C.; Boyce, J.; Willamson, T. Automated localization of the optic disk, fovea, and retinal blood vessels from digital colour fundus images. British J. Ophthalmol. 1999, 83, 902–910. [Google Scholar] [CrossRef]
Figure 1. The architecture of the proposed system.
Figure 2. All devices used during research: (a) d-Eye overlay for smartphone, (b) Kowa VX-10 Fundus Camera, and (c) Digital Eye Center Microclear Handheld Ophthalmic Camera HNF.
Figure 3. The activity diagram of the proposed approach.
Figure 4. The block diagram of the image preprocessing algorithm.
Figure 5. Visual comparison between different grayscale conversion methods: (a) original image; (b) green channel; (c) red channel; (d) blue channel; (e) average value of all channels.
Figure 6. (a) Grayscale image and (b) its form after histogram stretching.
Figure 7. Image after (a) histogram stretching and (b) median filtering.
Figure 8. Image (a) after median filtering and (b) after gamma correction.
Figure 9. The block diagram of the proposed retina vascular pattern extraction.
Figure 10. (a) Original image, (b) image after conversion to grayscale with green channel and (c) image after noise removal.
Figure 11. Image after (a) histogram equalization and (b) brightness correction.
Figure 12. Image after (a) brightness correction and (b) Gaussian matched filter.
Figure 13. Image (a) after binarization and (b) after short vessel removal.
Figure 14. Image (a) after the first step and (b) after vascular pattern removal.
Figure 15. Image (a) after vascular pattern removal and (b) after binarization.
Figure 16. Image (a) after binarization and (b) calculated image variance.
Figure 17. Image after (a) calculating its variance and (b) after optic disk removal.
Figure 18. Retina color image with marked pathological changes.
Figure 19. Retina color image obtained by the device with worse parameters. No pathological changes exist.
Table 1. Comparison of all device parameters.

| Device Name | Price | Quality (in Comparison to the Best) | Weight | Refractive Compensation | Image Resolution |
|---|---|---|---|---|---|
| Kowa VX-10 | 40,000 EUR | High quality | 35.5 kg | −32D~+35D | 3000 × 2008 |
| Digital Eye Center Microclear Handheld Ophthalmic Camera HNF | 5000 EUR | Low quality | 0.45 kg | −20D~+20D | 1920 × 1080 |
| d-Eye Sensor | 400 EUR | Low quality | 0.1 kg | −10D~+8D | 864 × 1536 |
Table 2. The summary of the conducted experiments.

| Type of Images | Source of the Images | Number of Samples | Classification as Healthy Retina | Classification as Sample with Pathological Changes |
|---|---|---|---|---|
| High quality healthy retina images | Kowa VX-10 | 50 | 48 | 2 |
| High quality retina images with pathological changes | Kowa VX-10 | 50 | 0 | 50 |
| Low quality healthy retina images | Digital Eye Center Microclear Handheld Ophthalmic Camera HNF, d-Eye Sensor | 50 | 50 | 0 |
| Low quality retina images with pathological changes | d-Eye Sensor | 10 | 0 | 10 |
Table 3. Measured parameters of the proposed algorithm.

| Parameter | FAR | FRR | Sensitivity | Specificity | Accuracy |
|---|---|---|---|---|---|
| Value | 2% | 0% | 100% | 96% | 98% |
Table 4. Comparison of the proposed solution with different ones described in other works.

| Algorithm | Accuracy | Sensitivity | Specificity |
|---|---|---|---|
| Joshi, Karlue [8] | 91% | 96.7% | 85.4% |
| Benzamin, Chakraborty [9] | 98.6% | 98.29% | 41.35% |
| Garcia, Sanchez et al. [15] | 97.01% | 100% | 92.59% |
| Khojasteh, Aliahmad, Kumar [21] | 98% | 96% | 98% |
| Our proposed algorithm | 98% | 100% | 96% |
Table 5. Summary of the used techniques.

| Algorithm | Techniques |
|---|---|
| Joshi, Karlue [8] | Image processing techniques mainly based on morphological feature extraction. |
| Benzamin, Chakraborty [9] | Deep learning for hard exudate detection implemented with the TensorFlow framework. |
| Garcia, Sanchez et al. [15] | Neural network approaches: multilayer perceptron (MLP), radial basis function (RBF), and support vector machine (SVM). |
| Khojasteh, Aliahmad, Kumar [21] | A convolutional neural network trained to detect hard exudates in retina color images. |
| Our proposed algorithm | Image processing techniques with vein and optic disk removal before hard exudate detection. |
