Review

Literature Review on Artificial Intelligence Methods for Glaucoma Screening, Segmentation, and Classification

by José Camara 1,2, Alexandre Neto 2,3, Ivan Miguel Pires 3,4, María Vanessa Villasana 5,6, Eftim Zdravevski 7 and António Cunha 2,3,*
1 R. Escola Politécnica, Universidade Aberta, 1250-100 Lisboa, Portugal
2 Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal
3 Escola de Ciências e Tecnologia, University of Trás-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal
4 Instituto de Telecomunicações, Universidade da Beira Interior, 6200-001 Covilhã, Portugal
5 Centro Hospitalar Universitário Cova da Beira, 6200-251 Covilhã, Portugal
6 UICISA:E Research Centre, School of Health, Polytechnic Institute of Viseu, 3504-510 Viseu, Portugal
7 Faculty of Computer Science and Engineering, University Ss Cyril and Methodius, 1000 Skopje, North Macedonia
* Author to whom correspondence should be addressed.
J. Imaging 2022, 8(2), 19; https://doi.org/10.3390/jimaging8020019
Submission received: 17 December 2021 / Revised: 11 January 2022 / Accepted: 17 January 2022 / Published: 20 January 2022
(This article belongs to the Special Issue Current Methods in Medical Image Segmentation)

Abstract:
Artificial intelligence techniques are now being applied in different medical solutions ranging from disease screening to activity recognition and computer-aided diagnosis. The combination of computer science methods and medical knowledge facilitates and improves the accuracy of the different processes and tools. Inspired by these advances, this paper performs a literature review focused on state-of-the-art glaucoma screening, segmentation, and classification based on images of the papilla and excavation using deep learning techniques. These techniques have been shown to have high sensitivity and specificity in glaucoma screening based on papilla and excavation images. The automatic segmentation of the contours of the optic disc and the excavation then allows the identification and assessment of the glaucomatous disease’s progression. As a result, we verified whether deep learning techniques may be helpful in performing accurate and low-cost measurements related to glaucoma, which may promote patient empowerment and help medical doctors better monitor patients.

1. Introduction

Beyond ongoing discussions about problems in the doctor–patient relationship and the shortcomings of clinical examination, which make diagnosis increasingly dependent on complementary tests in public health settings [1], new problems are emerging around the use of new technologies to support medical diagnoses [2,3,4,5]. These issues include the security of electronic medical records, the exponential growth of data produced by these new technologies, and how these data will be processed [6,7,8,9].
A family history of glaucoma in first-degree relatives indicates an increased likelihood of developing the disease compared to a negative family history [10,11]. Risk factors also include systemic diseases, such as high blood pressure, diabetes, rheumatic diseases, and autoimmune diseases, as well as the use of steroids, all of which can predispose to the development of the disease [12]. In addition, eye conditions such as cataracts, tumors, inflammatory processes, trauma, and ocular hypertension can also be risk factors for glaucoma [12].
Considering the importance of vision in modern society, the disease at an advanced stage can reduce one's quality of life to varying degrees, causing emotional and occupational damage and certainly a greater use of health resources [13]. The risk of blindness depends on susceptibility factors, such as family history, the way the disease progresses, the level of eye pressure and age at disease onset, previous eye diseases and injuries, topical and systemic use of corticosteroids, consumption of tobacco, alcohol, and drugs, diabetes, lung disease, heart disease, cerebrovascular disease, and high blood pressure [14,15,16]. Treatment is based on clinical and surgical strategies to reduce intraocular pressure, the only modifiable factor [17,18]. In addition, it aims to slow the progression of optic nerve damage and maintain vision for a more extended period of time [19,20].
However, the patient’s record and the storage and sharing of medical data and images must be well protected [9]. Therefore, electronic health records require precautions that include controlling access to data, the insertion and deletion of information, storage, and the transmission of data and images [21,22,23].
Artificial intelligence (AI) algorithms tend to improve the processing of this large amount of data and propose increasingly accurate diagnostic hypotheses [4,24,25]. AI is sometimes heralded as the new industrial revolution [8,24]. Deep learning (DL) is the steam engine of this revolution: it processes large amounts of data through successive layers of representation inspired by the human brain [25].
DL traces its origins to 1943 and emerged from the evolution of neural networks, which became deeper as new layers were added, forming a subset of AI [26]. It gained popularity in 2012 with the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) and was widely adopted by the scientific community, yielding results close to or better than the state of the art in many areas [27]. These technologies are used in image recognition, real-time translation, and voice recognition services such as Apple's Siri, Amazon's Alexa, and Microsoft's Cortana [28,29]. In addition, multiple studies have shown that DL algorithms are state of the art in breast histopathology analysis, skin cancer classification, cardiovascular disease risk prediction, and lung cancer detection, with several applications in ophthalmology that have the potential to revolutionize the diagnosis of eye diseases [30,31,32].
DL has become an essential tool for interpreting data derived from digital photographs, optical coherence tomography, and visual field tests, contributing to the tracking of diseases such as age-related macular degeneration, diabetic retinopathy, and glaucoma [33,34,35]. In Canada, it is used in teleophthalmology to screen populations living in remote regions, improve access for under-served communities, and alleviate the shortage of doctors [36,37,38].
In ophthalmology, DL algorithms have shown enormous potential for screening and diagnosing pathologies such as diabetic retinopathy and glaucoma through the analysis and processing of retinal images, and for problems such as predicting daily changes in intraocular pressure [39,40,41,42,43].
This study aimed to perform a literature review of artificial intelligence and DL methods for glaucoma screening, segmentation, and classification. The review is part of a project on glaucoma screening and classification, to be implemented in a system that is easy for patients to use, promoting patient empowerment with these tools [44] and enabling healthcare professionals to monitor the evolution of glaucoma via remote consultation. Medicine is a vast science, and the team has different projects in various fields [45,46,47,48,49], including ophthalmology.
This study shows that several publicly available databases of retinal images support the research and development of automatic methods. These databases were only used initially, however, because medical professionals involved in this project provide accurate data for the study of glaucoma. Furthermore, this study also reviews image-based methods for the segmentation and classification of glaucoma.
The remainder of this paper is organized as follows. Section 2 presents some public databases that can be used for glaucoma screening and classification. Section 3 presents studies applying DL to the analysis of glaucoma. Section 4 outlines segmentation methods, and Section 5 presents classification methods, followed by a discussion of the combination of the different methods in Section 6. Finally, Section 7 concludes this review.

2. Public Databases

Public databases contain eye images for study, research, and methodological standardization [50]. Public photographs are available for different nationalities, from multiple patients, in different locations, and from different cameras, and can vary in size, in the centering of the region of interest, and in the delimitation of the area of interest [51]. The multiplicity of conditions under which photographs are taken, in different places, with cameras with different settings, from various groups of patients, and with objectives defined by multiple specialists, can interfere with the results [52]. Databases for the study of pathologies of the posterior pole of the eye show the retina, vessels, macula, and optic disc [53]. Other image databases for the study of glaucoma are centered on the optic papilla, with the boundaries of the papilla and excavation demarcated by specialists; these demarcations represent the gold standard, serve as reference standards for segmentation measurements, and can support the training of DL networks [54].
The authors of [55] identified 94 open databases containing 507,724 ophthalmic images and 125 videos from 122,364 patients. These databases are often used in eye research studies involving machine learning. However, few public fundus image sets for evaluating glaucoma contain a classification into normal papillae and disease evolutionary stages, optic disc segmentation markings, and excavation markings for both eyes.
Several databases are frequently used to classify and segment the optic disc and excavation in automated systems for detecting optic papillary features. One of them is RIM-One, an open retinal image database for optic nerve evaluation [56]. The gold standard for each image is formed from five manual expert contours: the contours and reference points are centered, axes are traced in eight directions from a reference point, the intersections of each axis with each of the five manual contours are computed along each direction, and the intersection distances are averaged radially. The RIM-One database photographs are accessible at [57] and consist of three sets of images, RIM-One-r1, RIM-One-r2, and RIM-One-r3, published in 2011, 2014, and 2015, respectively.
RIM-One-r1, published in 2011, is a database for evaluating, locating, and detecting the optic disc, formed by retinal images obtained from 118 healthy eyes and 51 patients with various stages of glaucoma, captured with a Nidek AFC-210 non-mydriatic camera with a Canon EOS 5D Mark II body (21.1 megapixels, 45-degree horizontal and vertical field) [56]. In addition to the diagnosis, five specialists provided manual segmentations of the optic disc. The patients were from three Spanish hospitals: Hospital Universitario de Canarias (HUC) in Tenerife; Hospital Universitario Miguel Servet (HUMS) in Zaragoza; and Hospital Clínico Universitario San Carlos (HCSC) in Madrid.
RIM-One-r2, published in 2014, is an extension of the first version (with some duplicated images), containing 255 photographs of healthy eyes and 200 photographs of patients with glaucoma, acquired at the HUC and HUMS using the same cameras as the previous version. Medical experts manually segmented the images [56,58].
RIM-One-r3, published in 2015, contains 85 images of healthy eyes (without glaucoma) and 74 images of patients with glaucoma [34]. The main difference between this and previous versions is that the images were exclusively obtained at the HUC with a Kowa WX 3D non-mydriatic camera [56]. The images were centered on the optic nerve head using a 34-degree angle, yielding stereoscopic images with a horizontal field of 20 degrees and a vertical field of 27 degrees at a total resolution of 2144 × 1424 pixels per image. The stereoscopic photographs were manually segmented (optic disc and excavation) by two specialists, and some photographs also appear in RIM-One-r2.
RIM-ONE DL was created in 2020 to optimize the previous versions for use in DL. It consists of 313 normal and 172 glaucoma fundus images [59]. Two experts manually segmented all images (disc and excavation) again; in cases of doubt, a third specialist with 20 years of experience made the final decision. One image of each eye per patient was kept. All images were cropped around the optic nerve head using the same proportionality, stored in .png format, and given file names prefixed with r1, r2, or r3 according to the RIM-ONE version from which they were extracted, which made for a clearer division between the training group (samples from the HUC) and the test group (samples from the hospitals in Madrid and Zaragoza). Both groups include the manual segmentation of the disc and excavation and were organized into two partitions: the first was split randomly into training and testing at a ratio of 70:30; in the second, the HUC images were used for training (195 normal and 116 with glaucoma) and the images from the two other hospitals for testing (118 normal and 56 with glaucoma). Image intensities were rescaled between 0 and 1, and several DL architectures were evaluated: Xception, VGG16, VGG19, ResNet50, Inception-V3, InceptionResNetV2, MobileNet, DenseNet121, NASNetMobile, and MobileNetV2, with inputs resized to 224 × 224 × 3. A 2D global average pooling layer was connected to a two-unit output layer with SoftMax to distinguish between the normal and glaucoma classes, with satisfactory results in both partitions; the best values were obtained with the VGG19 network, with an AUC of 98.67% and an accuracy of 93.15% for the random test set, versus an AUC of 92.72% and an accuracy of 85.63% for the Madrid and Zaragoza test set.
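The classification setup described above can be sketched in a few lines of Keras; this is a minimal, illustrative reconstruction (training hyperparameters and head details beyond those stated in [59] are assumptions), plugging any of the listed backbones into a 2D global average pooling layer followed by a two-class SoftMax output:

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_classifier(backbone_name="VGG19"):
        # Any of the listed backbones (Xception, VGG16/19, ResNet50, ...) can
        # be plugged in; VGG19 gave the best reported results in [59].
        backbone_cls = getattr(tf.keras.applications, backbone_name)
        backbone = backbone_cls(include_top=False, weights="imagenet",
                                input_shape=(224, 224, 3))
        x = layers.GlobalAveragePooling2D()(backbone.output)
        outputs = layers.Dense(2, activation="softmax")(x)  # normal vs. glaucoma
        return models.Model(backbone.input, outputs)

    model = build_classifier("VGG19")
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC()])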
DRISHTI-GS is a retinal image dataset for optic nerve head segmentation [60], developed and distributed to provide a means of evaluating CAD systems for glaucoma detection, with annotations by four experts. It presents a set of retinal images for papilla evaluation in normal and glaucomatous eyes with manual segmentation performed by specialists, which allows measurements of the cup-to-disc diameter ratio and of the cup and optic disc areas to establish the presence or absence of glaucoma. Its extension, DRISHTI-GS1 [61], consists of 50 training and 51 test images, in which the papilla was manually segmented by different experts. Regarding the lack of more comprehensive data, the authors warn about the difficulty of comparing the performance of individual methods.
DRIONS-DB [62] is a public database for the comparative evaluation of optic nerve head segmentation from digital retinal images. The database is open for testing algorithms, as well as for comparing and sharing results with other researchers. It contains 110 retinal images with a resolution of 600 × 400 pixels, with the outer optic disc boundaries delimited by two medical experts using image annotation software [63]. The images were digitized using an HP PhotoSmart S20 high-resolution scanner in RGB format at 600 × 400 pixels and 8 bits per pixel, centered on the optic disc [63].
The Messidor database [64] contains hundreds of fundus images and has been publicly distributed since 2008. It was created by the Messidor project to assess automatic methods for detecting retinal lesions and diabetic retinopathy [65] and can be used for testing segmentation algorithms. It contains 1200 color fundus images, 800 acquired with pupil dilation and 400 without, and is mainly used for diabetic retinopathy classification.
The High-Resolution Fundus (HRF) database contains 15 images from healthy patients, 15 images from glaucomatous patients, and 15 images from patients with diabetic retinopathy [66]. The photographs have a resolution of 3504 × 2336 pixels and are accompanied by binary vessel segmentation masks.
The Optic Nerve Head Segmentation Dataset (ONHSD database) comprises 99 fundus images from 50 patients of a variety of ethnic backgrounds (Asian, Afro-Caribbean, Caucasian) with the respective annotations of the optic disc made by clinical experts. The images were acquired with a field angle lens of 45 degrees and a resolution of 640 × 480 pixels [67].
The ACRIMA database [68] contains 705 images (396 from patients with glaucoma and 309 from patients without glaucoma) and does not include disc or cup segmentations.
The REFUGE database [69] contains 120 images of glaucoma patients and 1080 images of healthy patients with segmented disc and cup, captured with different cameras and with a clear division between training and test data. There is a need to expand the number of public image sets that meet these requirements.
ORIGA database [70] comprises 168 images of patients with glaucoma and 482 photos of healthy patients. Its data include disc and cup segments. The problem with this database is that although it appears to have been public at one point, to the best of our knowledge, it stopped being public some time ago.
Sjchoi86-HRF database [71] has 601 fundus images in 4 subsets: normal (300 images); glaucoma (101 images); cataract (100 images); and retinal disease (100 images). The images were captured using 8 bits per color plane at 2592 × 1728, 2464 × 1632, or 1848 × 1224 pixels.
Other databases used in the literature include the Singapore Chinese Eye Study (SCES) database [72], composed of 1676 images with 46 glaucoma cases, and the Singapore Indian Eye Study (SINDI) [72], consisting of 5783 images with 113 glaucomatous eyes, both of which were used as test sets.
Table 1 summarizes the characteristics of the presented public databases: 50% of the analyzed databases contain images with both optic disc and cup segmentations, 40% include neither, and 10% (one database) include only the optic disc.
The public databases of optic papilla photographs constitute a reference standard that provides elements for analyzing and differentiating the normal optic papilla from that of glaucomatous eyes. DL methods have become highly representative of automated diagnosis; however, they depend on a sufficient amount and variety of data to train and test the system against reference standards. Furthermore, the available databases offer images in different resolutions and formats and present limited amounts of data, which makes comparisons between them difficult.
In [59], it is recommended that the public databases of retinal images meet the following requirements:
  • Availability of publicly accessible image sets, labeled by various experts, sufficient for use in DL methods;
  • Clear separation between training and testing sets, to increase the reliability of the evaluation (a sketch of a patient-level split is given after this list);
  • Presence of diversity in the set of images, with variety meaning images captured by various devices involving patients of different ethnicities, and images captured in different conditions of lighting, contrast, noise, etc., in addition to having a preliminary diagnosis and including the segmentation of disc and cup manual reference.
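As an illustration of the second requirement, splitting at the patient level avoids the leakage that arises when images of the same patient (e.g., both eyes) land on both sides of the split, as in the RIM-ONE DL protocol described above. The sketch below uses scikit-learn with hypothetical metadata lists:

    from sklearn.model_selection import GroupShuffleSplit

    # Hypothetical metadata: parallel lists of image paths, labels, patient IDs.
    image_paths = ["img_%03d.png" % i for i in range(10)]
    labels      = [i % 2 for i in range(10)]    # 0 = normal, 1 = glaucoma
    patient_ids = [i // 2 for i in range(10)]   # two images (eyes) per patient

    # Grouping on patient ID keeps both eyes of a patient on the same side.
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=42)
    train_idx, test_idx = next(
        splitter.split(image_paths, labels, groups=patient_ids))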
The authors of [59] also made some criticisms of the other versions of RIM-ONE, including:
  • Combining images indiscriminately from the three versions for DL problem solving can lead to inconsistent results, since different experts performed the segmentations;
  • Since RIM-ONE was not initially designed for DL, a clear division of training and testing images was never established;
  • Images were taken in different hospitals with different cameras, but only one camera was used in each version;
  • Only the r3 version had the cup segmented manually, and the previous versions only had the optic disc segmented. The experts involved in manual segmentation were not the same in all cases.
The RIM-One-r3 and DRISHTI-GS databases have the optic papilla centered, with a periphery of approximately 30 degrees around the papilla, which allows the visualization of the excavation characteristics and of sectorial defects in the fiber layers, an element to be considered in the classification of early glaucoma.
Of all the databases mentioned, only the REFUGE and RIM-One DL databases meet the additional requirements of offering images from different cameras and a clear division of training and testing data. However, given the importance of the quantity and the representativeness of data required in automated systems, we cannot abandon the various optical papilla images contained in other databases, even if they do not meet all the requirements to be used in DL methods.
There seems to be a clear need to standardize and expand the number of public image databases that meet all these requirements and add the most significant data containing the greatest number of optic papillary region changes.

3. Deep Learning Methods

The evolution of glaucomatous neuropathy leads to an enlargement of the cup (inner part) relative to the optic disc (outer part). The region between the cup and the outer limit of the papilla is called the neural rim. It comprises approximately 1,200,000 retinal ganglion cell fibers, which travel along the optic nerve carrying visual information to the brain's occipital lobe, where it is interpreted.
The following are measures for evaluating glaucomatous papillae:
  • The ratio between the cup and the total vertical or horizontal diameter of the papilla, or cup-to-disc ratio (CDR), indicates the presence of glaucoma when it has high values (a minimal sketch of the CDR and CDAR computations follows this list);
  • The ratio of the cup area to the papilla area, also called the cup-to-disc area ratio (CDAR);
  • The inferior, superior, nasal, temporal (ISNT) rule describes a feature of the healthy optic disc: the neural rim is thickest at the inferior pole, followed by the superior, nasal, and temporal sectors. When this sequence is altered, either in diameter or in area, it can be an early sign of injury;
  • The Disc Damage Likelihood Scale (DDLS) estimates the probability of optic disc damage by comparing the neural rim diameter with the optic disc diameter and the shortest distance between the optic disc contour and the excavation.
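The first two measures can be computed directly from binary disc and cup segmentation masks. The sketch below is a minimal illustration (ISNT and DDLS additionally require sector-wise rim-width measurements, which are omitted here):

    import numpy as np

    def vertical_cdr(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
        """Vertical cup-to-disc ratio from binary segmentation masks."""
        disc_rows = np.flatnonzero(disc_mask.any(axis=1))  # rows touched by disc
        cup_rows = np.flatnonzero(cup_mask.any(axis=1))    # rows touched by cup
        disc_h = disc_rows.max() - disc_rows.min() + 1
        cup_h = cup_rows.max() - cup_rows.min() + 1 if cup_rows.size else 0
        return cup_h / disc_h

    def cdar(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
        """Cup-to-disc area ratio: pixel counts of the two masks."""
        return cup_mask.sum() / disc_mask.sum()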
The automated methods using DL techniques were grouped into methods that were based on:
  • Appearance or glaucoma screening (Section 3.1);
  • The segmentation of the outer limits and measurement calculations of optic disc structures (Section 3.2);
  • The segmentation of the outer limits of the optic disc and the excavation by detecting glaucomatous features of the papilla (Section 3.3);
  • Identification of early forms of glaucoma by CNN through fiber layer defects (Section 5).

3.1. Glaucoma Screening

Glaucoma screening is challenging because symptoms only appear when the disease is quite advanced; early diagnosis is therefore essential. Glaucoma screening with digital fundus photographs (DFPs) is a non-invasive method suitable for large-scale screening. An automated system can decide whether there are any signs of glaucoma. For glaucoma screening based on DFPs with disc parametrization, the optic disc and cup regions are segmented for the subsequent evaluation of disc parameters. The optic disc appears as a bright circular or elliptical area, partially occluded by blood vessels; the retinal nerve fibers converge to the optic disc and form the region called the cup. After optic disc and cup segmentation, the cup-to-disc ratio can be calculated and used to estimate glaucoma; a minimal sketch of this pipeline is given below. Glaucoma can also be detected by automatic classification algorithms that learn features directly from labeled DFP images [73].
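The segmentation-then-measurement pipeline can be sketched as follows; segment_disc_cup is a hypothetical placeholder for any of the segmentation models reviewed below, and the cut-off value is illustrative, not a clinical rule:

    import numpy as np

    def segment_disc_cup(image: np.ndarray):
        # Hypothetical stand-in for a disc/cup segmentation model; it should
        # return two binary masks shaped like the input image.
        raise NotImplementedError

    def screen(image: np.ndarray, cdr_threshold: float = 0.7) -> bool:
        # Segment, measure the vertical cup-to-disc ratio, flag high values.
        disc, cup = segment_disc_cup(image)
        disc_rows = np.flatnonzero(disc.any(axis=1))  # vertical disc extent
        cup_rows = np.flatnonzero(cup.any(axis=1))    # vertical cup extent
        cdr = (cup_rows.ptp() + 1) / (disc_rows.ptp() + 1)
        return cdr > cdr_threshold  # illustrative referral cut-off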
Automated classification methods allow the glaucomatous papilla to be detected by its appearance [74], improving results and enabling mass screening. However, the authors of [31] warn that non-segmentation methods based on image characteristics need a large dataset to train DL networks.
Regarding glaucoma screening, several studies have been performed to avoid blindness. The authors of [74] developed a diagnostic method for glaucoma using an automated image procedure based on relatively inexpensive cameras, with automatic feature extraction by retinal image segmentation to analyze geometric features. The study differs from other state-of-the-art methods by analyzing the glaucomatous characteristics of the papilla used to train the network, namely the diameter and area of the disc and cup, obtaining high accuracy in training under real conditions. The authors conclude that higher-resolution images did not significantly increase accuracy when neural rim variations were used as the basis; however, performance is lower on photographs with lower resolution, and the results seem to improve when geometric features are combined with appearance features.
The authors of [75] used several retinal fundus images available in ORIGA, RIM-One-r3, and DRISHTI-GS databases to fuse the results of different deep learning networks to classify glaucoma evolution. It shows that the results are promising, reporting an AUC of 94%.
In [76], the authors presented DeepLabv3+ with MobileNet for optic disc segmentation, tested on the RIM-ONE, ORIGA, DRISHTI-GS, and ACRIMA databases. The obtained results are reliable, with an accuracy of 97.37% (RIM-ONE), 90.00% (ORIGA), 86.84% (DRISHTI-GS), and 99.53% (ACRIMA). Additionally, the method presents an AUC of 100% (RIM-ONE), 92.06% (ORIGA), 91.67% (DRISHTI-GS), and 99.98% (ACRIMA).
The authors of [77] used a modified U-Net architecture with SE-ResNet50 for the segmentation of the optic disc and optic cup for glaucoma diagnosis, testing the method on the DRISHTI-GS, REFUGE, and RIM-One-r3 databases. The average results reported an AUC of 94%, showing the method's robustness.
In [78], the authors implemented a direct cup-to-disc ratio (CDR) estimation method composed of an unsupervised feature representation of a fundus image with a convolutional neural network (MFPPNet) and a CDR value regression by random forest regressor. They used two datasets, namely Direct-CSU and ORIGA, for the testing and validation, reporting an AUC of 0.905.
The authors of [79] presented a fuzzy broad learning system-based technique for optic disc and optic cup segmentation with glaucoma screening, implementing it with the RIM-One-r3 dataset and SCRID dataset from the Shanghai Sixth People’s Hospital. It reports an AUC of 90.6% (RIM-One-r3) and 92.3% (SCRID).
In [80], a deep learning method was implemented for glaucoma screening based on CDR and optic disc regions. The authors used different types of networks, including ResNet and a segmentation network (SegNet), to calculate the vertical CDR, with U-Net as the leading architecture, reporting a minimum DICE value of 89.6% and a precision of 95.12%.
The authors of [81] implemented a deep learning method with a binary classifier for glaucoma screening with a dataset of 5716 images from Asian and Caucasian populations. The results reported an AUC of 94% in images with detectable glaucoma.
In [82], a platform was designed for glaucoma screening that implements deep learning techniques for glaucoma diagnosis. The authors implemented mathematical models and a threshold classifier with 933 healthy and 754 glaucoma images, reporting a sensitivity of 73% and a specificity of 83%.
The authors of [83] used the M-Net method to segment the optic disc and optic cup tested with the REFUGE dataset for glaucoma screening. The results reported a DICE coefficient of 94.26% for the optic disc and 85.65% for the optic cup. Additionally, it reports an AUC of 96.37% and a sensitivity of 90%.
In [84], the HRF database was used to test a DL-ML hybrid model for glaucoma screening on 30 images. The results reported an accuracy of 100% and a sensitivity of 100%, suggesting the system can reliably assist medical doctors in diagnosing glaucoma.
The authors of [85] implemented the patch-based deep network (GlaucoNet) framework to perform glaucoma screening with the DRISHTI-GS, RIM-ONE, and ORIGA datasets. The results reported an overlapping score of 91.06% (DRISHTI-GS), 89.72% (RIM-ONE), and 88.35% (ORIGA) for optic disc segmentation, and 82.29% (DRISHTI-GS), 74.01% (RIM-ONE), and 81.06% (ORIGA) for optic cup segmentation.
In [86], a deep ensemble network with an attention mechanism for the detection of glaucoma with optic nerve head stereo images was implemented. It consists of a convolutional neural network and an attention-guided network. The authors used a stereo glaucoma image dataset from Tan Tock Seng Hospital, Singapore. It comprised 282 images with 70 glaucoma cases and 212 normal cases. The results reported a sensitivity of 95.48%.
Table 2 summarizes the automated DL methods that help perform glaucoma screening.

3.2. Segmentation of the Outer Limits and Measurement Calculations of Optic Disc Structures

The segmentation of the outer limits is one of the techniques for detecting and analyzing glaucoma in images. It allows the measurement calculations of the optic disc structures.
In [87], the authors found that existing methods for detecting optic disc abnormalities suffer from algorithmic complexity, operating costs, and dependence on vessel segmentation, which limits their applicability to large image sets and epidemiological studies. The authors therefore proposed an end-to-end CNN-based methodology for locating the disc and detecting its abnormalities using two deep learning architectures. The first locates the papilla using the DRIVE, STARE, DIARETDB1, and MESSIDOR databases; the other identifies papillary abnormalities through a detector trained to classify the optic disc into three classes: normal, suspicious, and abnormal. The results were of good quality on normal images, but performance dropped under variable imaging conditions. The authors also used other databases, including the HAPIEE dataset, collected in Lithuania (eastern Europe), and the PAMDI dataset, collected in Italy. They used the CNN method and the AdaBoost method with Haar-like features to segment glaucoma images, reporting an accuracy of 86.5% (HAPIEE) and 97.8% (PAMDI).
The authors of [88] segmented blood vessels and the optic disc using the VGG19 model with slight modifications to its layers, named Deep Retinal Image Understanding (DRIU). DRIU is a CNN capable of segmenting the vessels and the optic disc, achieving higher IoU and DICE on the investigated datasets than human experts; it was used by [89] for results comparison.
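For reference, DICE and IoU (Jaccard), the overlap metrics quoted throughout these segmentation studies, have the following minimal definitions for binary masks (a small numpy sketch):

    import numpy as np

    def dice(pred: np.ndarray, truth: np.ndarray) -> float:
        """DICE coefficient between two binary masks."""
        inter = np.logical_and(pred, truth).sum()
        return 2 * inter / (pred.sum() + truth.sum())

    def iou(pred: np.ndarray, truth: np.ndarray) -> float:
        """Intersection over union (Jaccard index)."""
        inter = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum()
        return inter / union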
In [90], the authors proposed the automated segmentation of anatomical structures of fundus images, such as blood vessels and the optic disc, based on a conditional generative adversarial network (cGAN), which consists of two networks: a generator and a discriminator. The generator learns to map input observations (retinal fundus color) to outputs (binary masks), while the discriminator uses a loss function to train the process toward accurate discrimination of real from generated masks. The method was applied to two databases, DRISHTI-GS and RIM-ONE, reaching an IoU (Jaccard) of 96%, a DICE of 98%, and an accuracy of 95.64%. The optic disc was also segmented using fuzzy C-means clustering (FCM) on hospital samples and compared to the standards.
Singh et al. [91] proposed a conditional generative adversarial network (cGAN) model to segment the optic disc. The cGAN is composed of a generator and a discriminator and can learn statistically invariant features, such as the color and texture of an input image, to segment the region of interest. The authors optimized a loss function that combines a conventional binary cross-entropy loss with an adversarial term that encourages the generator to produce output that cannot be distinguished from the ground truth. Skip connections were used, concatenating the feature maps of each convolutional layer with those of the corresponding deconvolutional layer. To train and evaluate the model, the DRISHTI-GS and RIM-ONE databases were used, with images resized to 256 × 256 and each pixel value normalized to between 0 and 1. For optic disc segmentation, the model reached values above 90% for accuracy, DICE, and IoU on both databases.
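The combined objective described above, a pixel-wise binary cross-entropy plus an adversarial term, can be sketched as below; the weighting factor adv_weight is an assumption, as the exact balance used in [91] is not restated here:

    import tensorflow as tf

    bce = tf.keras.losses.BinaryCrossentropy()

    def generator_loss(disc_on_generated, predicted_mask, true_mask,
                       adv_weight=0.1):
        # Conventional pixel-wise binary cross-entropy between masks.
        seg_loss = bce(true_mask, predicted_mask)
        # Adversarial term: push the discriminator to score generated
        # masks as real (all-ones target).
        adv_loss = bce(tf.ones_like(disc_on_generated), disc_on_generated)
        return seg_loss + adv_weight * adv_loss  # assumed weighting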
In [92], deep neural networks were implemented for optic disc segmentation. The authors applied Faster R-CNN to different features extracted from the images of the ORIGA dataset, reporting an accuracy of 91.3%.
The authors of [72] proposed an optic disc segmentation method using a ResNet-based CNN with a disc-aware ensemble network (DENet). They compared the results across another five image databases, obtaining an AUC of 91.83% on SCES and 81.73% on SINDI.
The authors of [93] used a U-Net-based cGAN to segment the optic disc and cup, consisting of two successive networks: a generator and a discriminator. The proposed U-Net has fewer filters in all convolutional layers and does not increase the number of filters as resolution decreases. The model used three databases: DRIONS-DB, DRISHTI-GS, and RIM-One-r3. The images were pre-processed with contrast-limited adaptive histogram equalization and then cropped by bounding boxes to the region of interest (ROI); a sketch of this pre-processing is given below. The generator learned to map input observations (retinal fundus color) to outputs (binary masks), and the discriminator works with a loss function to train the process toward accurate discrimination. Optic disc segmentation reached good results, as did cup segmentation, on both databases, with an IoU (Jaccard) of 96% and a DICE of 98%.
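A minimal sketch of this pre-processing step, using OpenCV (the clip limit, tile grid size, and ROI box are illustrative assumptions, not values from [93]):

    import cv2
    import numpy as np

    def preprocess(fundus_bgr: np.ndarray, roi_box) -> np.ndarray:
        # Crop a bounding box around the optic nerve head (ROI), then apply
        # contrast-limited adaptive histogram equalization per channel.
        x, y, w, h = roi_box
        roi = fundus_bgr[y:y + h, x:x + w]
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        channels = [clahe.apply(c) for c in cv2.split(roi)]
        return cv2.merge(channels)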
In [94], the authors proposed a retinal image synthesizer and a semi-supervised learning method for glaucoma assessment with deep convolutional generative adversarial networks (GANs), combining image synthesis and semi-supervised learning; 86,926 publicly available images were used to build and evaluate the model, which generates cropped retinal images for the identification of glaucoma. The implemented method reported an AUC of 90.17%, revealing its reliability.
The authors of [95] evaluated perimetry against the combination of Laguna-ONhE and Cirrus-OCT, analyzing a total of 477 normal eyes, 235 confirmed glaucoma cases, and 98 suspected glaucoma cases. The best results were obtained by combining the "Globin Distribution Function" (GDF) with the threshold coefficient of variation (TCV). The analysis was automated with deep learning techniques, reporting AUCs of 99.5% (GDF) and 93.5% (TCV) with high sensitivity.
In [96], the authors intended to detect open-angle glaucoma with Bruch’s membrane opening (BMO)-based disc parameters, such as the BMO-minimum rim width (BMO-MRW) and the BMO-minimum rim area (BMO-MRA), in the Chinese population, in order to compare it with the retinal nerve fiber layer (RNFL) from optical coherence tomography (OCT) and the rim area (RA) from the Heidelberg retinal tomograph-III (HRT-III). The experiments were performed with 200 eyes of 77 healthy and 123 primary open-angle glaucoma (POAG) subjects analyzed with deep learning techniques and compared using the DeLong test. The reported results have an AUC of 94.6% (BMO-MRW) and 92.1% (BMO-MRA).
Several authors used different DL architectures and public and private databases with good results, as presented in Table 3. Locating the limits of the optic papilla is useful in pathologies that make the papilla difficult to delimit, such as myopic degeneration, myopic crescent, atrophic chorioretinitis, and sectorial papilla atrophy.

3.3. Segmentation of the Outer Limits of the Optic Disc and the Excavation by Detecting Glaucomatous Features of the Papilla

The segmentation of the outer limits combined with the excavation techniques allows the detection of the glaucomatous features of the papilla.
The authors of [97] implemented the Automatic Feature Learning for Glaucoma Detection Based on Deep Learning (ALADDIN), which uses a deep convolutional neural network (CNN) for the detection of glaucoma. The experiments were performed with the ORIGA and SCES datasets, reporting an AUC of 83.8% (ORIGA) and 89.8% (SCES).
The authors of [89] used a U-Net architecture for the segmentation of the optic disc and cup, again with fewer filters in all convolutional layers and no increase in the number of filters for decreasing resolution. The model used three databases: DRIONS-DB, DRISHTI-GS, and RIM-One-r3. The images were pre-processed with contrast-limited adaptive histogram equalization and then cropped by bounding boxes to the region of interest (ROI). Optic disc segmentation reached good results, as did cup segmentation, on both databases, with DICE values above 80%, proving it to be a reliable model.
The authors of [34] used a model based on the GoogleNet network for optic papilla detection and another for the detection of glaucoma. The model showed promising results even on worse-quality images, with 90% accuracy on the HRF database, 94.2% on RIM-One-r1, 86.2% on RIM-One-r2, and 86.4% on RIM-One-r3.
In [98], the authors presented a U-Net-based convolutional neural network that traces corneal nerves to detect glaucoma. It is a fully automated framework that performs nerve segmentation with a sensitivity higher than 95% relative to manual tracing and proved helpful for detecting different features.
The authors of [99] implemented a deep learning system using fully convolutional networks (FCNs) to segment the optic disc and cup regions. The authors analyzed the RIM-ONE dataset using one and two layers. The results reported an accuracy of 95.6% with one layer and 96.9% with two layers; regarding AUC, the results were 98% with one layer and 97.8% with two layers.
In [100], the authors used Transfer Learning, GoogleNet, and Inception-V3 with multi-modal data from 1542 retinal fundus images for glaucoma diagnosis. The results reported an accuracy of 84.5% and an AUC of 93%.
In [101], the authors used 1542 photographs (of different sizes, cropped to 240 × 240 pixels) acquired with a Nidek AFC-330 non-mydriatic camera, divided into 754 for training, 324 for validation, and 464 for testing; 786 were normal, 467 showed advanced glaucoma, and 289 showed early glaucoma. The authors classified them with a CNN using TensorFlow, and the same datasets were used with pre-trained GoogleNet models. Inception-V3 reported 82.9% accuracy for training, 79.9% for validation, and 77.2% for testing. Transfer Learning with GoogleNet/Inception-V3 then achieved 99.7% accuracy on training data, 87.7% on validation data, and 84.5% on test data. The authors concluded that both early and advanced glaucoma can be correctly detected by machine learning using only fundus photographs; the model showed greater efficiency in detecting early glaucoma than previously published models, and they argue that Transfer Learning is an attractive option for building image classification models.
The authors of [102] analyzed 48,116 fundus photographs graded by trained ophthalmologists using a deep learning algorithm. Glaucomatous papillae were defined by a vertical cup-to-disc ratio > 0.7 and other typical changes. The deep learning system detected glaucomatous papillae with a high sensitivity of 95.6% and a specificity of 92.0%. It also reported an AUC of 98.6%, with 87 false negatives.
In [32], the authors performed glaucoma diagnosis with a CNN with eighteen layers. It extracted robust features from 1426 fundus images, among which 589 were normal and 837 showed glaucoma. The methods reported an accuracy of 98.13%, a sensitivity of 98%, and a specificity of 98.3%.
The authors of [103] proposed a DenseNet incorporated into an FCN with a U-shaped architecture, a CNN with nineteen layers. This deep network encourages feature re-use and reduces the number of parameters to improve optic disc and cup segmentation. The approach of Al-Bander used five databases of color fundus images: ORIGA, DRIONS-DB, DRISHTI-GS, ONHSD, and RIM-ONE. For pre-processing, only the green channel of the color images was considered, since the other channels contain less helpful information. The images were cropped to the ROI, and the number of images was artificially increased by augmentation with vertical flips and random crops (a sketch of this pre-processing and augmentation follows below). For optic disc segmentation, the model reached better DICE and IoU results on the DRISHTI-GS database than on RIM-ONE, and the same held for cup segmentation, although cup DICE and IoU were lower than for optic disc segmentation. The system was first trained and tested on the same database (ORIGA) and then tested on other databases, reaching a DICE of 87.23%, a Jaccard score of 77.88%, an accuracy of 99.86%, a sensitivity of 87.68%, and a specificity of 99.94% for the optic papilla, and a DICE of 96.4%, a Jaccard score of 93.11%, an accuracy of 99.89%, a sensitivity of 96.96%, and a specificity of 99.94% for the excavation.
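A minimal sketch of the described pre-processing and augmentation (the crop size and flip probability are assumed parameters, not values reported in [103]):

    import numpy as np

    def augment(fundus_rgb: np.ndarray, crop=(512, 512), rng=None) -> np.ndarray:
        # Keep only the green channel, then apply a random vertical flip and
        # a random crop, as described for the approach of [103].
        rng = rng or np.random.default_rng()
        green = fundus_rgb[..., 1]          # green channel: most useful contrast
        if rng.random() < 0.5:
            green = np.flipud(green)        # vertical flip
        top = rng.integers(0, green.shape[0] - crop[0] + 1)
        left = rng.integers(0, green.shape[1] - crop[1] + 1)
        return green[top:top + crop[0], left:left + crop[1]]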
The authors of [104] used CNN with U-Net architecture and a reduced number of filters in each convolution to perform segmentation in the DRIONS-DB, RIM-ONEv3, and DRISHTI-GS databases with results (IoU of 89% and DICE of 94% in DRIONS-DB and 95% in RIM-One-r3) comparable to other state-of-the-art methods including Maninis’ DRIU and Zilly’s BCF. In addition, the technique showed reliable segmentation quality and applicability in image identification tasks.
In [105], the authors developed a hierarchical deep learning system (HDLS) using 1791 fundus photographs for glaucoma diagnosis. Its recognition accuracy was 53% for the optic cup, 12% for the optic disc, and 16% for retinal nerve fiber layer defects. The authors acknowledged the need to test the methods on a significant sample rather than an extremely small dataset.
The authors of [106] used a deep neural network to detect glaucoma and calculate the vertical cup-disc ratio. They used the UZL test set of 2643 images, reporting an AUC of 94% for glaucoma detection.
In [107], the diagnosis and localization of glaucoma were performed with the acquisition of fundus, retinal nerve fiber layer (RNFL), optical coherence tomography (OCT) disc, OCT macula, perimetry, and perimetry deviation images. The implemented methods were convolutional neural networks (CNNs) and gradient-weighted class activation mapping (Grad-CAM) with a large dataset from the Samsung Medical Center (SMC). The results reported an accuracy of 96%, a sensitivity of 96%, and a specificity of 100% for optic disc images.
Several automated DL methods have been presented that can help locate the outer limits of the papilla and excavation and directly classify the existence or not of glaucoma, as shown in Table 4.
Thus, automated methods using DL architectures have obtained good results in tracking and classifying the glaucomatous papilla and can help the specialist assess glaucoma from the characteristics in the images. Nonetheless, few studies have included clinical records or the results of complementary exams in the networks.

4. Segmentation Methods

Optic disc and cup segmentation use processes that help locate and evaluate the papilla through metric characteristics determined by nuances of color, texture, and the delimitation of the optic disc and cup [108]. In routine ophthalmology, the specialist first estimates these qualitatively through direct observation of the papilla, comparing the two eyes. Segmenting the excavation is more complex than segmenting the optic disc because of the nerve fiber layer [89]; the excavation limits are imprecise and further obscured by blood vessels and by conditions that alter the limits of the papilla and excavation, such as atrophy, drusen, edema, or hemorrhage [109].
Whether manual or automated, segmentation methodologies are limited by low image resolution, media opacity, and photographic artifacts such as lighting problems and image distortion [110]. Pathological cases with manifestations at the limits of the optic disc, such as papillary atrophy, affect the precision of segmentation [109]. Segmenting the excavation is hampered by blood vessels that cover part of it, and the variation in edge color intensity makes delineating the excavation's external boundaries a challenging task [111]. Bock et al. used methodologies inspired by face and object recognition that do not require papilla segmentation and obtained 75% sensitivity and 85% reproducibility in detecting glaucomatous papillae [112]. However, this approach depends on more expensive cameras and equipment.
Furthermore, it requires many positive and negative glaucoma samples for screening. Recognition of the papilla is more susceptible to errors because of the subtlety of its appearance [113]. These factors make it challenging to recognize the papilla by appearance alone [114]. However, with the advancement of technology for obtaining high-resolution images, it may become possible in the future to detect glaucoma by appearance alone [115].
Automatic segmentation of the main structures of the optic nerve head, formed by the optic disc and the cup, can aid in tracking, identifying, and evaluating the progression of glaucomatous disease [116]. However, it is a complex process that must account for the subtlety and variability of the anatomy of the nerve fiber layer and for irregular contours caused by glaucomatous damage to nerve fibers [117]. In addition, blood vessels can impair the delineation of the excavation, or the visualization of the limits of the papilla and excavation, by deflecting over regions of fiber layer atrophy [118].
Manual or automated methods alike have limitations, and low image resolution and anatomical noise are the rule [110]. In addition, fixing the exact limits that distinguish the normal from the glaucomatous papilla is a constant challenge due to local anatomical variability, both under normal conditions and in pathologies that obscure the exact limits of the excavation [118]. Finally, optic disc segmentation requires a pre-processing step, including image channel selection, illumination normalization, contrast enhancement, and the extraction of blood vessels [111].
The authors of [119] didactically grouped the segmentation methods into five large groups: the superpixel technique, which accounts for many studies on segmenting the optic disc and excavation regions, followed by clustering techniques, mathematical morphology, active contours, and convolutional neural networks. However, there is no consensus on the best approach [120]. The different groups are as follows.
  • Clustering algorithms: Segmentation is performed pixel by pixel, using information read from the RGB and HSV color channels. Their advantages are simplicity of implementation and low computational cost; their disadvantages are the difficulty of defining the best set of attributes, sensitivity to noise, the initialization of centroids, and deciding which group represents each region (a minimal K-means sketch follows this list). For example, the authors of [121] obtained an excavation F-score of 97.50% on 59 images from a local ophthalmological hospital, DIARETDB0, and RIM-One-r1, whilst the authors of [122] obtained an excavation accuracy of 97.04%, evaluating the CDR on 209 images from the DRISHTI-GS and RIM-One-r3 databases. Among the main clustering algorithms are:
    • K-Means: An unsupervised algorithm that divides the image into parts based on a model created by averaging each part. Its disadvantage is its sensitivity to outliers, noise, and the initial centroids;
    • Fuzzy K-Means: An unsupervised algorithm often used on medical images; it groups similar data values based on the mean of each group, using fuzzy logic to compute similarity. It has the advantage of being efficient in segmenting noisy images.
  • Superpixel: Based on partitioning the image into multiple pixel clusters and analyzing the image region by region, it has the advantage of less interference from image noise and the disadvantage of a pre-processing step with a risk of data loss at image edges. The best results were obtained by [123] using CDR and ISNT evaluation metrics on 101 DRISHTI images, with a cup accuracy of 98.42% and an optic disc accuracy of 97.23%;
  • Active contour: Detection and imaging using curve evolution techniques, which can represent the curve through topology changes. The disadvantage is that any change in the initial curve or in the object to be detected modifies the result, making the method extremely sensitive to initialization. The best results were obtained by [124], with an accuracy of 99.22% and the advantage of allowing the segmentation of the optic disc and cup regions in low-quality images;
  • Mathematical morphology: The image is enhanced through morphological operations, including dilation, erosion, opening, and closing. It has the advantage of simple implementation and the disadvantage of requiring the right structuring element to turn intuition into practical application. The authors of [125] obtained excellent results in detecting glaucoma, with 96% correct answers for the CDR and ISNT ratios.
  • Convolutional neural network: The use of neural networks to recognize and classify images and videos; it requires less pre-processing to homogenize optic disc images in terms of quality, brightness, and contrast. Moreover, the same network can recognize patterns across different photographs of different objects, unlike other methods. For example, the authors of [126] obtained an F-score of 83.5% for the excavation, 94.5% for the optic disc, 72% for the excavation overlap, and 89% for the optic disc overlap, evaluating 319 images with F-score and overlap metrics on DRIONS-DB, DRISHTI-GS, and RIM-ONEv3. In [127], it was verified that convolutional neural networks have been gaining ground and proving to be a powerful tool for segmentation, with the caveat that a large set of images is needed to train these networks.
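As a minimal illustration of the clustering family, the sketch below applies K-means to the RGB values of a fundus image and keeps the brightest cluster, which roughly corresponds to the optic disc region (an illustrative baseline, not a method from the cited studies):

    import numpy as np
    from sklearn.cluster import KMeans

    def kmeans_segment(fundus_rgb: np.ndarray, n_clusters: int = 3) -> np.ndarray:
        # Cluster pixels by RGB color value.
        pixels = fundus_rgb.reshape(-1, 3).astype(float)
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
        labels = km.labels_.reshape(fundus_rgb.shape[:2])
        # The brightest cluster center approximates the disc region.
        brightest = km.cluster_centers_.sum(axis=1).argmax()
        return labels == brightest  # binary mask of the brightest cluster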
Different algorithms offer good results in detecting and segmenting the optic disc and cup. Nonetheless, many are limited by images with varying sharpness along the edges of the optic disc and cup; by the variability of the optic papilla structure in normal eyes; by peripapillary atrophy and papillary drusen, structures that alter the limits of the optic disc and may cover it entirely in advanced cases; and by the path of the blood vessels, whose deflection plays an essential role in delimiting the excavation but may also mask the inner border of the papilla and make it difficult to delimit.
Retinal pathological images that evolve with changes in the optic nerve head should be considered to obtain correct CDR and ISNT calculations for glaucoma screening, including sectoral and diffuse papillary atrophy, peripapillary atrophy, papilla insertion changes, and papilla drusen. According to [128], most current methods were tested on a limited number of datasets, such as DRIVE and STARE, which do not provide images with many different characteristics. Furthermore, the generally low resolution of the images (around 0.3 to 0.4 megapixels) makes the segmentation process even more challenging. Most retinal images used to assess segmentation methods were taken from adults, and it was not always possible to compare the two eyes. The retinas of babies have different morphological characteristics from those of adults, and this difference must be considered in segmentation methodologies.
Chakravarty et al. [129] proposed joining disc segmentation, optical cupping, and glaucoma prediction by dividing CNN characteristics into different tasks to ensure better learning. The segmentation masks were placed on separate channels, and the CNN and encoder outputs were combined and fed to a single neuron to predict glaucoma.
In [130], the authors used one of the first deep learning architectures for papilla segmentation to calculate cupping and the presence of glaucoma, overcoming the need for handcrafted papilla segmentation methods. Since then, there have been significant advances in neural network architectures.
The authors of [131] reported some disadvantages of [103], namely the use of grayscale images and of dropout layers at different stages that resulted in information loss, and proposed batch normalization in the CNN for optic disc segmentation.
In [132], the authors used U-Net to segment the optic disc and excavation in images from the REFUGE database, divided into training and validation groups of 400 images each. A backpropagation function generated a segmented image closer to the true one. Two successive networks were used: the first works as a generator network to segment the input images, and the second works as a simple CNN to extract the predicted features. The method distinguished between the disc and cup regions with accuracies of 93.40% and 83.42%, respectively, and a CDR mean absolute error (MAE) of 6.05%.
The authors of [133] proposed a deep learning model with a fully convolutional network (FCN) architecture and a dilated residual inception (DRI) module to estimate excavation depth from monocular images. On ORIGA, the authors obtained an AUC of 81.39%, and an AUC of 85.08% with the ResUnet network.
In [31], the authors used a multi-branch neural network (MB-NN) model to extract the areas of images relevant to measuring different features. The model included the Faster R-CNN method applied to a dataset of 2000 images. The technique reported an accuracy of 91.51%, a sensitivity of 92.33%, and a specificity of 90.90%.
The authors of [134] selected the VGG19, GoogleNet (also known as Inception-V1), ResNet50, and DENet models for the automatic classification of glaucoma, comparing the performance of Transfer Learning and training from scratch. To confirm the performance of VGG19, 10-fold cross-validation (CV) was applied. Valverde used 2313 retinal images from three different databases: RIM-ONE, DRISHTI-GS (public), and Esperanza (a private dataset). In the RIM-ONE database, the images classified as suspect were considered glaucomatous for the study. The photographs did not undergo any correction or modification of illumination or contrast enhancement; they were simply converted to a common, standard format to train the networks homogeneously. The best result was obtained with the VGG19 model using Transfer Learning.
In [135], the authors analyzed fundus photographs for retinal blood vessel segmentation using contrast-limited adaptive histogram equalization (CLAHE) with a local property-based intensity transformation (LPBIT) and K-means clustering. They used datasets including the Structured Analysis of the Retina (STARE) and Digital Retinal Images for Vessel Extraction (DRIVE), reporting an accuracy of 95.47% in glaucoma segmentation.
The authors of [135] proposed a method for automatic glaucoma diagnosis in which the optic disc is segmented with intensity thresholding and morphological operations, applied to the ORIGA, RIM-ONE-r3, DRISHTI-GS, Messidor, DRIONS-DB, and DIARETDB1 datasets. The results reported 98.75% accuracy in optic disc segmentation.
In [136], fundus photographs were analyzed with a computer-aided diagnosis (CAD) pipeline capable of diagnosing glaucoma on mobile devices. The authors used several datasets, including ORIGA, DRISHTI-GS, iChallenge, RIM-ONE, and Retinal Fundus Images for Glaucoma Analysis (RIGA), and several methods, including the MobileNetV2, VGG16, VGG19, Inception-V3, and ResNet50 CNNs. The pipeline reported 90% accuracy in glaucoma recognition.
The authors of [137] performed experiments on automated glaucoma diagnosis and the segmentation of the optic disc and optic cup with the G1020 and ORIGA datasets. The implemented methods were the region-based convolutional neural network (R-CNN), ResNet50, and Inception-V3, reporting an F1-score of 88.6%.
In [138], the authors used a threshold-based algorithm to segment the optic disc and a modified region-growing algorithm for optic cup segmentation, followed by blood vessel inpainting and morphological operations. The dataset used was DRISHTI-GS, and the method reported a DICE score of 94%, with an SVM used for glaucoma classification.
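The classical thresholding-plus-morphology family of methods used in the last two studies can be sketched as follows with OpenCV; the Otsu threshold, kernel size, and file path are illustrative assumptions rather than the exact recipes of those papers.

```python
import cv2
import numpy as np

img = cv2.imread("fundus.png")  # path is illustrative
red = img[:, :, 2]              # the disc is usually brightest in the red channel

# Keep the brightest pixels as optic disc candidates (Otsu threshold).
_, mask = cv2.threshold(red, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphological closing/opening to bridge vessel gaps and drop specks.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Keep the largest connected component as the optic disc.
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))  # skip background
disc_mask = (labels == largest).astype(np.uint8) * 255
```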
The authors of [139] used the cup-disc encoder-decoder network (CDED-Net) architecture with dense connections for the joint segmentation of the optic disc and optic cup. For model training, the authors used the DRISHTI-GS, RIM-ONE, and REFUGE datasets; the model builds on the SegNet (VGG16) method and reports 95.97% accuracy.

5. Classification Methods

Sun et al. [140] used the Inception-V3 architecture to detect glaucomatous optic neuropathy. The images were analyzed by ophthalmology specialists before the algorithms were applied, and color subtraction techniques were used during pre-processing to even out the varied lighting.
The authors of [141] developed an algorithm for classifying glaucomatous papillae with high sensitivity and comparable specificity, and validated the algorithm's effectiveness against specialists. The algorithm reported an AUC of 0.945 for glaucoma detection using a reference standard from glaucoma experts, and 0.855 using a reference standard from other eye care providers.
In [142], the authors proposed a DL method to screen for glaucoma in retinal fundus images using a database, covering several eye diseases, granted by the Singapore National Diabetic Retinopathy Screening Program. The DL system recognizes the characteristics of referable diabetic retinopathy, possible glaucoma, and age-related macular degeneration (AMD), and showed results that can be used to screen for glaucoma.
In [143], the ResNet50 and GoogleNet models were selected and trained with two public databases: one from Kim's Eye Hospital (a total of 1542 images, including 786 photos from normal patients and 756 from glaucoma patients) and RIM-One-r3. All fundus images were histogram-equalized, and the database from Kim's Eye Hospital was used to train the two models. For the performance evaluation, the models were tested with the RIM-One-r3 database. GoogleNet obtained better results for early-stage glaucoma than for advanced-stage glaucoma.
The authors of [144] used DL networks with transfer learning from ImageNet weights. The two DL models used, VGG19 and Inception-ResNet-V2, were pre-trained and then fine-tuned. Two databases were used: one from the University of California Los Angeles (UCLA) and a publicly available one called High-Resolution Fundus (HRF). The authors randomly selected 70% of the images from the UCLA database for training, 25% for validation, and the remaining 5% for testing. The models were then re-tested on the HRF database to strengthen the work. The Inception-ResNet-V2 model obtained a specificity and sensitivity above 90% on the UCLA database, even when re-tested with the HRF database.
In [68], the authors applied five different ImageNet-trained models (VGG16, VGG19, Inception-V3, ResNet50, and Xception) for glaucoma classification, using a ten-fold cross-validation strategy to validate the results. These models were fine-tuned: the last fully connected layer of each CNN was replaced by a global average pooling layer followed by a fully connected layer with two nodes, representing the two classes, and a SoftMax classifier. The authors performed two experiments, varying the number of fine-tuned layers and epochs. This work collected five databases: ACRIMA, HRF, DRISHTI-GS, RIM-ONE, and Sjchoi86-HRF. The images were cropped around the optic disc using a bounding box of 1.5 times the optic disc radius. The photos were augmented using random rotations, zooming in a range between 0 and 0.2, and horizontal and vertical flipping to avoid overfitting. All the models surpassed an AUC of 96%, which is an excellent result. A minimal sketch of this fine-tuning recipe is given below.
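The sketch assumes a VGG16 backbone for brevity: the ImageNet head is removed and replaced by global average pooling plus a two-node SoftMax layer, and the augmentation ranges reported above are applied. Paths, the number of epochs, and the choice of frozen layers are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# ImageNet-pre-trained backbone without its original classification head
base = VGG16(weights="imagenet", include_top=False,
             input_shape=(224, 224, 3))
base.trainable = False  # vary the number of fine-tuned layers as needed

x = layers.GlobalAveragePooling2D()(base.output)
outputs = layers.Dense(2, activation="softmax")(x)  # glaucoma / normal
model = Model(base.input, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=[tf.keras.metrics.AUC()])

# Augmentation as described: random rotations, zoom in [0, 0.2],
# horizontal and vertical flips.
datagen = ImageDataGenerator(rotation_range=20, zoom_range=0.2,
                             horizontal_flip=True, vertical_flip=True,
                             rescale=1.0 / 255)
train_it = datagen.flow_from_directory("cropped_discs/train",
                                       target_size=(224, 224),
                                       class_mode="categorical")
model.fit(train_it, epochs=10)
```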
In [145], a CNN architecture based on boosting was introduced, sharing some characteristics of ensemble learning systems. An entropy-based sampling method was shown to obtain results superior to uniform sampling. The proposed method was a practical approach to learning convolutional filters without the large amounts of training data usually required. Instead of backpropagation, each stage of filters is learned sequentially using boosting: each stage updates itself from the final classification error rather than a backpropagated error, and the method operates on patch-level rather than image-level data. The RIM-One-r3, DRISHTI-GS, and Messidor databases were used to train the models. First, the optic disc is localized with a circular Hough transform on the green channel of each image, and the image is cropped so that the optic disc is central. Then, the image is converted from RGB to the L*a*b color space using a nonlinear transformation that mimics the nonlinear perceptual response of the eye, and the intensities are normalized between 0 and 1 (see the pre-processing sketch below). For optic disc and cup segmentation, DRISHTI-GS achieved better results in terms of DICE and Intersection-over-Union (IoU) than RIM-One-r3.
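Below is a hedged sketch of this pre-processing chain with OpenCV: a circular Hough transform on the green channel to locate the disc, cropping around it, conversion to the L*a*b color space, and normalization to [0, 1]. All parameter values and the file path are illustrative assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("fundus.png")            # path is illustrative
green = cv2.medianBlur(img[:, :, 1], 5)   # smooth before circle detection

# Detect the optic disc as the dominant circle (assumes one is found;
# HoughCircles returns None otherwise).
circles = cv2.HoughCircles(green, cv2.HOUGH_GRADIENT, dp=2,
                           minDist=green.shape[0],  # expect a single disc
                           param1=100, param2=30,
                           minRadius=40, maxRadius=120)
x, y, r = np.round(circles[0, 0]).astype(int)

pad = int(1.5 * r)  # crop so the optic disc is central in the patch
roi = img[max(y - pad, 0):y + pad, max(x - pad, 0):x + pad]

# Nonlinear, perceptually motivated color space, then scale to [0, 1].
lab = cv2.cvtColor(roi, cv2.COLOR_BGR2LAB).astype(np.float32)
lab_normalized = lab / 255.0
```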
The authors of [132] proposed neural network constructs utilizing the FCN and the inception building blocks of GoogleNet. The FCN is the main body of this method's deep neural network architecture; several convolution kernels, based on the Inception structure of GoogleNet, were added after deconvolution for feature extraction. The experiments used two databases: REFUGE and one from the Second Affiliated Hospital of the Zhejiang University School of Medicine. The technique uses a fully automatic method, namely the Hough circle transform, to recognize and crop the image around the region of interest (ROI). The image data are augmented by rotation, flipping, and contrast adjustment, before using the Laplacian operator for image enhancement. Since the red channel contains less helpful information, only the blue and green channels were used. In optic disc and cup segmentation, the model reached values above 90% for both the DICE and the IoU, metrics defined in the sketch below.
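For reference, the two overlap metrics quoted in these studies can be implemented in a few lines of NumPy; `pred` and `truth` are binary masks of equal shape (the sketch assumes at least one mask is non-empty).

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """DICE coefficient: 2|P ∩ T| / (|P| + |T|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-Union: |P ∩ T| / |P ∪ T|."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union
```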
In [108], a modified U-Net with a pre-trained ResNet-34 encoder was developed. The work comprised two steps: first, a single-label modified U-Net model segments an ROI around the optic disc; the cropped image is then fed to a multi-label model that simultaneously segments the optic disc and cup. In the study of Yu et al., the RIGA database was used to train and evaluate the CNN; to assess robustness, the model trained on RIGA was then applied to the DRISHTI-GS and RIM-One-r3 databases. The images were pre-processed with contrast enhancement and resized to 512 × 512 pixels, and data augmentation was applied with rotations, vertical flips, and horizontal flips. Since segmentation was treated as a pixel-level classification problem, the binary cross-entropy logistic loss function was used. With this method, the segmentation of the optic disc and cup reached better results on DRISHTI-GS than on RIM-One-r3.
In [146], an automated method is proposed for identifying retinal nerve fiber layer defects (RNFLDs) using a recurrent neural network (RNN) classifier, previously trained on 5200 regions from 13 training images. The proposed method successfully detected 14 out of 16 RNFLD bands, with one false negative, for an accuracy of 87.5%.
The authors of [147] used a backpropagation neural network to classify the retinal nerve fiber layer (RNFL), using 40 fundus images for testing and 160 sub-images (80 with normal RNFL and 80 with diminished RNFL). This resulted in 94.52% accuracy, with errors attributable to high myopia (42.6%), diabetic retinopathy (4.6%), DMS (3.4%), and false positives due to increased physiological cupping.
In [148], the authors used gradient-weighted class activation mapping (Grad-CAM), applied attention mining (AM) based on the Grad-CAM results, and used a dissimilarity (DISSIM) loss for training. They used a private dataset from 13 universities and, complementarily, deep convolutional neural networks (DCNNs) including the VGG19 model, obtaining glaucoma recognition with an accuracy of 96.2%.
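To illustrate the Grad-CAM idea behind this approach, the following is a compact sketch in tensorflow.keras: the feature maps of the last convolutional layer are weighted by the gradient of the glaucoma class score and collapsed into a heatmap. The layer name block5_conv4 (VGG19's final convolution in Keras) and the class index are assumptions for illustration; this is not the training pipeline of [148], which additionally applies attention mining and the DISSIM loss.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer="block5_conv4", class_idx=1):
    """image: (1, H, W, 3) float tensor; returns an (h, w) heatmap in [0, 1]."""
    grad_model = tf.keras.Model(
        model.input,
        [model.get_layer(last_conv_layer).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image)
        score = preds[:, class_idx]              # glaucoma class score
    grads = tape.gradient(score, conv_out)       # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))  # GAP over the gradients
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)
    cam = tf.nn.relu(cam)                        # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```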
The authors of [149] evaluated a smartphone application-based deep learning system (DLS), named iGlaucoma, for detecting glaucomatous visual field changes. The mobile application runs a CNN, a modified ResNet-18, to classify glaucoma. On a private dataset, it reported an AUC of 87.3%.
In [150], the authors used OCT images with two deep learning networks for scleral spur localization and angle-closure classification. The dataset used was from the Angle-Closure Glaucoma Evaluation Challenge (AGE), to which a deep convolutional neural network (DCNN) and ResNet18 were applied, reporting 100% accuracy.
The authors of [151] used the Ocular Hypertension Treatment Study (OHTS) dataset, implementing deep archetypal analysis (DAA) for feature extraction and class-balanced bagging for glaucoma classification. The results reported an AUC of 71%.
The choice of automated methodology was based not only on the results obtained by the networks but also on clinical data, the form closest to the methods used by specialists. However, adding data from the clinical history produced no significant difference in the AUC, although it increased the sensitivity and specificity values, indicating that this information can improve classification [134].

6. Discussion

After reconstruction, the quality of photographic images may be improved by adding certain features through deep learning. For example, in the "active acquisition" described by [152], multiple photos of the same structure are automatically reconstructed by a learning algorithm, resulting in the best-quality image and emphasizing vital diagnostic features, as in MRI and 3D tomography images. In ophthalmology, it is necessary to define the loss function so as to minimize the error in automatic reconstruction. However, image restoration and classification processes have not been applied simultaneously, and no authors have used real-time image reconstruction.
Some traditional image processing algorithms can be used within a deep learning structure for image restoration, such as the BM3D algorithm used by [153], which has been shown to outperform many noise-removal networks when synthetic noise is replaced by real photographic noise. However, this requires the creation of a multi-modality, multi-frame database from multiple manufacturers for a realistic assessment of general image restoration networks.
Deep learning techniques have proven capable of correcting several image artifacts: motion during acquisition (a common source of blurring in retinal images), static blur, mirroring, and aberrations caused by the ocular media.
Eyes with pre-perimetric open-angle glaucoma (OAG) showed better diagnostic performance with deep learning than with classical machine learning techniques.
Using optic disc characteristics in fundus images to reduce the influence of optic disc misalignment, glaucoma diagnosis achieved an AUC of 83.84%, a value very close to the results of manual detection.
Existing CNN architectures used in the medical image recognition field include AlexNet, VGG, ResNet, and GoogleNet. The AUC is the most used evaluation metric for assessing an AI diagnostic model: it is computed by comparing sensitivity against the false-positive rate, and it ranges from 50% (chance) to 100%, with higher values indicating better performance.
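As a concrete reference, an AUC of the kind quoted throughout this review can be computed from per-image glaucoma probabilities and ground-truth labels with scikit-learn; the label and score values below are purely illustrative.

```python
from sklearn.metrics import roc_auc_score, roc_curve

y_true = [0, 0, 1, 1, 1, 0]               # 1 = glaucoma, 0 = normal
y_score = [0.1, 0.4, 0.8, 0.9, 0.6, 0.3]  # model output probabilities

auc = roc_auc_score(y_true, y_score)       # 0.5 = chance, 1.0 = perfect
fpr, tpr, thresholds = roc_curve(y_true, y_score)  # sensitivity vs. FPR
print(f"AUC = {auc:.2f}")
```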
Despite the excellent performance of DL, it has several limitations that restrict its use in practical applications, namely:
  • The need for continuous learning systems so that the models can keep improving;
  • Potential catastrophic forgetting when updating models;
  • The high dependence on data quality, as images from different services, containing different noise and acquired under different imaging protocols, can influence the models and their performance;
  • Incorrect results arising from training the network with multi-reference data, that is, with biased labels provided by several experts;
  • The possibility that other relevant factors, such as visual acuity, refractometry, a family history of glaucoma, ocular history (e.g., genetic and degenerative diseases of the anterior segment, cataracts, and choroidal diseases), systemic factors (e.g., glycemic control and diabetic vascular diseases), other comorbidities, the severity of the illness, and the urgency of the referral, may not be incorporated by current algorithms;
  • Particular (individual) image-processing techniques are required according to the severity of the disease;
  • Errors inherent in training networks with only one type of image; for example, images with a slightly temporal optic disc cause the network to incorrectly learn to associate the temporal location of the disc with the presence of the disease;
  • Existing datasets are still insufficient and should contain a more significant number of images with normal anatomical variations of the papillary region;
  • Population characteristics and phenotypes should be considered when input data are selected. DL architectures are based on training data from different databases, and there is a lack of more robust studies that consider individual clinical particularities when classifying into disease and non-disease; in addition, these databases require permanent data updating.
A single abnormality detected using an imaging technique cannot always guarantee the correct diagnosis of retinal diseases such as glaucoma.
Increasing the number of images in the database and the number of manual segmentations by experts from different countries can increase the robustness of future results [97].
Recent works have confirmed that machine learning has shown promise for aiding in the diagnosis of glaucoma and as a future instrument for monitoring the disease, with greater population inclusion.
Additionally, recent literature reviews have not emphasized the details of how the different deep learning algorithms used in CADs for ophthalmological diagnosis through optic papilla images function [132]. Although robust criteria for discussing the advantages and disadvantages of the different algorithms have not been offered, CNNs have proven to be a valuable alternative for the automated classification of glaucomatous papillae. However, more powerful tests of CNN architectures trained on other databases and papilla images are needed to better establish the reproducibility of the methods.
Other studies have shown reliable results in identifying and automatically segmenting optic disc structures through machine learning using only fundus photographs. In addition, such approaches can differentiate normal and glaucomatous papillae as well as classify the different stages of glaucoma: early (Gi), moderate (Gm), and severe (Gse).
The authors reached reasonable accuracy, sensitivity, and specificity levels when using neural network models with different architectures and algorithms to classify normal and glaucomatous papillae.
The RIM-ONE, DRISHTI-GS, and DRIONS-DB databases were widely used as image standards, and U-Net with FCN architecture was the algorithm that seemed to offer better performance. However, other studies will be essential to increase the credibility of the methods used against the subtleties of papilla structures [93] and reduce the resistance of professionals to the use and reliability of the automated procedure.
Among the different classification methods proposed, most were based on the extraction of anatomical features and a small part on image textures. Perhaps combining these techniques could result in the ideal recognition of glaucoma. Furthermore, unlike past subjective assessments, ophthalmic imaging provides objective, accurate, reproducible, and quantitative data that can be tracked with statistics [154].
The automatic screening of the glaucomatous papilla through photographs has revolutionized the diagnosis of glaucoma. It has proven to be an essential auxiliary diagnostic tool, given the subtleties of the anatomical variants of the papilla in vivo, which can obscure the excavation limits.
Although ophthalmic diagnoses are feasible and effective, no author had previously provided a detailed review of the different state-of-the-art deep learning algorithms used in retinal imaging (including glaucoma) for ophthalmological diagnosis. The authors of [132] pointed out that the most used architectures in diagnostic studies with fundus images were the FCN, ResNet, and AE, and listed the following limitations:
  • The lack of large datasets is a problem because models learn from large amounts of data. The model proposed by [155] may be an essential solution, but little effort has been made to synthesize new annotated fundus images with adequate clinical relevance. Generative adversarial networks and variational autoencoders are popular architectures for image synthesis; their application can generate large amounts of clinically relevant synthetic data, helping to increase the amount of data and prevent data privacy issues.
  • Due to differences in camera configurations, training data in most of the literature come from a single image distribution, which does not occur in real life. Transfer learning has been used for different applications in this area, as has domain adaptation (a subfield of transfer learning), where training and test data are drawn from different distributions. Since it is not always possible to obtain training and test data from the same distribution in the real world, the model must be robust to test data from a different distribution. Accuracy often decreases due to this domain-shift problem, and more emphasis should be placed on deep domain adaptation approaches to create robust models that can be deployed for real-world ophthalmological diagnosis.
Aspects of the clinical history, such as age, race, eye trauma, family history, and eye pressure, together with examination characteristics such as cataract, the presence of artifacts in the anterior chamber, the drainage angle (Schlemm), and corneal thickness, as proposed by [31], could contribute to greater diagnostic accuracy and individualized glaucoma diagnosis.
New portable camera technologies combine higher image quality, greater potential for using the images, and easy integration with smartphones; they reduce costs, increase portability, require minimal training from the examiner, produce more accurate diagnoses, and can lead to earlier treatment [156].
In the future, if integrated into primary care, automated systems may reduce or eliminate unnecessary referrals and enable ophthalmic self-monitoring by patients through smartphone retina photography, visual acuity tests, and visual field tests, facilitating referral to specialists and the diagnosis and treatment of eye diseases.
Other future applications of deep learning in the eye clinic include patient self-examination and the acquisition of photos by a technician in a virtual clinic or a hospital waiting room before the eye appointment. In addition, patients in remote areas could be scanned by a healthcare practitioner via home monitoring to assess disease progression.
Health systems suffering from labor shortages can benefit from modern automated imaging. With the rise of AI, the role of the physician will evolve from the paternalism of the 19th century and the evidence-based medicine of the 20th century towards more individualized, patient-centered clinical work, with improved data quality and leveraging previously structured clinical experience as data and evidence.
Despite the development of deep learning in ophthalmology, few prospective clinical trials have evaluated its performance in real and everyday situations. The IDx-DR was recently approved as the first fully autonomous diagnostic system for diabetic retinopathy, but the patient benefit in terms of visual outcome remains unclear.
Glaucoma screening is performed during the eye exam and depends on several factors: race, age, family history, the use of certain medications, and ocular factors such as myopia or hyperopia; eye pressure also represents a risk factor. The characteristics of the papilla, however, are a significant diagnostic factor and are visible in the intermediate and advanced stages of the disease. With artificial intelligence assessment solutions, the diagnosis of glaucoma is based solely on the appearance of the optic papilla, without considering other risk factors; this can therefore distort the screening data.
On the other hand, different studies show reliable results in identifying glaucoma using specific characteristics of the glaucomatous papilla together with data from the anamnesis. These factors bring the technology closer to the diagnostic reality. However, as this is a multifactorial diagnosis, more individualized data on each patient may yield better results; ultimately, with information that does not provide individual answers for each patient, one risks treating everyone identically.
Regarding patient management, even when results point to safe glaucoma screening through AI, these numbers will not reassure patients who are likely to develop or already suffer from the disease but face a waiting period for diagnostic confirmation or treatment initiation. In medicine, any treatment is justified when the benefits are assumed to outweigh the disadvantages. It has been shown that AI can help screen for glaucoma based on papillary appearance; however, establishing a definitive diagnosis and monitoring treatment depend on other parameters. The decision to start treatment is simple when there is evidence confirming the characteristics of glaucoma, whether from specialists or through AI, especially in moderate and advanced cases.
Regarding the decision to start treatment to reduce the influence of risk factors (e.g., intraocular pressure), the choice can be very delicate, particularly when there is no immediate threat to the patient's vision, nor certainty that the risk factor is decisive for visual deterioration. In such cases, preventive treatment may be imposed even if the hypothetical advantage is very modest.

7. Conclusions

This literature review showed that the analysis of papilla images through deep learning methods allows for greater precision in screening for the glaucomatous papilla. In addition, the insertion of clinical data did not produce a significant difference in the AUC, corroborating the results obtained by the authors of [134]; nonetheless, it may have clinical importance in the individualized diagnosis of suspected glaucoma cases.
Methodologies that use public image banks for glaucoma screening need a constant increase in the availability of data that can encompass the normal and pathological morphological diversities of the fundus.
The literature has shown that screening for glaucomatous papillae with deep learning techniques can improve diagnostic accuracy and help screen for glaucoma through the classification of papillary images. However, weaknesses remain, considering the small amount of data in public databases, the limited sampling of images obtained by different cameras, and the difficulty of combining data obtained by the various architectures.
In the future, we hope to obtain better-quality images with technical improvements in the Internet, smartphones, and applications that can help specialists screen for glaucoma, especially in more remote places. This may popularize, cheapen, and improve the accuracy of the recognition of the glaucomatous papilla, especially in its early stages. Developing a reliable solution is the key to promoting the acceptance of these methods among patients and doctors and to promoting patient empowerment.

Author Contributions

Conceptualization, J.C., A.N., I.M.P. and M.V.V.; methodology, J.C. and A.N.; validation, I.M.P., M.V.V. and A.C.; formal analysis, J.C. and A.N.; investigation, J.C. and A.N.; writing—original draft preparation, J.C., A.N., I.M.P., M.V.V. and E.Z.; writing—review and editing, J.C., A.N., I.M.P., M.V.V. and E.Z.; supervision, A.C. and I.M.P.; project administration, I.M.P. and A.C.; funding acquisition, I.M.P. and A.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work is financed by National Funds through the Portuguese funding agency, FCT—Fundação para a Ciência e a Tecnologia, within project UIDB/50014/2020. This work is also funded by FCT/MEC through national funds and, when applicable, co-funded by the FEDER-PT2020 partnership agreement under the project UIDB/50008/2020. This article is based upon work from COST Action IC1303-AAPELE—Architectures, Algorithms, and Protocols for Enhanced Living Environments and COST Action CA16226–SHELD-ON—Indoor living space improvement: Smart Habitat for the Elderly, supported by COST (European Cooperation in Science and Technology). COST is a funding agency for research and innovation networks. Our actions help connect research initiatives across Europe and enable scientists to grow their ideas by sharing them with their peers. This boosts their research, career, and innovation. More information can be found at www.cost.eu (accessed on 15 January 2022).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shaw, B.; Han, J.; Hawkins, R.; Stewart, J.; Mctavish, F.; Gustafson, D. Doctor–Patient Relationship as Motivation and Outcome: Examining Uses of an Interactive Cancer Communication System. Int. J. Med. Inform. 2007, 76, 274–282. [Google Scholar] [CrossRef]
  2. Moreira, M.W.L.; Rodrigues, J.J.P.C.; Korotaev, V.; Al-Muhtadi, J.; Kumar, N. A Comprehensive Review on Smart Decision Support Systems for Health Care. IEEE Syst. J. 2019, 13, 3536–3545. [Google Scholar] [CrossRef]
  3. Qi, J.; Yang, P.; Min, G.; Amft, O.; Dong, F.; Xu, L. Advanced Internet of Things for Personalised Healthcare Systems: A Survey. Pervasive Mob. Comput. 2017, 41, 132–149. [Google Scholar] [CrossRef]
  4. Jiang, F.; Jiang, Y.; Zhi, H.; Dong, Y.; Li, H.; Ma, S.; Wang, Y.; Dong, Q.; Shen, H.; Wang, Y. Artificial Intelligence in Healthcare: Past, Present and Future. Stroke Vasc. Neurol. 2017, 2, 230–243. [Google Scholar] [CrossRef]
  5. Lopes, H.; Pires, I.M.; Sánchez San Blas, H.; García-Ovejero, R.; Leithardt, V. PriADA: Management and Adaptation of Information Based on Data Privacy in Public Environments. Computers 2020, 9, 77. [Google Scholar] [CrossRef]
  6. Dash, S.; Shakyawar, S.K.; Sharma, M.; Kaushik, S. Big Data in Healthcare: Management, Analysis and Future Prospects. J. Big Data 2019, 6, 54. [Google Scholar] [CrossRef] [Green Version]
  7. Chen, Y.; Ding, S.; Xu, Z.; Zheng, H.; Yang, S. Blockchain-Based Medical Records Secure Storage and Medical Service Framework. J. Med. Syst 2019, 43, 5. [Google Scholar] [CrossRef]
  8. Aceto, G.; Persico, V.; Pescapé, A. Industry 4.0 and Health: Internet of Things, Big Data, and Cloud Computing for Healthcare 4.0. J. Ind. Inf. Integr. 2020, 18, 100129. [Google Scholar] [CrossRef]
  9. Verri Lucca, A.; Augusto Silva, L.; Luchtenberg, R.; Garcez, L.; Mao, X.; García Ovejero, R.; Miguel Pires, I.; Luis Victória Barbosa, J.; Reis Quietinho Leithardt, V. A Case Study on the Development of a Data Privacy Management Solution Based on Patient Information. Sensors 2020, 20, 6030. [Google Scholar] [CrossRef]
  10. Kurtulmus, A.; Elbay, A.; Parlakkaya, F.B.; Kilicarslan, T.; Ozdemir, M.H.; Kirpinar, I. An Investigation of Retinal Layer Thicknesses in Unaffected First-Degree Relatives of Schizophrenia Patients. Schizophr. Res. 2020, 218, 255–261. [Google Scholar] [CrossRef]
  11. O’Brien, J.M.; Salowe, R.J.; Fertig, R.; Salinas, J.; Pistilli, M.; Sankar, P.S.; Miller-Ellis, E.; Lehman, A.; Murphy, W.H.A.; Homsher, M.; et al. Family History in the Primary Open-Angle African American Glaucoma Genetics Study Cohort. Am. J. Ophthalmol. 2018, 192, 239–247. [Google Scholar] [CrossRef] [Green Version]
  12. McMonnies, C.W. Glaucoma History and Risk Factors. J. Optom. 2017, 10, 71–78. [Google Scholar] [CrossRef] [Green Version]
  13. Misajon, R.; Hawthorne, G.; Richardson, J.; Barton, J.; Peacock, S.; Iezzi, A.; Keeffe, J. Vision and Quality of Life: The Development of a Utility Measure. Investig. Ophthalmol. Vis. Sci. 2005, 46, 4007. [Google Scholar] [CrossRef] [Green Version]
  14. Wu, A.; Khawaja, A.P.; Pasquale, L.R.; Stein, J.D. A Review of Systemic Medications That May Modulate the Risk of Glaucoma. Eye 2020, 34, 12–28. [Google Scholar] [CrossRef]
  15. Balendra, S.I.; Zollet, P.; Cisa Asinari Di Gresy E Casasca, G.; Cordeiro, M.F. Personalized Approaches for the Management of Glaucoma. Expert Rev. Precis. Med. Drug Dev. 2020, 5, 145–164. [Google Scholar] [CrossRef]
  16. Mason, L.; Jafri, S.; Dortonne, I.; Sheppard, J.D. Emerging Therapies for Dry Eye Disease. Expert Opin. Emerg. Drugs 2021, 26, 401–413. [Google Scholar] [CrossRef]
  17. Muniesa, M.J.; Ezpeleta, J.; Benítez, I. Fluctuations of the Intraocular Pressure in Medically Versus Surgically Treated Glaucoma Patients by a Contact Lens Sensor. Am. J. Ophthalmol. 2019, 203, 1–11. [Google Scholar] [CrossRef] [PubMed]
  18. Jabbehdari, S.; Chen, J.L.; Vajaranant, T.S. Effect of Dietary Modification and Antioxidant Supplementation on Intraocular Pressure and Open-Angle Glaucoma. Eur. J. Ophthalmol. 2021, 31, 1588–1605. [Google Scholar] [CrossRef]
  19. Sharif, N. Glaucomatous Optic Neuropathy Treatment Options: The Promise of Novel Therapeutics, Techniques and Tools to Help Preserve Vision. Neural Regen. Res. 2018, 13, 1145. [Google Scholar] [CrossRef]
  20. Demer, J.L.; Clark, R.A.; Suh, S.Y.; Giaconi, J.A.; Nouri-Mahdavi, K.; Law, S.K.; Bonelli, L.; Coleman, A.L.; Caprioli, J. Optic Nerve Traction During Adduction in Open Angle Glaucoma with Normal versus Elevated Intraocular Pressure. Curr. Eye Res. 2020, 45, 199–210. [Google Scholar] [CrossRef] [PubMed]
  21. Huang, J.; Xiong, Y.; Huang, W.; Xu, C.; Miao, F. SieveDroid: Intercepting Undesirable Private-Data Transmissions in Android Applications. IEEE Syst. J. 2020, 14, 375–386. [Google Scholar] [CrossRef]
  22. Chanal, P.M.; Kakkasageri, M.S. Security and Privacy in IoT: A Survey. Wirel. Pers. Commun. 2020. [Google Scholar] [CrossRef]
  23. Yang, P.; Xiong, N.; Ren, J. Data Security and Privacy Protection for Cloud Storage: A Survey. IEEE Access 2020, 8, 131723–131740. [Google Scholar] [CrossRef]
  24. Qi, L.; Hu, C.; Zhang, X.; Khosravi, M.R.; Sharma, S.; Pang, S.; Wang, T. Privacy-Aware Data Fusion and Prediction with Spatial-Temporal Context for Smart City Industrial Environment. IEEE Trans. Ind. Inf. 2020, 17, 4159–4167. [Google Scholar] [CrossRef]
  25. Vermeulen, A.F. Unsupervised Learning: Deep Learning. In Industrial Machine Learning; Apress: Berkeley, CA, USA, 2020; pp. 225–241. ISBN 978-1-4842-5315-1. [Google Scholar]
  26. Foote, K.D. A Brief History of Deep Learning. DATAVERSITY. Available online: https://www.dataversity.net/brief-history-deep-learning (accessed on 7 December 2021).
  27. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet Classification with Deep Convolutional Neural Networks. Adv. Neural Inf. Processing Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
  28. Song, C.; Yang, B.; Zhang, L.; Wu, D. A Handheld Device for Measuring the Diameter at Breast Height of Individual Trees Using Laser Ranging and Deep-Learning Based Image Recognition. Plant Methods 2021, 17, 67. [Google Scholar] [CrossRef]
  29. Bock, R.; Meier, J.; Michelson, G.; Nyúl, L.G.; Hornegger, J. Classifying Glaucoma with Image-Based Features from Fundus Photographs. In Joint Pattern Recognition Symposium; Springer: Berlin/Heidelberg, Germany, 2007; pp. 355–364. [Google Scholar]
  30. Cuesta-Vargas, A.I.; Pajares, B.; Trinidad-Fernandez, M.; Alba, E.; Roldan-Jiménez, C. Inertial Sensors Embedded in Smartphones as a Tool for Fatigue Assessment Based on Acceleration in Survivors of Breast Cancer. Phys. Ther. 2020, 100, 447–456. [Google Scholar] [CrossRef]
  31. Chai, Y.; Liu, H.; Xu, J. Glaucoma Diagnosis Based on Both Hidden Features and Domain Knowledge through Deep Learning Models. Knowl. -Based Syst. 2018, 161, 147–156. [Google Scholar] [CrossRef]
  32. Raghavendra, U.; Fujita, H.; Bhandary, S.V.; Gudigar, A.; Tan, J.H.; Acharya, U.R. Deep Convolution Neural Network for Accurate Diagnosis of Glaucoma Using Digital Fundus Images. Inf. Sci. 2018, 441, 41–49. [Google Scholar] [CrossRef]
  33. Cicinelli, M.V.; Cavalleri, M.; Brambati, M.; Lattanzio, R.; Bandello, F. New Imaging Systems in Diabetic Retinopathy. Acta Diabetol. 2019, 56, 981–994. [Google Scholar] [CrossRef] [PubMed]
  34. Cerentini, A.; Welfer, D.; d’Ornellas, M.C.; Haygert, C.J.P.; Dotto, G.N. Automatic Identification of Glaucoma Using Deep Learning Methods. Stud. Health Technol. Inform. 2017, 245, 318–321. [Google Scholar] [CrossRef]
  35. Fujihara, F.M.F.; de Arruda Mello, P.A.; Lindenmeyer, R.L.; Pakter, H.M.; Lavinsky, J.; Benfica, C.Z.; Castoldi, N.; Picetti, E.; Lavinsky, D.; Finkelsztejn, A. Individual Macular Layer Evaluation with Spectral Domain Optical Coherence Tomography in Normal and Glaucomatous Eyes. Clin. Ophthalmol. (Auckl. NZ) 2020, 14, 1591. [Google Scholar] [CrossRef]
  36. Armstrong, G.W.; Kalra, G.; De Arrigunaga, S.; Friedman, D.S.; Lorch, A.C. Anterior Segment Imaging Devices in Ophthalmic Telemedicine. Semin. Ophthalmol. 2021, 36, 149–156. [Google Scholar] [CrossRef]
  37. Ichhpujani, P.; Thakur, S. Smartphones and Telemedicine in Ophthalmology. In Smart Resources in Ophthalmology; Springer: Berlin/Heidelberg, Germany, 2018; pp. 247–255. [Google Scholar]
  38. Omboni, S.; Caserini, M.; Coronetti, C. Telemedicine and M-Health in Hypertension Management: Technologies, Applications and Clinical Evidence. High Blood Press. Cardiovasc. Prev. 2016, 23, 187–196. [Google Scholar] [CrossRef]
  39. Promising Artificial Intelligence-Machine Learning-Deep Learning Algorithms in Ophthalmology. Asia Pac. J. Ophthalmol. (Phila) 2019, 8, 264–272. [CrossRef]
  40. Ting, D.S.W.; Peng, L.; Varadarajan, A.V.; Keane, P.A.; Burlina, P.M.; Chiang, M.F.; Schmetterer, L.; Pasquale, L.R.; Bressler, N.M.; Webster, D.R.; et al. Deep Learning in Ophthalmology: The Technical and Clinical Considerations. Prog. Retin. Eye Res. 2019, 72, 100759. [Google Scholar] [CrossRef]
  41. Wang, Z.; Keane, P.A.; Chiang, M.; Cheung, C.Y.; Wong, T.Y.; Ting, D.S.W. Artificial Intelligence and Deep Learning in Ophthalmology. In Artificial Intelligence in Medicine; Lidströmer, N., Ashrafian, H., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 1–34. ISBN 978-3-030-58080-3. [Google Scholar]
  42. Perdomo Charry, O.J.; González Osorio, F.A. A Systematic Review of Deep Learning Methods Applied to Ocular Images. Cien. Ing. Neogranadina 2019, 30, 9–26. [Google Scholar] [CrossRef]
  43. Grewal, P.S.; Oloumi, F.; Rubin, U.; Tennant, M.T.S. Deep Learning in Ophthalmology: A Review. Can. J. Ophthalmol. 2018, 53, 309–313. [Google Scholar] [CrossRef] [PubMed]
  44. Pires, I.M.; Denysyuk, H.V.; Villasana, M.V.; Sá, J.; Lameski, P.; Chorbev, I.; Zdravevski, E.; Trajkovik, V.; Morgado, J.F.; Garcia, N.M. Mobile 5P-Medicine Approach for Cardiovascular Patients. Sensors 2021, 21, 6986. [Google Scholar] [CrossRef] [PubMed]
  45. Pires, I.M.; Marques, G.; Garcia, N.M.; Flórez-Revuelta, F.; Ponciano, V.; Oniani, S. A Research on the Classification and Applicability of the Mobile Health Applications. J. Pers. Med. 2020, 10, 11. [Google Scholar] [CrossRef] [Green Version]
  46. Villasana, M.V.; Pires, I.M.; Sá, J.; Garcia, N.M.; Zdravevski, E.; Chorbev, I.; Lameski, P.; Flórez-Revuelta, F. Promotion of Healthy Nutrition and Physical Activity Lifestyles for Teenagers: A Systematic Literature Review of The Current Methodologies. J. Pers. Med. 2020, 10, 12. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  47. Pires, I.M.; Garcia, N.M.; Pombo, N.; Flórez-Revuelta, F. Framework for the Recognition of Activities of Daily Living and Their Environments in the Development of a Personal Digital Life Coach. In Proceedings of the DATA, Porto, Portugal, 26–28 July 2018; pp. 163–170. [Google Scholar]
  48. Ferreira, F.; Pires, I.M.; Costa, M.; Ponciano, V.; Garcia, N.M.; Zdravevski, E.; Chorbev, I.; Mihajlov, M. A Systematic Investigation of Models for Color Image Processing in Wound Size Estimation. Computers 2021, 10, 43. [Google Scholar] [CrossRef]
  49. Ponciano, V.; Pires, I.M.; Ribeiro, F.R.; Garcia, N.M.; Pombo, N.; Spinsante, S.; Crisóstomo, R. Smartphone-Based Automatic Measurement of the Results of the Timed-Up and Go Test. In Proceedings of the 5th EAI International Conference on Smart Objects and Technologies for Social Good, Valencia, Spain, 25–27 September 2019; pp. 239–242. [Google Scholar]
  50. Silva, A.R.; Farias, M.C.Q. Perceptual Quality Assessment of 3D Videos with Stereoscopic Degradations. Multimed. Tools Appl. 2020, 79, 1603–1623. [Google Scholar] [CrossRef]
  51. Gargeya, R.; Leng, T. Automated Identification of Diabetic Retinopathy Using Deep Learning. Ophthalmology 2017, 124, 962–969. [Google Scholar] [CrossRef]
  52. Krupinski, E.A. Current Perspectives in Medical Image Perception. Atten. Percept. Psychophys. 2010, 72, 1205–1217. [Google Scholar] [CrossRef] [Green Version]
  53. González-Márquez, F.; Luque-Romero, L.; Ruiz-Romero, M.V.; Castillón-Torre, L.; Hernández-Martínez, F.J.; Olea-Pabón, L.; Moro-Muñoz, S.; García-Díaz, R. del M.; García-Garmendia, J.L. Remote Ophthalmology with a Smartphone Adapter Handled by Nurses for the Diagnosis of Eye Posterior Pole Pathologies during the COVID-19 Pandemic. J. Telemed. Telecare 2021, 1357633X2199401. [Google Scholar] [CrossRef] [PubMed]
  54. Stein, J.D.; Blachley, T.S.; Musch, D.C. Identification of Persons With Incident Ocular Diseases Using Health Care Claims Databases. Am. J. Ophthalmol. 2013, 156, 1169–1175. [Google Scholar] [CrossRef] [Green Version]
  55. Khan, S.M.; Liu, X.; Nath, S.; Korot, E.; Faes, L.; Wagner, S.K.; Keane, P.A.; Sebire, N.J.; Burton, M.J.; Denniston, A.K. A Global Review of Publicly Available Datasets for Ophthalmological Imaging: Barriers to Access, Usability, and Generalisability. Lancet Digit. Health 2021, 3, e51–e66. [Google Scholar] [CrossRef]
  56. Fumero, F.; Alayon, S.; Sanchez, J.L.; Sigut, J.; Gonzalez-Hernandez, M. RIM-ONE: An Open Retinal Image Database for Optic Nerve Evaluation. In Proceedings of the 2011 24th International Symposium on Computer-Based Medical Systems (CBMS), Bristol, UK, 27–30 June 2011; pp. 1–6. [Google Scholar]
  57. Medical Image Analysis Group. Available online: https://medimrg.webs.ull.es/ (accessed on 7 December 2021).
  58. Zhou, W.; Yi, Y.; Bao, J.; Wang, W. Adaptive Weighted Locality-Constrained Sparse Coding for Glaucoma Diagnosis. Med. Biol. Eng. Comput. 2019, 57, 2055–2067. [Google Scholar] [CrossRef]
  59. Fumero Batista, F.J.; Diaz-Aleman, T.; Sigut, J.; Alayon, S.; Arnay, R.; Angel-Pereira, D. RIM-ONE DL: A Unified Retinal Image Database for Assessing Glaucoma Using Deep Learning. Image Anal. Stereol 2020, 39, 161–167. [Google Scholar] [CrossRef]
  60. Drishti-GS Dataset Webpage. Available online: http://cvit.iiit.ac.in/projects/mip/drishti-gs/mip-dataset2/Home.php (accessed on 7 December 2021).
  61. Sivaswamy, J.; Krishnadas, S.R.; Datt Joshi, G.; Jain, M.; Syed Tabish, A.U. Drishti-GS: Retinal Image Dataset for Optic Nerve Head(ONH) Segmentation. In Proceedings of the 2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI), Beijing, China, 29 April–2 May 2014; pp. 53–56. [Google Scholar]
  62. DRIONS-DB: RETINAL IMAGE DATABASE. Available online: http://www.ia.uned.es/~ejcarmona/DRIONS-DB.html (accessed on 7 December 2021).
  63. Patil, D.D.; Manza, R.R.; Bedke, G.C.; Rathod, D.D. Development of Primary Glaucoma Classification Technique Using Optic Cup & Disc Ratio. In Proceedings of the 2015 International Conference on Pervasive Computing (ICPC), Pune, India, 8–10 January 2015; pp. 1–5. [Google Scholar]
  64. MAFFRE, G.P. Messidor-2. Available online: https://www.adcis.net/en/third-party/messidor2/ (accessed on 7 December 2021).
  65. Decencière, E.; Zhang, X.; Cazuguel, G.; Lay, B.; Cochener, B.; Trone, C.; Gain, P.; Ordonez, R.; Massin, P.; Erginay, A.; et al. Feedback on a publicly distributed image database: The messidor database. Image Anal. Stereol 2014, 33, 231. [Google Scholar] [CrossRef] [Green Version]
  66. Odstrcilik, J.; Kolar, R.; Budai, A.; Hornegger, J.; Jan, J.; Gazarek, J.; Kubena, T.; Cernosek, P.; Svoboda, O.; Angelopoulou, E. Retinal Vessel Segmentation by Improved Matched Filtering: Evaluation on a New High-resolution Fundus Image Database. IET Image Processing 2013, 7, 373–383. [Google Scholar] [CrossRef]
  67. Lowell, J.; Hunter, A.; Steel, D.; Basu, A.; Ryder, R.; Fletcher, E.; Kennedy, L. Optic Nerve Head Segmentation. IEEE Trans. Med. Imaging 2004, 23, 256–264. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  68. Diaz-Pinto, A.; Morales, S.; Naranjo, V.; Köhler, T.; Mossi, J.M.; Navea, A. CNNs for Automatic Glaucoma Assessment Using Fundus Images: An Extensive Validation. BioMed. Eng. OnLine 2019, 18, 29. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  69. REFUGE-Grand Challenge. Available online: https://refuge.grand-challenge.org/ (accessed on 7 December 2021).
  70. Zhang, Z.; Yin, F.S.; Liu, J.; Wong, W.K.; Tan, N.M.; Lee, B.H.; Cheng, J.; Wong, T.Y. ORIGA: An Online Retinal Fundus Image Database for Glaucoma Analysis and Research. In Proceedings of the 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, Buenos Aires, Argentina, 31 August–4 September 2010; pp. 3065–3068. [Google Scholar]
  71. Abbas, Q. Glaucoma-Deep: Detection of Glaucoma Eye Disease on Retinal Fundus Images Using Deep Learning. Int. J. Adv. Comput. Sci. Appl. 2017, 8, 41–45. [Google Scholar] [CrossRef] [Green Version]
  72. Fu, H.; Cheng, J.; Xu, Y.; Zhang, C.; Wong, D.W.K.; Liu, J.; Cao, X. Disc-Aware Ensemble Network for Glaucoma Screening From Fundus Image. IEEE Trans. Med. Imaging 2018, 37, 2493–2501. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  73. Zhang, Z.; Srivastava, R.; Liu, H.; Chen, X.; Duan, L.; Kee Wong, D.W.; Kwoh, C.K.; Wong, T.Y.; Liu, J. A Survey on Computer Aided Diagnosis for Ocular Diseases. BMC Med. Inform. Decis. Mak. 2014, 14, 1–29. [Google Scholar] [CrossRef]
  74. Dibia, C.A.; Ezenwa, N.S. Automated detection of glaucoma from retinal. Int. J. Adv. Sci. Eng. Technol. 2018, 2, 13–18. [Google Scholar]
  75. Phasuk, S.; Poopresert, P.; Yaemsuk, A.; Suvannachart, P.; Itthipanichpong, R.; Chansangpetch, S.; Manassakorn, A.; Tantisevi, V.; Rojanapongpun, P.; Tantibundhit, C. Automated Glaucoma Screening from Retinal Fundus Image Using Deep Learning. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 904–907. [Google Scholar]
  76. Sreng, S.; Maneerat, N.; Hamamoto, K.; Win, K.Y. Deep Learning for Optic Disc Segmentation and Glaucoma Diagnosis on Retinal Images. Appl. Sci. 2020, 10, 4916. [Google Scholar] [CrossRef]
  77. Maadi, F.; Faraji, N.; Bibalan, M.H. A Robust Glaucoma Screening Method for Fundus Images Using Deep Learning Technique. In Proceedings of the 2020 27th National and 5th International Iranian Conference on Biomedical Engineering (ICBME), Tehran, Iran, 26–27 November 2020; pp. 289–293. [Google Scholar]
  78. Zhao, R.; Chen, X.; Liu, X.; Chen, Z.; Guo, F.; Li, S. Direct Cup-to-Disc Ratio Estimation for Glaucoma Screening via Semi-Supervised Learning. IEEE J. Biomed. Health Inform. 2020, 24, 1104–1113. [Google Scholar] [CrossRef]
  79. Ali, R.; Sheng, B.; Li, P.; Chen, Y.; Li, H.; Yang, P.; Jung, Y.; Kim, J.; Chen, C.L.P. Optic Disk and Cup Segmentation Through Fuzzy Broad Learning System for Glaucoma Screening. IEEE Trans. Ind. Inf. 2021, 17, 2476–2487. [Google Scholar] [CrossRef]
  80. Wang, M.; Yu, K.; Zhu, W.; Shi, F.; Chen, X. Multi-Strategy Deep Learning Method for Glaucoma Screening on Fundus Image. Investig. Ophthalmol. Vis. Sci. 2019, 60, 6148. [Google Scholar]
  81. Rao Parthasarathy, D.; Hsu, C.-K.; Eldeeb, M.; Jinapriya, D.; Shroff, S.; Shruthi, S.; Pradhan, Z.; Deshmukh, S.; Savoy, F.M. Development and Performance of a Novel ‘Offline’ Deep Learning (DL)-Based Glaucoma Screening Tool Integrated on a Portable Smartphone-Based Fundus Camera. Investig. Ophthalmol. Vis. Sci. 2021, 62, 1002. [Google Scholar]
  82. Zaleska-Żmijewska, A.; Szaflik, J.P.; Borowiecki, P.; Pohnke, K.; Romaniuk, U.; Szopa, I.; Pniewski, J.; Szaflik, J. A New Platform Designed for Glaucoma Screening: Identifying the Risk of Glaucomatous Optic Neuropathy Using Fundus Photography with Deep Learning Architecture Together with Intraocular Pressure Measurements. Klin. Ocz. 2020, 2020, 1–6. [Google Scholar] [CrossRef]
  83. Lee, J.; Lee, J.; Song, H.; Lee, C. Development of an End-to-End Deep Learning System for Glaucoma Screening Using Color Fundus Images. JAMA Ophthalmol. 2019, 137, 1353–1360. [Google Scholar]
  84. Chakrabarty, N.; Chatterjee, S. A Novel Approach to Glaucoma Screening Using Computer Vision. In Proceedings of the 2019 International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 27–29 November 2019; pp. 881–884. [Google Scholar]
  85. Panda, R.; Puhan, N.B.; Mandal, B.; Panda, G. GlaucoNet: Patch-Based Residual Deep Learning Network for Optic Disc and Cup Segmentation Towards Glaucoma Assessment. SN COMPUT. SCI. 2021, 2, 99. [Google Scholar] [CrossRef]
  86. Liu, Y.; Yip, L.W.L.; Zheng, Y.; Wang, L. Glaucoma Screening Using an Attention-Guided Stereo Ensemble Network. Methods. 2021. Available online: https://doi.org/10.1016/j.ymeth.2021.06.010 (accessed on 19 June 2021).
  87. Alghamdi, H.S.; Tang, H.L.; Waheeb, S.A.; Peto, T. Automatic Optic Disc Abnormality Detection in Fundus Images: A Deep Learning Approach; University of Iowa: Iowa City, IA, USA, 2016; pp. 17–24. [Google Scholar]
  88. Maninis, K.-K.; Pont-Tuset, J.; Arbeláez, P.; Van Gool, L. Deep Retinal Image Understanding. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2016; Volume 9901 LNCS, pp. 140–148. ISBN 978-3-319-46722-1. [Google Scholar]
  89. Sevastopolsky, A. Optic Disc and Cup Segmentation Methods for Glaucoma Detection with Modification of U-Net Convolutional Neural Network. Pattern Recognit. Image Anal. 2017, 27, 618–624. [Google Scholar] [CrossRef] [Green Version]
  90. Priyanka, R.; Shoba, S.J.G.; Therese, A.B. Segmentation of Optic Disc in Fundus Images Using Convolutional Neural Networks for Detection of Glaucoma. Int. J. Adv. Eng. Res. Sci. 2017, 4, 170–179. [Google Scholar] [CrossRef]
  91. Tan, J.H.; Acharya, U.R.; Bhandary, S.V.; Chua, K.C.; Sivaprasad, S. Segmentation of Optic Disc, Fovea and Retinal Vasculature Using a Single Convolutional Neural Network. J. Comput. Sci. 2017, 20, 70–79. [Google Scholar] [CrossRef] [Green Version]
  92. Sun, X.; Xu, Y.; Zhao, W.; You, T.; Liu, J. Optic Disc Segmentation from Retinal Fundus Images via Deep Object Detection Networks. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 5954–5957. [Google Scholar]
  93. Singh, V.K.; Rashwan, H.; Akram, F.; Pandey, N.; Sarker, M.M.K.; Saleh, A.; Abdulwahab, S.; Maaroof, N.; Romani, S.; Puig, D. Retinal Optic Disc Segmentation Using Conditional Generative Adversarial Network. Front. Artif. Intell. Appl. 2018, 308, 373–380. [Google Scholar] [CrossRef]
  94. Diaz-Pinto, A.; Colomer, A.; Naranjo, V.; Morales, S.; Xu, Y.; Frangi, A.F. Retinal Image Synthesis and Semi-Supervised Learning for Glaucoma Assessment. IEEE Trans. Med. Imaging 2019, 38, 2211–2218. [Google Scholar] [CrossRef]
  95. Gonzalez-Hernandez, M.; Gonzalez-Hernandez, D.; Perez-Barbudo, D.; Rodriguez-Esteve, P.; Betancor-Caro, N.; Gonzalez de la Rosa, M. Fully Automated Colorimetric Analysis of the Optic Nerve Aided by Deep Learning and Its Association with Perimetry and OCT for the Study of Glaucoma. JCM 2021, 10, 3231. [Google Scholar] [CrossRef] [PubMed]
  96. Li, R.; Wang, X.; Wei, Y.; Fang, Y.; Tian, T.; Kang, L.; Li, M.; Cai, Y.; Pan, Y. Diagnostic Capability of Different Morphological Parameters for Primary Open-angle Glaucoma in the Chinese Population. BMC Ophthalmol. 2021, 21, 151. [Google Scholar] [CrossRef] [PubMed]
  97. Chen, X.; Xu, Y.; Yan, S.; Wong, D.W.K.; Wong, T.Y.; Liu, J. Automatic Feature Learning for Glaucoma Detection Based on Deep Learning. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer International Publishing: Cham, Switzerland, 2015; Volume 9351, pp. 669–677. [Google Scholar]
  98. Colonna, A.; Scarpa, F.; Ruggeri, A. Segmentation of Corneal Nerves Using a U-Net-Based Convolutional Neural Network. In Computational Pathology and Ophthalmic Medical Image Analysis; Stoyanov, D., Taylor, Z., Ciompi, F., Xu, Y., Martel, A., Maier-Hein, L., Rajpoot, N., van der Laak, J., Veta, M., McKenna, S., et al., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2018; Volume 11039, pp. 185–192. ISBN 978-3-030-00948-9. [Google Scholar]
  99. Edupuganti, V.G.; Chawla, A.; Kale, A. Automatic Optic Disk and Cup Segmentation of Fundus Images Using Deep Learning. In Proceedings of the 25th IEEE International Conference on Image Processing, Athens, Greece, 7–10 October 2018; pp. 2227–2231. [Google Scholar]
  100. Benzebouchi, N.; Azizi, N.; Bouziane, S. Glaucoma Diagnosis Using Cooperative Convolutional Neural Networks. Int. J. Adv. Electron. Comput. Sci. 2018, 5, 31–36. [Google Scholar]
  101. Ahn, J.M.; Kim, S.; Ahn, K.-S.; Cho, S.-H.; Lee, K.B.; Kim, U.S. A Deep Learning Model for the Detection of Both Advanced and Early Glaucoma Using Fundus Photography. PLoS ONE 2018, 13, e0207982. [Google Scholar] [CrossRef] [Green Version]
  102. Li, Z.; He, Y.; Keel, S.; Meng, W.; Chang, R.T.; He, M. Efficacy of a Deep Learning System for Detecting Glaucomatous Optic Neuropathy Based on Color Fundus Photographs. Ophthalmology 2018, 125, 1199–1206. [Google Scholar] [CrossRef] [Green Version]
  103. Al-Bander, B.; Al-Nuaimy, W.; Williams, B.M.; Zheng, Y. Multiscale Sequential Convolutional Neural Networks for Simultaneous Detection of Fovea and Optic Disc. Biomed. Signal Processing Control 2018, 40, 91–101. [Google Scholar] [CrossRef]
  104. Sevastopolsky, A.; Drapak, S.; Kiselev, K.; Snyder, B.M.; Keenan, J.D.; Georgievskaya, A. Stack-U-Net: Refinement Network for Improved Optic Disc and Cup Image Segmentation; Angelini, E.D., Landman, B.A., Eds.; SPIE Medical Imaging: San Diego, CA, USA, 2019; p. 78. [Google Scholar]
  105. Xu, Y.; Hu, M.; Liu, H.; Yang, H.; Wang, H.; Lu, S.; Liang, T.; Li, X.; Xu, M.; Li, L.; et al. A Hierarchical Deep Learning Approach with Transparency and Interpretability Based on Small Samples for Glaucoma Diagnosis. Npj Digit. Med. 2021, 4, 48. [Google Scholar] [CrossRef]
  106. Hemelings, R.; Elen, B.; Barbosa-Breda, J.; Blaschko, M.B.; De Boever, P.; Stalmans, I. Deep Learning on Fundus Images Detects Glaucoma beyond the Optic Disc. Sci. Rep. 2021, 11, 20313. [Google Scholar] [CrossRef]
  107. Kim, M.; Han, J.C.; Hyun, S.H.; Janssens, O.; Van Hoecke, S.; Kee, C.; De Neve, W. Medinoid: Computer-Aided Diagnosis and Localization of Glaucoma Using Deep Learning †. Appl. Sci. 2019, 9, 3064. [Google Scholar] [CrossRef] [Green Version]
  108. Yu, S.; Xiao, D.; Frost, S.; Kanagasingam, Y. Robust Optic Disc and Cup Segmentation with Deep Learning for Glaucoma Detection. Comput. Med. Imaging Graph. 2019, 74, 61–71. [Google Scholar] [CrossRef]
  109. Flores-Rodríguez, P.; Gili, P.; Martín-Ríos, M.D. Ophthalmic Features of Optic Disc Drusen. Ophthalmologica 2012, 228, 59–66. [Google Scholar] [CrossRef]
  110. Say, E.A.T.; Ferenczy, S.; Magrath, G.N.; Samara, W.A.; Khoo, C.T.L.; Shields, C.L. Image quality and artifacts on optical coherence tomography angiography: Comparison of Pathologic and Paired Fellow Eyes in 65 Patients With Unilateral Choroidal Melanoma Treated With Plaque Radiotherapy. Retina 2017, 37, 1660–1673. [Google Scholar] [CrossRef] [PubMed]
  111. Princy, S.B.; Duraisamy, S. Analysis of Retinal Images Using Detection of the Blood Vessels by Optic Disc and Optic Cup Segmentation Method. Int. Sci. J. Sci. Eng. Technol. 2016, 3, 33–40. [Google Scholar]
  112. Bock, R.; Meier, J.; Nyúl, L.G.; Hornegger, J.; Michelson, G. Glaucoma Risk Index:Automated Glaucoma Detection from Color Fundus Images. Med. Image Anal. 2010, 14, 471–481. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  113. Bhartiya, S.; Clement, C.; Dorairaj, S.; Kong, G.Y.X.; Albis-Donado, O. Clinical Decision Making in Glaucoma; Jaypee Brothers Medical Publishers: Guwahati, Assam, 2019; ISBN 93-5270-524-6. [Google Scholar]
  114. Saeed, A.Q.; Abdullah, S.N.H.S.; Che-Hamzah, J.; Ghani, A.T.A. Accuracy of Using Generative Adversarial Networks for Glaucoma Detection: Systematic Review and Bibliometric Analysis. J. Med. Internet Res. 2021, 23, e27414. [Google Scholar] [CrossRef]
  115. Soomro, T.; Shah, N.; Niestrata-Ortiz, M.; Yap, T.; Normando, E.M.; Cordeiro, M.F. Recent Advances in Imaging Technologies for Assessment of Retinal Diseases. Expert Rev. Med. Devices 2020, 17, 1095–1108. [Google Scholar] [CrossRef] [PubMed]
  116. Mohamed, N.A.; Zulkifley, M.A.; Zaki, W.M.D.W.; Hussain, A. An Automated Glaucoma Screening System Using Cup-to-Disc Ratio via Simple Linear Iterative Clustering Superpixel Approach. Biomed. Signal Processing Control 2019, 53, 101454. [Google Scholar] [CrossRef]
  117. Tan, O.; Liu, L.; You, Q.; Wang, J.; Jia, Y.; Huang, D. Focal Loss Analysis of Nerve Fiber Layer Reflectance for Glaucoma Diagnosis. Investig. Ophthalmol. Vis. Sci. 2020, 61, 5194. [Google Scholar] [CrossRef]
  118. MacIver, S.; MacDonald, D.; Prokopich, C.L. Screening, Diagnosis, and Management of Open Angle Glaucoma: An Evidence-Based Guideline for Canadian Optometrists. Can. J. Optom. 2017, 79, 5–71. [Google Scholar] [CrossRef]
  119. Claro, M.; Santos, L.; Silva, W.; Araújo, F.; Santana, A.D.A. Automatic Detection of Glaucoma Using Disc Optic Segmentation and Feature Extraction. In Proceedings of the 2015 41st Latin American Computing Conference, CLEI 2015, Arequipa, Peru, 19–23 October 2015. [Google Scholar] [CrossRef]
  120. Claro, D.L.; Melo, R.D.; Veras, S. Glaucoma Diagnosis Using Texture Attributes and Pre-Trained CNN's. Rev. Inf. Teórica e Aplicada (RITA) 2018, 25, 82–89. [Google Scholar] [CrossRef]
  121. Mittapalli, P.S.; Kande, G.B. Segmentation of Optic Disk and Optic Cup from Digital Fundus Images for the Assessment of Glaucoma. Biomed. Signal Processing Control 2016, 24, 34–46. [Google Scholar] [CrossRef]
  122. Morales, S.; Naranjo, V.; Angulo, J.; Alcañiz, M. Automatic detection of optic disc based on PCA and mathematical morphology. IEEE Trans. Med. Imaging 2015, 32, 786–796. [Google Scholar]
  123. Pradhepa, K.; Karkuzhali, S.; Manimegalai, D. Segmentation and Localization of Optic Disc Using Feature Match and Medial Axis Detection in Retinal Images. Biomed. Pharmacol. J. 2015, 8, 391–397. [Google Scholar] [CrossRef]
124. Lotankar, M.L.; Noronha, K.; Koti, J. Glaucoma Screening Using Digital Fundus Image through Optic Disc and Cup Segmentation. Int. J. Comput. Appl. 2015 (ISSN 0975-8887). [Google Scholar]
  125. Choudhary, K.; Tiwari, S. ANN Glaucoma Detection Using Cup-to-Disk Ratio and Neuroretinal Rim. Int. J. Comput. Appl. 2015, 111, 8–14. [Google Scholar] [CrossRef]
  126. Müller, H.; González, F.A. Glaucoma Diagnosis from Eye Fundus Images Based on Deep Morphometric Feature Estimation. In Proceedings of the Computational Pathology and Ophthalmic Medical Image Analysis: First International Workshop, COMPAY 2018, and 5th International Workshop, OMIA 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, 16–20 September 2018; Volume 11039, p. 319. [Google Scholar]
  127. Lima, A.; Maia, L.B.; dos Santos, P.T.C.; Junior, G.B.; de Almeida, J.D.; de Paiva, A.C. Evolving Convolutional Neural Networks for Glaucoma Diagnosis. In Proceedings of the Anais do XVIII Simpósio Brasileiro de Computação Aplicada à Saúde; SBC: Natal, Brazil, 2018. [Google Scholar]
  128. Almazroa, A.; Burman, R.; Raahemifar, K.; Lakshminarayanan, V. Optic Disc and Optic Cup Segmentation Methodologies for Glaucoma Image Detection: A Survey. J. Ophthalmol. 2015, 2015, 1–28. [Google Scholar] [CrossRef] [Green Version]
129. Chakravarty, A.; Sivaswamy, J. A Deep Learning Based Joint Segmentation and Classification Framework for Glaucoma Assessment in Retinal Color Fundus Images. arXiv 2018, arXiv:1808.01355. [Google Scholar]
  130. Lim, G.; Cheng, Y.; Hsu, W.; Lee, M.L. Integrated Optic Disc and Cup Segmentation with Deep Learning. In Proceedings of the 2015 IEEE 27th International Conference on Tools with Artificial Intelligence (ICTAI), Vietri sul Mare, Italy, 9–11 November 2015; pp. 162–169. [Google Scholar]
  131. Mitra, A.; Banerjee, P.S.; Roy, S.; Roy, S.; Setua, S.K. The Region of Interest Localization for Glaucoma Analysis from Retinal Fundus Image Using Deep Learning. Comput. Methods Programs Biomed. 2018, 165, 25–35. [Google Scholar] [CrossRef]
  132. Sengupta, S.; Singh, A.; Leopold, H.A.; Gulati, T.; Lakshminarayanan, V. Ophthalmic Diagnosis Using Deep Learning with Fundus Images—A Critical Review. Artif. Intell. Med. 2020, 102, 101758. [Google Scholar] [CrossRef]
  133. Shankaranarayana, S.M.; Ram, K.; Mitra, K.; Sivaprakasam, M. Fully Convolutional Networks for Monocular Retinal Depth Estimation and Optic Disc-Cup Segmentation. IEEE J. Biomed. Health Inform. 2019, 23, 1417–1426. [Google Scholar] [CrossRef]
  134. Gómez-Valverde, J.J.; Antón, A.; Fatti, G.; Liefers, B.; Herranz, A.; Santos, A.; Sánchez, C.I.; Ledesma-Carbayo, M.J. Automatic Glaucoma Classification Using Color Fundus Images Based on Convolutional Neural Networks and Transfer Learning. Biomed. Opt. Express 2019, 10, 892. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  135. Kabir, M.A. Retinal Blood Vessel Extraction Based on Adaptive Segmentation Algorithm. In Proceedings of the 2020 IEEE Region 10 Symposium (TENSYMP), Dhaka, Bangladesh, 5–7 June 2020; pp. 1576–1579. [Google Scholar]
  136. Martins, J.; Cardoso, J.S.; Soares, F. Offline Computer-Aided Diagnosis for Glaucoma Detection Using Fundus Images Targeted at Mobile Devices. Comput. Methods Programs Biomed. 2020, 192, 105341. [Google Scholar] [CrossRef]
  137. Bajwa, M.N.; Singh, G.A.P.; Neumeier, W.; Malik, M.I.; Dengel, A.; Ahmed, S. G1020: A Benchmark Retinal Fundus Image Dataset for Computer-Aided Glaucoma Detection. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–7. [Google Scholar]
  138. Krishnan, R.; Sekhar, V.; Sidharth, J.; Gautham, S.; Gopakumar, G. Glaucoma Detection from Retinal Fundus Images. In Proceedings of the 2020 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 28–30 July 2020; pp. 0628–0631. [Google Scholar]
  139. Tabassum, M.; Khan, T.M.; Arsalan, M.; Naqvi, S.S.; Ahmed, M.; Madni, H.A.; Mirza, J. CDED-Net: Joint Segmentation of Optic Disc and Optic Cup for Glaucoma Screening. IEEE Access 2020, 8, 102733–102747. [Google Scholar] [CrossRef]
  140. Sun, Z.; Zhou, Q.; Li, H.; Yang, L.; Wu, S.; Sui, R. Mutations in Crystallin Genes Result in Congenital Cataract Associated with Other Ocular Abnormalities. Mol. Vis. 2017, 23, 977–986. [Google Scholar]
  141. Phene, S.; Dunn, R.C.; Hammel, N.; Liu, Y.; Krause, J.; Kitade, N.; Schaekermann, M.; Sayres, R.; Wu, D.J.; Bora, A.; et al. Deep Learning and Glaucoma Specialists: The Relative Importance of Optic Disc Features to Predict Glaucoma Referral in Fundus Photographs. Ophthalmology 2019, 126, 1627–1639. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  142. Ting, D.S.W.; Cheung, C.Y.L.; Lim, G.; Tan, G.S.W.; Quang, N.D.; Gan, A.; Hamzah, H.; Garcia-Franco, R.; Yeo, I.Y.S.; Lee, S.Y.; et al. Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images from Multiethnic Populations with Diabetes. JAMA—J. Am. Med. Assoc. 2017, 318, 2211–2223. [Google Scholar] [CrossRef]
  143. Serener, A.; Serte, S. Transfer Learning for Early and Advanced Glaucoma Detection with Convolutional Neural Networks. In Proceedings of the 2019 Medical Technologies Congress (TIPTEKNO), Izmir, Turkey, 3–5 October 2019. [Google Scholar] [CrossRef]
  144. Norouzifard, M.; Nemati, A.; Gholamhosseini, H.; Klette, R.; Nouri-Mahdavi, K.; Yousefi, S. Automated Glaucoma Diagnosis Using Deep and Transfer Learning: Proposal of a System for Clinical Testing. In Proceedings of the 2018 International Conference on Image and Vision Computing New Zealand (IVCNZ), Auckland, New Zealand, 19–21 November 2018. [Google Scholar] [CrossRef]
  145. Zilly, J.; Buhmann, J.M.; Mahapatra, D. Glaucoma Detection Using Entropy Sampling and Ensemble Learning for Automatic Optic Cup and Disc Segmentation. Comput. Med. Imaging Graph. 2017, 55, 28–41. [Google Scholar] [CrossRef]
  146. Panda, R.; Puhan, N.B.; Rao, A.; Padhy, D.; Panda, G. Recurrent Neural Network Based Retinal Nerve Fiber Layer Defect Detection in Early Glaucoma. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, VIC, Australia, 18–21 April 2017; pp. 692–695. [Google Scholar]
  147. Septiarini, A.; Harjoko, A.; Pulungan, R.; Ekantini, R. Automated Detection of Retinal Nerve Fiber Layer by Texture-Based Analysis for Glaucoma Evaluation. Healthc. Inform. Res. 2018, 24, 335. [Google Scholar] [CrossRef]
  148. Meng, Q.; Hashimoto, Y.; Satoh, S. How to Extract More Information With Less Burden: Fundus Image Classification and Retinal Disease Localization With Ophthalmologist Intervention. IEEE J. Biomed. Health Inform. 2020, 24, 3351–3361. [Google Scholar] [CrossRef]
  149. Li, F.; Song, D.; Chen, H.; Xiong, J.; Li, X.; Zhong, H.; Tang, G.; Fan, S.; Lam, D.S.C.; Pan, W.; et al. Development and Clinical Deployment of a Smartphone-Based Visual Field Deep Learning System for Glaucoma Detection. NPJ Digit. Med. 2020, 3, 123. [Google Scholar] [CrossRef] [PubMed]
150. Li, P.; Geng, L.; Zhu, W.; Shi, F.; Chen, X. Automatic Angle-Closure Glaucoma Screening Based on the Localization of Scleral Spur in Anterior Segment OCT. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, April 2020; pp. 1387–1390. [Google Scholar]
151. Gupta, K.; Thakur, A.; Goldbaum, M.; Yousefi, S. Glaucoma Precognition: Recognizing Preclinical Visual Functional Signs of Glaucoma. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 4393–4401. [Google Scholar]
  152. Teikari, P.; Najjar, R.P.; Schmetterer, L.; Milea, D. Embedded Deep Learning in Ophthalmology: Making Ophthalmic Imaging Smarter. Ophthalmol. Eye Dis. 2019, 11, 251584141982717. [Google Scholar] [CrossRef] [PubMed]
  153. Plötz, T.; Roth, S. Neural Nearest Neighbors Networks. arXiv 2018, arXiv:1810.12575. [Google Scholar]
  154. Anderson, R.L.; Ramos Cadena, M. de los A.; Schuman, J.S. Glaucoma Diagnosis. Ophthalmol. Glaucoma 2018, 1, 3–14. [Google Scholar] [CrossRef] [PubMed]
155. Goodfellow, G.W.; Trachimowicz, R.; Steele, G. Patient Literacy Levels within an Inner-City Optometry Clinic. Optom. J. Am. Optom. Assoc. 2008, 79, 98–103. [Google Scholar] [CrossRef]
  156. Miller, S.E.; Thapa, S.; Robin, A.L.; Niziol, L.M.; Ramulu, P.Y.; Woodward, M.A.; Paudyal, I.; Pitha, I.; Kim, T.N.; Newman-Casey, P.A. Glaucoma Screening in Nepal: Cup-to-Disc Estimate With Standard Mydriatic Fundus Camera Compared to Portable Nonmydriatic Camera. Am. J. Ophthalmol. 2017, 182, 99–106. [Google Scholar] [CrossRef] [PubMed]
Table 1. Summary of the characteristics of the public databases.

Database | Glaucoma/Normal | Optic Disc/Cup Annotations | Total
ACRIMA | 396/309 | No | 705
DRIONS-DB | - | No | 110
DRISHTI-GS | 70/31 | Both | 101
HRF | 27/18 | Both | 45
ONHSD | - | Optic disc only | 99
ORIGA | 168/482 | No | 650
REFUGE | 40/360 | Both | 400
RIM-One-r1 | 194/261 | Both | 455
RIM-One-r3 | 74/85 | Both | 159
Sjchoi86-HRF | 101/300 | No | 601
Table 2. DL methods for glaucoma screening.

Study | Methods | Databases | Results
[75] | DNN | ORIGA, RIM-One-r3, DRISHTI-GS | AUC: 0.94
[76] | DeepLabv3+ and MobileNet | RIM-ONE, ORIGA, DRISHTI-GS, ACRIMA | Accuracy: 97.37% (RIM-ONE), 90.00% (ORIGA), 86.84% (DRISHTI-GS), and 99.53% (ACRIMA); AUC: 100% (RIM-ONE), 92.06% (ORIGA), 91.67% (DRISHTI-GS), and 99.98% (ACRIMA)
[77] | U-Net | DRISHTI-GS, REFUGE, RIM-One-r3 | AUC: 94%
[78] | MFPPNet | Direct-CSU, ORIGA | AUC: 90.5%
[79] | Fuzzy broad learning | RIM-One-r3, SCRID | AUC: 90.6% (RIM-One-r3) and 92.3% (SCRID)
[80] | U-Net | N/D | DICE: 89.6%; Precision: 95.12%
[81] | DNN | 5716 images | AUC: 94%
[82] | DNN | 933 healthy and 754 glaucoma images | Sensitivity: 73%; Specificity: 83%
[83] | M-Net | REFUGE | DICE: 94.26% (optic disc) and 85.65% (optic cup); AUC: 96.37%; Sensitivity: 90%
[84] | DL-ML hybrid model | HRF | Accuracy: 100%; Sensitivity: 100%
[85] | GlaucoNet | DRISHTI-GS, RIM-ONE, ORIGA | Overlapping score: 91.06% (DRISHTI-GS), 89.72% (RIM-ONE), and 88.35% (ORIGA) for optic disc segmentation; 82.29% (DRISHTI-GS), 74.01% (RIM-ONE), and 81.06% (ORIGA) for optic cup segmentation
[86] | CNN | 282 images (70 glaucoma, 212 normal) | Sensitivity: 95.48%
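Most of the segmentation results reported in Tables 2–4 are DICE and IoU overlap scores between a predicted optic disc or cup mask and the expert annotation. As a point of reference for these figures, the following minimal sketch (in Python with NumPy; the function name dice_and_iou and the toy masks are illustrative, not taken from any of the reviewed studies) shows how both scores are computed from a pair of binary masks:

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """DICE coefficient and IoU (Jaccard index) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    # DICE = 2|A ∩ B| / (|A| + |B|);  IoU = |A ∩ B| / |A ∪ B|
    dice = 2.0 * intersection / (pred.sum() + truth.sum())
    iou = intersection / np.logical_or(pred, truth).sum()
    return float(dice), float(iou)

# Toy example: a predicted cup mask shifted by one row from the annotation.
truth = np.zeros((5, 5), dtype=bool); truth[1:4, 1:4] = True
pred = np.zeros((5, 5), dtype=bool); pred[2:5, 1:4] = True
print(dice_and_iou(pred, truth))  # (0.666..., 0.5)
```

Since IoU is never larger than DICE for the same pair of masks, the IoU figures in the tables below penalize the same boundary disagreement more heavily than the corresponding DICE figures.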
Table 3. Classification of glaucoma by DL methods for the segmentation of the outer limits of the optic disc.

Study | Architecture | Databases | Results
[87] | CNN | HAPIEE, PAMDI | Accuracy: 86.5% (HAPIEE) and 97.8% (PAMDI)
[88] | CNN | DRIONS-DB, RIM-ONE | Accuracy: 97.1% (DRIONS-DB) and 95.9% (RIM-ONE)
[90] | CNN | N/A | Accuracy: 95.6%
[91] | CNN | N/A | Accuracy: 92.7%
[92] | Faster R-CNN | ORIGA | Accuracy: 93.1%
[72] | ResNet | SCES, SINDI | AUC: 91.8% (SCES) and 81.8% (SINDI)
[93] | U-Net | DRIONS-DB, DRISHTI-GS, RIM-ONE | IoU: 96.0%; DICE: 98.0%
[94] | GANs | 86,926 images | AUC: 90.17%
[95] | DNN | 477 normal eyes, 235 confirmed, and 98 suspected glaucoma cases | AUC: 99.5% (GDF) and 93.5% (TCV)
[96] | DNN | 200 eyes (77 healthy, 123 primary open-angle glaucoma) | AUC: 94.6% (BMO-MRW) and 92.1% (BMO-MRA)
Table 4. Classification of glaucoma by DL methods for segmenting the outer limits of the optic disc and cup.

Study | Architecture | Databases | Results
[97] | CNN with six layers | ORIGA, SCES | AUC: 83.1% (ORIGA) and 88.7% (SCES)
[89] | U-Net | DRIONS-DB, RIM-ONE-r3, DRISHTI-GS | IoU: 89% (DRIONS-DB), 89% (RIM-ONE-r3), and 75% (DRISHTI-GS); DICE: 94% (DRIONS-DB), 95% (RIM-ONE-r3), and 85% (DRISHTI-GS)
[34] | GoogleNet | HRF, RIM-ONE-r1, RIM-ONE-r2, RIM-ONE-r3 | Accuracy: 90% (HRF), 94.2% (RIM-ONE-r1), 86.2% (RIM-ONE-r2), and 86.4% (RIM-ONE-r3)
[98] | U-Net | REFUGE | Accuracy: 93.4%
[99] | CNN with one layer (CNN1); CNN with two layers (CNN2) | RIM-ONE | Accuracy: 95.6% (CNN1) and 96.9% (CNN2); AUC: 98% (CNN1) and 97.8% (CNN2)
[100] | Transfer learning: GoogleNet, Inception-V3 | 1542 images | Accuracy: 84.5%; AUC: 93%
[101] | CNN with nineteen layers | 48,116 images | Accuracy: 95.6%; AUC: 98.6%
[102] | CNN with eighteen layers | 1426 images | Accuracy: 98.1%
[32] | CNN with nineteen layers | ORIGA, DRIONS-DB, ONHSD, RIM-ONE | Accuracy: 99.8%; DICE: 87.2%
[103] | Stack-U-Net | RIM-ONE-r3 | IoU: 92%; DICE: 96%
[104] | DC-GAN | MESSIDOR, ONHSD, DRIVE, STARE, CHASE-DB, DRIONS-DB, SASTRA | AUC: 90.2%
[105] | HDLS | 1791 fundus photographs | Accuracy: 53% (optic cup), 12% (optic disc), and 16% (retinal nerve fiber layer defects)
[106] | DNN | 2643 images | AUC: 94%
[107] | CNN with Grad-CAM | SMC | Accuracy: 96%; Sensitivity: 96%; Specificity: 100%
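Clinically, the disc and cup contours produced by the methods in Tables 3 and 4 are typically reduced to the cup-to-disc ratio used in screening protocols [116,125,156]. As an illustration only (the helper names vertical_extent and vertical_cdr are hypothetical, and the sketch assumes two aligned binary masks from the same fundus image), the vertical ratio can be derived from the segmentation output as follows:

```python
import numpy as np

def vertical_extent(mask: np.ndarray) -> int:
    """Height in pixels of the smallest row band containing the mask."""
    rows = np.flatnonzero(mask.any(axis=1))
    return int(rows.max() - rows.min() + 1) if rows.size else 0

def vertical_cdr(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from binary segmentation masks.

    Screening protocols often treat values above ~0.6 as glaucoma-suspect,
    although thresholds vary across populations.
    """
    disc_h = vertical_extent(disc_mask.astype(bool))
    cup_h = vertical_extent(cup_mask.astype(bool))
    return cup_h / disc_h if disc_h else float("nan")

# Synthetic example: a cup occupying the central third of the disc.
disc = np.zeros((9, 9), dtype=bool); disc[1:8, 1:8] = True  # height 7
cup = np.zeros((9, 9), dtype=bool); cup[3:6, 3:6] = True    # height 3
print(vertical_cdr(disc, cup))  # ≈ 0.43
```

Because the ratio divides one extent by the other, its error is dominated by the weaker segmentation; note in Tables 2 and 4 that cup scores are consistently lower than disc scores (e.g., [83,85]), so the cup boundary is usually the limiting factor for ratio-based screening.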