Article

Automated Detection and Biomarker Identification Associated with the Structural and Functional Progression of Glaucoma on Longitudinal Color Fundus Images

1 Doheny Image Analysis Laboratory, Doheny Eye Institute, Pasadena, CA 91103, USA
2 Doheny and Stein Eye Institutes, Department of Ophthalmology, David Geffen School of Medicine, University of California, Los Angeles, CA 90095, USA
3 Department of Ophthalmology and Visual Sciences, Medical College of Wisconsin, Milwaukee, WI 53226, USA
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(3), 1627; https://doi.org/10.3390/app15031627
Submission received: 15 December 2024 / Revised: 29 January 2025 / Accepted: 5 February 2025 / Published: 6 February 2025
(This article belongs to the Special Issue Technological Advances in Ocular Diseases and Oculomics)

Abstract

Diagnosing the progression of primary open-angle glaucoma (POAG) from structural imaging such as color fundus photographs (CFPs) is challenging because the early biomarkers commonly identified by clinicians are limited in number and optic nerve heads (ONHs) vary considerably between individuals. Moreover, although visual function is the primary concern for glaucoma patients, and the ability to infer future visual outcomes from imaging would enable earlier intervention, no tool currently exists for this purpose. To detect glaucoma progression from ocular hypertension both structurally and functionally, and to identify potential objective early biomarkers associated with progression, we developed and evaluated deep convolutional long short-term memory (CNN-LSTM) neural network models using longitudinal CFPs from the Ocular Hypertension Treatment Study (OHTS). Patients were categorized into four diagnostic groups for model input: healthy, POAG with optic disc changes, POAG with visual field (VF) changes, and POAG with both optic disc and VF changes. Gradient-weighted class activation mapping (Grad-CAM) was employed for the post hoc visualization of image features, which may be associated with objective POAG biomarkers (rather than the biomarkers determined by clinicians). The CNN-LSTM models for the detection of POAG progression achieved promising performance for both the structural and functional endpoints, with an area under the curve (AUC) of 0.894 for the disc-only group, 0.911 for the VF-only group, and 0.939 for the disc-and-VF group. The model demonstrated high precision (0.984) and F1-score (0.963) in the group with both changes (disc + VF). Our preliminary investigation of early POAG biomarkers with Grad-CAM feature visualization suggested that the retinal vasculature could serve as an early and objective biomarker for POAG progression, complementing the traditionally used optic disc features and improving clinical workflows.

1. Introduction

Glaucoma, a group of optic neuropathies, is the leading cause of irreversible blindness worldwide, accounting for over 14% of the total blind population, and is predicted to affect over 110 million people by 2040 [1,2]. Primary open-angle glaucoma (POAG) is the most common form of glaucoma, characterized by progressive damage to the optic nerve head (ONH) and retinal nerve fiber layer (RNFL), resulting in the gradual loss of peripheral vision or visual field (VF), followed by the loss of central vision [3]. Ocular hypertension (OHT) and POAG are interconnected conditions, with OHT often considered a precursor or early stage of POAG [4,5].
In 1994, the Department of Ophthalmology and Visual Sciences at Washington University designed the Ocular Hypertension Treatment Study (OHTS), a prospective, multicenter clinical trial evaluating whether a 20% reduction in intraocular pressure with ocular medications delays the visual function loss and ONH damage that mark the onset of POAG from OHT [6]. The primary endpoint of the OHTS was the development of POAG, marked by clinically defined structural changes in the optic disc at the ONH, functional changes in the VF, or both [7]. These changes were initially assessed manually by the Optic Disc Reading Center (ODRC) and the Visual Field Reading Center (VFRC), respectively. The ODRC and VFRC consisted of panels of masked reviewers who monitored optic disc and visual field abnormalities. If two consecutive CFPs or three consecutive VF exams were determined to have changed from the baseline tests, they were sent to the Endpoint Committee, a panel of three glaucoma specialists who reviewed a patient’s ONH photographs or VFs to determine whether the changes were attributable to POAG or to other diseases.
The manual assessment of POAG progression is time-consuming, costly, and prone to human error [8,9]. The U.S. healthcare system spends around USD 2.5 billion annually on glaucoma treatment [10]. There is also an immense shortage of ophthalmologists, with major workforce shortages predicted worldwide in the upcoming decades [11,12,13], which undermines positive visual health outcomes [14,15,16,17,18]. Additionally, significant disparities exist in healthcare access and outcomes for glaucoma, particularly among racial and ethnic minorities. Black, Asian, and Hispanic Americans face a higher incidence of glaucoma than non-Hispanic White Americans [19,20,21,22] and encounter barriers to timely diagnosis and treatment [23,24,25,26,27]. Socioeconomic inequalities further limit access to quality care [10,28,29], leading to worse outcomes for these populations [30,31]. Artificial intelligence (AI) that automatically detects POAG-related changes could therefore help keep pace with the growing demand for glaucoma detection and management and address this unmet need in underserved populations.
Numerous studies have explored AI-based deep learning models for automated glaucoma classification. For example, Velpula et al. utilized five convolutional neural networks (CNNs) and classifier fusion to detect early cases of glaucoma [32]. Kashyap et al. achieved approximately 96.9% accuracy in predicting glaucoma using a U-Net model [33]. Hemelings et al. reported a 0.976 area under the receiver operating characteristic curve (AUC) for their generalizable glaucoma classification model [34]. Fan et al., using a CNN on the OHTS dataset, achieved an AUC of 0.880 for glaucoma classification on fundus photos of patients with optic disc or VF changes [35].
While extensive studies in the literature have explored AI-based models for POAG classification, several limitations exist. These models frequently examine only one fundus image per patient at a time, thus failing to account for the long-term development or progression of glaucoma across subsequent follow-up images. Dynamic longitudinal information may reflect the inherent features of POAG changes from early to late stages and may reveal early biomarkers of its development or progression. Hence, in this project, we report an approach that differs from the previous literature: we developed a convolutional long short-term memory neural network (CNN-LSTM) architecture that uses both spatial and temporal image features for the automated detection of POAG progression from longitudinal fundus stereophotographs collected during the OHTS. Our model was derived from a model our lab previously developed to detect the longitudinal progression of Stargardt atrophy and geographic atrophy [36]. A CNN-LSTM architecture was chosen for its ability to capture patterns of POAG progression over several longitudinal timepoints, as noted in the previous literature [37,38].
In recent years, the implementation of attention techniques in neural networks has become crucial for developing transparent models and visualizing the focus of their attention at each layer. Broadly, attention techniques for CNNs fall into two categories: trainable and non-trainable. Trainable methods allow CNNs to focus on key features of images during training and testing, similar to the way human eyes focus on specific elements in a scene [39], and have demonstrated improvements in accuracy and interpretability in biomedical image analysis [40,41]. Non-trainable methods include post hoc attention techniques, which generate heatmaps showing which features of the input data most influence the model’s decisions [42,43] but do not affect the training process. As a preliminary exploration of glaucoma-associated image features, we applied a post hoc gradient-weighted class activation mapping (Grad-CAM) [44] technique for explainability after developing a CNN-LSTM model for automatically detecting the progression of POAG from longitudinal CFPs. Our goal is to advance glaucoma classification beyond subjective human interpretation by leveraging explainable AI, enabling a more objective assessment of POAG.
This study aims to develop three novel CNN-LSTM models that leverage both spatial and temporal image features from longitudinal CFPs to independently detect the progression of POAG based on structural changes at the optic disc only, functional changes in the VF only, or both structural and functional changes, which were determined by the OHTS clinical endpoints.

2. Materials and Methods

2.1. Overview

Our approach diverges from the previous literature [9,32,33,34,35] by employing a CNN-LSTM architecture, enabling the analysis of coherent spatial and temporal features across a patient’s longitudinal fundus images. Essentially, the model analyzes not only the features found in baseline CFPs (e.g., the optic disc, vasculature) from a single patient visit but also how these structures change over the course of longitudinal follow-up images. We developed three CNN-LSTM models: one trained on fundus photos where a positive glaucoma label reflects both structural disc changes at the ONH and functional VF defects (the “Both” model), one trained on fundus photos where a positive glaucoma label reflects only structural disc changes (the “Disc” model), and one trained on fundus photos where a positive glaucoma label reflects only functional VF defects (the “VF” model).

2.2. Data Collection

The dataset used in this study comprised 35,871 stereoscopic ONH photographs from 1551 unique patients with ocular hypertension, taken during the OHTS between 1994 and 2009. Participants underwent Humphrey 30-2 VF tests twice annually and had ONH photographs taken once a year.
At the beginning of this study, all the participants’ baseline ONH photographs and VFs were judged to be normal appearing by the ODRC and the VFRC, respectively. After each annual visit, a patient’s ONH photograph was reviewed by the ODRC and compared to their baseline image to identify any structural changes due to glaucoma. If two consecutive photographs were determined to have changes from the baseline, the patient’s case was sent to the Coordinating Center. The Coordinating Center then sent all the patient’s ONH photographs and VFs to date to the OHTS Endpoint Committee. A similar procedure was conducted with VFs, where three consecutive abnormal tests warranted a case review by the committee. The Endpoint Committee consisted of three glaucoma specialists who independently compared a patient’s follow-up ONH photographs and VF exams to their baseline data, along with other relevant clinical information and medical history, to determine if the changes were clinically significant and due to POAG. If the Endpoint Committee determined that a patient had developed progression from OHT to POAG, then the specific clinical endpoint classification (i.e., structural, functional, or both) was recorded.

2.3. Data Preparation

The OHTS was conducted at 22 different sites with various fundus cameras, resulting in differences in image quality. The collected images were stereophotographs, capturing the retina from two slightly different angles. Many images contained large black borders, with approximately one-third displaying both stereophotographs side-by-side on the same image slide (simultaneous stereophotographs). To ensure uniformity in the model inputs, we developed a Python algorithm to crop out black borders; in cases where both stereophotographs were displayed on the same image slide, only the right image was retained.
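The exact cropping implementation is not given in the paper; the sketch below shows one way to perform this step with OpenCV and NumPy, where the darkness threshold and the width-to-height test for detecting simultaneous stereophotographs are illustrative assumptions:

```python
import cv2
import numpy as np

def crop_fundus(path, dark_thresh=10, out_size=(128, 128)):
    """Trim black borders; for simultaneous stereo slides, keep the right frame."""
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ys, xs = np.where(gray > dark_thresh)          # coordinates of non-black pixels
    img = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h, w = img.shape[:2]
    if w > 1.5 * h:                                # heuristic: side-by-side stereo pair
        img = img[:, w // 2:]                      # retain only the right image
    return cv2.resize(img, out_size)
```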
The number of longitudinal images (one per visit) per patient varied, ranging from 3 to 15, with most patients having 10 longitudinal images per eye from 10 different longitudinal visits. Our model input size was standardized to 15 images to ensure that no images were discarded. For patients with fewer than 15 images, we included empty placeholders to maintain a consistent model input. These empty patches were defined as zero images (completely black images). Each image was also labeled with a corresponding ground truth for the absence or presence of POAG, as determined by the clinical endpoint criteria per the study protocol. Each empty patch was given the label of the most recent real image.
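A minimal sketch of this padding scheme, assuming 128 × 128 RGB inputs (per Figure 1) and NumPy arrays:

```python
import numpy as np

SEQ_LEN, H, W, C = 15, 128, 128, 3  # model input: 15 images of 128 x 128 x 3

def pad_patient_sequence(images, labels):
    """Pad a visit series to SEQ_LEN with zero images; padded slots reuse the last label."""
    images, labels = list(images), list(labels)
    while len(images) < SEQ_LEN:
        images.append(np.zeros((H, W, C), dtype=np.float32))  # black placeholder image
        labels.append(labels[-1])  # carry the most recent image's label forward
    return np.stack(images), np.asarray(labels, dtype=np.float32)
```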
We categorized patients into three subgroups based on their diagnosis method by the Endpoint Committee: the “Disc” group diagnosed by only CFPs for structural POAG, the “VF” group diagnosed by only VF tests for functional POAG, and the “Both” group diagnosed by both CFPs and VF tests for structural and functional POAG, respectively. Our cohort was randomly split into training (80%), validation (10%), and testing sets (10%) at the participant level to ensure that the training and test sets were independent.
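The paper does not state how the participant-level split was implemented; one common approach, sketched here with scikit-learn’s GroupShuffleSplit (variable names are hypothetical), is:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

def split_by_patient(n_sequences, patient_ids, seed=0):
    """80/10/10 split that keeps all sequences from one patient in a single set."""
    X = np.arange(n_sequences)
    outer = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=seed)
    train_idx, rest_idx = next(outer.split(X, groups=patient_ids))
    # Split the remaining 20% in half: 10% validation, 10% test.
    inner = GroupShuffleSplit(n_splits=1, test_size=0.5, random_state=seed)
    rest_groups = np.asarray(patient_ids)[rest_idx]
    val_rel, test_rel = next(inner.split(rest_idx, groups=rest_groups))
    return train_idx, rest_idx[val_rel], rest_idx[test_rel]
```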

2.4. Data Augmentation

The OHTS exhibited a significant class imbalance (healthy versus POAG patients), with only 337 patients (approximately 20.6%) diagnosed with POAG by the end of the study. Deep learning models typically differentiate classes better when the class ratio is close to 50:50. To address this, we oversampled the minority class (i.e., patients who developed POAG) in the training and validation sets. To ensure diversity in these sets, we applied data augmentation techniques, including horizontal and vertical translations (up to 20% of the image dimensions), rotations (up to 20 degrees), and random adjustments to brightness, contrast, hue, and saturation. This ensured that the model would not memorize the resampled images but rather develop a general understanding of how POAG appears on fundus photos. Each augmented image retained the same label as the original image.
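A sketch of such an augmentation pipeline in TensorFlow/Keras follows; the translation and rotation ranges match the text, while the photometric jitter ranges are illustrative assumptions, as the paper does not state them:

```python
import tensorflow as tf

# Geometric jitter: up to 20% translation and up to 20 degrees of rotation.
translate = tf.keras.layers.RandomTranslation(0.2, 0.2)
rotate = tf.keras.layers.RandomRotation(20 / 360)  # factor is a fraction of 2*pi

def augment(image):
    """Randomly jitter one image (float32, values in [0, 1])."""
    image = tf.image.random_brightness(image, max_delta=0.2)
    image = tf.image.random_contrast(image, 0.8, 1.2)
    image = tf.image.random_hue(image, max_delta=0.05)
    image = tf.image.random_saturation(image, 0.8, 1.2)
    image = translate(image, training=True)
    image = rotate(image, training=True)
    return tf.clip_by_value(image, 0.0, 1.0)
```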

2.5. Model Architecture

Our model architecture was a CNN-LSTM (Figure 1).
We used MobileNetV2 [45], pretrained on the ImageNet database [46] and fine-tuned, to extract salient features from the images. We then flattened the CNN feature extractor’s output before inputting it into two LSTM layers with 16 and 8 units, respectively, followed by a fully connected layer that outputs 15 labels, 1 per input longitudinal image. The flattening was performed at the image level, so features from each image were preserved. The CNN-LSTM training and testing were carried out on an NVIDIA GeForce RTX 3090 GPU.
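A minimal Keras sketch consistent with this description and Figure 1; the optimizer, loss, and exact wiring of the final fully connected layer are not specified in the text and are assumptions here:

```python
import tensorflow as tf
from tensorflow.keras import layers

SEQ_LEN, H, W, C = 15, 128, 128, 3

def build_cnn_lstm():
    """Per-image MobileNetV2 features -> LSTM(16) -> LSTM(8) -> 15 sigmoid labels."""
    base = tf.keras.applications.MobileNetV2(
        input_shape=(H, W, C), include_top=False, weights="imagenet")
    base.trainable = True  # fine-tuned, per the text

    inputs = layers.Input(shape=(SEQ_LEN, H, W, C))
    x = layers.TimeDistributed(base)(inputs)         # CNN applied to each visit's image
    x = layers.TimeDistributed(layers.Flatten())(x)  # flatten at the image level
    x = layers.LSTM(16, return_sequences=True)(x)
    x = layers.LSTM(8, return_sequences=True)(x)
    x = layers.BatchNormalization()(x)               # batch normalization, per Figure 1
    x = layers.Flatten()(x)
    outputs = layers.Dense(SEQ_LEN, activation="sigmoid")(x)  # one label per image
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    return model
```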

2.6. Performance Evaluation

The trained CNN-LSTM was evaluated on a hold-out test set to which no data augmentation operations were applied. The hold-out test set also retained the original class imbalance, with only approximately 20% of images having positive labels. We evaluated our model’s performance using precision, recall, F1-score, accuracy, and the AUC by comparing the model’s predicted label to the ground truth label of each CFP. The metrics were defined as follows: precision (true positives/[true positives + false positives]), recall (true positives/[true positives + false negatives]), F1-score (2 × precision × recall/[precision + recall]), and accuracy ([true positives + true negatives]/total number of entries); the AUC was calculated as the area under the receiver operating characteristic curve using the scikit-learn Python module [47].
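These metrics can be computed with scikit-learn as sketched below; the 0.5 decision threshold is an assumption, as the paper does not state the operating point used for the thresholded metrics:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

def evaluate(y_true, y_prob, threshold=0.5):
    """Per-image metrics, as reported in Table 1."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    return {
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "accuracy": accuracy_score(y_true, y_pred),
        "auc": roc_auc_score(y_true, y_prob),  # AUC uses probabilities, not labels
    }
```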
Additionally, Grad-CAM [44] was used to visualize and explain the model’s learning process. Grad-CAM uses the gradients flowing into a specific layer of a model to generate a heatmap highlighting the areas of an input most important for predicting the output. The layers used to generate the heatmaps were the “expand_conv_depthwise” and “out_relu” layers, which correspond to the fifth and third-to-last layers, respectively, of the CNN model. All layers of MobileNetV2 are listed in Table S1 in the Supplementary Materials.
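A generic Keras Grad-CAM sketch for the MobileNetV2 feature extractor follows; for illustration it differentiates the mean feature response, whereas in the full pipeline the gradient target would be the CNN-LSTM’s POAG output for the corresponding image:

```python
import tensorflow as tf

def grad_cam(cnn, image, layer_name="out_relu"):
    """Grad-CAM heatmap for one image at the named layer of the CNN."""
    layer = cnn.get_layer(layer_name)
    grad_model = tf.keras.Model(cnn.input, [layer.output, cnn.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])  # add a batch dimension
        score = tf.reduce_mean(preds)                   # scalar target to differentiate
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))        # global-average-pooled gradients
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)[0]
    cam = tf.nn.relu(cam)                               # keep positive contributions only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # normalize to [0, 1]
```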

2.7. Statistical Analysis

We performed bootstrapping to generate multiple resampled datasets, allowing us to estimate the variability and confidence intervals of our model’s performance metrics. t-tests and the resulting p-values were then used to evaluate the significance of the differences in performance between the three CNN-LSTM models. Specifically, we compared the AUC across the models to determine whether the observed differences were statistically significant or attributable to random variation.
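A sketch of this procedure; the number of resamples and the use of an independent-samples t-test on the bootstrap AUC distributions are assumptions, as the paper does not give these details:

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

def bootstrap_auc(y_true, y_prob, n_boot=1000, seed=0):
    """Resample the test set with replacement and collect an AUC distribution."""
    rng = np.random.default_rng(seed)
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    aucs = []
    while len(aucs) < n_boot:
        idx = rng.integers(0, len(y_true), size=len(y_true))
        if y_true[idx].min() == y_true[idx].max():  # resample needs both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_prob[idx]))
    return np.asarray(aucs)

# Illustrative comparison of two models' bootstrap AUC distributions:
# t_stat, p_value = stats.ttest_ind(bootstrap_auc(y_a, p_a), bootstrap_auc(y_b, p_b))
```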

2.8. Code Availability

The code used for this paper is available upon request to the corresponding author.

3. Results

Table 1 shows the performance metrics for the “Both”, “VF”, and “Disc” models in detecting the progression of POAG from longitudinal color fundus photos on the hold-out test set. A plot of each model’s achieved AUC is presented in Figure 2.
The “Both” model achieved the highest AUC (0.939, p = 0.0048), as well as the highest precision, recall, and F1-score (0.984, 0.895, and 0.963, respectively). Of note, the “VF” model achieved the highest accuracy (0.959), followed by the “Both” model (0.942) and the “Disc” model (0.911). The “VF” model also demonstrated a higher AUC (0.911 vs. 0.894; p = 0.118), precision (0.813 vs. 0.646), and F1-score (0.804 vs. 0.743) than the “Disc” model. However, the “Disc” model outperformed the “VF” model in terms of recall (0.875 vs. 0.565).
Figure 3 shows the results of the heatmaps generated by Grad-CAM.
Two Grad-CAMs from each of the three models on follow-up fundus images are provided. Areas of greater importance are highlighted in red, while areas of less importance are highlighted in purple. In the earlier layers (e.g., the “expand_conv_depthwise” layer), all models predominantly focused on retinal vasculature. In later layers (e.g., the “out_relu” layer), the models’ attention shifted towards the optic nerve head (ONH), consistent with its well-documented significance in glaucoma diagnosis. This progression of focus from vascular features to the ONH suggests that the models are effectively capturing both early and advanced biomarkers of glaucoma.

4. Discussion

Our study demonstrates that CNN-LSTM models are highly effective in detecting the progression of POAG using longitudinal CFPs. The model that integrated both structural and functional clinical endpoints significantly outperformed the models that relied solely on either structural or functional data. In the generated heatmaps of the features the models focused on most when making predictions, all models favored vascular structures in the earlier layers of the feature extractor and disc regions by the final layer. Our results underscore the challenge of detecting POAG progression based on optic disc changes alone, as the “Disc” model achieved a lower AUC than the “VF” model, which relied on visual field changes. This difference aligns with clinical observations that disc changes are more subtle and harder to recognize than VF changes, which are more readily detectable when tests are reliable. Our study is unique in its application of a longitudinal dataset, setting it apart from previous research using the OHTS dataset [35,48]. Unlike these studies, which focused on static images, our model leverages the dynamic information across different longitudinal visits provided by sequential CFPs, offering a more nuanced understanding of POAG progression.
The “Both” model outperformed the other models in terms of AUC, precision, recall, and F1-score in detecting POAG progression. This advantage is likely due to the occurrence of both structural and functional features/biomarkers associated with the progression of POAG in image training data, allowing the model to detect a more comprehensive set of indicators for POAG development. In contrast, the “Disc” model, which only had structural changes but preserved visual function, and the “VF” model, which only focused on functional changes, achieved lower AUCs. Our findings, that incorporating both structural and functional changes in glaucoma improves model performance, support the literature. Mursch-Edlmayr et al. found that their neural network performed significantly better in detecting glaucoma progression when trained on fused structural and functional glaucoma changes compared to when trained only on functional changes [49]. Bowd et al. found that combining functional and structural information significantly improves model performance in detecting glaucoma compared to only using structural information [50]. Several other publications exist investigating the improvement in AI models when combining structural and functional inputs in glaucoma detection tasks [51,52,53,54]. However, these publications only use baseline patient visits and thus fail to account for the long-term progression of glaucoma and dynamic longitudinal information found in subsequent follow-up images.
A key benefit of utilizing longitudinal CFPs is the potential to uncover early biomarkers of POAG, leading to earlier and more accurate diagnoses in clinical settings. By integrating dynamic information from sequential imaging, AI-based models can detect subtle changes over time, which might be missed in single snapshots [9]. This capability is particularly valuable in addressing the growing shortage of ophthalmologists in the U.S. and globally [11,12,13]. These tools can also help standardize care across diverse populations, ultimately improving access to quality eye care and mitigating disparities in glaucoma diagnosis and treatment [10,27,28,29]. By analyzing changes over time, our CNN-LSTM architecture, which leverages longitudinal data, demonstrated a superior ability to capture the dynamic progression of POAG compared to static image models that are trained on single patient visits from the OHTS dataset [35,48].
The use of explainability techniques such as Grad-CAM provides insight into the model’s decision-making process, highlighting the image regions most relevant to its predictions. While traditional diagnosis relies on clinician-evaluated biomarkers such as the optic disc, cup, and rim, our model has the advantage of detecting objective biomarkers related to glaucoma progression, particularly in the vasculature. Because the model received only CFPs and their corresponding labels as input, the highlighted areas are features that the model learned autonomously. These findings suggest that AI-based models can complement existing diagnostic methods by adding to the clinical endpoints and biomarkers that help detect glaucoma and monitor its progression more effectively.
Our preliminary investigation, as demonstrated by the Grad-CAM results in some neural network layers, indicates that all models predominantly focus on vasculature features, regardless of whether glaucoma impacts visual structure or function. An association between vascular structures and glaucoma has also been found with a more recent vascular imaging modality: optical coherence tomography angiography (OCTA) [55,56,57,58,59,60,61,62,63,64]. This biological marker is plausible; retinal ganglion cells require oxygenated blood to function and to transmit visual input to the brain [65].
The vascular theory posits that reduced ocular blood flow (OBF) is “pathogenetically relevant” to glaucoma progression [66]. Reduced OBF induces ocular damage, leading to the destruction of the optic nerve and apoptosis of retinal ganglion cells, symptoms associated with glaucoma. Several studies have shown this link [65,67,68,69,70,71,72], though this theory remains controversial [65,71,73]. As shown in the A panels of Figure 3, our use of artificial intelligence may provide an objective backing to the role of reduced OBF in glaucoma progression. In some neural network layers, our model identified early manifestations of glaucoma in the vasculature and focused on vasculature across all longitudinal images in making its prediction.
As with all studies, ours has several limitations. One issue was the automatic cropping of raw fundus stereophotographs: in some cases, the Python border-removal algorithm we developed inadvertently cropped out significant portions of the fundus photo (Figure 4).
Additionally, the quality of the images varied significantly due to the different types of fundus cameras used across the 22 study sites, which could have hindered the performance of our models, as they had to adapt to a wide range of visual inputs. However, this may have also produced a more generalizable model, able to detect POAG progression even from poorer-quality photos.
We plan several avenues for future work. First, we plan to analyze and compare Grad-CAMs generated by different layers in our CNN model; we analyzed only two layers, and the model contains over 100. The analysis of these other layers may provide further insight into potential POAG biomarkers. Second, we plan to explore different model architectures, specifically transformer models, which predict across longitudinal timepoints in parallel rather than via the “sequential” prediction of LSTMs. Fan et al. found that one of their transformer models achieved an AUC of 0.91 in glaucoma detection on the OHTS dataset [48]; however, their study only utilized fundus photos from patients’ baseline visits, while we plan to use longitudinal data for glaucoma progression detection. Third, we plan to incorporate localized VF data into our model to improve glaucoma progression detection. Fourth, as described in the Introduction, there are both trainable and non-trainable attention techniques. As a preliminary investigation, we used a non-trainable post hoc method to generate the Grad-CAM images for this study, and we plan to use the trainable methods we developed in a previous project [41] to potentially improve glaucoma progression detection. These future projects may improve AI-based glaucoma detection and uncover new biomarkers for both the automated detection and the progression of glaucoma.

5. Conclusions

In summary, we developed a CNN-LSTM architecture to automate the detection of POAG progression using longitudinal CFP data from the OHTS. Our model, trained on both structural and functional features of glaucoma, achieved the highest AUC (0.939) compared to the structural-only and functional-only models (0.894 and 0.911, respectively). Grad-CAM-generated heatmaps revealed that the models emphasize both the vasculature and the ONH for predictions, suggesting that retinal vasculature could serve as an early biomarker for the detection and progression of POAG, complementing the traditionally used optic disc features. Our study highlights the potential for AI-based deep learning models to advance POAG diagnostics and underscores the potential for integrating AI-based diagnostic tools into clinical workflows. These models could assist clinicians by providing objective assessments of disease progression, especially in settings with limited access to glaucoma specialists.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app15031627/s1, Table S1. All layers of MobileNetV2, used as the feature extractor in our CNN-LSTM model.

Author Contributions

Conceptualization, I.M. and Z.J.H.; methodology, I.M., Z.M. and Z.J.H.; software, I.M., Z.M. and Z.C.W.; validation, I.M.; formal analysis, I.M.; investigation, I.M.; resources, Z.C.W.; data curation, I.M., Z.C.W. and Z.J.H.; writing—original draft preparation, I.M.; writing—review and editing, I.M., Z.M., Z.C.W., V.C., D.H. and Z.J.H.; visualization, I.M.; supervision, Z.J.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

All Clinical Centers and Resource Centers have received local IRB approval for the OHTS.

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

Code and data are available on request to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Parihar, J. Glaucoma: The “Black Hole” of Irreversible Blindness. Med. J. Armed Forces India 2016, 72, 3–4. [Google Scholar] [CrossRef] [PubMed]
  2. Quigley, H.A.; Broman, A.T. The Number of People with Glaucoma Worldwide in 2010 and 2020. Br. J. Ophthalmol. 2006, 90, 262–267. [Google Scholar] [CrossRef]
  3. Weinreb, R.N.; Leung, C.K.S.; Crowston, J.G.; Medeiros, F.A.; Friedman, D.S.; Wiggs, J.L.; Martin, K.R. Primary Open-Angle Glaucoma. Nat. Rev. Dis. Primers 2016, 2, 16067. [Google Scholar] [CrossRef]
  4. Maier, P.C.; Funk, J.; Schwarzer, G.; Antes, G.; Falck-Ytter, Y.T. Treatment of Ocular Hypertension and Open Angle Glaucoma: Meta-Analysis of Randomised Controlled Trials. BMJ 2005, 331, 134. [Google Scholar] [CrossRef]
  5. Coleman, A.L.; Miglior, S. Risk Factors for Glaucoma Onset and Progression. Surv. Ophthalmol. 2008, 53 (Suppl. S1), S3–S10. [Google Scholar] [CrossRef]
  6. Gordon, M.O.; Kass, M.A. For the Ocular Hypertension Treatment Study Group. The Ocular Hypertension Treatment Study: Design and Baseline Description of the Participants. Arch. Ophthalmol. 1999, 117, 573–583. [Google Scholar] [CrossRef]
  7. Kass, M.A.; Heuer, D.K.; Higginbotham, E.J.; Johnson, C.A.; Keltner, J.L.; Miller, J.P.; Parrish, R.K., 2nd; Wilson, M.R.; Gordon, M.O. The Ocular Hypertension Treatment Study: A Randomized Trial Determines That Topical Ocular Hypotensive Medication Delays or Prevents the Onset of Primary Open-Angle Glaucoma. Arch. Ophthalmol. 2002, 120, 701–713, discussion 829–830. [Google Scholar] [CrossRef]
  8. Shoukat, A.; Akbar, S.; Hassan, S.A.; Iqbal, S.; Mehmood, A.; Ilyas, Q.M. Automatic Diagnosis of Glaucoma from Retinal Images Using Deep Learning Approach. Diagnostics 2023, 13, 1738. [Google Scholar] [CrossRef] [PubMed]
  9. Doozandeh, A.; Yazdani, S.; Pakravan, M.; Ghasemi, Z.; Hassanpour, K.; Hatami, M.; Ansari, I. Risk of Missed Diagnosis of Primary Open-Angle Glaucoma by Eye Care Providers. J. Curr. Ophthalmol. 2022, 34, 404–408. [Google Scholar] [CrossRef]
  10. Varma, R.; Lee, P.P.; Goldberg, I.; Kotak, S. An Assessment of the Health and Economic Burdens of Glaucoma. Am. J. Ophthalmol. 2011, 152, 515–522. [Google Scholar] [CrossRef] [PubMed]
  11. Delgado, M.F.; Abdelrahman, A.M.; Terahi, M.; Miro Quesada Woll, J.J.; Gil-Carrasco, F.; Cook, C.; Benharbit, M.; Boisseau, S.; Chung, E.; Hadjiat, Y.; et al. Management of Glaucoma in Developing Countries: Challenges and Opportunities for Improvement. Clinicoecon. Outcomes Res. 2019, 11, 591–604. [Google Scholar] [CrossRef] [PubMed]
  12. Berkowitz, S.T.; Finn, A.P.; Parikh, R.; Kuriyan, A.E.; Patel, S. Ophthalmology Workforce Projections in the United States, 2020 to 2035. Ophthalmology 2024, 131, 133–139. [Google Scholar] [CrossRef] [PubMed]
  13. Adekoya, B.J.; Adepoju, F.G.; Moshood, K.F.; Balarabe, A.H. Challenges in the Management of Glaucoma in a Developing Country; A Qualitative Study of Providers’ Perspectives. Niger. J. Med. 2015, 24, 315. [Google Scholar] [CrossRef]
  14. Gibson, D.M. Eye Care Availability and Access among Individuals with Diabetes, Diabetic Retinopathy, or Age-Related Macular Degeneration. JAMA Ophthalmol. 2014, 132, 471–477. [Google Scholar] [CrossRef]
  15. Wang, K.M.; Tseng, V.L.; Liu, X.; Pan, D.; Yu, F.; Baker, R.; Mondino, B.J.; Coleman, A.L. Association between Geographic Distribution of Eye Care Clinicians and Visual Impairment in California. JAMA Ophthalmol. 2022, 140, 577–584. [Google Scholar] [CrossRef] [PubMed]
  16. Gibson, D.M. The Local Availability of Eye Care Providers and the Vision Health of Adults in the United States. Ophthalmic Epidemiol. 2016, 23, 223–231. [Google Scholar] [CrossRef]
  17. Wang, F.; Javitt, J.C. Eye Care for Elderly Americans with Diabetes Mellitus. Failure to Meet Current Guidelines. Ophthalmology 1996, 103, 1744–1750. [Google Scholar] [CrossRef]
  18. Chou, C.-F.; Zhang, X.; Crews, J.E.; Barker, L.E.; Lee, P.P.; Saaddine, J.B. Impact of Geographic Density of Eye Care Professionals on Eye Care among Adults with Diabetes. Ophthalmic Epidemiol. 2012, 19, 340–349. [Google Scholar] [CrossRef] [PubMed]
  19. Tielsch, J.M.; Sommer, A.; Katz, J.; Royall, R.M.; Quigley, H.A.; Javitt, J. Racial Variations in the Prevalence of Primary Open-Angle Glaucoma. The Baltimore Eye Survey. JAMA 1991, 266, 369–374. [Google Scholar] [CrossRef]
  20. Stein, J.D.; Kim, D.S.; Niziol, L.M.; Talwar, N.; Nan, B.; Musch, D.C.; Richards, J.E. Differences in Rates of Glaucoma among Asian Americans and Other Racial Groups, and among Various Asian Ethnic Groups. Ophthalmology 2011, 118, 1031–1037. [Google Scholar] [CrossRef]
  21. Nathan, N.; Joos, K.M. Glaucoma Disparities in the Hispanic Population. Semin. Ophthalmol. 2016, 31, 394–399. [Google Scholar] [CrossRef]
  22. Zhang, X.; Beckles, G.L.; Chou, C.-F.; Saaddine, J.B.; Wilson, M.R.; Lee, P.P.; Parvathy, N.; Ryskulova, A.; Geiss, L.S. Socioeconomic Disparity in Use of Eye Care Services among US Adults with Age-Related Eye Diseases: National Health Interview Survey, 2002 and 2008. JAMA Ophthalmol. 2013, 131, 1198–1206. [Google Scholar] [CrossRef] [PubMed]
  23. Gracitelli, C.P.B.; Zangwill, L.M.; Diniz-Filho, A.; Abe, R.Y.; Girkin, C.A.; Weinreb, R.N.; Liebmann, J.M.; Medeiros, F.A. Detection of Glaucoma Progression in Individuals of African Descent Compared with Those of European Descent. JAMA Ophthalmol. 2018, 136, 329–335. [Google Scholar] [CrossRef]
  24. Stagg, B.; Mariottoni, E.B.; Berchuck, S.; Jammal, A.; Elam, A.R.; Hess, R.; Kawamoto, K.; Haaland, B.; Medeiros, F.A. Longitudinal Visual Field Variability and the Ability to Detect Glaucoma Progression in Black and White Individuals. Br. J. Ophthalmol. 2022, 106, 1115–1120. [Google Scholar] [CrossRef] [PubMed]
  25. Stein, J.D.; Talwar, N.; Laverne, A.M.; Nan, B.; Lichter, P.R. Racial Disparities in the Use of Ancillary Testing to Evaluate Individuals with Open-Angle Glaucoma. Arch. Ophthalmol. 2012, 130, 1579–1588. [Google Scholar] [CrossRef] [PubMed]
  26. Murakami, Y.; Lee, B.W.; Duncan, M.; Kao, A.; Huang, J.-Y.; Singh, K.; Lin, S.C. Racial and Ethnic Disparities in Adherence to Glaucoma Follow-up Visits in a County Hospital Population. Arch. Ophthalmol. 2011, 129, 872–878. [Google Scholar] [CrossRef] [PubMed]
  27. Awidi, A.A.; Wang, J.; Varadaraj, V.; Ali, M.; Cai, C.X.; Sommer, A.; Ramulu, P.Y.; Woreta, F.A. The Impact of Social Determinants of Health on Vision Loss from Cataracts and Cataract Surgery Utilization in the United States-A National Health Interview Survey Analysis. Am. J. Ophthalmol. 2023, 254, 44–53. [Google Scholar] [CrossRef] [PubMed]
  28. Delavar, A.; Radha Saseendrakumar, B.; Weinreb, R.N.; Baxter, S.L. Racial and Ethnic Disparities in Cost-Related Barriers to Medication Adherence among Patients with Glaucoma Enrolled in the National Institutes of Health All of Us Research Program. JAMA Ophthalmol. 2022, 140, 354–361. [Google Scholar] [CrossRef] [PubMed]
  29. Davuluru, S.S.; Jess, A.T.; Kim, J.S.B.; Yoo, K.; Nguyen, V.; Xu, B.Y. Identifying, Understanding, and Addressing Disparities in Glaucoma Care in the United States. Transl. Vis. Sci. Technol. 2023, 12, 18. [Google Scholar] [CrossRef] [PubMed]
  30. Sleath, B.; Blalock, S.; Covert, D.; Stone, J.L.; Skinner, A.C.; Muir, K.; Robin, A.L. The Relationship between Glaucoma Medication Adherence, Eye Drop Technique, and Visual Field Defect Severity. Ophthalmology 2011, 118, 2398–2402. [Google Scholar] [CrossRef]
  31. Almidani, L.; Bradley, C.; Herbert, P.; Ramulu, P.; Yohannan, J. The Impact of Social Vulnerability on Structural and Functional Glaucoma Severity, Worsening, and Variability. Ophthalmol. Glaucoma 2024, 7, 380–390. [Google Scholar] [CrossRef]
  32. Velpula, V.K.; Sharma, L.D. Multi-Stage Glaucoma Classification Using Pre-Trained Convolutional Neural Networks and Voting-Based Classifier Fusion. Front. Physiol. 2023, 14, 1175881. [Google Scholar] [CrossRef] [PubMed]
  33. Kashyap, R.; Nair, R.; Gangadharan, S.M.P.; Botto-Tobar, M.; Farooq, S.; Rizwan, A. Glaucoma Detection and Classification Using Improved U-Net Deep Learning Model. Healthcare 2022, 10, 2497. [Google Scholar] [CrossRef]
  34. Hemelings, R.; Elen, B.; Schuster, A.K.; Blaschko, M.B.; Barbosa-Breda, J.; Hujanen, P.; Junglas, A.; Nickels, S.; White, A.; Pfeiffer, N.; et al. A Generalizable Deep Learning Regression Model for Automated Glaucoma Screening from Fundus Images. NPJ Digit. Med. 2023, 6, 112. [Google Scholar] [CrossRef] [PubMed]
  35. Fan, R.; Bowd, C.; Christopher, M.; Brye, N.; Proudfoot, J.A.; Rezapour, J.; Belghith, A.; Goldbaum, M.H.; Chuter, B.; Girkin, C.A.; et al. Detecting Glaucoma in the Ocular Hypertension Study Using Deep Learning. JAMA Ophthalmol. 2022, 140, 383–391. [Google Scholar] [CrossRef] [PubMed]
  36. Mishra, Z.; Wang, Z.; Xu, E.; Xu, S.; Majid, I.; Sadda, S.R.; Hu, Z.J. Recurrent and Concurrent Prediction of Longitudinal Progression of Stargardt Atrophy and Geographic Atrophy. medRxiv 2024. [Google Scholar] [CrossRef]
  37. Vuppu, V.M.; Kumari, P.L.S. Early Glaucoma Detection Using LSTM-CNN Integrated with Multi Class SVM. Eng. Technol. Appl. Sci. Res. 2024, 14, 15645–15650. [Google Scholar] [CrossRef]
  38. Hussain, S.; Chua, J.; Wong, D.; Lo, J.; Kadziauskiene, A.; Asoklis, R.; Barbastathis, G.; Schmetterer, L.; Yong, L. Predicting Glaucoma Progression Using Deep Learning Framework Guided by Generative Algorithm. Sci. Rep. 2023, 13, 19960. [Google Scholar] [CrossRef]
  39. Jetley, S.; Lord, N.A.; Lee, N.; Torr, P.H.S. Learn to Pay Attention. arXiv 2018, arXiv:1804.02391. [Google Scholar]
  40. Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention U-Net: Learning Where to Look for the Pancreas. arXiv 2018, arXiv:1804.03999. [Google Scholar]
  41. Wang, Z.; Sadda, S.R.; Lee, A.; Hu, Z.J. Automated Segmentation and Feature Discovery of Age-Related Macular Degeneration and Stargardt Disease via Self-Attended Neural Networks. Sci. Rep. 2022, 12, 14565. [Google Scholar] [CrossRef] [PubMed]
  42. Wang, S.; Wang, Z.; Vejalla, S.; Ganegoda, A.; Nittala, M.G.; Sadda, S.R.; Hu, Z.J. Reverse Engineering for Reconstructing Baseline Features of Dry Age-Related Macular Degeneration in Optical Coherence Tomography. Sci. Rep. 2022, 12, 22620. [Google Scholar] [CrossRef] [PubMed]
  43. Saha, S.; Wang, Z.; Sadda, S.; Kanagasingam, Y.; Hu, Z. Visualizing and Understanding Inherent Features in SD-OCT for the Progression of Age-Related Macular Degeneration Using Deconvolutional Neural Networks. Appl. AI Lett. 2020, 1, e16. [Google Scholar] [CrossRef] [PubMed]
  44. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 29 October 2017; pp. 618–626. [Google Scholar]
  45. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  46. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Li, F.-F. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  47. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Müller, A.; Nothman, J.; Louppe, G.; et al. Scikit-Learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar] [CrossRef]
  48. Fan, R.; Alipour, K.; Bowd, C.; Christopher, M.; Brye, N.; Proudfoot, J.A.; Goldbaum, M.H.; Belghith, A.; Girkin, C.A.; Fazio, M.A.; et al. Detecting Glaucoma from Fundus Photographs Using Deep Learning without Convolutions: Transformer for Improved Generalization. Ophthalmol. Sci. 2023, 3, 100233. [Google Scholar] [CrossRef] [PubMed]
  49. Mursch-Edlmayr, A.S.; Ng, W.S.; Diniz-Filho, A.; Sousa, D.C.; Arnold, L.; Schlenker, M.B.; Duenas-Angeles, K.; Keane, P.A.; Crowston, J.G.; Jayaram, H. Artificial Intelligence Algorithms to Diagnose Glaucoma and Detect Glaucoma Progression: Translation to Clinical Practice. Transl. Vis. Sci. Technol. 2020, 9, 55. [Google Scholar] [CrossRef]
  50. Bowd, C.; Hao, J.; Tavares, I.M.; Medeiros, F.A.; Zangwill, L.M.; Lee, T.-W.; Sample, P.A.; Weinreb, R.N.; Goldbaum, M.H. Bayesian Machine Learning Classifiers for Combining Structural and Functional Measurements to Classify Healthy and Glaucomatous Eyes. Investig. Ophthalmol. Vis. Sci. 2008, 49, 945–953. [Google Scholar] [CrossRef] [PubMed]
  51. Brigatti, L.; Hoffman, D.; Caprioli, J. Neural Networks to Identify Glaucoma with Structural and Functional Measurements. Am. J. Ophthalmol. 1996, 121, 511–521. [Google Scholar] [CrossRef]
  52. Grewal, D.S.; Jain, R.; Grewal, S.P.S.; Rihani, V. Artificial Neural Network-Based Glaucoma Diagnosis Using Retinal Nerve Fiber Layer Analysis. Eur. J. Ophthalmol. 2008, 18, 915–921. [Google Scholar] [CrossRef]
  53. Silva, F.R.; Vidotti, V.G.; Cremasco, F.; Dias, M.; Gomi, E.S.; Costa, V.P. Sensitivity and Specificity of Machine Learning Classifiers for Glaucoma Diagnosis Using Spectral Domain OCT and Standard Automated Perimetry. Arq. Bras. Oftalmol. 2013, 76, 170–174. [Google Scholar] [CrossRef] [PubMed]
  54. Sugimoto, K.; Murata, H.; Hirasawa, H.; Aihara, M.; Mayama, C.; Asaoka, R. Cross-Sectional Study: Does Combining Optical Coherence Tomography Measurements Using the “Random Forest” Decision Tree Classifier Improve the Prediction of the Presence of Perimetric Deterioration in Glaucoma Suspects? BMJ Open 2013, 3, e003114. [Google Scholar] [CrossRef]
  55. Liu, L.; Jia, Y.; Takusagawa, H.L.; Pechauer, A.D.; Edmunds, B.; Lombardi, L.; Davis, E.; Morrison, J.C.; Huang, D. Optical Coherence Tomography Angiography of the Peripapillary Retina in Glaucoma. JAMA Ophthalmol. 2015, 133, 1045–1052. [Google Scholar] [CrossRef]
  56. Chen, A.; Wei, P.; Wang, J.; Liu, L.; Camino, A.; Guo, Y.; Tan, O.; Jia, Y.; Huang, D. Glaucomatous Focal Perfusion Loss in the Macula Measured by Optical Coherence Tomographic Angiography. Am. J. Ophthalmol. 2024, 268, 181–189. [Google Scholar] [CrossRef] [PubMed]
  57. Takusagawa, H.L.; Liu, L.; Ma, K.N.; Jia, Y.; Gao, S.S.; Zhang, M.; Edmunds, B.; Parikh, M.; Tehrani, S.; Morrison, J.C.; et al. Projection-Resolved Optical Coherence Tomography Angiography of Macular Retinal Circulation in Glaucoma. Ophthalmology 2017, 124, 1589–1599. [Google Scholar] [CrossRef] [PubMed]
  58. Akil, H.; Chopra, V.; Al-Sheikh, M.; Ghasemi Falavarjani, K.; Huang, A.S.; Sadda, S.R.; Francis, B.A. Swept-Source OCT Angiography Imaging of the Macular Capillary Network in Glaucoma. Br. J. Ophthalmol. 2017, 102, 515–519. [Google Scholar] [CrossRef]
  59. Tepelus, T.C.; Song, S.; Borrelli, E.; Nittala, M.G.; Baghdasaryan, E.; Sadda, S.R.; Chopra, V. Quantitative Analysis of Retinal and Choroidal Vascular Parameters in Patients with Low Tension Glaucoma. J. Glaucoma 2019, 28, 557–562. [Google Scholar] [CrossRef]
  60. Mohammadzadeh, V.; Liang, Y.; Moghimi, S.; Xie, P.; Nishida, T.; Mahmoudinezhad, G.; Eslani, M.; Walker, E.; Kamalipour, A.; Micheletti, E.; et al. Detection of Glaucoma Progression on Longitudinal Series of En-Face Macular Optical Coherence Tomography Angiography Images with a Deep Learning Model. Br. J. Ophthalmol. 2024, 108, 1688–1693. [Google Scholar] [CrossRef] [PubMed]
  61. Nishida, T.; Moghimi, S.; Hou, H.; Proudfoot, J.A.; Chang, A.C.; David, R.C.C.; Kamalipour, A.; El-Nimri, N.; Rezapour, J.; Bowd, C.; et al. Long-Term Reproducibility of Optical Coherence Tomography Angiography in Healthy and Stable Glaucomatous Eyes. Br. J. Ophthalmol. 2023, 107, 657–662. [Google Scholar] [CrossRef]
  62. Tansuebchueasai, N.; Nishida, T.; Moghimi, S.; Wu, J.-H.; Mahmoudinezhad, G.; Gunasegaran, G.; Kamalipour, A.; Zangwill, L.M.; Weinreb, R.N. Rate of Initial Optic Nerve Head Capillary Density Loss and Risk of Visual Field Progression. JAMA Ophthalmol. 2024, 142, 530–537. [Google Scholar] [CrossRef]
  63. Wu, J.-H.; Moghimi, S.; Nishida, T.; Mahmoudinezhad, G.; Zangwill, L.M.; Weinreb, R.N. Detection and Agreement of Event-Based OCT and OCTA Analysis for Glaucoma Progression. EYE 2024, 38, 973–979. [Google Scholar] [CrossRef] [PubMed]
  64. Suh, M.H.; Weinreb, R.N.; Zangwill, L.M. Optic Disc Microvasculature Dropout in Preperimetric Glaucoma. J. Glaucoma 2024, 33, 490–498. [Google Scholar] [CrossRef] [PubMed]
  65. Wang, X.; Wang, M.; Liu, H.; Mercieca, K.; Prinz, J.; Feng, Y.; Prokosch, V. The Association between Vascular Abnormalities and Glaucoma-What Comes First? Int. J. Mol. Sci. 2023, 24, 3211. [Google Scholar] [CrossRef]
  66. Galassi, F.; Giambene, B.; Varriale, R. Systemic Vascular Dysregulation and Retrobulbar Hemodynamics in Normal-Tension Glaucoma. Investig. Ophthalmol. Vis. Sci. 2011, 52, 4467–4471. [Google Scholar] [CrossRef] [PubMed]
  67. Dascalu, A.M.; Stana, D.; Nicolae, V.A.; Cirstoveanu, C.; Vancea, G.; Serban, D.; Socea, B. Association between Vascular Comorbidity and Glaucoma Progression: A Four-Year Observational Study. Exp. Ther. Med. 2021, 21, 283. [Google Scholar] [CrossRef]
  68. Chung, H.S.; Harris, A.; Evans, D.W.; Kagemann, L.; Garzozi, H.J.; Martin, B. Vascular Aspects in the Pathophysiology of Glaucomatous Optic Neuropathy. Surv. Ophthalmol. 1999, 43 (Suppl. S1), S43–S50. [Google Scholar] [CrossRef]
  69. Flammer, J.; Orgül, S.; Costa, V.P.; Orzalesi, N.; Krieglstein, G.K.; Serra, L.M.; Renard, J.-P.; Stefánsson, E. The Impact of Ocular Blood Flow in Glaucoma. Prog. Retin. Eye Res. 2002, 21, 359–393. [Google Scholar] [CrossRef] [PubMed]
  70. Grieshaber, M.C.; Mozaffarieh, M.; Flammer, J. What Is the Link between Vascular Dysregulation and Glaucoma? Surv. Ophthalmol. 2007, 52 (Suppl. S2), S144–S154. [Google Scholar] [CrossRef] [PubMed]
  71. Shin, J.D.; Wolf, A.T.; Harris, A.; Verticchio Vercellin, A.; Siesky, B.; Rowe, L.W.; Packles, M.; Oddone, F. Vascular Biomarkers from Optical Coherence Tomography Angiography and Glaucoma: Where Do We Stand in 2021? Acta Ophthalmol. 2022, 100, e377–e385. [Google Scholar] [CrossRef] [PubMed]
  72. Chan, K.K.W.; Tang, F.; Tham, C.C.Y.; Young, A.L.; Cheung, C.Y. Retinal Vasculature in Glaucoma: A Review. BMJ Open Ophthalmol. 2017, 1, e000032. [Google Scholar] [CrossRef]
  73. Ahmad, S.S. Controversies in the Vascular Theory of Glaucomatous Optic Nerve Degeneration. Taiwan J. Ophthalmol. 2016, 6, 182–186. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Fifteen color images of size 128 × 128 were input into the feature extractor; the output was flattened and then input into an LSTM model, which was connected to a batch normalization layer and a final fully connected layer that output 15 labels, 1 for each input image.
Figure 2. The “Both” model (top left) achieved the highest AUC (0.939), followed by the “VF” model (0.911; bottom left) and “Disc” model (0.894; top right).
Figure 3. Grad-CAMs generated by the “Both”, “VF”, and “Disc” models, shown on images of patients diagnosed with both functional and structural glaucoma, only functional glaucoma, and only structural glaucoma, respectively. Panels (A) and (B) show the features selected by the “expand_conv_depthwise” and “out_relu” layers, respectively. All models focus on the vasculature in the former layer and shift towards the ONH in the latter layer.
Figure 4. Side-by-side comparison of a color fundus photo before and after mis-cropping by the border-removal algorithm, which automatically detects areas with black borders. Of note, these cases were infrequent.
Table 1. Comparison of the performance between the “Both”, “VF”, and “Disc” models in detecting glaucoma from longitudinal color fundus photos.
Metric        “Both” Model    “VF” Model    “Disc” Model
AUC           0.939           0.911         0.894
Accuracy      0.942           0.959         0.911
F1-Score      0.963           0.804         0.743
Precision     0.984           0.813         0.646
Recall        0.895           0.565         0.875
