Artificial Intelligence in Eye Disease, 4th Edition

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: 31 March 2026

Special Issue Editor


Guest Editor
1. Department of Brain and Cognitive Engineering (Primary), Korea University, Seoul 136-701, Republic of Korea
2. Department of Artificial Intelligence (Secondary), Korea University, Seoul 136-701, Republic of Korea
Interests: artificial intelligence in biomedicine; diagnosis of retinal diseases; deep learning for ophthalmology images; neuroscience research

Special Issue Information

Dear Colleagues,

As artificial intelligence (AI) rapidly spreads through medicine amid the Fourth Industrial Revolution, its use in ophthalmology is attracting attention for the diagnosis of a variety of ophthalmic diseases, including optic nerve diseases that are otherwise difficult to diagnose. In particular, AI applied to fundus photographs, optical coherence tomography, and visual field testing can deliver highly accurate diagnoses and strong classification performance in detecting ocular and retinal diseases. In ocular imaging, AI offers a possible solution for screening, diagnosing, and monitoring patients with major eye diseases in primary care and community settings. For instance, deep learning algorithms that read retinal images can detect bleeding, macular abnormalities such as drusen, choroidal abnormalities, retinal vessel abnormalities, nerve fiber layer defects, and glaucomatous changes of the optic disc. Deep learning architectures can thus be trained to recognize eye diseases and increase the diagnosis rate with clinically acceptable performance. In this way, AI serves both as a safeguard for patients and physicians and as an auxiliary tool for rapid interpretation of results: it reduces the risk of initial misdiagnosis, improves treatment efficiency, and strengthens patient trust. Consequently, AI could potentially revolutionize the way ophthalmology is practiced. The aim of this Special Issue is therefore to highlight recent progress and trends in applying AI techniques, such as machine learning and deep learning, to the detection, screening, diagnosis, and monitoring of eye diseases, not only in diverse clinical practice but also in basic research on ophthalmology.

Prof. Dr. Jae-Ho Han
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com after registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical diagnosis
  • artificial intelligence
  • deep learning
  • fundus image
  • optical coherence tomography
  • ophthalmology
  • retinal vessel
  • glaucoma
  • retinopathy
  • macular degeneration
  • image segmentation

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (4 papers)


Research

19 pages, 21597 KB  
Article
U-Net Optimization for Hyperreflective Foci Segmentation in Retinal OCT
by Pavithra Kodiyalbail Chakrapani, Preetham Kumar, Sulatha Venkataraya Bhandary, Geetha Maiya, Shailaja Shenoy, Steven Fernandes and Prakhar Choudhary
Diagnostics 2026, 16(6), 853; https://doi.org/10.3390/diagnostics16060853 - 13 Mar 2026
Abstract
Background/Objectives: Hyperreflective foci (HRF) are supportive optical coherence tomography (OCT) imaging biomarkers that have been examined for their association with disease progression and severity in various retinal disorders. Accurate identification and segmentation of these tiny structures of lipid extravasation remain difficult because of their small size, class imbalance, similarity of their reflectivity patterns to surrounding structures, and imaging artifacts. While U-Net-based models have shown strong results in medical image segmentation, the optimal architectural settings and preprocessing methods for HRF detection remain unclear. Methods: This research assessed optimal settings for U-Net-based models for HRF segmentation by evaluating a standard U-Net and an attention U-Net under different preprocessing regimes. The attention U-Net employed Z-score normalization and contrast-limited adaptive histogram equalization (CLAHE) with soft Dice loss; the standard U-Net was trained on CLAHE-enhanced OCT images using focal Tversky loss. A total of 435 fovea-centered OCT B-scans with corresponding consensus-annotated HRF masks were used. Results: The standard U-Net outperformed the attention U-Net, with a Dice score of 0.5207, an AUC of 0.8411, and a recall of 0.6439 on raw OCT images. The attention U-Net with preprocessing (Dice: 0.5033, AUC: 0.6987, recall: 0.5391) performed satisfactorily. The U-Net with CLAHE and focal Tversky loss improved recall by 19.4% relative to the attention U-Net, corresponding roughly to a 23% relative decline in false negatives and indicating increased sensitivity in identifying HRF regions. Conclusions: The best-performing configuration combines the standard U-Net with CLAHE and focal Tversky loss to handle class imbalance. This approach yields higher sensitivity, indicating that the standard U-Net delivers a simple and robust framework for automated HRF segmentation on the evaluated dataset, pending further validation on broader clinical datasets. Full article
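The focal Tversky loss that drove the recall gain can be written in a few lines. The sketch below operates on NumPy probability maps; the hyperparameters (alpha, beta, gamma) are illustrative defaults from the focal Tversky literature, since the paper's exact settings are not given here.

```python
import numpy as np

def focal_tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for class-imbalanced segmentation.

    alpha > beta penalizes false negatives more than false positives,
    which raises recall on tiny structures such as hyperreflective foci.
    gamma < 1 focuses the loss on hard, poorly segmented examples.
    """
    y_true = np.ravel(y_true).astype(float)
    y_pred = np.ravel(y_pred).astype(float)
    tp = np.sum(y_true * y_pred)                # soft true positives
    fn = np.sum(y_true * (1.0 - y_pred))        # missed foreground
    fp = np.sum((1.0 - y_true) * y_pred)        # spurious foreground
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma
```

With alpha > beta, a prediction that misses a focus is penalized more than one that over-segments by the same amount, which is the behavior the abstract's recall improvement suggests.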
(This article belongs to the Special Issue Artificial Intelligence in Eye Disease, 4th Edition)

18 pages, 3156 KB  
Article
Artificial Intelligence–Based Prediction of Subjective Refraction and Clinical Determinants of Prediction Error
by Ozlem Candan, Irem Saglam, Gozde Orman, Nurten Unlu, Ayşe Burcu and Yusuf Candan
Diagnostics 2026, 16(2), 331; https://doi.org/10.3390/diagnostics16020331 - 20 Jan 2026
Cited by 1
Abstract
Background/Objectives: Subjective refraction is the clinical gold standard but is time-consuming and examiner-dependent. Most artificial intelligence (AI)-based approaches rely on specialized imaging or biometric data not routinely available. This study aimed to predict subjective refraction using only routine, non-cycloplegic autorefraction and keratometric data and to identify factors associated with reduced prediction accuracy. Methods: This retrospective study included 1856 eyes from 1006 patients. A multi-output histogram gradient-boosting model predicted subjective spherical equivalent, cylindrical power, and astigmatic axis. Performance was evaluated on an independent test dataset using R2 and mean absolute error, with circular statistics for axis prediction. Prediction failure was assessed using clinically relevant tolerance thresholds (sphere/cylinder ≤ 0.50 D; axis ≤ 10°) and multivariable logistic regression. Results: The model achieved high accuracy for spherical and cylindrical prediction (R2 = 0.987 and 0.933; MAE = 0.126 D and 0.137 D). Astigmatic axis prediction demonstrated strong circular agreement (ρ = 0.898), with a mean absolute angular error of 4.65° (median, 0.96°). Axis errors were higher in eyes with low cylinder magnitude (<0.75 D) and oblique astigmatism. In multivariable analysis, steeper keratometry (K2; OR = 7.25, 95% CI 1.62–32.46, p = 0.010) and greater objective cylindrical power (OR = 2.79, 95% CI 1.87–8.94, p = 0.032) were independently associated with poor prediction. Conclusions: A machine-learning model based solely on routine, non-cycloplegic autorefractor and keratometric measurements can accurately estimate subjective refraction, supporting AI as a complementary decision-support tool rather than a replacement for conventional subjective refraction. Full article
(This article belongs to the Special Issue Artificial Intelligence in Eye Disease, 4th Edition)

18 pages, 52336 KB  
Article
Self-Supervised Representation Learning for Data-Efficient DRIL Classification in OCT Images
by Pavithra Kodiyalbail Chakrapani, Akshat Tulsani, Preetham Kumar, Geetha Maiya, Sulatha Venkataraya Bhandary and Steven Fernandes
Diagnostics 2025, 15(24), 3221; https://doi.org/10.3390/diagnostics15243221 - 16 Dec 2025
Cited by 1
Abstract
Background/Objectives: Disorganization of the retinal inner layers (DRIL) is an important biomarker of diabetic macular edema (DME) that is strongly associated with visual acuity (VA). However, the scarcity of expert-annotated training data, combined with the large domain gap that limits models pretrained on natural images, poses two primary challenges for the design of efficient computerized DRIL detection methods. Methods: To address these challenges, we propose a novel self-supervised learning framework that uses a large unlabeled optical coherence tomography (OCT) dataset to learn clinically applicable representations before fine-tuning on a small proprietary dataset of annotated OCT images. We introduce a spatial Bootstrap Your Own Latent (BYOL) with a hybrid spatial-aware loss function designed to capture anatomical representations from an unlabeled OCT dataset of 108,309 images covering various retinal abnormalities, and then adapt the learned representations for DRIL classification using 823 annotated OCT images. Results: With an accuracy of 99.39%, the proposed two-stage approach substantially exceeds direct transfer learning from models pretrained on ImageNet. Conclusions: The findings demonstrate the efficacy of domain-specific self-supervised learning for rare retinal pathology detection tasks with limited annotated data. Full article
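The core BYOL machinery referenced above — a cosine-based regression loss between the online network's prediction and the target network's projection, plus an exponential-moving-average (EMA) target update — can be sketched in NumPy. The paper's hybrid spatial-aware loss is not specified here, so this shows only the standard BYOL objective it builds on.

```python
import numpy as np

def byol_loss(online_pred, target_proj):
    """BYOL regression loss: 2 - 2 * cosine similarity between the online
    prediction and the (stop-gradient) target projection, per sample."""
    p = online_pred / np.linalg.norm(online_pred, axis=-1, keepdims=True)
    z = target_proj / np.linalg.norm(target_proj, axis=-1, keepdims=True)
    return 2.0 - 2.0 * np.sum(p * z, axis=-1)

def ema_update(target_weights, online_weights, tau=0.996):
    """Slow exponential-moving-average update of the target network,
    which is what prevents representational collapse in BYOL."""
    return [tau * t + (1.0 - tau) * o
            for t, o in zip(target_weights, online_weights)]
```

Because the target network only tracks the online network through the EMA, no negative pairs are needed — which is what makes BYOL attractive for a 108k-image unlabeled OCT corpus.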
(This article belongs to the Special Issue Artificial Intelligence in Eye Disease, 4th Edition)

17 pages, 2569 KB  
Article
Automated Multi-Class Classification of Retinal Pathologies: A Deep Learning Approach to Unified Ophthalmic Screening
by Uğur Şevik and Onur Mutlu
Diagnostics 2025, 15(21), 2745; https://doi.org/10.3390/diagnostics15212745 - 29 Oct 2025
Cited by 1
Abstract
Background/Objectives: The prevailing paradigm in ophthalmic AI involves siloed, single-disease models, which fails to address the complexity of differential diagnosis in clinical practice. This study aimed to develop and validate a unified deep learning framework for the automated multi-class classification of a wide spectrum of retinal pathologies from fundus photographs, moving beyond the single-disease paradigm to create a comprehensive screening tool. Methods: A publicly available dataset was manually curated by an ophthalmologist, resulting in 1841 images across nine classes, including Diabetic Retinopathy, Glaucoma, and Healthy retinas. After extensive data augmentation to mitigate class imbalance, three pre-trained CNN architectures (ResNet-152, EfficientNetV2, and a YOLOv11-based classifier) were comparatively evaluated. The models were trained using transfer learning and their performance was assessed on an independent test set using accuracy, macro-averaged F1-score, and Area Under the Curve (AUC). Results: The YOLOv11-based classifier demonstrated superior performance over the other architectures on the validation set. On the final independent test set, it achieved a robust overall accuracy of 0.861 and a macro-averaged F1-score of 0.861. The model yielded a validation set AUC of 0.961, which was statistically superior to both ResNet-152 (p < 0.001) and EfficientNetV2 (p < 0.01) as confirmed by the DeLong test. Conclusions: A unified deep learning framework, leveraging a YOLOv11 backbone, can accurately classify nine distinct retinal conditions from a single fundus photograph. This holistic approach moves beyond the limitations of single-disease algorithms, offering considerable promise as a comprehensive AI-driven screening tool to augment clinical decision-making and enhance diagnostic efficiency in ophthalmology. Full article
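Macro-averaged F1, the headline metric above, weights all nine classes equally regardless of prevalence, which is why it is a sensible companion to accuracy for the imbalanced classes the authors augmented against. A minimal evaluation sketch with synthetic labels (not the study's data):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)

# Hypothetical ground truth and predictions for a 9-class retinal test set.
y_true = rng.integers(0, 9, size=200)
y_pred = y_true.copy()
flip = rng.random(200) < 0.15                      # corrupt ~15% to mimic errors
y_pred[flip] = rng.integers(0, 9, size=flip.sum())

acc = accuracy_score(y_true, y_pred)
# average="macro": unweighted mean of per-class F1, so a rare class that the
# model always misses drags the score down as much as a common one would.
macro_f1 = f1_score(y_true, y_pred, average="macro")
```

When accuracy and macro F1 agree closely, as in the reported 0.861/0.861, no single class is being sacrificed to inflate the overall hit rate.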
(This article belongs to the Special Issue Artificial Intelligence in Eye Disease, 4th Edition)
