Search Results (295)

Search Parameters:
Keywords = retinal image segmentation

19 pages, 21597 KB  
Article
U-Net Optimization for Hyperreflective Foci Segmentation in Retinal OCT
by Pavithra Kodiyalbail Chakrapani, Preetham Kumar, Sulatha Venkataraya Bhandary, Geetha Maiya, Shailaja Shenoy, Steven Fernandes and Prakhar Choudhary
Diagnostics 2026, 16(6), 853; https://doi.org/10.3390/diagnostics16060853 - 13 Mar 2026
Abstract
Background/Objectives: Hyperreflective foci (HRF) are supportive optical coherence tomography (OCT) imaging biomarkers that have been examined for their association with disease progression and severity in various retinal disorders. The accurate identification and segmentation of these tiny structures of lipid extravasation remain complicated because of their small size, class imbalance, their similarity in reflectivity to surrounding structures, and imaging artifacts. While U-Net-based models have shown excellent results for medical image segmentation, the optimal architectural settings and suitable preprocessing methods for HRF detection remain unclear. Methods: This research assessed optimal settings for U-Net-based models for HRF segmentation by evaluating a standard U-Net and an attention U-Net under different preprocessing regimes. The attention U-Net employed Z-score normalization and contrast-limited adaptive histogram equalization (CLAHE) enhancement with soft Dice loss. The standard U-Net was trained on OCT images with CLAHE using focal Tversky loss. A total of 435 fovea-centered OCT B-scans with corresponding consensus-annotated HRF masks were utilized for this research. Results: The standard U-Net outperformed the attention U-Net with a Dice score of 0.5207, an AUC of 0.8411, and a recall of 0.6439 on raw OCT images. The attention U-Net with preprocessing (Dice: 0.5033, AUC: 0.6987, recall: 0.5391) demonstrated satisfactory performance. The results showed that the U-Net model with CLAHE and focal Tversky loss improved recall by 19.4% relative to the attention U-Net, corresponding roughly to a 23% relative decline in false negatives. This indicates increased sensitivity in identifying HRF regions. Conclusions: The best-performing configuration using U-Net-based architectures for segmentation of HRF combines the standard U-Net model with CLAHE and focal Tversky loss for handling class imbalance.
This approach yields relatively higher sensitivity, indicating that the standard U-Net model delivers a simple and robust framework for automated HRF segmentation on the evaluated dataset, warranting further validation in broader clinical datasets. Full article
(This article belongs to the Special Issue Artificial Intelligence in Eye Disease, 4th Edition)
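The focal Tversky loss named in this abstract is a standard remedy for the class imbalance of tiny lesions like HRF. A minimal NumPy sketch follows; the hyperparameter values (alpha = 0.7, beta = 0.3, gamma = 0.75) are common defaults from the focal Tversky literature, not the paper's reported settings.

```python
import numpy as np

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for binary segmentation.

    pred, target: float arrays of the same shape with values in [0, 1].
    alpha weights false negatives and beta weights false positives, so
    alpha > beta biases training toward recall; gamma < 1 focuses the
    loss on hard, poorly segmented examples.
    """
    pred = pred.ravel()
    target = target.ravel()
    tp = np.sum(pred * target)            # soft true positives
    fn = np.sum((1 - pred) * target)      # soft false negatives
    fp = np.sum(pred * (1 - target))      # soft false positives
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - tversky) ** gamma
```

A perfect prediction drives the Tversky index to 1 and the loss to 0; raising alpha penalizes missed HRF pixels more heavily, which matches the recall-oriented results the abstract reports.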

30 pages, 5709 KB  
Article
The Role of Autophagy–Lysosomal Pathways in Photoreceptor Death in the rd10 Mouse Model of Inherited Retinal Degeneration
by Kirstan A. Vessey, Nadia Hosseini Naveh, Ophelia Ehrlich, Allegra Glover, Joshua Lee, Ursula Greferath, Andrew I. Jobling and Erica L. Fletcher
Cells 2026, 15(4), 345; https://doi.org/10.3390/cells15040345 - 13 Feb 2026
Abstract
Inherited retinal degenerations, such as retinitis pigmentosa, are a leading cause of irreversible vision loss, yet broadly effective treatments remain elusive. Impaired cellular waste clearance via autophagy–lysosomal pathways has been implicated in photoreceptor death, but the spatiotemporal dynamics of these processes during degeneration remain poorly understood. Using the rd10 mouse model of retinitis pigmentosa, we characterised autophagy–lysosomal dysfunction at key stages of photoreceptor degeneration (postnatal day P17, P22, P35) through super-resolution imaging of RFP-EGFP-LC3 reporter mice, Western blot, and bulk RNA sequencing. Autophagosome and autolysosome numbers were significantly elevated across all photoreceptor compartments (inner/outer segments, outer nuclear layer, outer plexiform layer) at P17, prior to significant photoreceptor nuclei loss. Autophagosome and autolysosome size progressively increased from P22 onwards, suggesting accumulation of unprocessed intracellular waste. Molecular analyses revealed downregulation of mTOR protein, upregulation of autophagy-related genes, and increased lysosomal processes from P17. These histological and molecular findings are consistent with early autophagy induction followed by overwhelmed degradative capacity. Our findings identify autophagy–lysosomal change as an early event in photoreceptor loss in the rd10 model, revealing a critical therapeutic window for mutation-independent interventions targeting cellular clearance pathways in inherited retinal degenerations. Full article

11 pages, 1038 KB  
Data Descriptor
Refined IDRiD: An Enhanced Dataset for Diabetic Retinopathy Segmentation with Expert-Validated Annotations and Comprehensive Anatomical Context
by Sakon Chankhachon, Supaporn Kansomkeat, Patama Bhurayanontachai and Sathit Intajag
Data 2026, 11(2), 30; https://doi.org/10.3390/data11020030 - 1 Feb 2026
Abstract
The Indian Diabetic Retinopathy Image Dataset (IDRiD) has been widely adopted for DR lesion segmentation research. However, it contains annotation gaps for proliferative DR lesions and labeling errors that limit its utility for comprehensive automated screening systems. We present Refined IDRiD, an enhanced version that addresses these limitations through (1) expert ophthalmologist validation and correction of labeling errors in original annotations for four non-proliferative lesions (microaneurysms, hemorrhages, hard exudates, cotton-wool spots), (2) the addition of three critical proliferative DR lesion annotations (neovascularization, vitreous hemorrhage, intraretinal microvascular abnormalities), and (3) the integration of comprehensive anatomical context (optic disc, fovea, blood vessels, retinal region). A team of three ophthalmologists (one senior specialist with >10 years’ experience, two expert fundus image annotators) conducted systematic annotation refinement, achieving an inter-rater agreement F1-score of 0.9012. The enhanced dataset comprises 81 high-resolution fundus images with pixel-level annotations for seven DR lesion types and four anatomical structures. All images were cropped to the retinal region of interest and resized to 1024 × 1024 pixels, with annotations stored as unified grayscale masks containing 12 classes enabling efficient multi-task learning. Refined IDRiD enables training of comprehensive DR screening systems capable of detecting both non-proliferative and proliferative stages while reducing false positives through anatomical context awareness. Full article
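The unified grayscale masks described above can be assembled by painting per-class binary masks into a single label image in a fixed priority order. The sketch below uses hypothetical class IDs and painting order; the actual ID assignment in Refined IDRiD may differ.

```python
import numpy as np

# Hypothetical class IDs for a 12-class unified mask (0 = background);
# illustrative only -- the dataset's real ID scheme is not given here.
CLASS_IDS = {
    "retina": 1, "optic_disc": 2, "fovea": 3, "vessels": 4,
    "microaneurysms": 5, "hemorrhages": 6, "hard_exudates": 7,
    "cotton_wool_spots": 8, "neovascularization": 9,
    "vitreous_hemorrhage": 10, "irma": 11,
}

def unify_masks(binary_masks, order):
    """Merge per-class boolean masks into one grayscale label mask.

    binary_masks: dict mapping class name -> bool array of shape (H, W).
    order: painting order; later names overwrite earlier ones wherever
    masks overlap, so small lesions should be painted after large
    anatomical regions such as the retina.
    """
    first = next(iter(binary_masks.values()))
    unified = np.zeros(first.shape, dtype=np.uint8)
    for name in order:
        unified[binary_masks[name]] = CLASS_IDS[name]
    return unified
```

Storing all classes in one uint8 mask, as the dataset does, lets a single dense prediction head learn lesions and anatomy jointly for multi-task training.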

18 pages, 2295 KB  
Article
Automatic Retinal Nerve Fiber Segmentation and the Influence of Intersubject Variability in Ocular Parameters on the Mapping of Retinal Sites to the Pointwise Orientation Angles
by Diego Luján Villarreal and Adriana Leticia Vera-Tizatl
J. Imaging 2026, 12(1), 47; https://doi.org/10.3390/jimaging12010047 - 19 Jan 2026
Abstract
The current study investigates the influence of intersubject variability in ocular characteristics on the mapping of visual field (VF) sites to the pointwise directional angles in retinal nerve fiber layer (RNFL) bundle traces. In addition, the performance of the mapping of VF sites to the optic nerve head (ONH) was compared to ground-truth baselines. Fundus photographs of 546 eyes of 546 healthy subjects (with no history of ocular disease or diabetic retinopathy) were enhanced digitally and RNFL bundle traces were segmented based on the Personalized Estimated Segmentation (PES) algorithm’s core technique. A 24-2 VF grid pattern was overlaid onto the photographs in order to relate VF test points to intersecting RNFL bundles. The PES algorithm effectively traced RNFL bundles in fundus images, achieving an average accuracy of 97.6% relative to the Jansonius map through the application of 10th-order Bezier curves. The PES algorithm assembled an average of 4726 RNFL bundles per fundus image based on 4975 sampling points, obtaining a total of 2,580,505 RNFL bundles based on 2,716,321 sampling points. The influence of ocular parameters could be evaluated for 34 out of 52 VF locations. The ONH-fovea angle and the ONH position in relation to the fovea were the most prominent predictors for variations in the mapping of retinal locations to the pointwise directional angle (p < 0.001). The variation explained by the model (R2 value) ranges from 27.6% for visual field location 15 to 77.8% in location 22, with a mean of 56%. Significant individual variability was found in the mapping of VF sites to the ONH, with a mean standard deviation (95% limit) of 16.55° (median 17.68°) for 50 out of 52 VF locations, ranging from less than 1° to 44.05°. The mean entry angles differed from previous baselines by a range of less than 1° to 23.9° (average difference of 10.6° ± 5.53°), and an RMSE of 11.94. Full article
(This article belongs to the Section Medical Imaging)
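A degree-10 Bezier curve of the kind used here to trace RNFL bundles can be evaluated stably with de Casteljau's algorithm. The sketch below is a generic curve evaluator, not the PES algorithm itself.

```python
import numpy as np

def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] via de Casteljau.

    control_points: (n+1, 2) array of (x, y) points for a degree-n curve
    (n = 10 for the 10th-order curves used to trace RNFL bundles).
    Repeated linear interpolation between neighbours is numerically
    stabler than summing Bernstein polynomials directly.
    """
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]   # one de Casteljau step
    return pts[0]
```

Sampling `t` on a fine grid yields the bundle trace; the curve always starts at the first control point (t = 0) and ends at the last (t = 1).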

12 pages, 670 KB  
Article
Emerging Oculomic Signatures: Linking Thickness of Entire Retinal Layers with Plasma Biomarkers in Preclinical Alzheimer’s Disease
by Ibrahim Abboud, Emily Xu, Sophia Xu, Aya Alhasany, Ziyuan Wang, Xiaomeng Wu, Natalie Astraea, Fei Jiang, Zhihong Jewel Hu and Jane W. Chan
J. Clin. Med. 2026, 15(1), 275; https://doi.org/10.3390/jcm15010275 - 30 Dec 2025
Cited by 1
Abstract
Background/Objectives: Alzheimer’s disease (AD) is the leading cause of dementia, which is an inevitable consequence of aging. Early detection of AD, or detection during the pre-AD stage, is beneficial, as it enables timely intervention to reduce modifiable risk factors, which may help prevent or delay the progression to dementia. On the one hand, plasma biomarkers have demonstrated great promise in predicting cognitive decline. On the other hand, in recent years, ocular imaging features, particularly the thickness of retinal layers measured by spectral-domain optical coherence tomography (SD-OCT), are emerging as possible non-invasive, non-contact surrogate markers for early detection and monitoring of neurodegeneration. This pilot study aims to identify retinal layer thickness changes across the entire retina linked to plasma AD biomarkers in cognitively healthy (CH) elderly individuals at risk for AD. Methods: Eleven CH individuals (20 eyes total) were classified in the pre-AD stage by plasma β-amyloid (Aβ)42/40 ratio < 0.10 and underwent SD-OCT. A deep-learning-derived automated algorithm was used to segment retinal layers on OCT (with manual correction when needed). Multiple layer thicknesses throughout the entire retina (including the inner retina, the outer retina, and the choroid) were measured in the inner ring (1–3 mm) and outer ring (3–6 mm) of the Early Treatment Diabetic Retinopathy Study (ETDRS) grid. Relationships between retinal layers and plasma biomarkers were analyzed by ridge regression/bootstrapping. Results: Photoreceptor inner segment (PR-IS) thinning showed the largest effect size in association with neurofilament light chain. Additional findings revealed thinning or thickening of the other retinal layers in association with increasing levels of glial fibrillary acidic protein and phosphorylated tau at threonine 181 and 217 (p-tau181 and p-tau217).
Conclusions: This pilot study suggests that retinal layer-specific signatures exist, with PR-IS thinning as the largest effect, indicating neurodegeneration in pre-AD. Further research is needed to confirm the findings of this pilot study using larger longitudinal pre-AD cohorts and comparative analyses with healthy aging adults. Full article
(This article belongs to the Special Issue New Insights into Retinal Diseases)
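The ridge regression/bootstrapping analysis named in the Methods can be sketched as a closed-form ridge fit wrapped in a resampling loop. The penalty value, bootstrap count, and percentile-interval construction below are illustrative assumptions, not the study's actual settings.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge coefficients (no intercept, for simplicity):
    solves (X'X + lam*I) beta = X'y."""
    n, p = X.shape
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def bootstrap_ci(X, y, lam=1.0, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for each ridge coefficient:
    refit on resampled rows and take empirical quantiles."""
    rng = np.random.default_rng(seed)
    n = len(y)
    coefs = np.array([ridge_fit(X[idx], y[idx], lam)
                      for idx in (rng.integers(0, n, n) for _ in range(n_boot))])
    lo = np.quantile(coefs, alpha / 2, axis=0)
    hi = np.quantile(coefs, 1 - alpha / 2, axis=0)
    return lo, hi
```

With only 20 eyes, a resampling interval like this is a natural way to express uncertainty around the layer-by-biomarker coefficients, since parametric standard errors are unreliable at that sample size.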

24 pages, 2918 KB  
Article
Quantifying Explainability in OCT Segmentation of Macular Holes and Cysts: A SHAP-Based Coverage and Factor Contribution Analysis
by İlknur Tuncer Fırat, Murat Fırat and Taner Tuncer
Diagnostics 2026, 16(1), 97; https://doi.org/10.3390/diagnostics16010097 - 27 Dec 2025
Abstract
Background: Optical coherence tomography (OCT) can quantify the morphology and dimensions of a macular hole for diagnosis and treatment planning. Objective: The aim of this study was to perform automatic segmentation of macular holes (MHs) and cysts from OCT macular volumes using a deep learning-based model and to quantitatively evaluate decision reliability using the model’s focus regions and GradientSHAP-based explainability. Methods: In this study, we automatically segmented MHs and cysts in OCT images from the open-access OIMHS dataset. The dataset comprises 125 eyes from 119 patients and 3859 OCT B-scans. OCT B-scan slices were input to a UNet-48-based model with a 2.5D stacking strategy. Performance was evaluated using Dice and intersection-over-union (IoU), boundary accuracy was evaluated using the 95th-percentile Hausdorff distance (HD95), and calibration was evaluated using the expected calibration error (ECE). Explainability was quantified from GradientSHAP maps using lesion coverage and spatial focus metrics: Attribution Precision in Lesion (APILτ), which is the proportion of attributions (SHAP contributions) falling inside the lesion; Attribution Recall in Lesion (ARILτ), which is the proportion of the true lesion covered by the attributions; and leakage (Leakτ = 1 − APILτ), which is the proportion of attributions falling outside the lesion. Spatial focus was monitored using the center-of-mass distance (COM-dist), which is the Euclidean distance between the attribution center and the segmentation center. All metrics were calculated using the top τ% of the pixels with the highest SHAP values. SHAP features were clustered using PCA and k-means. Explanations were calculated using the clinical mask in ground truth (GT) mode and the model segmentation in prediction (Pred) mode. Results: The Dice/IoU values for holes and cysts were 0.94/0.91 and 0.87/0.81, respectively. 
Across lesion classes, HD95 = 6 px and ECE = 0.008, indicating good boundary accuracy and calibration. In GT mode (τ = 20), three regimes were observed: (i) retina-dominant: high ARIL (hole: 0.659; cyst: 0.654), high Leak (hole: 0.983; cyst: 0.988), and low COM-dist (hole: 7.84 px; cyst: 6.91 px), with the focus lying within the retina and largely confined to the retinal tissue; (ii) peri-lesional: highest ARIL (hole: 0.684; cyst: 0.719), relatively lower Leak (hole: 0.917; cyst: 0.940), and medium/high COM-dist (hole: 16.22 px; cyst: 10.17 px), with the focus located around the lesion; (iii) narrow-coverage: primarily seen for cysts in GT mode (ARIL: 0.494; Leak: 1.000; COM-dist: 52.02 px), with markedly reduced coverage. In Pred mode, the ARIL20 for holes increased in the retina-dominant cluster (0.758) and COM-dist decreased (6.24 px), indicating better agreement with the model segmentation. Conclusions: The model exhibited high accuracy and good calibration for MH and cyst segmentation in OCT images. Quantitative characterization of SHAP validated the model results. In the clinic, peri-lesion and narrow-coverage conditions are the key situations that require careful interpretation. Full article
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)
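The coverage metrics defined in this abstract (APILτ, ARILτ, Leakτ, and COM-dist over the top τ% of SHAP pixels) translate directly into array operations. A minimal sketch over a single 2D attribution map, assuming a binary lesion mask:

```python
import numpy as np

def coverage_metrics(shap_map, lesion_mask, tau=20.0):
    """APIL/ARIL/Leak/COM-dist for one attribution map, following the
    definitions in the abstract (top tau% highest-SHAP pixels)."""
    k = max(1, int(shap_map.size * tau / 100))
    thresh = np.partition(shap_map.ravel(), -k)[-k]
    attr = shap_map >= thresh                       # top-tau% attribution pixels
    apil = (attr & lesion_mask).sum() / attr.sum()  # precision inside lesion
    aril = (attr & lesion_mask).sum() / lesion_mask.sum()  # lesion recall
    leak = 1.0 - apil                               # attribution outside lesion
    # Center-of-mass distance between attribution and lesion (in pixels).
    com_attr = np.array(np.nonzero(attr)).mean(axis=1)
    com_les = np.array(np.nonzero(lesion_mask)).mean(axis=1)
    com_dist = float(np.linalg.norm(com_attr - com_les))
    return {"APIL": apil, "ARIL": aril, "Leak": leak, "COM_dist": com_dist}
```

Running this in "GT mode" means passing the clinical mask as `lesion_mask`; "Pred mode" substitutes the model's own segmentation, mirroring the two explanation modes the study compares.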

21 pages, 1667 KB  
Article
Advanced Retinal Lesion Segmentation via U-Net with Hybrid Focal–Dice Loss and Automated Ground Truth Generation
by Ahmad Sami Al-Shamayleh, Mohammad Qatawneh and Hany A. Elsalamony
Algorithms 2025, 18(12), 790; https://doi.org/10.3390/a18120790 - 14 Dec 2025
Abstract
Early and accurate detection of retinal lesions is imperative to intercept the course of sight-threatening ailments such as Diabetic Retinopathy (DR) and Age-related Macular Degeneration (AMD). Manual expert annotation of all such lesions is time-consuming and subject to interobserver variability, especially in large screening projects. This work introduces an end-to-end deep learning pipeline for automated retinal lesion segmentation, tailored to datasets without expert pixel-level reference annotations. At its core is a novel multi-stage automated ground-truth mask generation method, based on colour space analysis, entropy filtering, and morphological operations, that creates reliable pseudo-labels from raw retinal images. These pseudo-labels then serve as the training input for a U-Net, a convolutional encoder–decoder architecture for biomedical image segmentation. To address the inherent class imbalance often encountered in medical imaging, we employ and thoroughly evaluate a novel hybrid loss function combining Focal Loss and Dice Loss. The proposed pipeline was rigorously evaluated on the ‘Eye Image Dataset’ from Kaggle, achieving state-of-the-art segmentation performance with a Dice Similarity Coefficient of 0.932, Intersection over Union (IoU) of 0.865, Precision of 0.913, and Recall of 0.897. This work demonstrates the feasibility of achieving high-quality retinal lesion segmentation even in resource-constrained environments where extensive expert annotations are unavailable, thus paving the way for more accessible and scalable ophthalmological diagnostic tools. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
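A hybrid Focal + Dice loss of the kind this abstract describes is typically a weighted sum of the two terms: the focal term handles pixel-level imbalance, the Dice term handles region overlap. The sketch below uses NumPy for clarity; the weighting `w` and focusing parameter gamma are assumed values, not the paper's.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss: 1 - (2*intersection / total mass)."""
    inter = np.sum(pred * target)
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-7):
    """Binary focal loss averaged over pixels; gamma down-weights
    easy, well-classified pixels."""
    pred = np.clip(pred, eps, 1 - eps)
    pt = np.where(target == 1, pred, 1 - pred)  # prob of the true class
    return float(np.mean(-((1 - pt) ** gamma) * np.log(pt)))

def hybrid_loss(pred, target, w=0.5):
    """Weighted combination of focal and Dice losses; w is an
    assumed weighting, tuned per dataset in practice."""
    return w * focal_loss(pred, target) + (1 - w) * dice_loss(pred, target)
```

The two terms are complementary: Dice alone can be unstable for very small foregrounds, while focal alone optimizes per-pixel probability without directly rewarding overlap.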

11 pages, 7165 KB  
Article
Diagnosis and Monitoring of Retinal Vasculitis by Widefield Swept Source OCT Angiography
by Manish Harrigill, Matthew Nguyen and Jila Noori
Diagnostics 2025, 15(24), 3129; https://doi.org/10.3390/diagnostics15243129 - 9 Dec 2025
Abstract
Objectives: To evaluate the utility of widefield montage swept-source OCT angiography (SS-OCTA) in detecting and monitoring retinal vasculitis beyond the posterior pole. Methods: Prospective case series. Patients with clinically diagnosed retinal vasculitis were imaged with a same-day widefield SS-OCTA montage and ultra-widefield fluorescein angiography (FA) at 2 or more visits. Five overlapping 12 × 12 mm SS-OCTA scans were acquired to provide imaging of the posterior pole and each quadrant of the near periphery. A color retinal thickness map was superimposed on each 12 × 12 mm en-face flow scan with a customized segmentation to demonstrate perivascular retinal thickening. A composite “montage” image was then created by combining the scans to allow for analysis of the macula and near periphery. Findings were then correlated with the same-day FA, the current “gold standard” diagnostic tool for retinal vasculitis, to assess diagnostic efficacy. Results: SS-OCTA demonstrated perivascular thickening in both the posterior pole and peripheral retina in 30 eyes of 16 patients and was found to be an effective diagnostic tool, with good correlation to fluorescein angiography findings, for monitoring retinal vasculitis over time. Conclusions: The widefield SS-OCTA montage expands the visualization of retinal vasculitis into the near periphery, providing a noninvasive tool that may complement FA in the diagnosis and monitoring of retinal vasculitis. Full article
(This article belongs to the Special Issue Diagnosis and Management of Retinopathy—2nd Edition)

15 pages, 1093 KB  
Article
AI-Based Retinal Image Analysis for the Detection of Choroidal Neovascular Age-Related Macular Degeneration (AMD) and Its Association with Brain Health
by Chuying Shi, Jack Lee, Di Shi, Gechun Wang, Fei Yuan, Timothy Y. Y. Lai, Jingwen Liu, Yijie Lu, Dongcheng Liu, Bo Qin and Benny Chung-Ying Zee
Brain Sci. 2025, 15(11), 1249; https://doi.org/10.3390/brainsci15111249 - 20 Nov 2025
Abstract
Purpose: This study aims to develop a method for detecting referable (intermediate and advanced) age-related macular degeneration (AMD) and neovascular AMD, as well as providing an automatic segmentation of choroidal neovascularisation (CNV) on colour fundus retinal images. We also demonstrated that brain health risk scores estimated by AI-based Retinal Image Analysis (ARIA), such as white matter hyperintensities and depression, are significantly associated with AMD and neovascular AMD. Methods: A primary dataset of 1480 retinal images was collected from Zhongshan Hospital of Fudan University for training and 10-fold cross-validation. Additionally, two validation sub-datasets comprising 238 images (retinal images and wide-field images) were used. Using fluorescein angiography-based labels, we applied the InceptionResNetV2 deep network with the ARIA method to detect AMD, and a transfer ResNet50_Unet was used to segment CNV. The risks of cerebral white matter hyperintensities and depression were estimated using an AI-based Retinal Image Analysis approach. Results: In a 10-fold cross-validation, we achieved sensitivities of 97.4% and 98.1%, specificities of 96.8% and 96.1%, and accuracies of 97.0% and 96.4% in detecting referable AMD and neovascular AMD, respectively. In the external validation, we achieved accuracies of 92.9% and 93.7% and AUCs of 0.967 and 0.967, respectively. The performances on the two validation sub-datasets showed no statistically significant difference in detecting referable AMD (p = 0.704) and neovascular AMD (p = 0.213). In the segmentation of CNV, we achieved a global accuracy of 93.03%, a mean accuracy of 91.83%, a mean intersection over union (IoU) of 68.7%, a weighted IoU of 89.63%, and a mean boundary F1 (BF) of 67.77%.
Conclusions: The proposed method shows promising results as a highly efficient and cost-effective screening tool for detecting neovascular and referable AMD on both retinal and wide-field images, and providing critical insights into CNV. Its implementation could be particularly valuable in resource-limited settings, enabling timely referrals, enhancing patient care, and supporting decision-making across AMD classifications. In addition, we demonstrated that AMD and neovascular AMD are significantly associated with increased risks of WMH and depression. Full article

29 pages, 5808 KB  
Systematic Review
Artificial Intelligence Algorithms for Epiretinal Membrane Detection, Segmentation and Postoperative BCVA Prediction: A Systematic Review and Meta-Analysis
by Eirini Maliagkani, Petroula Mitri, Dimitra Mitsopoulou, Andreas Katsimpris, Ioannis D. Apostolopoulos, Athanasia Sandali, Konstantinos Tyrlis, Nikolaos Papandrianos and Ilias Georgalas
Appl. Sci. 2025, 15(22), 12280; https://doi.org/10.3390/app152212280 - 19 Nov 2025
Abstract
Epiretinal membrane (ERM) is a common retinal pathology associated with progressive visual impairment, requiring timely and accurate assessment. Recent advances in artificial intelligence (AI) have enabled automated approaches for ERM detection, segmentation, and postoperative best corrected visual acuity (BCVA) prediction, offering promising avenues to enhance clinical efficiency and diagnostic precision. We conducted a comprehensive literature search across MEDLINE (via PubMed), Scopus, CENTRAL, ClinicalTrials.gov, and Google Scholar from inception to 31 December 2023. A total of 42 studies were included in the systematic review, with 16 eligible for meta-analysis. Risk of bias and reporting quality were assessed using the QUADAS-2 and CLAIM tools. Meta-analysis of 16 studies (533,674 images) showed that deep learning (DL) models achieved high diagnostic accuracy (AUC = 0.97), with pooled sensitivity and specificity of 0.93 and 0.97, respectively. Optical coherence tomography (OCT)-based models outperformed fundus-based ones, and although performance remained high under external validation, the positive predictive value (PPV) declined, highlighting the importance of testing model generalizability. To the best of our knowledge, this is the first systematic review and meta-analysis to critically evaluate the role of AI in the detection, segmentation, and postoperative BCVA prediction of ERM across various ophthalmic imaging modalities. Our findings provide a clear overview of current evidence supporting the continued development and clinical adoption of AI tools for ERM diagnosis and management. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

10 pages, 1718 KB  
Proceeding Paper
Explainability of Diabetic Retinopathy Detection and Classification with Deep Learning Hybrid Architecture: AlterNet-K and ResNet-101
by Lavkush Gupta, Richa Gupta, Parul Agarwal and Suraiya Praveen
Chem. Proc. 2025, 18(1), 141; https://doi.org/10.3390/ecsoc-29-26888 - 13 Nov 2025
Abstract
Diabetic retinopathy (DR), an eye disease and a leading cause of irreversible blindness, remains challenging to detect and diagnose in time. Many invasive ophthalmic procedures exist in medical science for examining the eyes, and all of them require highly skilled medical practitioners with operational knowledge of diagnosing sensitive structures such as the retina and its tiny vessels. Owing to the dearth of retinal specialists, the sensitivity of ocular tissues, and the complexity of retinal therapy, invasive procedures are time-consuming, costly, and slow. Fundus images provide visual information about the rear part of the retina. The progression of lesions across the surface of the retinal tissue prevents electrical signals from reaching the visual cortex, causing the blurred vision or vision loss experienced by patients. Older methods of diagnosing DR lesions and symptoms from retinal fundus images take time, delaying treatment and reducing the chance of success; early diagnosis from fundus images can therefore save considerable effort and time for both doctors and patients. Artificial intelligence (AI) techniques can learn the tissue structures of the eye’s anatomy and provide an analysis of the disease from retinal fundus images. The process consists of first applying image preprocessing techniques, followed by segmentation and filtering, and then classifying the disease using an AI-based model. The proposed model is trained on a dataset of DR images to produce accurate predictions, after which an Explainable AI (XAI) algorithm is used to determine whether the model’s diagnosis is correctly classified. The rapid growth and improving results of machine learning and deep learning algorithms are strong reasons to adopt them to enhance the early diagnosis and treatment of patients. Full article

35 pages, 2963 KB  
Article
Explainable Artificial Intelligence Framework for Predicting Treatment Outcomes in Age-Related Macular Degeneration
by Mini Han Wang
Sensors 2025, 25(22), 6879; https://doi.org/10.3390/s25226879 - 11 Nov 2025
Cited by 2
Abstract
Age-related macular degeneration (AMD) is a leading cause of irreversible blindness, yet current tools for forecasting treatment outcomes remain limited by either the opacity of deep learning or the rigidity of rule-based systems. To address this gap, we propose a hybrid neuro-symbolic and large language model (LLM) framework that combines mechanistic disease knowledge with multimodal ophthalmic data for explainable AMD treatment prognosis. In a pilot cohort of ten surgically managed AMD patients (six men, four women; mean age 67.8 ± 6.3 years), we collected 30 structured clinical documents and 100 paired imaging series (optical coherence tomography, fundus fluorescein angiography, scanning laser ophthalmoscopy, and ocular/superficial B-scan ultrasonography). Texts were semantically annotated and mapped to standardized ontologies, while images underwent rigorous DICOM-based quality control, lesion segmentation, and quantitative biomarker extraction. A domain-specific ophthalmic knowledge graph encoded causal disease and treatment relationships, enabling neuro-symbolic reasoning to constrain and guide neural feature learning. An LLM fine-tuned on ophthalmology literature and electronic health records ingested structured biomarkers and longitudinal clinical narratives through multimodal clinical-profile prompts, producing natural-language risk explanations with explicit evidence citations. On an independent test set, the hybrid model achieved AUROC 0.94 ± 0.03, AUPRC 0.92 ± 0.04, and a Brier score of 0.07, significantly outperforming purely neural and classical Cox regression baselines (p ≤ 0.01). Explainability metrics showed that >85% of predictions were supported by high-confidence knowledge-graph rules, and >90% of generated narratives accurately cited key biomarkers. 
A detailed case study demonstrated real-time, individualized risk stratification—for example, predicting a >70% probability of requiring three or more anti-VEGF injections within 12 months and a ~45% risk of chronic macular edema if therapy lapsed—with predictions matching the observed clinical course. These results highlight the framework’s ability to integrate multimodal evidence, provide transparent causal reasoning, and support personalized treatment planning. While limited by single-center scope and short-term follow-up, this work establishes a scalable, privacy-aware, and regulator-ready template for explainable, next-generation decision support in AMD management, with potential for expansion to larger, device-diverse cohorts and other complex retinal diseases. Full article
(This article belongs to the Special Issue Sensing Functional Imaging Biomarkers and Artificial Intelligence)

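The abstract above reports AUROC 0.94 and a Brier score of 0.07 for the hybrid model. As an illustrative sketch (not the authors' evaluation code, and with invented toy data), both metrics can be computed from predicted risk probabilities as follows:

```python
# Illustrative sketch: Brier score and AUROC for binary risk predictions.
# Toy data only -- not the study's patients or model outputs.

def brier_score(y_true, y_prob):
    """Mean squared error between predicted probability and binary outcome."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

def auroc(y_true, y_prob):
    """AUROC via the rank-sum (Mann-Whitney U) formulation: the fraction of
    positive/negative pairs the model ranks correctly (ties count half)."""
    pos = [p for y, p in zip(y_true, y_prob) if y == 1]
    neg = [p for y, p in zip(y_true, y_prob) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 0, 1, 1, 0, 0]
y_prob = [0.9, 0.4, 0.7, 0.3, 0.4, 0.1]

print(round(brier_score(y_true, y_prob), 4))  # -> 0.1533 (lower is better)
print(round(auroc(y_true, y_prob), 4))        # -> 0.7778
```

A lower Brier score indicates better-calibrated probabilities, which is why the paper reports it alongside the discrimination metrics AUROC and AUPRC.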
16 pages, 923 KB  
Review
Beyond the Surface: Revealing the Concealed Effects of Hyperglycemia on Ocular Surface Homeostasis and Dry Eye Disease
by Marco Zeppieri, Matteo Capobianco, Federico Visalli, Mutali Musa, Alessandro Avitabile, Rosa Giglio, Daniele Tognetto, Caterina Gagliano, Fabiana D’Esposito and Francesco Cappellani
Medicina 2025, 61(11), 1992; https://doi.org/10.3390/medicina61111992 - 6 Nov 2025
Cited by 3 | Viewed by 1017
Abstract
Background and Objectives: Dry eye disease (DED) is a multifactorial ocular surface disease that markedly diminishes quality of life. Although diabetes mellitus is well known for its retinal consequences, anterior segment manifestations, including dry eye disease, are often overlooked. Chronic hyperglycemia causes metabolic, neurovascular, and immunological changes that undermine tear film stability, corneal innervation, and ocular surface integrity. This review seeks to consolidate existing knowledge regarding the concealed impacts of diabetes on ocular surface homeostasis, highlighting processes, diagnostic difficulties, and treatment prospects. Materials and Methods: A narrative review of the literature was performed by searching PubMed for publications from January 2020 to July 2025 using the terms “diabetic dry eye,” “hyperglycemia AND ocular surface,” “tear proteomics AND diabetes,” “corneal nerves AND diabetes,” and “neurotrophic keratitis.” Eligible studies were experimental research, clinical trials, and translational investigations concerning tear film function, corneal neuropathy, inflammatory indicators, or lacrimal gland dysfunction in diabetes. The exclusion criteria were non-English language, lack of primary data, and inadequate methodological description. Results: Hyperglycemia compromises lacrimal gland functionality, modifies lipid secretion from Meibomian glands, and diminishes corneal nerve density, resulting in neurotrophic deficits. Inflammatory cytokines and oxidative stress compromise epithelial integrity, while proteomic alterations in tears serve as sensitive indicators of disease. Diagnosis is impeded by corneal hypoesthesia, resulting in a disconnect between symptoms and clinical findings. Progress in imaging, proteomics, and artificial intelligence may facilitate earlier detection and improved risk assessment. 
Novel therapeutics, such as neurotrophic drugs, antioxidants, and customized anti-inflammatory approaches, show promise but remain under clinical evaluation. Conclusions: Diabetes-related dry eye disease is a multifaceted and underappreciated condition influenced by systemic metabolic dysfunction. The ocular surface may act as an initial indicator of systemic disease load. This narrative synthesis emphasizes the necessity for customized diagnostic instruments, individualized treatment approaches, and collaborative management. Reconceptualizing diabetic dry eye disease within the context of systemic metabolic care presents prospects for precision medicine strategies that enhance both ocular and systemic outcomes. Full article
(This article belongs to the Special Issue Ophthalmology: New Diagnostic and Treatment Approaches (2nd Edition))

29 pages, 11403 KB  
Article
In-Vivo Characterization of Healthy Retinal Pigment Epithelium and Photoreceptor Cells from AO-(T)FI Imaging
by Sohrab Ferdowsi, Leila Sara Eppenberger, Safa Mohanna, Oliver Pfäffli, Christoph Amstutz, Lucas M. Bachmann, Michael A. Thiel and Martin K. Schmid
Vision 2025, 9(4), 91; https://doi.org/10.3390/vision9040091 - 1 Nov 2025
Viewed by 1068
Abstract
We provide an automated characterization of human retinal cells, namely RPE cells based on non-invasive AO-TFI retinal imaging and PR cells based on non-invasive AO-FI retinal imaging, in a large-scale study involving 171 confirmed healthy eyes from 104 participants aged 23 to 80 years. Comprehensive standard checkups based on SD-OCT and fundus imaging modalities were carried out by ophthalmologists from the Luzerner Kantonsspital (LUKS) to confirm the absence of retinal pathologies. AO imaging was performed using the Cellularis® device, and each eye was imaged at various retinal eccentricities. The images were automatically segmented using dedicated software; RPE and PR cells were identified, and morphometric characterizations, such as cell density and area, were computed. The results were stratified based on various criteria, such as age, retinal eccentricity, and visual acuity. The automatic segmentation was validated independently on a held-out set by five trained medical students not involved in this study. We plotted cell density variations as a function of eccentricity from the fovea along both nasal and temporal directions. For RPE cells, no consistent trend in density was observed between 0° and 9° eccentricity, contrasting with established histological literature demonstrating foveal density peaks. In contrast, PR cell density showed a clear decrease from 2.5° to 9°. RPE cell density declined linearly with age, whereas no age-related pattern was detected for PR cell density. On average, RPE cell density was found to be ≈6313 cells/mm2 (±σ=757), while the average PR cell density was calculated as ≈10,207 cells/mm2 (±σ=1273). Full article

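The density figures above are reported in cells/mm². As a minimal sketch of how a segmented cell count converts to that unit (the field-of-view size and count below are illustrative assumptions, not the study's actual software or parameters):

```python
# Illustrative sketch: cell count -> density in cells/mm^2 for a
# rectangular imaging field of view given in micrometres.
# The patch size and count below are invented example values.

def cell_density(cell_count, fov_width_um, fov_height_um):
    """Cells per square millimetre for a rectangular field of view."""
    area_mm2 = (fov_width_um / 1000.0) * (fov_height_um / 1000.0)
    return cell_count / area_mm2

# e.g. 1010 segmented cells in a hypothetical 400 x 400 um patch
print(round(cell_density(1010, 400, 400), 1))  # -> 6312.5 cells/mm^2
```

Stratifying such per-patch densities by eccentricity and age yields the trend plots the abstract describes.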
32 pages, 57072 KB  
Article
Deep Learning Network with Illuminant Augmentation for Diabetic Retinopathy Segmentation Using Comprehensive Anatomical Context Integration
by Sakon Chankhachon, Supaporn Kansomkeat, Patama Bhurayanontachai and Sathit Intajag
Diagnostics 2025, 15(21), 2762; https://doi.org/10.3390/diagnostics15212762 - 31 Oct 2025
Cited by 1 | Viewed by 1343
Abstract
Background/Objectives: Diabetic retinopathy (DR) segmentation faces critical challenges from domain shift and false positives caused by heterogeneous retinal backgrounds. Recent transformer-based studies have shown that existing approaches do not comprehensively integrate anatomical context; in particular, training datasets rarely combine blood vessels with DR lesions. Methods: These limitations were addressed by deploying a DeepLabV3+ framework enhanced with more comprehensive anatomical context, rather than a more complex architecture. The approach produced the first training dataset that systematically integrates DR lesions with complete retinal anatomical structures (optic disc, fovea, blood vessels, retinal boundaries) as contextual background classes. An innovative illumination-based data augmentation scheme simulated diverse camera characteristics using color constancy principles. Two-stage training (cross-entropy followed by Tversky loss) managed class imbalance effectively. Results: An extensive evaluation of the IDRiD, DDR, and TJDR datasets demonstrated significant improvements. The model achieved competitive performance (AUC-PR: 0.7715, IoU: 0.6651, F1: 0.7930) compared with state-of-the-art methods, including transformer approaches, while showing promising generalization on some unseen datasets, though performance varied across domains. False-positive rates were reduced through anatomical context awareness. Conclusions: The framework demonstrates that comprehensive anatomical context integration is more critical than architectural complexity for DR segmentation. By combining systematic anatomical annotation with effective data augmentation, conventional network performance can be improved while maintaining computational efficiency and clinical interpretability, establishing a new paradigm for medical image segmentation. Full article

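The Tversky loss named in the abstract above generalizes the Dice loss with asymmetric penalties for false positives and false negatives, which is why it is used for imbalanced lesion classes. A minimal sketch (flat binary masks and example alpha/beta weights chosen for illustration, not the paper's implementation):

```python
# Illustrative sketch of the Tversky index/loss on flat binary masks.
# With alpha = beta = 0.5 the index reduces to the Dice coefficient;
# alpha < beta penalizes false negatives more, favouring recall on
# small, under-represented lesion classes. Example weights only.

def tversky_index(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    tp = sum(p * t for p, t in zip(pred, target))        # true positives
    fp = sum(p * (1 - t) for p, t in zip(pred, target))  # false positives
    fn = sum((1 - p) * t for p, t in zip(pred, target))  # false negatives
    return (tp + eps) / (tp + alpha * fp + beta * fn + eps)

def tversky_loss(pred, target, alpha=0.3, beta=0.7):
    return 1.0 - tversky_index(pred, target, alpha, beta)

pred   = [1, 1, 0, 0, 1]
target = [1, 0, 1, 0, 1]
print(round(tversky_loss(pred, target), 4))  # -> 0.3333
```

In the two-stage scheme the abstract describes, cross-entropy training would establish a stable baseline before a Tversky-style loss sharpens the minority lesion classes.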