Search Results (431)

Search Parameters:
Keywords = retinal image model

16 pages, 3554 KB  
Article
Early Detection of Cystoid Macular Edema in Retinitis Pigmentosa Using Longitudinal Deep Learning Analysis of OCT Scans
by Farhang Hosseini, Farkhondeh Asadi, Reza Rabiei, Arash Roshanpoor, Hamideh Sabbaghi, Mehrnoosh Eslami and Rayan Ebnali Harari
Diagnostics 2026, 16(1), 46; https://doi.org/10.3390/diagnostics16010046 - 23 Dec 2025
Abstract
Background/Objectives: Retinitis pigmentosa (RP) is a progressive hereditary retinal disorder that frequently leads to vision loss, with cystoid macular edema (CME) occurring in approximately 10–50% of affected patients. Early detection of CME is crucial for timely intervention, yet most existing studies lack longitudinal data capable of capturing subtle disease progression. Methods: We propose a deep learning–based framework utilizing longitudinal optical coherence tomography (OCT) imaging for early detection of CME in patients with RP. A total of 2280 longitudinal OCT images were preprocessed using denoising and data augmentation techniques. Multiple pre-trained deep learning architectures were evaluated using a patient-wise data split to ensure robust performance assessment. Results: Among the evaluated models, ResNet-34 achieved the best performance, with an accuracy of 98.68%, specificity of 99.45%, and an F1-score of 98.36%. Conclusions: These results demonstrate the potential of longitudinal OCT–based artificial intelligence as a reliable, non-invasive screening tool for early CME detection in RP. To the best of our knowledge, this study is the first to leverage longitudinal OCT data for AI-driven CME prediction in this patient population. Full article
(This article belongs to the Special Issue 3rd Edition: AI/ML-Based Medical Image Processing and Analysis)
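The patient-wise split mentioned in this abstract is the key safeguard against leakage between longitudinal scans of the same patient. As a minimal sketch only (hypothetical file names and labels, scikit-learn's GroupShuffleSplit; not the authors' code):

# Sketch: patient-wise train/test split for longitudinal OCT images.
# Assumes a hypothetical list of (image_path, label, patient_id) records.
from sklearn.model_selection import GroupShuffleSplit

records = [
    ("oct_0001_visit1.png", 0, "patient_01"),
    ("oct_0001_visit2.png", 1, "patient_01"),
    ("oct_0002_visit1.png", 0, "patient_02"),
    # ... one row per scan
]

paths = [r[0] for r in records]
labels = [r[1] for r in records]
groups = [r[2] for r in records]  # grouping key = patient, never split across sets

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(splitter.split(paths, labels, groups=groups))

train_set = [records[i] for i in train_idx]
test_set = [records[i] for i in test_idx]
# All visits of a given patient now fall entirely within one of the two sets.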

18 pages, 52336 KB  
Article
Self-Supervised Representation Learning for Data-Efficient DRIL Classification in OCT Images
by Pavithra Kodiyalbail Chakrapani, Akshat Tulsani, Preetham Kumar, Geetha Maiya, Sulatha Venkataraya Bhandary and Steven Fernandes
Diagnostics 2025, 15(24), 3221; https://doi.org/10.3390/diagnostics15243221 - 16 Dec 2025
Viewed by 167
Abstract
Background/Objectives: Disorganization of the retinal inner layers (DRIL) is an important biomarker of diabetic macular edema (DME) that is strongly associated with visual acuity (VA). However, the scarcity of expert-annotated training data and the significant domain gap from natural images severely limit the adaptability of models pretrained on real-world images, posing two primary challenges for the design of efficient computerized DRIL detection methods. Methods: To address these challenges, we propose a novel self-supervised learning framework that uses a large unlabeled optical coherence tomography (OCT) dataset to learn clinically relevant representations before fine-tuning on a small proprietary dataset of annotated OCT images. Specifically, we introduce a spatial Bootstrap Your Own Latent (BYOL) approach with a hybrid spatially aware loss function designed to capture anatomical representations from an unlabeled OCT dataset of 108,309 images covering various retinal abnormalities, and then adapt the learned representations for DRIL classification using 823 annotated OCT images. Results: With an accuracy of 99.39%, the proposed two-stage approach substantially exceeds direct transfer learning from models pretrained on ImageNet. Conclusions: The findings demonstrate the efficacy of domain-specific self-supervised learning for rare retinal pathology detection tasks with limited annotated data. Full article
(This article belongs to the Special Issue Artificial Intelligence in Eye Disease, 4th Edition)
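BYOL-style pretraining, as summarized above, trains an online network to predict a slowly updated target network's projection of a second augmented view, without negative pairs. A minimal sketch of the standard symmetric negative-cosine-similarity objective (an assumed baseline; the authors' hybrid spatially aware loss is not reproduced here):

# Sketch: standard BYOL objective between online predictions and target projections.
import torch
import torch.nn.functional as F

def byol_loss(online_pred, target_proj):
    # Negative cosine similarity between L2-normalized vectors.
    p = F.normalize(online_pred, dim=-1)
    z = F.normalize(target_proj.detach(), dim=-1)  # target branch receives no gradient
    return 2.0 - 2.0 * (p * z).sum(dim=-1).mean()

# Symmetric application over two augmented views v1, v2 of the same OCT B-scan:
# loss = byol_loss(pred_v1, proj_v2) + byol_loss(pred_v2, proj_v1)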

20 pages, 2825 KB  
Article
Dementia Detection via Retinal Hyperspectral Imaging and Deep Learning: Clinical Dataset Analysis and Comparative Evaluation of Multiple Architectures
by Wen-Shou Lin, Chia-Ling Chen, Shih-Wun Liang and Hsiang-Chen Wang
Bioengineering 2025, 12(12), 1362; https://doi.org/10.3390/bioengineering12121362 - 14 Dec 2025
Viewed by 367
Abstract
This study aimed to detect dementia using intelligent hyperspectral imaging (HSI), which enables the extraction of detailed spectral information from retinal tissues. A total of 3256 ophthalmoscopic images collected from 137 participants were analyzed. The spectral signatures of selected retinal regions were reconstructed using hyperspectral conversion techniques to examine wavelength-dependent variations associated with dementia. To assess the diagnostic capability of deep learning models, four convolutional neural network (CNN) architectures—ResNet50, Inception_v3, GoogLeNet, and EfficientNet—were implemented and benchmarked on two datasets: original ophthalmoscopic images (ORIs) and hyperspectral images (HSIs). The HSI-based models consistently demonstrated superior accuracy, achieving 84% with ResNet50, 83% with GoogLeNet, and 82% with EfficientNet, compared with 80–81% obtained from ORIs. Inception_v3 maintained an accuracy of 80% across both datasets. These results confirm that integrating spectral information enhances model sensitivity to dementia-related retinal changes, highlighting the potential of HSI for early and noninvasive detection. Full article
(This article belongs to the Special Issue Retinal Biomarkers: Seeing Diseases in the Eye)
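Benchmarking several ImageNet-pretrained CNN backbones on the same two datasets, as described above, usually reduces to swapping the torchvision constructor and the final classification layer. A hedged sketch under that assumption (model names taken from the abstract; data loading and training loop omitted):

# Sketch: swap torchvision backbones for a 2-class (dementia vs. control) benchmark.
import torch.nn as nn
from torchvision import models

def build_backbone(name, num_classes=2):
    if name == "resnet50":
        m = models.resnet50(weights="IMAGENET1K_V1")
        m.fc = nn.Linear(m.fc.in_features, num_classes)
    elif name == "inception_v3":
        m = models.inception_v3(weights="IMAGENET1K_V1", aux_logits=True)
        m.fc = nn.Linear(m.fc.in_features, num_classes)
    elif name == "googlenet":
        m = models.googlenet(weights="IMAGENET1K_V1")
        m.fc = nn.Linear(m.fc.in_features, num_classes)
    elif name == "efficientnet":
        m = models.efficientnet_b0(weights="IMAGENET1K_V1")
        m.classifier[1] = nn.Linear(m.classifier[1].in_features, num_classes)
    else:
        raise ValueError(name)
    return m

# The same training/evaluation loop would then be run once per backbone,
# on the original ophthalmoscopic images and again on the hyperspectral images.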

15 pages, 1591 KB  
Article
The Protective Effects of Silk Sericin Against Retinal Oxidative Stress: In Vitro and In Vivo Assays with a Fluorometric Nitroxide Molecular Probe
by Cassie L. Rayner, Shuko Suzuki, Traian V. Chirila and Nigel L. Barnett
Molecules 2025, 30(24), 4707; https://doi.org/10.3390/molecules30244707 - 9 Dec 2025
Viewed by 272
Abstract
Sericin is a major polypeptidic constituent of the silk in the cocoons produced by the Bombyx mori silkworm. Certain fractions isolated from sericin exhibited antioxidant properties in a variety of reported experimental settings. In a previous study, we found that only the non-protein fraction, extracted from crude sericin, displayed antioxidative activity in cultures of murine retinal photoreceptor cells (661W), a cell line that is highly sensitive to oxidative stress associated with diseases of the retina. In the same assay, the protein fraction (purified sericin) did not show any such activity. To verify these findings, two additional assays were employed in the present study: an in vitro assay based on the dose-dependent mitigating effects exerted by each sericin fraction on the activity of antimycin A in cultures of 661W cells, and an in vivo assay based on an animal (rat) model of retinal ischemia/reperfusion injury. In both assays, nitroxide was appended as a fluorometric molecular probe, and fluorescence intensity was monitored by either flow cytometry (in vitro) or the Micron IV retinal imaging system (in vivo). The in vitro assay unequivocally indicated antioxidative capacity for the non-protein fraction and a lack of it for the purified sericin. The in vivo assay indicated that each fraction was able to act as an antioxidant. We hypothesized that the ability of purified sericin to display antioxidative activity in vivo, but not in vitro, was the result of the metabolic degradation of sericin, a process that delivered serine, an amino acid with known antioxidant properties. However, this hypothesis needs experimental confirmation. Full article

17 pages, 1001 KB  
Systematic Review
The Role of Artificial Intelligence in Imaging-Based Diagnosis of Retinal Dystrophy and Evaluation of Gene Therapy Efficacy
by Weronika Chuchmacz, Barbara Bobowska, Alicja Forma, Eliasz Dzierżyński, Damian Puźniak, Barbara Teresińska, Jacek Baj and Joanna Dolar-Szczasny
J. Pers. Med. 2025, 15(12), 605; https://doi.org/10.3390/jpm15120605 - 5 Dec 2025
Viewed by 260
Abstract
Introduction: Inherited retinal dystrophies (IRDs) are genetically determined conditions leading to progressive vision loss. Developments in gene therapy are creating new treatment options for IRDs but require precise imaging-based diagnosis and monitoring. According to recent studies, artificial intelligence, especially deep neural networks, could become an important tool for analyzing imaging data. Material and Methods: A systematic literature review was conducted in accordance with PRISMA guidelines, using the PubMed, Scopus, and Web of Science databases to identify publications from 2015 to 2025 on the application of artificial intelligence in diagnosing inherited retinal dystrophies and monitoring the effects of gene therapy. The included articles passed a two-stage selection process and met the methodological quality criteria. Results: Across the included studies, artificial intelligence proved broadly effective in the diagnosis and therapy of IRDs. The most common method was deep learning, particularly convolutional neural networks (CNNs). However, room for improvement remains, given the various limitations reported in the studies. Conclusions: The review points to the growing potential of AI models in optimizing the diagnostic and therapeutic pathway in IRDs, while noting current limitations such as low data availability, the need for clinical validation, and the interpretability of the models. AI may play a key role in personalized ophthalmic medicine in the near future, supporting both clinical decisions and interventional study design. Full article

21 pages, 6152 KB  
Article
Structural Retinal Analysis in Toxoplasmic Retinochoroiditis: OCT Follow-Up with Three-Dimensional Reconstruction
by Ioana Damian, Adrian Pop, Adrian Groza, Elisabetta Miserocchi and Simona Delia Nicoară
Diagnostics 2025, 15(23), 3091; https://doi.org/10.3390/diagnostics15233091 - 4 Dec 2025
Viewed by 307
Abstract
Background: Ocular toxoplasmosis remains the leading cause of posterior uveitis worldwide. Optical coherence tomography (OCT) provides valuable insights into the structural alterations associated with this condition. The present study aimed to characterize the vitreous, retinal, and choroidal morphological changes observed during both the active and scarred stages of ocular toxoplasmosis using OCT imaging. A secondary objective was to evaluate the added value of three-dimensional reconstruction in the assessment of retinal lesions. Methods: A retrospective study was conducted on 12 eyes belonging to 12 patients diagnosed with toxoplasmic retinochoroiditis (TRC). Optical coherence tomography (OCT) scans centered on the active lesions were qualitatively analyzed at baseline and follow-up. Additionally, a ResUNet model was trained to generate a full volumetric reconstruction of the retinochoroidal lesions in selected cases. Results: Twelve eyes were analyzed at a mean of 16.2 days from symptom onset. The mean follow-up duration was 144 days (range: 12–490 days). OCT imaging revealed characteristic alterations in the retina, choroid, and vitreous body, which were documented both at baseline and at follow-up. Representative cases were selected for three-dimensional reconstruction to illustrate the extent of retinal architectural involvement. Conclusions: OCT analysis refines our understanding of the structural damage associated with ocular toxoplasmosis, while three-dimensional reconstruction enhances our ability to visualize and interpret these alterations on a larger scale. Full article
(This article belongs to the Special Issue Optical Coherence Tomography in Diagnosis of Ophthalmology Disease)
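The three-dimensional reconstructions described above ultimately rest on stacking per-B-scan segmentation masks into a volume. A minimal sketch of that stacking step only, assuming the per-slice masks have already been produced by the segmentation model (file names and spacing values are hypothetical, not the authors' pipeline):

# Sketch: assemble per-B-scan lesion masks into a 3D volume for rendering.
import numpy as np

def stack_masks(mask_slices):
    # mask_slices: list of 2D binary arrays, one per OCT B-scan, in acquisition order
    volume = np.stack(mask_slices, axis=0).astype(np.uint8)
    return volume  # shape: (n_bscans, height, width)

# Voxel spacing is strongly anisotropic (B-scan separation is much larger than the
# in-plane pixel size); a surface mesh could then be extracted, for example with
# skimage.measure.marching_cubes(volume, level=0.5, spacing=(0.25, 0.004, 0.011)).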

12 pages, 795 KB  
Article
Intraocular Cytokine Level Prediction from Fundus Images and Optical Coherence Tomography
by Hidenori Takahashi, Taiki Tsuge, Yusuke Kondo, Yasuo Yanagi, Satoru Inoda, Shohei Morikawa, Yuki Senoo, Toshikatsu Kaburaki, Tetsuro Oshika and Toshihiko Yamasaki
Sensors 2025, 25(23), 7382; https://doi.org/10.3390/s25237382 - 4 Dec 2025
Viewed by 314
Abstract
The relationship between retinal images and intraocular cytokine profiles remains largely unexplored, and no prior work has systematically compared fundus- and OCT-based deep learning models for cytokine prediction. We aimed to predict intraocular cytokine concentrations using color fundus photographs (CFP) and retinal optical coherence tomography (OCT) with deep learning. Our pipeline consisted of image preprocessing, convolutional neural network–based feature extraction, and regression modeling for each cytokine. Deep learning was implemented using AutoGluon, which automatically explored multiple architectures and converged on ResNet18, reflecting the small dataset size. Four approaches were tested: (1) CFP alone, (2) CFP plus demographic/clinical features, (3) OCT alone, and (4) OCT plus these features. Prediction performance was defined as the mean coefficient of determination (R2) across 34 cytokines, and differences were evaluated using paired two-tailed t-tests. We used data from 139 patients (152 eyes) and 176 aqueous humor samples. The cohort consisted of 85 males (61%) with a mean age of 73 (SD 9.8). Diseases included 64 exudative age-related macular degeneration, 29 brolucizumab-associated endophthalmitis, 19 cataract surgeries, 15 retinal vein occlusion, and 8 diabetic macular edema. Prediction performance was generally poor, with mean R2 values below zero across all approaches. The CFP-only model (–0.19) outperformed CFP plus demographics (–24.1; p = 0.0373), and the OCT-only model (–0.18) outperformed OCT plus demographics (–14.7; p = 0.0080). No significant difference was observed between CFP and OCT (p = 0.9281). Notably, VEGF showed low predictability (31st with CFP, 12th with OCT). Full article
(This article belongs to the Section Optical Sensors)
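The headline numbers above are mean R2 over 34 cytokines, compared between approaches with paired two-tailed t-tests. A short sketch of that evaluation step, assuming hypothetical per-cytokine predictions for two approaches (not the authors' code):

# Sketch: compare two approaches by mean per-cytokine R^2 with a paired t-test.
import numpy as np
from scipy.stats import ttest_rel
from sklearn.metrics import r2_score

def per_cytokine_r2(y_true, y_pred):
    # y_true, y_pred: arrays of shape (n_samples, n_cytokines)
    return np.array([r2_score(y_true[:, j], y_pred[:, j])
                     for j in range(y_true.shape[1])])

def compare(r2_a, r2_b):
    # r2_a, r2_b: per-cytokine R^2 for, e.g., the CFP-only and CFP+demographics models
    t, p = ttest_rel(r2_a, r2_b)  # paired, two-tailed by default
    return r2_a.mean(), r2_b.mean(), p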

10 pages, 782 KB  
Article
Development of an Algorithm to Assist in the Diagnosis of Combined Retinal Vein Occlusion and Glaucoma
by Hiroshi Kasai, Kazuyoshi Kitamura, Yuka Hasebe, Junya Mizutani, Kengo Utsunomiya, Shiori Sato, Kohei Murao, Yoichiro Ninomiya, Kensaku Mori, Kazuhide Kawase, Masaki Tanito, Toru Nakazawa, Atsuya Miki, Kazuhiko Mori, Takeshi Yoshitomi and Kenji Kashiwagi
J. Clin. Med. 2025, 14(23), 8547; https://doi.org/10.3390/jcm14238547 - 2 Dec 2025
Viewed by 296
Abstract
Objectives: To develop an algorithm to assist in the diagnosis of glaucoma with concomitant retinal vein occlusion (RVO) and to compare its diagnostic accuracy with that of ophthalmology residents and specialists. Methods: Fundus photographs of eyes with RVO and those with both RVO and glaucoma were obtained from patients who visited the University of Yamanashi Hospital. All images were preprocessed through normalization and resized to 512 × 512 pixels to ensure uniformity before model training. The diagnostic accuracy of two algorithms—the Comprehensive Fundus Disease Diagnostic Artificial Intelligence Algorithm (CD-AI) and the Glaucoma Concomitant RVO Artificial Intelligence Algorithm (RVO-GLA AI)—was evaluated. CD-AI is a clinical decision support algorithm originally developed to detect eleven common fundus diseases, including glaucoma and RVO. RVO-GLA AI is a fine-tuned version of CD-AI that is specifically adapted to detect glaucoma with or without RVO. Fine-tuning was performed using 1234 images of glaucoma, 1233 images of nonglaucomatous conditions, including RVO, and 15 images of cases with both glaucoma and RVO. The number of comorbid cases was determined empirically by gradually adding glaucomatous eyes with concomitant RVO to the training set, and 15 images provided the best balance between sensitivity and specificity. Because the available number of such cases was limited, this small sample size may have influenced the stability of the performance estimates. For the final evaluation, both algorithms and all ophthalmologists assessed the same independent test dataset comprising 66 fundus images (16 eyes with glaucoma and RVO and 50 eyes with RVO alone). The diagnostic performance of both algorithms was compared with that of three first-year ophthalmology residents and three board-certified ophthalmologists. Results: CD-AI demonstrated high diagnostic accuracy (92.5%) in eyes with glaucoma alone. However, its sensitivity and specificity decreased to 0.375 and 1.0, respectively, in patients with concomitant RVO. In contrast, the RVO-GLA AI achieved an area under the curve (AUC) of 0.875, with a sensitivity of 0.87 and a specificity of 0.71. Across all the ophthalmologists, the average sensitivity was 0.63, and the specificity was 0.87. Specialists achieved a sensitivity of 0.80 and a specificity of 0.89, while residents had a sensitivity of 0.46 and a specificity of 0.85. Conclusions: An AI-based clinical decision support system specifically designed for glaucoma detection significantly improved diagnostic performance in eyes with combined RVO and glaucoma, achieving an accuracy comparable to that of ophthalmologists, even with a limited number of training cases. Full article
(This article belongs to the Special Issue Advances in the Diagnosis and Treatment of Glaucoma)
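Sensitivity, specificity, and AUC of the kind reported for RVO-GLA AI follow directly from per-image predicted probabilities and ground-truth labels. A hedged sketch of that computation (illustrative only, not the authors' evaluation code):

# Sketch: sensitivity, specificity and AUC for a binary glaucoma-vs-no-glaucoma decision.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def evaluate(y_true, y_prob, threshold=0.5):
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    auc = roc_auc_score(y_true, y_prob)  # threshold-free discrimination
    return sensitivity, specificity, auc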

31 pages, 5755 KB  
Article
Explainable AI for Diabetic Retinopathy: Utilizing YOLO Model on a Novel Dataset
by A. M. Mutawa, Khalid Al Sabti, Seemant Raizada and Sai Sruthi
AI 2025, 6(12), 301; https://doi.org/10.3390/ai6120301 - 24 Nov 2025
Viewed by 874
Abstract
Background: Diagnostic errors can be substantially diminished, and clinical decision-making can be significantly enhanced through automated image classification. Methods: We implemented a YOLO (You Only Look Once)-based system to classify diabetic retinopathy (DR) utilizing a unique retinal dataset. Although YOLO provides exceptional accuracy and speed in object recognition and categorization, its interpretability is constrained. Both binary and multi-class classification methods (graded severity levels) were employed. Contrast-Limited Adaptive Histogram Equalization (CLAHE) was applied to improve image brightness and the readability of fine detail. To improve interpretability, we utilized Eigen Class Activation Mapping (Eigen-CAM) to display areas affecting classification predictions. Results: Our model exhibited robust and consistent performance on the datasets for binary and 5-class tasks. The YOLO 11l model obtained a binary classification accuracy of 97.02% and an Area Under Curve (AUC) score of 0.98. The YOLO 8x model showed superior performance in 5-class classification, with an accuracy of 80.12% and an AUC score of 0.88. A simple interface was created using Gradio to enable real-time interaction. Conclusions: The suggested technique integrates robust prediction accuracy with visual interpretability, rendering it a potential instrument for DR screening in clinical environments. Full article
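CLAHE, mentioned above as the contrast-enhancement step before YOLO training, is available directly in OpenCV. A minimal preprocessing sketch (parameter values are illustrative assumptions, not the authors' settings):

# Sketch: CLAHE contrast enhancement of a fundus image before classification.
import cv2

def apply_clahe(bgr_image, clip_limit=2.0, tile_grid_size=(8, 8)):
    # Equalize only the lightness channel so colours are not distorted.
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid_size)
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

# image = cv2.imread("fundus.jpg"); enhanced = apply_clahe(image)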

24 pages, 490 KB  
Article
Learning Dynamics Analysis: Assessing Generalization of Machine Learning Models for Optical Coherence Tomography Multiclass Classification
by Michael Sher, David Remyes, Riah Sharma and Milan Toma
Informatics 2025, 12(4), 128; https://doi.org/10.3390/informatics12040128 - 22 Nov 2025
Viewed by 622
Abstract
This study evaluated the generalization and reliability of machine learning models for multiclass classification of retinal pathologies using a diverse set of images representing eight disease categories. Images were aggregated from two public datasets and divided into training, validation, and test sets, with an additional independent dataset used for external validation. Multiple modeling approaches were compared, including classical machine learning algorithms, convolutional neural networks with and without data augmentation, and a deep neural network using pre-trained feature extraction. Analysis of learning dynamics revealed that classical models and unaugmented convolutional neural networks exhibited overfitting and poor generalization, while models with data augmentation and the deep neural network showed healthy, parallel convergence of training and validation performance. Only the deep neural network demonstrated a consistent, monotonic decrease in accuracy, F1-score, and recall from training through external validation, indicating robust generalization. These results underscore the necessity of evaluating learning dynamics (not just summary metrics) to ensure model reliability and patient safety. Typically, model performance is expected to decrease gradually as data becomes less familiar. Therefore, models that do not exhibit these healthy learning dynamics, or that show unexpected improvements in performance on subsequent datasets, should not be considered for clinical application, as such patterns may indicate methodological flaws or data leakage rather than true generalization. Full article
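The learning-dynamics criterion described above (performance should decline gradually, never improve, as data become less familiar) can be encoded as a simple monotonicity check over train, validation, test, and external-validation scores. A hedged sketch of that idea, not the authors' analysis:

# Sketch: flag models whose accuracy does not decrease monotonically
# from training to external validation (possible leakage or a flawed setup).
def healthy_dynamics(scores, tolerance=0.0):
    # scores: accuracies ordered as [train, validation, test, external]
    return all(later <= earlier + tolerance
               for earlier, later in zip(scores, scores[1:]))

print(healthy_dynamics([0.99, 0.96, 0.94, 0.91]))  # True: gradual, expected decline
print(healthy_dynamics([0.93, 0.90, 0.95, 0.97]))  # False: suspicious improvement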

16 pages, 4211 KB  
Article
MambaDPF-Net: A Dual-Path Fusion Network with Selective State Space Modeling for Robust Low-Light Image Enhancement
by Zikang Zhang and Songfeng Yin
Electronics 2025, 14(22), 4533; https://doi.org/10.3390/electronics14224533 - 19 Nov 2025
Viewed by 391
Abstract
Low-light images commonly suffer from insufficient contrast, noise accumulation, and colour shifts, which impair human perception and subsequent visual tasks. We propose MambaDPF-Net—a dual-path fusion framework based on the retinal effect, adhering to a ‘decoupling–denoising–coupling’ paradigm while incorporating sharpening priors for texture stabilisation. Specifically, the decoupling branch estimates illumination and reflectance through dual-scale feature aggregation with physically interpretable constraints; the denoising branch primarily performs noise reduction in the reflectance domain, employing an illumination-aware modulation mechanism to prevent excessive smoothing in low-SNR regions; the coupling branch utilises a selective state space module (Mamba) to adaptively fuse spatio-temporal representations, achieving non-local interactions and cross-region long-range dependency modelling with near-linear complexity. Extensive experiments on public datasets demonstrate that this method achieves state-of-the-art performance on metrics such as PSNR and SSIM, excels in non-reference evaluations, and produces natural colours with enhanced details. This validates the proposed approach’s effectiveness and robustness. Full article
(This article belongs to the Special Issue 2D/3D Industrial Visual Inspection and Intelligent Image Processing)
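The decoupling branch above estimates illumination and reflectance, a decomposition in the spirit of classical Retinex-style enhancement. As a rough, purely illustrative baseline under that assumption (not the MambaDPF-Net architecture), a single image can be split using the channel-wise maximum as an illumination estimate:

# Sketch: naive illumination/reflectance decomposition of a low-light image.
# Classical baseline only; not the paper's learned dual-path network.
import numpy as np

def decompose(rgb, eps=1e-4):
    img = rgb.astype(np.float32) / 255.0
    illumination = img.max(axis=2, keepdims=True)         # bright-channel estimate
    reflectance = img / np.clip(illumination, eps, 1.0)   # colours and texture
    return illumination, reflectance

def enhance(rgb, gamma=0.5):
    illum, refl = decompose(rgb)
    return np.clip(refl * (illum ** gamma), 0.0, 1.0)     # brighten illumination only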

29 pages, 5808 KB  
Systematic Review
Artificial Intelligence Algorithms for Epiretinal Membrane Detection, Segmentation and Postoperative BCVA Prediction: A Systematic Review and Meta-Analysis
by Eirini Maliagkani, Petroula Mitri, Dimitra Mitsopoulou, Andreas Katsimpris, Ioannis D. Apostolopoulos, Athanasia Sandali, Konstantinos Tyrlis, Nikolaos Papandrianos and Ilias Georgalas
Appl. Sci. 2025, 15(22), 12280; https://doi.org/10.3390/app152212280 - 19 Nov 2025
Viewed by 543
Abstract
Epiretinal membrane (ERM) is a common retinal pathology associated with progressive visual impairment, requiring timely and accurate assessment. Recent advances in artificial intelligence (AI) have enabled automated approaches for ERM detection, segmentation, and postoperative best corrected visual acuity (BCVA) prediction, offering promising avenues to enhance clinical efficiency and diagnostic precision. We conducted a comprehensive literature search across MEDLINE (via PubMed), Scopus, CENTRAL, ClinicalTrials.gov, and Google Scholar from inception to 31 December 2023. A total of 42 studies were included in the systematic review, with 16 eligible for meta-analysis. Risk of bias and reporting quality were assessed using the QUADAS-2 and CLAIM tools. Meta-analysis of 16 studies (533,674 images) showed that deep learning (DL) models achieved high diagnostic accuracy (AUC = 0.97), with pooled sensitivity and specificity of 0.93 and 0.97, respectively. Optical coherence tomography (OCT)-based models outperformed fundus-based ones, and although performance remained high under external validation, the positive predictive value (PPV) declined, highlighting the importance of testing model generalizability. To the best of our knowledge, this is the first systematic review and meta-analysis to critically evaluate the role of AI in the detection, segmentation, and postoperative BCVA prediction of ERM across various ophthalmic imaging modalities. Our findings provide a clear overview of current evidence supporting the continued development and clinical adoption of AI tools for ERM diagnosis and management. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
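Pooled sensitivity and specificity of the kind reported above are typically combined on the logit scale with inverse-variance weights. A simplified fixed-effect sketch of that pooling step (the published meta-analysis likely used a bivariate random-effects model, which this does not reproduce):

# Sketch: inverse-variance pooling of per-study proportions on the logit scale.
import numpy as np

def pooled_proportion(events, totals):
    events = np.asarray(events, dtype=float) + 0.5       # continuity correction
    totals = np.asarray(totals, dtype=float) + 1.0
    p = events / totals
    logit = np.log(p / (1 - p))
    var = 1.0 / events + 1.0 / (totals - events)         # variance of the logit
    w = 1.0 / var
    pooled_logit = np.sum(w * logit) / np.sum(w)
    return 1.0 / (1.0 + np.exp(-pooled_logit))

# e.g. pooled sensitivity from per-study true positives and diseased-eye counts:
# pooled_proportion(tp_per_study, diseased_per_study)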

20 pages, 695 KB  
Review
Retinal Neurovascular Coupling: From Mechanisms to a Diagnostic Window into Brain Disorders
by Wen Shen
Cells 2025, 14(22), 1798; https://doi.org/10.3390/cells14221798 - 16 Nov 2025
Viewed by 909
Abstract
Retinal neurovascular coupling reflects the precise coordination between neuronal activity, glial support, and vascular responses, mirroring key neurovascular mechanisms in the brain. This review emphasizes the cellular and molecular processes underlying retinal neurovascular coupling and positions the retina as a sensitive and accessible model for investigating neurovascular function in the brain. It highlights how parallel neurovascular degeneration in the brain and retina provides critical insights into the pathophysiology of neurodegenerative and vascular disorders. Advances in retinal imaging, including functional optical coherence tomography (fOCT), OCT angiography (OCTA), and functional electrophysiology, offer unprecedented opportunities to detect early neuronal and vascular dysfunction, establishing the retina as a non-invasive biomarker for early detection, disease monitoring, and therapeutic evaluation in Alzheimer’s, Parkinson’s and Huntington’s disease, and stroke. By integrating structural, functional, and mechanistic approaches, the review emphasizes the retina’s potential as a translational platform bridging basic science and clinical applications in neurovascular research. Full article

35 pages, 2963 KB  
Article
Explainable Artificial Intelligence Framework for Predicting Treatment Outcomes in Age-Related Macular Degeneration
by Mini Han Wang
Sensors 2025, 25(22), 6879; https://doi.org/10.3390/s25226879 - 11 Nov 2025
Viewed by 1056
Abstract
Age-related macular degeneration (AMD) is a leading cause of irreversible blindness, yet current tools for forecasting treatment outcomes remain limited by either the opacity of deep learning or the rigidity of rule-based systems. To address this gap, we propose a hybrid neuro-symbolic and large language model (LLM) framework that combines mechanistic disease knowledge with multimodal ophthalmic data for explainable AMD treatment prognosis. In a pilot cohort of ten surgically managed AMD patients (six men, four women; mean age 67.8 ± 6.3 years), we collected 30 structured clinical documents and 100 paired imaging series (optical coherence tomography, fundus fluorescein angiography, scanning laser ophthalmoscopy, and ocular/superficial B-scan ultrasonography). Texts were semantically annotated and mapped to standardized ontologies, while images underwent rigorous DICOM-based quality control, lesion segmentation, and quantitative biomarker extraction. A domain-specific ophthalmic knowledge graph encoded causal disease and treatment relationships, enabling neuro-symbolic reasoning to constrain and guide neural feature learning. An LLM fine-tuned on ophthalmology literature and electronic health records ingested structured biomarkers and longitudinal clinical narratives through multimodal clinical-profile prompts, producing natural-language risk explanations with explicit evidence citations. On an independent test set, the hybrid model achieved AUROC 0.94 ± 0.03, AUPRC 0.92 ± 0.04, and a Brier score of 0.07, significantly outperforming purely neural and classical Cox regression baselines (p ≤ 0.01). Explainability metrics showed that >85% of predictions were supported by high-confidence knowledge-graph rules, and >90% of generated narratives accurately cited key biomarkers. A detailed case study demonstrated real-time, individualized risk stratification—for example, predicting an >70% probability of requiring three or more anti-VEGF injections within 12 months and a ~45% risk of chronic macular edema if therapy lapsed—with predictions matching the observed clinical course. These results highlight the framework’s ability to integrate multimodal evidence, provide transparent causal reasoning, and support personalized treatment planning. While limited by single-center scope and short-term follow-up, this work establishes a scalable, privacy-aware, and regulator-ready template for explainable, next-generation decision support in AMD management, with potential for expansion to larger, device-diverse cohorts and other complex retinal diseases. Full article
(This article belongs to the Special Issue Sensing Functional Imaging Biomarkers and Artificial Intelligence)
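The discrimination and calibration metrics quoted above (AUROC, AUPRC, Brier score) are standard and follow directly from predicted risks and observed outcomes. A short sketch, with hypothetical inputs rather than the study's data:

# Sketch: discrimination (AUROC, AUPRC) and calibration (Brier score) metrics.
from sklearn.metrics import roc_auc_score, average_precision_score, brier_score_loss

def prognosis_metrics(y_true, y_risk):
    # y_true: 1 = poor treatment outcome observed, 0 = otherwise
    # y_risk: model-predicted probability of a poor outcome
    return {
        "auroc": roc_auc_score(y_true, y_risk),
        "auprc": average_precision_score(y_true, y_risk),
        "brier": brier_score_loss(y_true, y_risk),  # lower is better
    }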

16 pages, 5178 KB  
Review
Ophthalmic Imaging in Diabetic Retinopathy and Diabetic Macular Edema: Key Findings and Advancements
by Akanksha Malepati, Edmund Arthur and Maria B. Grant
J. Clin. Transl. Ophthalmol. 2025, 3(4), 24; https://doi.org/10.3390/jcto3040024 - 7 Nov 2025
Viewed by 1117
Abstract
Diabetes mellitus (DM) is a debilitating chronic disorder that results in ocular microvascular complications, including diabetic retinopathy (DR) and diabetic macular edema (DME). Early detection and timely intervention for DR and DME are crucial for improving visual outcomes in affected patients. Ophthalmic imaging plays a vital role in the screening, diagnosis, and management of DR and DME. In this review, we present a comprehensive overview of the imaging modalities frequently utilized in the assessment of DR and DME, encompassing both structural and functional imaging techniques. The key imaging findings associated with the various stages of DR and DME are underscored, and their diagnostic utility in assessing disease progression and visual function is evaluated. Additionally, we discuss emerging imaging biomarkers that are currently under investigation, which hold significant potential for improving the diagnostic and prognostic capabilities of imaging for DR and DME patients. Finally, we consider the advent of new imaging methods, such as ultrawide-field imaging (UWFI) and deep learning models, which have markedly improved the detection of retinal pathologies. Full article
