Search Results (149)

Search Parameters:
Keywords = retinal disease classification

20 pages, 749 KB  
Review
Neuroprotection in Diabetes Retinal Disease: An Unmet Medical Need
by Hugo Ramos and Olga Simó-Servat
Int. J. Mol. Sci. 2026, 27(2), 901; https://doi.org/10.3390/ijms27020901 - 16 Jan 2026
Viewed by 156
Abstract
Diabetic retinopathy (DR) has classically been considered a microvascular disease, with all diagnostic and therapeutic resources focused on its vascular components. In recent years, however, accumulating evidence has highlighted the critical pathogenic role of early neuronal impairment, redefining DR as a neurovascular complication. Retinal neurodegeneration is triggered by chronic hyperglycemia, which activates harmful biochemical pathways that lead to oxidative stress, metabolic overload, glutamate excitotoxicity, inflammation, and neurotrophic factor deficiency. These drivers of neurodegeneration can precede detectable vascular abnormalities. Simultaneously, endothelial injury, pericyte loss, and breakdown of the blood–retinal barrier compromise neurovascular unit integrity and establish a damaging cyclic loop in which neuronal and vascular dysfunctions reinforce each other. The interindividual variability of these processes highlights the need to properly redefine patient phenotyping by using advanced imaging and functional biomarkers. This would allow early detection of neurodegeneration and patient subtype classification. Nonetheless, translation of neuroprotection-based therapies has been limited by the classical focus on vascular impairment. To meet this need, several strategies are emerging, with the most promising being those delivered through innovative ocular routes such as topical formulations, sustained-release implants, or nanocarriers. Future advances will depend on proper guidance of these therapies by integrating personalized medicine with multimodal biomarkers. Full article
(This article belongs to the Special Issue Retinal Diseases: From Molecular Pathology to Therapies—2nd Edition)

31 pages, 1485 KB  
Article
Explainable Multi-Modal Medical Image Analysis Through Dual-Stream Multi-Feature Fusion and Class-Specific Selection
by Naeem Ullah, Ivanoe De Falco and Giovanna Sannino
AI 2026, 7(1), 30; https://doi.org/10.3390/ai7010030 - 16 Jan 2026
Viewed by 303
Abstract
Effective and transparent medical diagnosis relies on accurate and interpretable classification of medical images across multiple modalities. This paper introduces an explainable multi-modal image analysis framework based on a dual-stream architecture that fuses handcrafted descriptors with deep features extracted from a custom MobileNet. Handcrafted descriptors include frequency-domain and texture features, while deep features are summarized using 26 statistical metrics to enhance interpretability. In the fusion stage, complementary features are combined at both the feature and decision levels. Decision-level integration combines calibrated soft voting, weighted voting, and stacking ensembles with optimized classifiers, including decision trees, random forests, gradient boosting, and logistic regression. To further refine performance, a hybrid class-specific feature selection strategy is proposed, combining mutual information, recursive elimination, and random forest importance to select the most discriminative features for each class. This hybrid selection approach eliminates redundancy, improves computational efficiency, and ensures robust classification. Explainability is provided through Local Interpretable Model-Agnostic Explanations, which offer transparent details about the ensemble model’s predictions and link influential handcrafted features to clinically meaningful image characteristics. The framework is validated on three benchmark datasets, i.e., BTTypes (brain MRI), Ultrasound Breast Images, and ACRIMA Retinal Fundus Images, demonstrating generalizability across modalities (MRI, ultrasound, retinal fundus) and disease categories (brain tumor, breast cancer, glaucoma). Full article
(This article belongs to the Special Issue Digital Health: AI-Driven Personalized Healthcare and Applications)
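The decision-level fusion this abstract describes (calibrated soft voting with optimized classifiers) can be sketched minimally as a weighted average of per-classifier probability vectors. This is an illustrative sketch with hypothetical probabilities and weights, not the authors' code:

```python
# Weighted soft voting: combine per-classifier probability vectors
# by a weighted average, then predict the argmax class.
def soft_vote(prob_lists, weights):
    """prob_lists: one probability vector per classifier; weights: per-classifier trust."""
    n_classes = len(prob_lists[0])
    total = sum(weights)
    fused = [
        sum(w * p[c] for w, p in zip(weights, prob_lists)) / total
        for c in range(n_classes)
    ]
    return fused, max(range(n_classes), key=fused.__getitem__)

# Three hypothetical classifiers scoring three classes; the first
# classifier is trusted twice as much as the others.
probs = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.3, 0.4, 0.3]]
fused, label = soft_vote(probs, weights=[2.0, 1.0, 1.0])
```

In a stacking variant, `fused` would instead be fed to a meta-classifier rather than argmaxed directly.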

21 pages, 4339 KB  
Article
Efficient Ensemble Learning with Curriculum-Based Masked Autoencoders for Retinal OCT Classification
by Taeyoung Yoon and Daesung Kang
Diagnostics 2026, 16(2), 179; https://doi.org/10.3390/diagnostics16020179 - 6 Jan 2026
Viewed by 266
Abstract
Background/Objectives: Retinal optical coherence tomography (OCT) is essential for diagnosing ocular diseases, yet developing high-performing multiclass classifiers remains challenging due to limited labeled data and the computational cost of self-supervised pretraining. This study aims to address these limitations by introducing a curriculum-based self-supervised framework to improve representation learning and reduce computational burden for OCT classification. Methods: Two ensemble strategies were developed using progressive masked autoencoder (MAE) pretraining. We refer to this curriculum-based MAE framework as CurriMAE (curriculum-based masked autoencoder). CurriMAE-Soup merges multiple curriculum-aware pretrained checkpoints using weight averaging, producing a single model for fine-tuning and inference. CurriMAE-Greedy selects top-performing fine-tuned models from different pretraining stages and ensembles their predictions. Both approaches rely on one curriculum-guided MAE pretraining run, avoiding repeated training with fixed masking ratios. Experiments were conducted on two publicly available retinal OCT datasets: the Kermany dataset for self-supervised pretraining and the OCTDL dataset for downstream evaluation. The OCTDL dataset comprises seven clinically relevant retinal classes: normal retina, age-related macular degeneration (AMD), diabetic macular edema (DME), epiretinal membrane (ERM), retinal vein occlusion (RVO), retinal artery occlusion (RAO), and vitreomacular interface disease (VID). The proposed methods were compared against standard MAE variants and supervised baselines, including ResNet-34 and ViT-S. Results: Both CurriMAE methods outperformed standard MAE models and supervised baselines. CurriMAE-Greedy achieved the highest performance with an area under the receiver operating characteristic curve (AUC) of 0.995 and accuracy of 93.32%, while CurriMAE-Soup provided competitive accuracy with substantially lower inference complexity. Compared with MAE models trained at fixed masking ratios, the proposed methods improved accuracy while requiring fewer pretraining runs and reduced model storage for inference. Conclusions: The proposed curriculum-based self-supervised ensemble framework offers an effective and resource-efficient solution for multiclass retinal OCT classification. By integrating progressive masking with snapshot-based model fusion, CurriMAE methods provide high performance with reduced computational cost, supporting their potential for real-world ophthalmic imaging applications where labeled data and computational resources are limited. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
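The CurriMAE-Soup idea of merging checkpoints by weight averaging can be illustrated with plain Python dicts standing in for network state dicts. This is an illustrative sketch of the "model soup" operation, not the authors' implementation:

```python
# "Model soup" weight averaging: average corresponding parameters
# across several checkpoints to obtain a single merged model.
def average_checkpoints(checkpoints):
    """checkpoints: list of {param_name: list_of_floats} dicts with identical shapes."""
    n = len(checkpoints)
    return {
        name: [sum(ckpt[name][i] for ckpt in checkpoints) / n
               for i in range(len(checkpoints[0][name]))]
        for name in checkpoints[0]
    }

# Two hypothetical checkpoints from different curriculum stages
ckpt_a = {"w": [1.0, 2.0], "b": [0.0]}
ckpt_b = {"w": [3.0, 4.0], "b": [1.0]}
soup = average_checkpoints([ckpt_a, ckpt_b])
# soup["w"] == [2.0, 3.0], soup["b"] == [0.5]
```

The merged `soup` then serves as a single model for fine-tuning, which is what gives the Soup variant its lower inference cost relative to prediction ensembling.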

12 pages, 2098 KB  
Article
Diagnostic Performance of ChatGPT-4o in Classifying Idiopathic Epiretinal Membrane Based on Optical Coherence Tomography
by Tadanobu Sato and Taro Kuramoto
J. Clin. Med. 2026, 15(1), 292; https://doi.org/10.3390/jcm15010292 - 30 Dec 2025
Viewed by 260
Abstract
Background/Objectives: Recent advances in large language models (LLMs) have enabled the multimodal interpretation of medical images, but their performance on ophthalmology tasks remains underexplored. This study evaluated the ability of ChatGPT-4o, a multimodal LLM, to classify idiopathic epiretinal membrane (ERM) using optical coherence tomography (OCT) based on the Govetto classification. Methods: This retrospective study included 250 eyes of 250 patients with idiopathic ERM who visited Uonuma Kikan Hospital between June 2015 and April 2025. Horizontal B-scan OCT images were independently classified into four stages by two masked ophthalmologists; cases with disagreement were excluded. ChatGPT-4o was prompted to identify ocular diseases and classify ERM stage. Agreement between ChatGPT-4o and ophthalmologists was evaluated using weighted Cohen’s κ, and logistic regression identified factors associated with disagreement. Results: Among 272 eligible eyes, 250 were analyzed (Stage 1: 87; Stage 2: 76; Stage 3: 63; Stage 4: 24). ChatGPT-4o identified the presence of ERM in 26.4% of cases on the first prompt. The perfect agreement rate for Govetto staging was 46.0%, with a weighted κ of 0.513 (95% CI: 0.420–0.605; p < 0.001), indicating moderate agreement. Disagreement was significantly associated with the presence of ectopic inner foveal layer (EIFL) (OR = 0.528, 95% CI: 0.312–0.893; p = 0.017). Conclusions: ChatGPT-4o showed moderate agreement with ophthalmologists in Govetto classification of idiopathic ERM using OCT images. Although its agreement was limited, the model demonstrated partial ability to recognize retinal structures, providing insight into the current capabilities and limitations of multimodal large language models in ophthalmic image interpretation. Full article
(This article belongs to the Section Ophthalmology)
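Weighted Cohen's κ, the agreement statistic used in this study, can be computed directly for ordinal stage labels. A self-contained sketch follows; quadratic weights are assumed (the abstract does not specify the weighting scheme), and the usual (k−1)² normalization is omitted since it cancels in the ratio:

```python
# Weighted Cohen's kappa for ordinal labels 0..k-1.
# Off-diagonal disagreements are penalized by (i-j)^2 (quadratic)
# or |i-j| (linear); chance agreement comes from the marginals.
def weighted_kappa(y1, y2, k, quadratic=True):
    n = len(y1)
    obs = [[0.0] * k for _ in range(k)]        # observed contingency table
    for a, b in zip(y1, y2):
        obs[a][b] += 1
    row = [sum(obs[i]) for i in range(k)]
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            w = (i - j) ** 2 if quadratic else abs(i - j)
            num += w * obs[i][j]               # weighted observed disagreement
            den += w * row[i] * col[j] / n     # weighted chance disagreement
    return 1.0 - num / den if den else 1.0

# Identical ratings give kappa = 1; fully reversed ordinal ratings give -1
k_same = weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], k=4)
k_opposite = weighted_kappa([0, 1, 2, 3], [3, 2, 1, 0], k=4)
```

A κ of 0.513, as reported here, falls in the conventional "moderate agreement" band (0.41–0.60).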

12 pages, 1073 KB  
Article
Clinical Characteristics of Patients with Neovascular Age-Related Macular Degeneration and Responses to Anti-VEGF Therapy: Four-Group Stratification Based on Drusen and Punctate Hyperfluorescence
by Hiroyuki Kamao, Katsutoshi Goto, Kenichi Mizukawa, Ryutaro Hiraki, Atsushi Miki and Shuhei Kimura
J. Clin. Med. 2025, 14(23), 8593; https://doi.org/10.3390/jcm14238593 - 4 Dec 2025
Viewed by 354
Abstract
Background/Objectives: Different disease subtypes in neovascular age-related macular degeneration (nAMD) influence treatment burden, yet existing classifications such as the pachychoroid neovasculopathy (PNV)/non-PNV dichotomy may not fully capture clinical heterogeneity. This study aimed to compare the 12-month outcomes of intravitreal aflibercept (IVA) in treatment-naïve patients with unilateral nAMD stratified by the presence or absence of drusen and punctate hyperfluorescence (PH). Methods: This retrospective study included 130 eyes of 130 patients categorized into the Drusen−/PH−, Drusen+/PH−, Drusen−/PH+, and Drusen+/PH+ groups. Their best-corrected visual acuity, retinal thickness, choroidal thickness, number of injections, no-retinal-fluid rate during the loading dose regimen, and 12-month retreatment rate following treatment initiation were determined. The primary outcome was the 12-month retreatment rate for the four groups, which was determined using Kaplan–Meier curves and log-rank tests. Exploratory metric multidimensional scaling (MDS) was used to visualize the baseline profiles. Results: The 12-month retreatment rates of the groups were significantly different. The Drusen+/PH− group had a higher retreatment rate and required more injections than the Drusen−/PH+ and Drusen+/PH+ groups. The Drusen+/PH− group was older than the Drusen−/PH+ and Drusen−/PH− groups. The Drusen−/PH+ group had a thicker choroid than the Drusen+/PH− group. The MDS revealed clear separation of the groups, consistent with the older age of the Drusen+/PH− group and the thicker choroid of the Drusen−/PH+ group. Conclusions: nAMD subgroups stratified based on drusen and PH differed in age, choroidal thickness, and IVA outcomes. The four-category framework provides greater pathophysiologic and therapeutic resolution than the simple PNV/non-PNV dichotomy and may help anticipate injection demand to guide individualized dosing strategies. Full article
(This article belongs to the Special Issue An Update on Retinal Diseases: From Diagnosis to Treatment)
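The primary outcome here rests on Kaplan–Meier curves. A minimal estimator sketch on hypothetical follow-up data (months until retreatment, with censoring; not the study data) looks like this:

```python
# Kaplan-Meier estimator: step down the survival probability at each
# time where an event (here: retreatment) occurs, using the number
# still at risk; censored cases leave the risk set without an event.
def kaplan_meier(times, events):
    """times: follow-up durations; events: 1 = retreatment observed, 0 = censored."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv, s = [], 1.0
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)       # events at time t
        n_t = sum(1 for tt, _ in data if tt == t)     # all leaving risk set at t
        if d:
            s *= (at_risk - d) / at_risk
            surv.append((t, s))
        at_risk -= n_t
        i += n_t
    return surv

# Hypothetical cohort: retreated at months 2, 4, 6; censored at 4 and 8
curve = kaplan_meier([2, 4, 4, 6, 8], [1, 1, 0, 1, 0])
```

The retreatment-free proportion at each time is then compared across the four groups with a log-rank test, as the abstract describes.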

24 pages, 490 KB  
Article
Learning Dynamics Analysis: Assessing Generalization of Machine Learning Models for Optical Coherence Tomography Multiclass Classification
by Michael Sher, David Remyes, Riah Sharma and Milan Toma
Informatics 2025, 12(4), 128; https://doi.org/10.3390/informatics12040128 - 22 Nov 2025
Viewed by 902
Abstract
This study evaluated the generalization and reliability of machine learning models for multiclass classification of retinal pathologies using a diverse set of images representing eight disease categories. Images were aggregated from two public datasets and divided into training, validation, and test sets, with an additional independent dataset used for external validation. Multiple modeling approaches were compared, including classical machine learning algorithms, convolutional neural networks with and without data augmentation, and a deep neural network using pre-trained feature extraction. Analysis of learning dynamics revealed that classical models and unaugmented convolutional neural networks exhibited overfitting and poor generalization, while models with data augmentation and the deep neural network showed healthy, parallel convergence of training and validation performance. Only the deep neural network demonstrated a consistent, monotonic decrease in accuracy, F1-score, and recall from training through external validation, indicating robust generalization. These results underscore the necessity of evaluating learning dynamics (not just summary metrics) to ensure model reliability and patient safety. Typically, model performance is expected to decrease gradually as data becomes less familiar. Therefore, models that do not exhibit these healthy learning dynamics, or that show unexpected improvements in performance on subsequent datasets, should not be considered for clinical application, as such patterns may indicate methodological flaws or data leakage rather than true generalization. Full article
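The "healthy learning dynamics" criterion described above reduces to two simple screens, sketched below with illustrative thresholds (not the authors' code):

```python
# Two quick screens on learning dynamics rather than summary metrics:
# (1) performance should degrade monotonically as data grows less
#     familiar (train >= validation >= external); an unexpected
#     *increase* downstream suggests leakage or flawed splits.
# (2) a large train/validation gap flags overfitting.
def monotonic_degradation(train_acc, val_acc, external_acc):
    return train_acc >= val_acc >= external_acc

def overfit_gap(train_acc, val_acc, max_gap=0.10):
    return train_acc - val_acc > max_gap

healthy = monotonic_degradation(0.98, 0.93, 0.90)      # expected pattern
suspicious = monotonic_degradation(0.90, 0.88, 0.95)   # jump on external data
```

Applied per model across the training, validation, and external-validation sets, these checks encode the paper's argument that only models passing both patterns should be considered for clinical use.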

17 pages, 2569 KB  
Article
Automated Multi-Class Classification of Retinal Pathologies: A Deep Learning Approach to Unified Ophthalmic Screening
by Uğur Şevik and Onur Mutlu
Diagnostics 2025, 15(21), 2745; https://doi.org/10.3390/diagnostics15212745 - 29 Oct 2025
Cited by 1 | Viewed by 1333
Abstract
Background/Objectives: The prevailing paradigm in ophthalmic AI involves siloed, single-disease models, which fails to address the complexity of differential diagnosis in clinical practice. This study aimed to develop and validate a unified deep learning framework for the automated multi-class classification of a wide spectrum of retinal pathologies from fundus photographs, moving beyond the single-disease paradigm to create a comprehensive screening tool. Methods: A publicly available dataset was manually curated by an ophthalmologist, resulting in 1841 images across nine classes, including Diabetic Retinopathy, Glaucoma, and Healthy retinas. After extensive data augmentation to mitigate class imbalance, three pre-trained CNN architectures (ResNet-152, EfficientNetV2, and a YOLOv11-based classifier) were comparatively evaluated. The models were trained using transfer learning and their performance was assessed on an independent test set using accuracy, macro-averaged F1-score, and Area Under the Curve (AUC). Results: The YOLOv11-based classifier demonstrated superior performance over the other architectures on the validation set. On the final independent test set, it achieved a robust overall accuracy of 0.861 and a macro-averaged F1-score of 0.861. The model yielded a validation set AUC of 0.961, which was statistically superior to both ResNet-152 (p < 0.001) and EfficientNetV2 (p < 0.01) as confirmed by the DeLong test. Conclusions: A unified deep learning framework, leveraging a YOLOv11 backbone, can accurately classify nine distinct retinal conditions from a single fundus photograph. This holistic approach moves beyond the limitations of single-disease algorithms, offering considerable promise as a comprehensive AI-driven screening tool to augment clinical decision-making and enhance diagnostic efficiency in ophthalmology. Full article
(This article belongs to the Special Issue Artificial Intelligence in Eye Disease, 4th Edition)
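Macro-averaged F1, one of the metrics reported above, is the unweighted mean of per-class F1 scores; it treats the nine classes equally regardless of prevalence. A small self-contained sketch on toy labels (not the study data):

```python
# Macro F1: compute precision/recall/F1 per class, then average
# with equal class weights (robust to class imbalance in reporting).
def macro_f1(y_true, y_pred, n_classes):
    f1s = []
    for c in range(n_classes):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / n_classes

score = macro_f1([0, 0, 1, 1, 2, 2], [0, 1, 1, 1, 2, 0], n_classes=3)
```

That accuracy and macro F1 coincide here (both 0.861) suggests reasonably balanced per-class performance after the augmentation step.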

32 pages, 2758 KB  
Article
A Hybrid Convolutional Neural Network–Long Short-Term Memory (CNN–LSTM)–Attention Model Architecture for Precise Medical Image Analysis and Disease Diagnosis
by Md. Tanvir Hayat, Yazan M. Allawi, Wasan Alamro, Salman Md Sultan, Ahmad Abadleh, Hunseok Kang and Aymen I. Zreikat
Diagnostics 2025, 15(21), 2673; https://doi.org/10.3390/diagnostics15212673 - 23 Oct 2025
Cited by 1 | Viewed by 1984
Abstract
Background: Deep learning (DL)-based medical image classification is becoming increasingly reliable, enabling physicians to make faster and more accurate decisions in diagnosis and treatment. A plethora of algorithms have been developed to classify and analyze various types of medical images. Among them, Convolutional Neural Networks (CNNs) have proven highly effective, particularly in medical image analysis and disease detection. Methods: To further enhance these capabilities, this research introduces MediVision, a hybrid DL-based model that integrates a vision backbone based on CNNs for feature extraction, capturing detailed patterns and structures essential for precise classification. These features are then processed through Long Short-Term Memory (LSTM), which identifies sequential dependencies to better recognize disease progression. An attention mechanism is then incorporated that selectively focuses on salient features detected by the LSTM, improving the model’s ability to highlight critical abnormalities. Additionally, MediVision utilizes a skip connection, merging attention outputs with LSTM outputs along with Grad-CAM heatmaps to visualize the most important regions of the analyzed medical image and further enhance feature representation and classification accuracy. Results: Tested on ten diverse medical image datasets (including Alzheimer’s disease, breast ultrasound, blood cell, chest X-ray, chest CT scans, diabetic retinopathy, kidney diseases, bone fracture multi-region, retinal OCT, and brain tumor), MediVision consistently achieved classification accuracies above 95%, with a peak of 98%. Conclusions: The proposed MediVision model offers a robust and effective framework for medical image classification, improving interpretability, reliability, and automated disease diagnosis. To support research reproducibility, the code and datasets used in this study have been made publicly available through an open-access repository. Full article
(This article belongs to the Special Issue Machine-Learning-Based Disease Diagnosis and Prediction)
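The attention step that re-weights LSTM outputs can be sketched as softmax-weighted pooling. This is a generic illustration with made-up vectors and a hypothetical scoring weight, not the MediVision code:

```python
import math

# Attention pooling over a sequence of hidden states: score each step,
# softmax the scores into weights, and return the weighted sum of the
# step vectors (the "context" the classifier sees).
def attention_pool(states, score_w):
    scores = [sum(w * h for w, h in zip(score_w, state)) for state in states]
    m = max(scores)                                  # stabilize the softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(states[0])
    pooled = [sum(w * state[d] for w, state in zip(weights, states))
              for d in range(dim)]
    return pooled, weights

# Two toy hidden states; the scorer favors the first dimension,
# so the first state receives the larger attention weight.
states = [[1.0, 0.0], [0.0, 1.0]]
pooled, weights = attention_pool(states, score_w=[1.0, 0.0])
```

The skip connection the abstract mentions would then concatenate or add `pooled` back onto the raw LSTM outputs before classification.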

16 pages, 1247 KB  
Article
Non-Invasive Retinal Pathology Assessment Using Haralick-Based Vascular Texture and Global Fundus Color Distribution Analysis
by Ouafa Sijilmassi
J. Imaging 2025, 11(9), 321; https://doi.org/10.3390/jimaging11090321 - 19 Sep 2025
Viewed by 570
Abstract
This study analyzes retinal fundus images to distinguish healthy retinas from those affected by diabetic retinopathy (DR) and glaucoma using a dual-framework approach: vascular texture analysis and global color distribution analysis. The texture-based approach involved segmenting the retinal vasculature and extracting eight Haralick texture features from the Gray-Level Co-occurrence Matrix. Significant differences in features such as energy, contrast, correlation, and entropy were found between healthy and pathological retinas. Pathological retinas exhibited lower textural complexity and higher uniformity, which correlates with vascular thinning and structural changes observed in DR and glaucoma. In parallel, the global color distribution of the full fundus area was analyzed without segmentation. RGB intensity histograms were calculated for each channel and averaged across groups. Statistical tests revealed significant differences, particularly in the green and blue channels. The Mahalanobis distance quantified the separability of the groups per channel. These results indicate that pathological changes in retinal tissue can also lead to detectable chromatic shifts in the fundus. The findings underscore the potential of both vascular texture and color features as non-invasive biomarkers for early retinal disease detection and classification. Full article
(This article belongs to the Special Issue Emerging Technologies for Less Invasive Diagnostic Imaging)
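Haralick features derive from the Gray-Level Co-occurrence Matrix (GLCM). A minimal sketch computing two of the features named above (energy and contrast) for horizontally adjacent pixels on toy patches; real use would quantize intensities and average several offsets:

```python
# GLCM for horizontally adjacent pixels, plus two Haralick features:
# energy (sum of squared co-occurrence probabilities; high = uniform)
# and contrast (intensity-difference-weighted sum; high = busy texture).
def glcm_features(img, levels):
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for row in img:
        for a, b in zip(row, row[1:]):
            counts[a][b] += 1
            total += 1
    p = [[c / total for c in row] for row in counts]
    energy = sum(v * v for row in p for v in row)
    contrast = sum((i - j) ** 2 * p[i][j]
                   for i in range(levels) for j in range(levels))
    return energy, contrast

# A perfectly uniform patch: maximal energy, zero contrast
energy_flat, contrast_flat = glcm_features([[1, 1, 1], [1, 1, 1]], levels=2)
# A checkerboard patch: lower energy, high contrast
energy_check, contrast_check = glcm_features([[0, 1, 0], [1, 0, 1]], levels=2)
```

The study's finding of lower textural complexity and higher uniformity in pathological retinas corresponds to higher energy and lower contrast/entropy in these terms.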

37 pages, 9280 KB  
Article
A Multi-Model Image Enhancement and Tailored U-Net Architecture for Robust Diabetic Retinopathy Grading
by Archana Singh, Sushma Jain and Vinay Arora
Diagnostics 2025, 15(18), 2355; https://doi.org/10.3390/diagnostics15182355 - 17 Sep 2025
Cited by 2 | Viewed by 1078
Abstract
Background: Diabetic retinopathy (DR) is a leading cause of preventable vision impairment in individuals with diabetes. Early detection is essential, yet often hindered by subtle disease progression and reliance on manual expert screening. This study introduces an AI-based framework designed to achieve robust multiclass DR classification from retinal fundus images, addressing the challenges of early diagnosis and fine-grained lesion discrimination. Methods: The framework incorporates preprocessing steps such as pixel intensity normalization and geometric correction. A Hybrid Local-Global Retina Super-Resolution (HLG-RetinaSR) module is developed, combining deformable convolutional networks for local lesion enhancement with vision transformers for global contextual representation. Classification is performed using a hierarchical approach that integrates three models: a Convolutional Neural Network (CNN), DenseNet-121, and a custom multi-branch RefineNet-U architecture. Results: Experimental evaluation demonstrates that the combined HLG-RetinaSR and RefineNet-U approach consistently achieves precision, recall, F1-score, and accuracy values exceeding 99% across all DR severity levels. The system effectively emphasizes vascular abnormalities while suppressing background noise, surpassing existing state-of-the-art methods in accuracy and robustness. Conclusions: The proposed hybrid pipeline delivers a scalable, interpretable, and clinically relevant solution for DR screening. By improving diagnostic reliability and supporting early intervention, the system holds strong potential to assist ophthalmologists in reducing preventable vision loss. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

31 pages, 8445 KB  
Article
HIRD-Net: An Explainable CNN-Based Framework with Attention Mechanism for Diabetic Retinopathy Diagnosis Using CLAHE-D-DoG Enhanced Fundus Images
by Muhammad Hassaan Ashraf, Muhammad Nabeel Mehmood, Musharif Ahmed, Dildar Hussain, Jawad Khan, Younhyun Jung, Mohammed Zakariah and Deema Mohammed AlSekait
Life 2025, 15(9), 1411; https://doi.org/10.3390/life15091411 - 8 Sep 2025
Cited by 1 | Viewed by 1508
Abstract
Diabetic Retinopathy (DR) is a leading cause of vision impairment globally, underscoring the need for accurate and early diagnosis to prevent disease progression. Although fundus imaging serves as a cornerstone of Computer-Aided Diagnosis (CAD) systems, several challenges persist, including lesion scale variability, blurry morphological patterns, inter-class imbalance, limited labeled datasets, and computational inefficiencies. To address these issues, this study proposes an end-to-end diagnostic framework that integrates an enhanced preprocessing pipeline with a novel deep learning architecture, Hierarchical-Inception-Residual-Dense Network (HIRD-Net). The preprocessing stage combines Contrast Limited Adaptive Histogram Equalization (CLAHE) with Dilated Difference of Gaussian (D-DoG) filtering to improve image contrast and highlight fine-grained retinal structures. HIRD-Net features a hierarchical feature fusion stem alongside multiscale, multilevel inception-residual-dense blocks for robust representation learning. The Squeeze-and-Excitation Channel Attention (SECA) is introduced before each Global Average Pooling (GAP) layer to refine the Feature Maps (FMs). It further incorporates four GAP layers for multi-scale semantic aggregation, employs the Hard-Swish activation to enhance gradient flow, and utilizes the Focal Loss function to mitigate class imbalance issues. Experimental results on the IDRiD-APTOS2019, DDR, and EyePACS datasets demonstrate that the proposed framework achieves 93.46%, 82.45% and 79.94% overall classification accuracy using only 4.8 million parameters, highlighting its strong generalization capability and computational efficiency. Furthermore, to ensure transparent predictions, an Explainable AI (XAI) approach known as Gradient-weighted Class Activation Mapping (Grad-CAM) is employed to visualize HIRD-Net’s decision-making process. Full article
(This article belongs to the Special Issue Advanced Machine Learning for Disease Prediction and Prevention)
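The Focal Loss used to mitigate class imbalance can be written for a single sample as follows. This is the standard formulation, not the authors' exact code; the α and γ values are illustrative:

```python
import math

# Focal loss for one sample: cross-entropy scaled by (1 - p_t)^gamma,
# which down-weights well-classified examples so training focuses on
# hard (e.g. rare-class) ones; alpha is an optional class weight.
def focal_loss(p_true_class, gamma=2.0, alpha=1.0):
    p = max(min(p_true_class, 1 - 1e-7), 1e-7)   # clip for numerical safety
    return -alpha * (1 - p) ** gamma * math.log(p)

easy = focal_loss(0.9)   # confident correct prediction -> tiny loss
hard = focal_loss(0.1)   # badly missed prediction -> large loss
```

With γ = 0 the expression reduces to plain (weighted) cross-entropy, which is why γ is the knob controlling how aggressively easy examples are discounted.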

18 pages, 3709 KB  
Article
AI-Based Response Classification After Anti-VEGF Loading in Neovascular Age-Related Macular Degeneration
by Murat Fırat, İlknur Tuncer Fırat, Ziynet Fadıllıoğlu Üstündağ, Emrah Öztürk and Taner Tuncer
Diagnostics 2025, 15(17), 2253; https://doi.org/10.3390/diagnostics15172253 - 5 Sep 2025
Cited by 1 | Viewed by 1430
Abstract
Background/Objectives: Wet age-related macular degeneration (AMD) is a progressive retinal disease characterized by macular neovascularization (MNV). Currently, the standard treatment for wet AMD is intravitreal anti-VEGF administration, which aims to control disease activity by suppressing neovascularization. In clinical practice, the decision to continue or discontinue treatment is largely based on the presence of fluid on optical coherence tomography (OCT) and changes in visual acuity. However, discrepancies between anatomic and functional responses can occur during these assessments. Methods: This article presents an artificial intelligence (AI)-based classification model developed to objectively assess the response to anti-VEGF treatment in patients with AMD at 3 months. This retrospective study included 120 patients (144 eyes) who received intravitreal bevacizumab treatment. After bevacizumab loading treatment, the presence of subretinal/intraretinal fluid (SRF/IRF) on OCT images and changes in visual acuity (logMAR) were evaluated. Patients were divided into three groups: Class 0, active disease (persistent SRF/IRF); Class 1, good response (no SRF/IRF and ≥0.1 logMAR improvement); and Class 2, limited response (no SRF/IRF but with <0.1 logMAR improvement). Pre-treatment and 3-month post-treatment OCT image pairs were used for training and testing the artificial intelligence model. Based on this grouping, classification was performed with a Siamese neural network (ResNet-18-based) model. Results: The model achieved 95.4% accuracy. The macro precision, macro recall, and macro F1 scores for the classes were 0.948, 0.949, and 0.948, respectively. Layer Class Activation Map (LayerCAM) heat maps and Shapley Additive Explanations (SHAP) overlays confirmed that the model focused on pathology-related regions. Conclusions: The model classifies post-loading response by predicting both anatomic disease activity and visual prognosis from OCT images. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
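The three-way grouping described in the abstract is a deterministic rule over two post-loading measurements. A minimal sketch of that labeling rule is below; the function and argument names are ours and the code is illustrative, not the authors' implementation.

```python
# Illustrative sketch of the three-way response labeling described in the
# abstract (not the authors' code): Class 0 = active disease, Class 1 = good
# response, Class 2 = limited response.

def label_response(fluid_present: bool, logmar_change: float) -> int:
    """Assign a response class after the bevacizumab loading phase.

    fluid_present : persistent subretinal/intraretinal fluid (SRF/IRF) on OCT
    logmar_change : pre-treatment logMAR minus post-treatment logMAR
                    (positive values mean visual acuity improved)
    """
    if fluid_present:
        return 0  # active disease: persistent SRF/IRF
    if logmar_change >= 0.1:
        return 1  # good response: dry macula and >=0.1 logMAR improvement
    return 2      # limited response: dry macula but <0.1 logMAR improvement

print(label_response(True, 0.2))    # 0
print(label_response(False, 0.15))  # 1
print(label_response(False, 0.05))  # 2
```

The Siamese network then learns this mapping directly from pre-/post-treatment OCT image pairs rather than from hand-measured fluid and acuity values.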

26 pages, 2735 KB  
Article
Time Series Classification of Autism Spectrum Disorder Using the Light-Adapted Electroretinogram
by Sergey Chistiakov, Anton Dolganov, Paul A. Constable, Aleksei Zhdanov, Mikhail Kulyabin, Dorothy A. Thompson, Irene O. Lee, Faisal Albasu, Vasilii Borisov and Mikhail Ronkin
Bioengineering 2025, 12(9), 951; https://doi.org/10.3390/bioengineering12090951 - 2 Sep 2025
Viewed by 1836
Abstract
The clinical electroretinogram (ERG) is a non-invasive diagnostic test used to assess the functional state of the retina by recording changes in the bioelectric potential following brief flashes of light. The recorded ERG waveform offers a means of diagnosing both retinal dystrophies and neurological disorders such as autism spectrum disorder (ASD), attention deficit hyperactivity disorder (ADHD), and Parkinson’s disease. In this study, different time-series-based machine learning methods were used to classify ERG signals from ASD and typically developing individuals, with the aim of interpreting the models’ decisions to understand how each classification was made. Among the time-series classification (TSC) algorithms, the Random Convolutional Kernel Transform (ROCKET) algorithm showed the most accurate results with the fewest predictive errors. For the interpretation analysis, the SHapley Additive exPlanations (SHAP) algorithm was applied to each model’s predictions; the ROCKET and KNeighborsTimeSeriesClassifier (TS-KNN) algorithms proved more suitable for ASD classification, as they provided better-defined explanations by discarding the uninformative, non-physiological baseline portion of the ERG waveform and focusing on the time regions containing the clinically significant a- and b-waves. With the potential broadening scope of practice for visual electrophysiology within neurological disorders, TSC may help identify important regions in the ERG time series to support the classification of neurological disorders and potential retinal diseases. Full article
(This article belongs to the Special Issue Retinal Biomarkers: Seeing Diseases in the Eye)
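ROCKET transforms each time series with many random convolutional kernels and pools every response into simple features (its maximum and the proportion of positive values, PPV) for a linear classifier. A toy sketch of that feature extraction follows, assuming NumPy is available; the kernel settings and the synthetic ERG-like waveform are illustrative, not the study's configuration.

```python
import numpy as np

def rocket_features(series, n_kernels=100, rng=None):
    """Toy ROCKET-style transform: convolve a 1-D signal with random kernels
    and pool each response into two features, its maximum and its proportion
    of positive values (PPV)."""
    rng = np.random.default_rng(rng)
    feats = []
    for _ in range(n_kernels):
        length = rng.choice([7, 9, 11])
        weights = rng.normal(size=length)
        weights -= weights.mean()          # zero-mean kernel, as in ROCKET
        bias = rng.uniform(-1.0, 1.0)
        resp = np.convolve(series, weights, mode="valid") + bias
        feats.append(resp.max())
        feats.append((resp > 0).mean())    # PPV in [0, 1]
    return np.asarray(feats)

# A synthetic "ERG-like" waveform: a negative a-wave followed by a positive
# b-wave (purely for demonstration).
t = np.linspace(0.0, 0.25, 256)
erg = (-1.5 * np.exp(-((t - 0.02) ** 2) / 2e-5)
       + 2.5 * np.exp(-((t - 0.05) ** 2) / 2e-4))
f = rocket_features(erg, n_kernels=100, rng=0)
print(f.shape)  # (200,): two features per kernel
```

Because the pooled features are tied to where kernels respond along the signal, SHAP attributions over them can highlight time regions such as the a- and b-waves, which is the interpretability behavior the study reports.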

12 pages, 1154 KB  
Article
A Comparative Study Between Clinical Optical Coherence Tomography (OCT) Analysis and Artificial Intelligence-Based Quantitative Evaluation in the Diagnosis of Diabetic Macular Edema
by Camila Brandão Fantozzi, Letícia Margaria Peres, Jogi Suda Neto, Cinara Cássia Brandão, Rodrigo Capobianco Guido and Rubens Camargo Siqueira
Vision 2025, 9(3), 75; https://doi.org/10.3390/vision9030075 - 1 Sep 2025
Viewed by 1643
Abstract
Recent advances in artificial intelligence (AI) have transformed ophthalmic diagnostics, particularly for retinal diseases. In this prospective, non-randomized study, we evaluated the performance of an AI-based software system against conventional clinical assessment—both quantitative and qualitative—of optical coherence tomography (OCT) images for diagnosing diabetic macular edema (DME). A total of 700 OCT exams were analyzed across 26 features, including demographic data (age, sex), eye laterality, visual acuity, and 21 quantitative OCT parameters (Macula Map A X-Y). We tested two classification scenarios: binary (DME presence vs. absence) and multiclass (six distinct DME phenotypes). To streamline feature selection, we applied paraconsistent feature engineering (PFE), isolating the most diagnostically relevant variables. We then compared the diagnostic accuracies of logistic regression, support vector machines (SVM), K-nearest neighbors (KNN), and decision tree models. In the binary classification using all features, SVM and KNN achieved 92% accuracy, while logistic regression reached 91%. When restricted to the four PFE-selected features, accuracy modestly declined to 84% for both logistic regression and SVM. These findings underscore the potential of AI—and particularly PFE—as an efficient, accurate aid for DME screening and diagnosis. Full article
(This article belongs to the Section Retinal Function and Disease)
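KNN was one of the two strongest binary classifiers in the study (92% accuracy). A minimal pure-Python k-nearest-neighbours sketch over OCT-derived feature vectors is shown below; the feature choices and data values are invented for illustration and are not the study's dataset.

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points
    (Euclidean distance), mimicking the KNN baseline compared in the study."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 3-feature vectors, e.g. (central macular thickness in um,
# macular volume, logMAR visual acuity); labels: 1 = DME present, 0 = absent.
train = [(250, 8.1, 0.0), (260, 8.3, 0.1), (410, 10.2, 0.6), (450, 10.9, 0.7)]
labels = [0, 0, 1, 1]
print(knn_predict(train, labels, (430, 10.5, 0.65)))  # 1
print(knn_predict(train, labels, (255, 8.2, 0.05)))   # 0
```

In practice, features on very different scales (thickness in microns vs. logMAR) should be standardized before distance computation, since unscaled features dominate the Euclidean metric.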

15 pages, 446 KB  
Article
Minigene Splice Assays Allow Pathogenicity Reclassification of RPE65 Variants of Uncertain Significance
by Daan M. Panneman, Erica G. M. Boonen, Zelia Corradi, Frans P. M. Cremers and Susanne Roosing
Genes 2025, 16(9), 1022; https://doi.org/10.3390/genes16091022 - 28 Aug 2025
Cited by 1 | Viewed by 997
Abstract
Background/objectives: Obtaining a genetic diagnosis for patients with inherited retinal diseases has become even more important since gene-specific therapies have become available. When genetic screening reveals variants of uncertain significance (VUS), additional evidence is required to determine genetic eligibility for therapy. Confirming a splicing effect predicted by SpliceAI could change the classification of such variants to either likely pathogenic or pathogenic, and would therefore be of great value to geneticists worldwide as they interpret these variants in pursuit of a diagnosis. Methods: Using minigene assays, we established a pipeline to assess the effect on splicing for these variants. We selected 73 RPE65 variants that were classified as either VUS or likely benign in the RPE65 Leiden Open Variant Database (LOVD) or ClinVar and that SpliceAI predicted to affect splicing with a delta score of >0.1, using an analysis window of 5000 bp up- and downstream of each variant. Results: Using four wild-type vectors, we generated 59 constructs containing the variants of interest. Through these minigene assays, we assessed the splicing effect of these VUS to enable reclassification. Upon quantification, we identified seven variants with a full, aberrant splicing effect and no residual wild-type transcript. Eleven variants had between 5% and 20% remaining wild-type transcript. Forty-one variants had ≥20% residual wild-type transcript, among which fifteen variants showed no effect on splicing. Conclusions: Following the ClinGen gene-specific ACMG guidelines for RPE65 established in 2023 (Criteria Specification Registry), evidence from splice assays enabled reclassification of seven RPE65 variants from VUS to pathogenic through an assigned PVS1-very-strong criterion, as less than 5% of wild-type transcript was present.
These findings contribute to the interpretation of variants observed in patients, which will in turn dictate their eligibility for gene therapy. Full article
(This article belongs to the Special Issue Genetics and Therapy of Retinal Diseases)
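The quantification thresholds reported in the abstract (<5% residual wild-type transcript → full aberrant splicing supporting PVS1-very-strong; 5–20% → partial effect; ≥20% → limited or no effect) can be expressed as a small triage helper. The function name and category strings below are ours; this is a sketch of the reported thresholds, not a validated classification tool.

```python
def triage_splice_effect(residual_wt_pct: float) -> str:
    """Bucket a minigene assay result by the percentage of residual wild-type
    transcript, following the thresholds reported in the abstract
    (illustrative helper only, not clinical software)."""
    if residual_wt_pct < 5:
        return "full aberrant splicing (supports PVS1-very-strong)"
    if residual_wt_pct < 20:
        return "partial splicing effect (5-20% wild-type remaining)"
    return "limited or no splicing effect (>=20% wild-type remaining)"

for pct in (0, 12, 35):
    print(pct, "->", triage_splice_effect(pct))
```

Only the first bucket carried enough evidential weight under the gene-specific ACMG criteria to reclassify variants from VUS to pathogenic in this study.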
