Search Results (653)

Search Parameters:
Keywords = fundus image

22 pages, 12983 KiB  
Article
A Hybrid Model for Fluorescein Funduscopy Image Classification by Fusing Multi-Scale Context-Aware Features
by Yawen Wang, Chao Chen, Zhuo Chen and Lingling Wu
Technologies 2025, 13(8), 323; https://doi.org/10.3390/technologies13080323 - 30 Jul 2025
Abstract
With the growing use of deep learning in medical image analysis, automated classification of fundus images is crucial for the early detection of fundus diseases. However, the complexity of fluorescein fundus angiography (FFA) images poses challenges in the accurate identification of lesions. To address these issues, we propose the Enhanced Feature Fusion ConvNeXt (EFF-ConvNeXt) model, a novel architecture combining VGG16 and an enhanced ConvNeXt for FFA image classification. VGG16 is employed to extract edge features, while an improved ConvNeXt incorporates the Context-Aware Feature Fusion (CAFF) strategy to enhance global contextual understanding. CAFF integrates an Improved Global Context (IGC) module with multi-scale feature fusion to jointly capture local and global features. Furthermore, an SKNet module is used in the final stages to adaptively recalibrate channel-wise features. The model demonstrates improved classification accuracy and robustness, achieving 92.50% accuracy and 92.30% F1 score on the APTOS2023 dataset—surpassing the baseline ConvNeXt-T by 3.12% in accuracy and 4.01% in F1 score. These results highlight the model’s ability to better recognize complex disease features, providing significant support for more accurate diagnosis of fundus diseases.
(This article belongs to the Special Issue Application of Artificial Intelligence in Medical Image Analysis)
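As a rough illustration of the dual-path fusion idea this abstract describes, the PyTorch sketch below pools features from a VGG16 edge branch and a ConvNeXt-T context branch, then recalibrates the concatenated channels with an SE-style gate. This is a hedged stand-in, not the authors' EFF-ConvNeXt: the class count, the gate design, and the use of torchvision backbones are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code): two-backbone feature fusion with
# an SE-style channel gate standing in for SKNet recalibration.
import torch
import torch.nn as nn
from torchvision.models import convnext_tiny, vgg16

class DualBackboneFusion(nn.Module):
    def __init__(self, num_classes: int = 4):   # class count is an assumption
        super().__init__()
        self.edge = vgg16(weights=None).features          # edge features (512 ch)
        self.ctx = convnext_tiny(weights=None).features   # context features (768 ch)
        self.pool = nn.AdaptiveAvgPool2d(1)
        fused = 512 + 768
        self.gate = nn.Sequential(                        # channel recalibration
            nn.Linear(fused, fused // 16), nn.ReLU(inplace=True),
            nn.Linear(fused // 16, fused), nn.Sigmoid(),
        )
        self.head = nn.Linear(fused, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = torch.cat([self.pool(self.edge(x)).flatten(1),
                       self.pool(self.ctx(x)).flatten(1)], dim=1)
        return self.head(f * self.gate(f))                # gated fusion -> logits

logits = DualBackboneFusion()(torch.randn(1, 3, 224, 224))  # sanity check: [1, 4]
```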

1 page, 148 KiB  
Correction
Correction: Baradad-Jurjo et al. Measurement of Melanocytic Choroidal Lesions: Ultrasound Versus Ultrawide-Field Fundus Imaging System. Cancers 2025, 17, 642
by Maria C. Baradad-Jurjo, Daniel Lorenzo, Estel·la Rojas-Pineda, Laura Vigués-Jorba, Rahul Morwani, Lluís Arias, Pere Garcia-Bru, Estefania Cobos, Juan Francisco Santamaria, Carmen Antia Rodríguez-Fernández and Josep M. Caminal
Cancers 2025, 17(15), 2463; https://doi.org/10.3390/cancers17152463 - 25 Jul 2025
Viewed by 81
Abstract
In the published publication [...]
(This article belongs to the Section Cancer Causes, Screening and Diagnosis)
12 pages, 7016 KiB  
Article
Triamcinolone Acetonide-Assisted Visualization and Removal of Vitreous Cortex Remnants in Retinal Detachment: A Prospective Cohort Study
by Francesco Faraldi, Carlo Alessandro Lavia, Daniela Bacherini, Clara Rizzo, Maria Cristina Savastano, Marco Nassisi, Mariantonia Ferrara, Mario R. Romano and Stanislao Rizzo
Diagnostics 2025, 15(15), 1854; https://doi.org/10.3390/diagnostics15151854 - 23 Jul 2025
Viewed by 262
Abstract
Background/Objectives: In rhegmatogenous retinal detachment (RRD), vitreous cortex remnants (VCRs) may contribute to the development and progression of proliferative vitreoretinopathy (PVR). This study aimed to evaluate potential toxicity and trauma secondary to VCRs visualization and removal during pars plana vitrectomy (PPV) for RRD. Methods: Prospective study on patients with primary RRD who underwent PPV. Imaging assessment included widefield OCT (WF-OCT), ultra-WF retinography and fundus autofluorescence (FAF). During PPV, a filtered and diluted triamcinolone acetonide (TA) solution (20 mg/mL) was used to evaluate the presence and extension of VCRs, removed using an extendible diamond-dusted sweeper (EDDS). After six months, retinal and retinal pigment epithelium toxicity and retinal trauma due to VCRs removal were investigated. Results: Retinal reattachment was achieved in 21/21 cases included in the study. No signs of retinal or RPE toxicity were detected and WF-OCT performed in the areas of VCRs removal revealed an intact inner retinal architecture in the majority of eyes, with minor and localized inner retinal indentations in 4 cases. Conclusions: VCRs visualization and removal using TA and EDDS appears to be safe, with no retinal toxicity and very limited and circumscribed mechanical trauma. This approach may contribute to reducing the risk of postoperative PVR.
(This article belongs to the Section Biomedical Optics)

12 pages, 2353 KiB  
Article
Intergrader Agreement on Qualitative and Quantitative Assessment of Diabetic Retinopathy Severity Using Ultra-Widefield Imaging: INSPIRED Study Report 1
by Eleonora Riotto, Wei-Shan Tsai, Hagar Khalid, Francesca Lamanna, Louise Roch, Medha Manoj and Sobha Sivaprasad
Diagnostics 2025, 15(14), 1831; https://doi.org/10.3390/diagnostics15141831 - 21 Jul 2025
Viewed by 273
Abstract
Background/Objectives: Discrepancies in diabetic retinopathy (DR) grading are well-documented, with retinal non-perfusion (RNP) quantification posing greater challenges. This study assessed intergrader agreement in DR evaluation, focusing on qualitative severity grading and quantitative RNP measurement. We aimed to improve agreement through structured consensus meetings. Methods: A retrospective analysis of 100 comparisons from 50 eyes (36 patients) was conducted. Two paired medical retina fellows graded ultra-widefield color fundus photographs (CFP) and fundus fluorescein angiography (FFA) images. CFP assessments included DR severity using the International Clinical Diabetic Retinopathy (ICDR) grading system, DR Severity Scale (DRSS), and predominantly peripheral lesions (PPL). FFA-based RNP was defined as capillary loss with grayscale matching the foveal avascular zone. Weekly adjudication by a senior specialist resolved discrepancies. Intergrader agreement was evaluated using Cohen’s kappa (qualitative DRSS) and intraclass correlation coefficients (ICC) (quantitative RNP). Bland–Altman analysis assessed bias and variability. Results: After eight consensus meetings, CFP grading agreement improved to excellent: kappa = 91% (ICDR DR severity), 89% (DRSS), and 89% (PPL). FFA-based PPL agreement reached 100%. For RNP, the non-perfusion index (NPI) showed moderate overall ICC (0.49), with regional ICCs ranging from 0.40 to 0.57 (highest in the nasal region, ICC = 0.57). Bland–Altman analysis revealed a mean NPI difference of 0.12 (limits: −0.11 to 0.35), indicating acceptable variability despite outliers. Conclusions: Structured consensus training achieved excellent intergrader agreement for DR severity and PPL grading, supporting the clinical reliability of ultra-widefield imaging. However, RNP measurement variability underscores the need for standardized protocols and automated tools to enhance reproducibility. This process is critical for developing robust AI-based screening systems.
(This article belongs to the Special Issue New Advances in Retinal Imaging)
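For readers unfamiliar with the agreement statistics named above, weighted Cohen's kappa and Bland–Altman limits of agreement can be computed with standard tools. The grades and NPI values below are invented for illustration, not study data, and the study's ICC analysis is not reproduced here.

```python
# Agreement statistics sketch: quadratically weighted kappa for ordinal DR
# grades, and Bland–Altman bias/limits for a continuous measure such as the
# non-perfusion index (NPI). All values are hypothetical.
import numpy as np
from sklearn.metrics import cohen_kappa_score

grader_1 = [0, 1, 2, 2, 3, 1, 0, 4]      # ICDR severity, grader 1
grader_2 = [0, 1, 2, 3, 3, 1, 0, 4]      # ICDR severity, grader 2
kappa = cohen_kappa_score(grader_1, grader_2, weights="quadratic")

npi_1 = np.array([0.20, 0.35, 0.10, 0.50])
npi_2 = np.array([0.15, 0.30, 0.05, 0.30])
diff = npi_1 - npi_2
bias = diff.mean()                               # mean difference
sd = diff.std(ddof=1)
limits = (bias - 1.96 * sd, bias + 1.96 * sd)    # 95% limits of agreement
print(kappa, bias, limits)
```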

20 pages, 688 KiB  
Article
Multi-Modal AI for Multi-Label Retinal Disease Prediction Using OCT and Fundus Images: A Hybrid Approach
by Amina Zedadra, Mahmoud Yassine Salah-Salah, Ouarda Zedadra and Antonio Guerrieri
Sensors 2025, 25(14), 4492; https://doi.org/10.3390/s25144492 - 19 Jul 2025
Viewed by 421
Abstract
Ocular diseases can significantly affect vision and overall quality of life, with diagnosis often being time-consuming and dependent on expert interpretation. While previous computer-aided diagnostic systems have focused primarily on medical imaging, this paper proposes VisionTrack, a multi-modal AI system for predicting multiple retinal diseases, including Diabetic Retinopathy (DR), Age-related Macular Degeneration (AMD), Diabetic Macular Edema (DME), drusen, Central Serous Retinopathy (CSR), and Macular Hole (MH), as well as normal cases. The proposed framework integrates a Convolutional Neural Network (CNN) for image-based feature extraction, a Graph Neural Network (GNN) to model complex relationships among clinical risk factors, and a Large Language Model (LLM) to process patient medical reports. By leveraging diverse data sources, VisionTrack improves prediction accuracy and offers a more comprehensive assessment of retinal health. Experimental results demonstrate the effectiveness of this hybrid system, highlighting its potential for early detection, risk assessment, and personalized ophthalmic care. Experiments were conducted using two publicly available datasets, RetinalOCT and RFMID, which provide diverse retinal imaging modalities: OCT images and fundus images, respectively. The proposed multi-modal AI system demonstrated strong performance in multi-label disease prediction. On the RetinalOCT dataset, the model achieved an accuracy of 0.980, F1-score of 0.979, recall of 0.978, and precision of 0.979. Similarly, on the RFMID dataset, it reached an accuracy of 0.989, F1-score of 0.881, recall of 0.866, and precision of 0.897. These results confirm the robustness, reliability, and generalization capability of the proposed approach across different imaging modalities.
(This article belongs to the Section Sensing and Imaging)
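As a reference point for the multi-label metrics reported above, here is a minimal scikit-learn sketch with hypothetical label matrices (not RetinalOCT or RFMID outputs). Note that for multi-label data, accuracy_score is the strict exact-match ratio, which is one plausible reading of the reported accuracy.

```python
# Multi-label evaluation sketch; label matrices are hypothetical, with one
# column per disease (e.g., DR, AMD, DME, MH).
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = np.array([[1, 0, 0, 1],
                   [0, 1, 0, 0],
                   [1, 1, 0, 0]])
y_pred = np.array([[1, 0, 0, 1],
                   [0, 1, 1, 0],
                   [1, 1, 0, 0]])

acc = accuracy_score(y_true, y_pred)     # exact-match ratio: 2/3 rows correct
f1 = f1_score(y_true, y_pred, average="macro", zero_division=0)
rec = recall_score(y_true, y_pred, average="macro", zero_division=0)
prec = precision_score(y_true, y_pred, average="macro", zero_division=0)
print(acc, f1, rec, prec)
```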

14 pages, 3345 KiB  
Review
Fundus Autofluorescence in Inherited Retinal Disease: A Review
by Jin Kyun Oh, Omar Moussa, Byron L. Lam and Jesse D. Sengillo
Cells 2025, 14(14), 1092; https://doi.org/10.3390/cells14141092 - 16 Jul 2025
Viewed by 290
Abstract
Fundus autofluorescence (FAF) is a non-invasive retinal imaging technique that helps visualize naturally occurring fluorophores, such as lipofuscin, and provides valuable insight into retinal diseases—particularly inherited retinal diseases (IRDs). FAF is especially useful in detecting subclinical or early-stage IRDs and in monitoring disease progression over time. In Stargardt disease, areas of decreased autofluorescence correlate with disease progression and have been proposed as a biomarker for future clinical trials. FAF can also help differentiate Stargardt disease from other macular dystrophies. In retinitis pigmentosa, hyperautofluorescent rings are a common feature on FAF and serve as an important marker for disease monitoring, especially as changes align with those seen on other imaging modalities. FAF is valuable in tracking progression of choroideremia and may help identify disease carrier status. FAF has also improved the characterization of mitochondrial retinopathies such as maternally inherited diabetes and deafness. As a rapid and widely accessible imaging modality, FAF plays a critical role in both diagnosis and longitudinal care of patients with IRDs.
(This article belongs to the Special Issue Retinal Pigment Epithelium in Degenerative Retinal Diseases)

20 pages, 526 KiB  
Article
Assessment of Retinal Microcirculation in Primary Open-Angle Glaucoma Using Adaptive Optics and OCT Angiography: Correlation with Structural and Functional Damage
by Anna Zaleska-Żmijewska, Alina Szewczuk, Zbigniew M. Wawrzyniak, Maria Żmijewska and Jacek P. Szaflik
J. Clin. Med. 2025, 14(14), 4978; https://doi.org/10.3390/jcm14144978 - 14 Jul 2025
Viewed by 257
Abstract
Background: This study aimed to evaluate retinal arteriole parameters using adaptive optics (AO) rtx1™ (Imagine Eyes, Orsay, France) and peripapillary and macular vessel densities with optical coherence tomography angiography (OCTA) in eyes with different stages of primary open-angle glaucoma (POAG) compared to healthy eyes. It also investigated the associations between vascular parameters and glaucoma severity, as defined by structural (OCT) and functional (visual field) changes. Methods: Fifty-seven eyes from 31 POAG patients and fifty from 25 healthy volunteers were examined. Retinal arteriole morphology was assessed using the AO rtx1™ fundus camera, which measured lumen diameter, wall thickness, total diameter, wall-to-lumen ratio (WLR), and wall cross-sectional area. OCTA was used to measure vessel densities in the superficial (SCP) and deep (DCP) capillary plexuses of the macula and the radial peripapillary capillary plexus (RPCP), as well as the foveal avascular zone (FAZ) area. Structural OCT parameters (RNFL, GCC, rim area) and visual field tests (MD, PSD) were also performed. Results: Glaucoma eyes showed significantly thicker arteriole walls (12.8 ± 1.4 vs. 12.2 ± 1.3 µm; p = 0.030), narrower lumens (85.5 ± 10.4 vs. 100.6 ± 11.1 µm; p < 0.001), smaller total diameters (111.0 ± 10.4 vs. 124.1 ± 12.4 µm; p < 0.001), and higher WLRs (0.301 ± 0.04 vs. 0.238 ± 0.002; p < 0.001) than healthy eyes. In glaucoma patients, OCTA revealed significantly reduced vessel densities in the SCP (36.39 ± 3.60 vs. 38.46 ± 1.41; p < 0.001), DCP (36.39 ± 3.60 vs. 38.46 ± 1.41; p < 0.001), and RPCP plexuses (35.42 ± 4.97 vs. 39.27 ± 1.48; p < 0.001). The FAZ area was enlarged in eyes with glaucoma (0.546 ± 0.299 vs. 0.295 ± 0.125 mm²; p < 0.001). Positive correlations were found between vessel densities and OCT parameters (RNFL, r = 0.621; GCC, r = 0.536; rim area, r = 0.489), while negative correlations were observed with visual field deficits (r = −0.517). Conclusions: Vascular deterioration, assessed by AO rtx1™ and OCTA, correlates closely with structural and functional damage in glaucoma. Retinal microcirculation changes may precede structural abnormalities in the optic nerve head. Both imaging methods enable the earlier detection, staging, and monitoring of glaucoma compared to conventional tests.
(This article belongs to the Section Ophthalmology)
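The vessel-wall indices reported above follow from the standard adaptive-optics definitions (assumed here, not stated in the abstract): total diameter = lumen + 2 x wall, WLR = 2 x wall / lumen, and WCSA = pi/4 x (total² − lumen²). A quick arithmetic check against the glaucoma-group means:

```python
# Arithmetic check of the adaptive-optics vessel indices, using the
# glaucoma-group means from the abstract and the standard (assumed) formulas.
import math

wall, lumen = 12.8, 85.5                      # µm
total = lumen + 2 * wall                      # 111.1 µm (reported: 111.0)
wlr = 2 * wall / lumen                        # 0.299 (reported: 0.301)
wcsa = math.pi / 4 * (total**2 - lumen**2)    # wall cross-sectional area, µm²
print(total, round(wlr, 3), round(wcsa, 1))
```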

40 pages, 3646 KiB  
Article
Novel Deep Learning Model for Glaucoma Detection Using Fusion of Fundus and Optical Coherence Tomography Images
by Saad Islam, Ravinesh C. Deo, Prabal Datta Barua, Jeffrey Soar and U. Rajendra Acharya
Sensors 2025, 25(14), 4337; https://doi.org/10.3390/s25144337 - 11 Jul 2025
Viewed by 545
Abstract
Glaucoma is a leading cause of irreversible blindness worldwide, yet early detection can prevent vision loss. This paper proposes a novel deep learning approach that combines two ophthalmic imaging modalities, fundus photographs and optical coherence tomography scans, as paired images from the same eye of each patient for automated glaucoma detection. We develop separate convolutional neural network models for fundus and optical coherence tomography images and a fusion model that integrates features from both modalities for each eye. The models are trained and evaluated on a private clinical dataset (Bangladesh Eye Hospital and Institute Ltd.) consisting of 216 healthy eye images (108 fundus, 108 optical coherence tomography) from 108 patients and 200 glaucomatous eye images (100 fundus, 100 optical coherence tomography) from 100 patients. Our methodology includes image preprocessing pipelines for each modality, custom convolutional neural network/ResNet-based architectures for single-modality analysis, and a two-branch fusion network combining fundus and optical coherence tomography feature representations. We report the performance (accuracy, sensitivity, specificity, and area under curve) of the fundus-only, optical coherence tomography-only, and fusion models. In addition to a fixed test set evaluation, we perform five-fold cross-validation, confirming the robustness and consistency of the fusion model across multiple data partitions. On our fixed test set, the fundus-only model achieves 86% accuracy (AUC 0.89) and the optical coherence tomography-only model, 84% accuracy (AUC 0.87). Our fused model reaches 92% accuracy (AUC 0.95), an absolute improvement of 6 percentage points and 8 percentage points over the fundus and OCT baselines, respectively. McNemar’s test on pooled five-fold validation predictions for fundus-only vs. fused (b = 3, c = 18) yields χ² = 10.7 (p = 0.001), and for optical coherence tomography-only vs. fused (bₒ = 5, cₒ = 20) χₒ² = 9.0 (p = 0.003), confirming that the fusion gains are significant. Five-fold cross-validation further confirms these improvements (mean AUC 0.952 ± 0.011). We also compare our results with the existing literature and discuss the clinical significance, limitations, and future work. To the best of our knowledge, this is the first time a novel deep learning model has been used on a fusion of paired fundus and optical coherence tomography images of the same patient for the detection of glaucoma.
(This article belongs to the Special Issue AI and Big Data Analytics for Medical E-Diagnosis)
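The χ² values quoted above are consistent with the uncorrected McNemar statistic computed from the discordant-pair counts, χ² = (b − c)² / (b + c); a quick check:

```python
# McNemar's test from discordant counts b and c (uncorrected form, which
# matches the reported values); p-value via the chi-squared survival function.
from scipy.stats import chi2

def mcnemar_chi2(b: int, c: int) -> tuple[float, float]:
    stat = (b - c) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)

print(mcnemar_chi2(3, 18))    # fundus-only vs. fused: chi2 ~ 10.7, p ~ 0.001
print(mcnemar_chi2(5, 20))    # OCT-only vs. fused:    chi2 = 9.0,  p ~ 0.003
```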

10 pages, 2019 KiB  
Article
Bilateral Sector Macular Dystrophy Associated with PRPH2 Variant c.623G>A (p.Gly208Asp)
by Simone Kellner, Silke Weinitz, Ghazaleh Farmand, Heidi Stöhr, Bernhard H. F. Weber and Ulrich Kellner
J. Clin. Med. 2025, 14(14), 4893; https://doi.org/10.3390/jcm14144893 - 10 Jul 2025
Viewed by 271
Abstract
Objective: The clinical presentation of inherited retinal dystrophies associated with pathogenic variants in PRPH2 is highly variable. Here we present bilateral sector macular dystrophy as a novel clinical phenotype. Methods and analysis: Ophthalmologic examination, detailed retinal imaging with optical coherence tomography (OCT), OCT-angiography, fundus and near-infrared autofluorescence and molecular genetic testing were performed on a 30-year-old female. Results: The patient reported the onset of subjective visual disturbances 4.5 months prior to our first examination. Clinical examination and retinal imaging revealed bilateral sharply demarcated paracentral lesions in the temporal lower macula and otherwise normal retinal findings. Patient history revealed no medication or other possible causes for these unusual retinal lesions. Molecular genetic testing revealed a heterozygous c.623G>A variation (p.(Gly208Asp)) in the PRPH2 gene. Conclusions: Bilateral sectoral macular dystrophy has not been reported previously in any inherited retinal dystrophy. This feature adds to the wide spectrum of PRPH2-associated clinical presentations.

17 pages, 1937 KiB  
Article
Hybrid Deep Learning Model for Improved Glaucoma Diagnostic Accuracy
by Nahum Flores, José La Rosa, Sebastian Tuesta, Luis Izquierdo, María Henriquez and David Mauricio
Information 2025, 16(7), 593; https://doi.org/10.3390/info16070593 - 10 Jul 2025
Viewed by 301
Abstract
Glaucoma is an irreversible neurodegenerative disease that affects the optic nerve, leading to partial or complete vision loss. Early and accurate detection is crucial to prevent vision impairment, which necessitates the development of highly precise diagnostic tools. Deep learning (DL) has emerged as a promising approach for glaucoma diagnosis, where the model is trained on datasets of fundus images. To improve the detection accuracy, we propose a hybrid model for glaucoma detection that combines multiple DL models with two fine-tuning strategies and uses a majority voting scheme to determine the final prediction. In experiments, the hybrid model achieved a detection accuracy of 96.55%, a sensitivity of 98.84%, and a specificity of 94.32%. Integrating datasets was found to improve performance compared to using them separately, even with transfer learning. When compared to individual DL models, the hybrid model achieved a 20.69% improvement in accuracy compared to the best model when applied to a single dataset, a 13.22% improvement when applied with transfer learning across all datasets, and a 1.72% improvement when applied to all datasets. These results demonstrate the potential of hybrid DL models to detect glaucoma more accurately than individual models.
(This article belongs to the Section Artificial Intelligence)
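The majority-voting step described above is simple to express; here is a minimal sketch with hypothetical per-model labels (0 = healthy, 1 = glaucoma), not the authors' implementation:

```python
# Majority voting over per-model predictions; ties resolve to the first
# label encountered (an arbitrary choice for this sketch).
from collections import Counter

def majority_vote(predictions: list[int]) -> int:
    return Counter(predictions).most_common(1)[0][0]

# Three fine-tuned DL models voting on one fundus image:
print(majority_vote([1, 0, 1]))   # -> 1 (glaucoma)
```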

14 pages, 751 KiB  
Article
Comparison of Validity and Reliability of Manual Consensus Grading vs. Automated AI Grading for Diabetic Retinopathy Screening in Oslo, Norway: A Cross-Sectional Pilot Study
by Mia Karabeg, Goran Petrovski, Katrine Holen, Ellen Steffensen Sauesund, Dag Sigurd Fosmark, Greg Russell, Maja Gran Erke, Vallo Volke, Vidas Raudonis, Rasa Verkauskiene, Jelizaveta Sokolovska, Morten Carstens Moe, Inga-Britt Kjellevold Haugen and Beata Eva Petrovski
J. Clin. Med. 2025, 14(13), 4810; https://doi.org/10.3390/jcm14134810 - 7 Jul 2025
Viewed by 528
Abstract
Background: Diabetic retinopathy (DR) is a leading cause of visual impairment worldwide. Manual grading of fundus images is the gold standard in DR screening, although it is time-consuming. Artificial intelligence (AI)-based algorithms offer a faster alternative, though concerns remain about their diagnostic reliability. Methods: A cross-sectional pilot study among patients (≥18 years) with diabetes was established for DR and diabetic macular edema (DME) screening at the Oslo University Hospital (OUH), Department of Ophthalmology, and the Norwegian Association of the Blind and Partially Sighted (NABP). The aim of the study was to evaluate the validity (accuracy, sensitivity, specificity) and reliability (inter-rater agreement) of automated AI-based compared to manual consensus (MC) grading of DR and DME, performed by a multidisciplinary team of healthcare professionals. Grading of DR and DME was performed manually and by EyeArt (Eyenuk) software version v2.1.0, based on the International Clinical Disease Severity Scale (ICDR) for DR. Agreement was measured by Quadratic Weighted Kappa (QWK) and Cohen’s Kappa (κ). Sensitivity, specificity, and diagnostic test accuracy (Area Under the Curve (AUC)) were also calculated. Results: A total of 128 individuals (247 eyes; 51 women, 77 men) were included, with a median age of 52.5 years. Prevalence of any vs. referable DR (RDR) was 20.2% vs. 11.7%, while sensitivity was 94.0% vs. 89.7%, specificity was 72.6% vs. 83.0%, and AUC was 83.5% vs. 86.3%, respectively. DME was detected only in one eye by both methods. Conclusions: AI-based grading offered high sensitivity and acceptable specificity for detecting DR, showing moderate agreement with manual assessments. Such grading may serve as an effective screening tool to support clinical evaluation, while ongoing training of human graders remains essential to ensure high-quality reference standards for accurate diagnosis and the development of AI algorithms.
(This article belongs to the Special Issue Artificial Intelligence and Eye Disease)
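For reference, the validity measures used above have standard definitions; the sketch below computes sensitivity and specificity from a hypothetical confusion table and AUC from made-up scores. None of these numbers are from the study.

```python
# Sensitivity, specificity, and AUC with standard definitions; counts and
# scores here are hypothetical, not the study's data.
from sklearn.metrics import roc_auc_score

tp, fn, tn, fp = 44, 3, 135, 31          # hypothetical referable-DR screen
sensitivity = tp / (tp + fn)             # 0.936
specificity = tn / (tn + fp)             # 0.813

# AUC needs per-eye scores; toy AI grades vs. reference labels:
y_true = [0, 0, 1, 1, 0, 1]
y_score = [0.10, 0.40, 0.35, 0.80, 0.20, 0.90]
auc = roc_auc_score(y_true, y_score)     # 0.889
print(sensitivity, specificity, auc)
```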

27 pages, 12221 KiB  
Article
Retinal Vessel Segmentation Based on a Lightweight U-Net and Reverse Attention
by Fernando Daniel Hernandez-Gutierrez, Eli Gabriel Avina-Bravo, Mario Alberto Ibarra-Manzano, Jose Ruiz-Pinales, Emmanuel Ovalle-Magallanes and Juan Gabriel Avina-Cervantes
Mathematics 2025, 13(13), 2203; https://doi.org/10.3390/math13132203 - 5 Jul 2025
Viewed by 936
Abstract
U-shaped architectures have achieved exceptional performance in medical image segmentation. Their aim is to extract features through two symmetrical paths: an encoder and a decoder. We propose a lightweight U-Net incorporating reverse attention and a preprocessing framework for accurate retinal vessel segmentation. This concept could benefit portable or embedded recognition systems with limited resources for real-time operation. Compared to the baseline model (7.7 M parameters), the proposed U-Net model has only 1.9 M parameters and was tested on the DRIVE (Digital Retinal Images for Vesselness Extraction), CHASE (Child Heart and Health Study in England), and HRF (High-Resolution Fundus) datasets for vesselness analysis. The proposed model achieved Dice coefficients and IoU scores of 0.7871 and 0.6318 on the DRIVE dataset, 0.8036 and 0.6910 on the CHASE-DB1 Retinal Vessel Reference dataset, as well as 0.6902 and 0.5270 on the HRF dataset, respectively. Notably, the integration of the reverse attention mechanism contributed to a more accurate delineation of thin and peripheral vessels, which are often undetected by conventional models. The model has 1.94 million parameters and requires 12.21 GFLOPs. Furthermore, during inference, the model achieved an average frame rate of 208 FPS and a latency of 4.81 ms. These findings support the applicability of the proposed model in real-world clinical and mobile healthcare environments where efficiency and accuracy are essential.
(This article belongs to the Special Issue Advanced Research in Image Processing and Optimization Methods)
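For context, Dice and IoU relate as Dice = 2 x IoU / (1 + IoU) for any single mask pair (dataset-averaged scores, as reported above, need not satisfy this exactly). A minimal NumPy sketch on toy binary vessel masks:

```python
# Dice coefficient and IoU for binary masks; toy arrays, standard definitions.
import numpy as np

def dice_iou(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())
    return float(dice), float(inter / union)

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
gt = np.array([[1, 0, 0],
               [0, 1, 1]])
print(dice_iou(pred, gt))   # (0.666..., 0.5); note 2*0.5/(1+0.5) = 0.666...
```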

32 pages, 4514 KiB  
Review
Blue Light and Green Light Fundus Autofluorescence, Complementary to Optical Coherence Tomography, in Age-Related Macular Degeneration Evaluation
by Antonia-Elena Ranetti, Horia Tudor Stanca, Mihnea Munteanu, Raluca Bievel Radulescu and Simona Stanca
Diagnostics 2025, 15(13), 1688; https://doi.org/10.3390/diagnostics15131688 - 2 Jul 2025
Viewed by 949
Abstract
Background: Age-related macular degeneration (AMD) is one of the leading causes of permanent vision loss in the elderly, particularly in higher-income countries. Fundus autofluorescence (FAF) imaging is a widely used, non-invasive technique that complements structural imaging in the assessment of retinal pigment epithelium (RPE) integrity. While optical coherence tomography (OCT) remains the gold standard for retinal imaging due to its high-resolution cross-sectional visualization, FAF offers unique metabolic insights. Among the FAF modalities, blue light FAF (B-FAF) is more commonly employed, whereas green light FAF (G-FAF) provides subtly different image characteristics, particularly improved visualization and contrast in the central macula. Despite identical acquisition times and nearly indistinguishable workflows, G-FAF is notably underutilized in clinical practice. Objectives: This narrative review critically compares green and blue FAF in terms of their diagnostic utility relative to OCT, with a focus on lesion detectability, macular pigment interference, and clinical decision-making in retinal disorders. Methods: A comprehensive literature search was performed using the PubMed database for studies published prior to February 2025. The search utilized the keywords fundus autofluorescence and age-related macular degeneration. The primary focus was on short-wavelength FAF and its clinical utility in AMD, considering three aspects: diagnosis, follow-up, and prognosis. The OCT findings served as the reference standard for anatomical correlation and diagnostic accuracy. Results: Both FAF modalities correlated well with OCT in detecting RPE abnormalities. G-FAF demonstrated improved visibility of central lesions due to reduced masking by macular pigment and enhanced contrast in the macula. However, clinical preference remained skewed toward B-FAF, driven more by tradition and device default settings than by evidence-based superiority. G-FAF’s diagnostic potential remains underrecognized despite its comparable practicality and subtle imaging advantages specifically for AMD patients. AMD stages were accurately characterized, and relevant images were used to highlight the significance of G-FAF and B-FAF in the examination of AMD patients. Conclusions: While OCT remains the gold standard, FAF provides complementary information that can guide management strategy. Since G-FAF is functionally equivalent in acquisition, it offers slight advantages. Broader awareness and more frequent integration of G-FAF could optimize multimodal imaging strategies, particularly in the intermediate stage.
(This article belongs to the Special Issue OCT and OCTA Assessment of Retinal and Choroidal Diseases)

21 pages, 1998 KiB  
Article
Computational Modeling and Optimization of Deep Learning for Multi-Modal Glaucoma Diagnosis
by Vaibhav C. Gandhi, Priyesh Gandhi, John Omomoluwa Ogundiran, Maurice Samuntu Sakaji Tshibola and Jean-Paul Kapuya Bulaba Nyembwe
AppliedMath 2025, 5(3), 82; https://doi.org/10.3390/appliedmath5030082 - 2 Jul 2025
Viewed by 330
Abstract
Glaucoma is a leading cause of irreversible blindness globally, with early diagnosis being crucial to preventing vision loss. Traditional diagnostic methods, including fundus photography, OCT imaging, and perimetry, often fall short in sensitivity and fail to integrate structural and functional data. This study proposes a novel multi-modal diagnostic framework that combines convolutional neural networks (CNNs), vision transformers (ViTs), and quantum-enhanced layers to improve glaucoma detection accuracy and efficiency. The framework integrates fundus images, OCT scans, and clinical biomarkers, leveraging their complementary strengths through a weighted fusion mechanism. Datasets including GRAPE and other public and clinical sources were used, ensuring diverse demographic representation and supporting generalizability. The model was trained and validated using cross-entropy loss, L2 regularization, and adaptive learning strategies, achieving an accuracy of 96%, sensitivity of 94%, and an AUC of 0.97—outperforming CNN-only and ViT-only approaches. Additionally, the quantum-enhanced architecture reduced computational complexity from O(n²) to O(log n), enabling real-time deployment with a 40% reduction in FLOPs. The proposed system addresses key limitations of previous methods in terms of computational cost, data integration, and interpretability. This framework offers a scalable and clinically viable tool for early glaucoma detection, supporting personalized care and improving diagnostic workflows in ophthalmology.
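The weighted fusion mechanism mentioned above can be sketched as a convex combination of per-modality class probabilities; the branch scores and weights below are illustrative assumptions, not the paper's learned values.

```python
# Weighted fusion of per-modality class probabilities; a sketch under the
# assumption that fusion is a normalized weighted average of branch outputs.
import numpy as np

def fuse(probs_by_modality, weights):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize to a convex combination
    return sum(wi * np.asarray(p) for wi, p in zip(w, probs_by_modality))

p_fundus = [0.30, 0.70]   # P(healthy), P(glaucoma) from the fundus branch
p_oct    = [0.20, 0.80]   # from the OCT branch
p_clin   = [0.40, 0.60]   # from clinical biomarkers
print(fuse([p_fundus, p_oct, p_clin], weights=[0.4, 0.4, 0.2]))  # [0.28 0.72]
```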

17 pages, 654 KiB  
Article
Phenotypic and Genotypic Characterization of 171 Patients with Syndromic Inherited Retinal Diseases Highlights the Importance of Genetic Testing for Accurate Clinical Diagnosis
by Sofia Kulyamzin, Rina Leibu, Hadas Newman, Miriam Ehrenberg, Nitza Goldenberg-Cohen, Shiri Zayit-Soudry, Eedy Mezer, Ygal Rotenstreich, Iris Deitch, Daan M. Panneman, Dinah Zur, Elena Chervinsky, Stavit A. Shalev, Frans P. M. Cremers, Dror Sharon, Susanne Roosing and Tamar Ben-Yosef
Genes 2025, 16(7), 745; https://doi.org/10.3390/genes16070745 - 26 Jun 2025
Viewed by 503
Abstract
Background: Syndromic inherited retinal diseases (IRDs) are a clinically and genetically heterogeneous group of disorders, involving the retina and additional organs. Over 80 forms of syndromic IRD have been described. Methods: We aimed to phenotypically and genotypically characterize a cohort of 171 individuals from 140 Israeli families with syndromic IRD. Ophthalmic examination included best corrected visual acuity, fundus examination, visual field testing, retinal imaging and electrophysiological evaluation. Most participants were also evaluated by specialists in fields relevant to their extra-retinal symptoms. Genetic analyses included haplotype analysis, homozygosity mapping, Sanger sequencing and next-generation sequencing. Results: In total, 51% of the families in the cohort were consanguineous. The largest ethnic group was Muslim Arabs. The most common phenotype was Usher syndrome (USH). The most common causative gene was USH2A. In 29% of the families, genetic analysis led to a revised or modified clinical diagnosis. This included confirmation of an atypical USH diagnosis for individuals with late-onset retinitis pigmentosa (RP) and/or hearing loss (HL); diagnosis of Heimler syndrome in individuals with biallelic pathogenic variants in PEX6 and an original diagnosis of USH or nonsyndromic RP; and diagnosis of a mild form of Leber congenital amaurosis with early-onset deafness (LCAEOD) in an individual with a heterozygous pathogenic variant in TUBB4B and an original diagnosis of USH. Novel genotype–phenotype correlations included biallelic pathogenic variants in KATNIP, previously associated with Joubert syndrome (JBTS), in an individual who presented with kidney disease and IRD, but no other features of JBTS. Conclusions: Syndromic IRDs are a highly heterogeneous group of disorders. The rarity of some of these syndromes, on the one hand, and the co-occurrence of several syndromic and nonsyndromic conditions in some individuals, on the other, complicate the diagnostic process. Genetic analysis is the ultimate way to obtain an accurate clinical diagnosis in these individuals.
(This article belongs to the Special Issue Advances in Medical Genetics)
