Search Results (454)

Search Parameters:
Keywords = retinal fundus image

14 pages, 4548 KB  
Article
Comparison of Epiretinal Membrane Detection Rates Between Optos® and Clarus Ultra-Widefield Fundus Imaging Systems
by Satoshi Kuwayama, Yoshio Hirano, Arisa Shibata, Hiroaki Sugiyama, Nariko Soga, Kihei Yoshida, Takaaki Yuguchi, Ryo Kurobe, Akiyo Tsukada, Shuntaro Ogura, Hiroya Hashimoto and Tsutomu Yasukawa
J. Clin. Med. 2026, 15(2), 883; https://doi.org/10.3390/jcm15020883 (registering DOI) - 21 Jan 2026
Viewed by 100
Abstract
Background: Ultra-widefield (UWF) images are frequently used for fundus examinations during medical screening. Optos® generates pseudo-color images using only red and green lasers, which may reduce the visibility of retinal interface lesions. In contrast, Clarus™ incorporates blue light, suggesting potential superiority in epiretinal membrane (ERM) detection. Methods: This retrospective study included 233 patients (408 eyes; 816 UWF images per device) who underwent simultaneous Optos® and Clarus™ imaging plus optical coherence tomography (OCT) at our institution from March to April 2019. Ten blinded ophthalmologists assessed only the UWF images for ERM presence or absence. Diagnosis was confirmed by fundus examination and OCT. McNemar’s test compared detection accuracy. Results: Clarus™ consistently outperformed Optos®, with superior sensitivity [median 49% (range 42–70) vs. 14% (4–47); p = 0.002], correct judgment rate [85% (82–90) vs. 78% (44–88); p = 0.010], and lower unassessed rate [6% (2–13) vs. 13% (3–52); p = 0.002]. This superiority held across ERM stages, lens status, and ophthalmologist experience levels. Conclusions: This study demonstrated that Clarus™ significantly outperformed Optos® in ERM detection accuracy. These results suggest that true-color UWF systems like Clarus™ may be more useful for macular screening in routine practice and health examinations. Full article
(This article belongs to the Section Ophthalmology)
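As a rough illustration of the paired comparison the abstract describes (McNemar's test on per-eye ERM judgments from the two devices), a minimal Python sketch might look like the following; the 2x2 table is hypothetical, not the study's data.

```python
# Illustrative sketch (not the authors' code): McNemar's test on paired per-eye
# ERM judgments from two imaging systems. The counts below are made-up placeholders.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Rows: Optos correct / incorrect; columns: Clarus correct / incorrect.
table = np.array([[300, 20],   # both correct / only Optos correct
                  [60, 28]])   # only Clarus correct / both incorrect
result = mcnemar(table, exact=True)  # exact binomial test on the discordant pairs
print(f"McNemar statistic = {result.statistic}, p = {result.pvalue:.4f}")
```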

18 pages, 2295 KB  
Article
Automatic Retinal Nerve Fiber Segmentation and the Influence of Intersubject Variability in Ocular Parameters on the Mapping of Retinal Sites to the Pointwise Orientation Angles
by Diego Luján Villarreal and Adriana Leticia Vera-Tizatl
J. Imaging 2026, 12(1), 47; https://doi.org/10.3390/jimaging12010047 - 19 Jan 2026
Viewed by 158
Abstract
The current study investigates the influence of intersubject variability in ocular characteristics on the mapping of visual field (VF) sites to the pointwise directional angles in retinal nerve fiber layer (RNFL) bundle traces. In addition, the performance of the mapping of VF sites to the optic nerve head (ONH) was compared to ground-truth baselines. Fundus photographs of 546 eyes of 546 healthy subjects (with no history of ocular disease or diabetic retinopathy) were enhanced digitally, and RNFL bundle traces were segmented based on the Personalized Estimated Segmentation (PES) algorithm’s core technique. A 24-2 VF grid pattern was overlaid onto the photographs in order to relate VF test points to intersecting RNFL bundles. The PES algorithm effectively traced RNFL bundles in fundus images, achieving an average accuracy of 97.6% relative to the Jansonius map through the application of 10th-order Bezier curves. The PES algorithm assembled an average of 4726 RNFL bundles per fundus image based on 4975 sampling points, obtaining a total of 2,580,505 RNFL bundles based on 2,716,321 sampling points. The influence of ocular parameters could be evaluated for 34 out of 52 VF locations. The ONH-fovea angle and the ONH position in relation to the fovea were the most prominent predictors of variation in the mapping of retinal locations to the pointwise directional angle (p < 0.001). The variation explained by the model (R2 value) ranged from 27.6% for visual field location 15 to 77.8% for location 22, with a mean of 56%. Significant individual variability was found in the mapping of VF sites to the ONH, with a mean standard deviation (95% limit) of 16.55° (median 17.68°) for 50 out of 52 VF locations, ranging from less than 1° to 44.05°. The mean entry angles differed from previous baselines by a range of less than 1° to 23.9° (average difference of 10.6° ± 5.53°), with an RMSE of 11.94. Full article
(This article belongs to the Section Medical Imaging)
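The abstract credits the PES algorithm's 97.6% agreement with the Jansonius map to 10th-order Bezier curves. A minimal sketch of evaluating such a curve from control points is shown below; the control-point coordinates are placeholders, not study data.

```python
# Minimal sketch of evaluating a high-order Bezier curve, the primitive the
# abstract says the PES algorithm uses (10th order) to trace RNFL bundles.
import numpy as np
from scipy.special import comb

def bezier(control_pts: np.ndarray, n_samples: int = 200) -> np.ndarray:
    """Evaluate a Bezier curve of order len(control_pts) - 1 at n_samples points."""
    n = len(control_pts) - 1
    t = np.linspace(0.0, 1.0, n_samples)
    basis = np.stack([comb(n, i) * t**i * (1 - t)**(n - i)
                      for i in range(n + 1)], axis=1)   # Bernstein basis, (n_samples, n+1)
    return basis @ control_pts                          # (n_samples, 2) curve samples

ctrl = np.random.default_rng(0).uniform(0.0, 1.0, size=(11, 2))  # 11 control points -> order 10
print(bezier(ctrl).shape)   # (200, 2)
```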

14 pages, 1788 KB  
Article
CDHR1-Associated Retinal Dystrophies: Expanding the Clinical and Genetic Spectrum with a Hungarian Cohort
by Ágnes Takács, Balázs Varsányi, Mirella Barboni, Rita Vámos, Balázs Lesch, Dominik Dobos, Emília Clapp, András Végh, Ditta Zobor, Krisztina Knézy, Zoltán Zsolt Nagy and Viktória Szabó
Genes 2026, 17(1), 102; https://doi.org/10.3390/genes17010102 - 19 Jan 2026
Viewed by 187
Abstract
Aim: To report on the clinical and genetic spectrum of retinopathy associated with CDHR1 variants in a Hungarian cohort. Methods: A retrospective cohort study was conducted at a single tertiary care referral center. The study enrolled nine patients harboring biallelic variants in the CDHR1 gene. Detailed clinical history, multimodal imaging, electroretinography, and molecular genetics are presented. Results: We identified four CDHR1 variants predicted to cause loss-of-function and five phenotypes (cone dystrophy, central areolar choroidal dystrophy, cone-rod dystrophy, rod-cone dystrophy, and late-onset macular dystrophy). The most frequent variant was the synonymous CDHR1 c.783G>A (p.Pro261=) variant (10/18 alleles, 55.6%). A novel splice acceptor site variant, CDHR1 c.349-1G>A, and a novel intronic variant, CDHR1 c.1168-10A>G, were also detected. Fundus examination revealed macular atrophy with or without peripheral retinal changes. Full-field electroretinography, available in seven patients, demonstrated decreased light-adapted and extinguished dark-adapted responses in both the rod-cone dystrophy group and patients with macular involvement. OCT imaging indicated ellipsoid zone disruption with foveal sparing in two out of nine patients and severe retinal damage in rod-cone dystrophy cases. Conclusions: The predominant clinical manifestations of cone dystrophy, cone-rod dystrophy, and macular dystrophy in the Hungarian patient cohort showed heterogeneity, with a rod-cone dystrophy phenotype observed in five of nine cases (55.6%). The natural history of CDHR1-associated retinopathy typically follows a slow progression, providing a therapeutic window, which makes the disease a candidate for gene therapy. Full article
(This article belongs to the Special Issue Current Advances in Inherited Retinal Disease)

31 pages, 1485 KB  
Article
Explainable Multi-Modal Medical Image Analysis Through Dual-Stream Multi-Feature Fusion and Class-Specific Selection
by Naeem Ullah, Ivanoe De Falco and Giovanna Sannino
AI 2026, 7(1), 30; https://doi.org/10.3390/ai7010030 - 16 Jan 2026
Viewed by 313
Abstract
Effective and transparent medical diagnosis relies on accurate and interpretable classification of medical images across multiple modalities. This paper introduces an explainable multi-modal image analysis framework based on a dual-stream architecture that fuses handcrafted descriptors with deep features extracted from a custom MobileNet. Handcrafted descriptors include frequency-domain and texture features, while deep features are summarized using 26 statistical metrics to enhance interpretability. In the fusion stage, complementary features are combined at both the feature and decision levels. Decision-level integration combines calibrated soft voting, weighted voting, and stacking ensembles with optimized classifiers, including decision trees, random forests, gradient boosting, and logistic regression. To further refine performance, a hybrid class-specific feature selection strategy is proposed, combining mutual information, recursive elimination, and random forest importance to select the most discriminative features for each class. This hybrid selection approach eliminates redundancy, improves computational efficiency, and ensures robust classification. Explainability is provided through Local Interpretable Model-Agnostic Explanations, which offer transparent details about the ensemble model’s predictions and link influential handcrafted features to clinically meaningful image characteristics. The framework is validated on three benchmark datasets, i.e., BTTypes (brain MRI), Ultrasound Breast Images, and ACRIMA Retinal Fundus Images, demonstrating generalizability across modalities (MRI, ultrasound, retinal fundus) and disease categories (brain tumor, breast cancer, glaucoma). Full article
(This article belongs to the Special Issue Digital Health: AI-Driven Personalized Healthcare and Applications)
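The hybrid class-specific feature selection described in the abstract combines mutual information, recursive elimination, and random-forest importance. The sketch below assumes a simple rank-averaging of the three criteria for one class in a one-vs-rest view; the authors' actual combination rule may differ.

```python
# Hedged sketch of class-specific hybrid feature selection (mutual information +
# RFE + random-forest importance); rank averaging is an assumed combination rule.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, mutual_info_classif
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=40, n_informative=8, random_state=0)
target_class = 1
y_bin = (y == target_class).astype(int)          # one-vs-rest labels for this class

mi = mutual_info_classif(X, y_bin, random_state=0)
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=10).fit(X, y_bin)
rf_imp = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y_bin).feature_importances_

# Average the three rankings (higher = more discriminative) and keep the top 10 features.
ranks = (np.argsort(np.argsort(mi)) +
         np.argsort(np.argsort(-rfe.ranking_)) +   # RFE ranking_: lower is better
         np.argsort(np.argsort(rf_imp)))
selected = np.argsort(-ranks)[:10]
print("Class-specific features:", sorted(selected.tolist()))
```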

15 pages, 5995 KB  
Article
A Multi-Scale Soft-Thresholding Attention Network for Diabetic Retinopathy Recognition
by Xin Ma, Linfeng Sui, Ruixuan Chen, Taiyo Maeda and Jianting Cao
Appl. Sci. 2026, 16(2), 685; https://doi.org/10.3390/app16020685 - 8 Jan 2026
Viewed by 192
Abstract
Diabetic retinopathy (DR) is a major cause of preventable vision loss, and its early detection is essential for timely clinical intervention. However, existing deep learning-based DR recognition methods still face two fundamental challenges: substantial lesion-scale variability and significant background noise in retinal fundus images. To address these issues, we propose a lightweight framework named Multi-Scale Soft-Thresholding Attention Network (MSA-Net). The model integrates three components: (1) parallel multi-scale convolutional branches to capture lesions of different spatial sizes; (2) a soft-thresholding attention module to suppress noise-dominated responses; and (3) hierarchical feature fusion to enhance cross-layer representation consistency. A squeeze-and-excitation module is further incorporated for channel recalibration. On the APTOS 2019 dataset, MSA-Net achieves 97.54% accuracy and 0.991 AUC-ROC for binary DR recognition. We further evaluate five-class DR grading on APTOS2019 with 5-fold stratified cross-validation, achieving 82.71 ± 1.25% accuracy and 0.8937 ± 0.0142 QWK, indicating stable performance for ordinal severity classification. With only 4.54 M parameters, MSA-Net remains lightweight and suitable for deployment in resource-constrained DR screening environments. Full article
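One common way to realize the soft-thresholding attention named in the abstract is channel-wise soft shrinkage with thresholds learned from the global average of the absolute feature map, as in deep residual shrinkage networks; the PyTorch sketch below follows that pattern and is not necessarily the exact MSA-Net module.

```python
# Hedged sketch of a channel-wise soft-thresholding attention block (one plausible
# reading of the abstract's module, not the authors' published implementation).
import torch
import torch.nn as nn

class SoftThresholdAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, C, H, W)
        abs_mean = x.abs().mean(dim=(2, 3))                # per-channel statistic, (B, C)
        tau = (abs_mean * self.fc(abs_mean))[:, :, None, None]  # learned per-channel threshold
        return torch.sign(x) * torch.relu(x.abs() - tau)   # soft thresholding damps weak (noisy) responses

feat = torch.randn(2, 32, 56, 56)
print(SoftThresholdAttention(32)(feat).shape)              # torch.Size([2, 32, 56, 56])
```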

27 pages, 13798 KB  
Article
A Hierarchical Deep Learning Architecture for Diagnosing Retinal Diseases Using Cross-Modal OCT to Fundus Translation in the Lack of Paired Data
by Ekaterina A. Lopukhova, Gulnaz M. Idrisova, Timur R. Mukhamadeev, Grigory S. Voronkov, Ruslan V. Kutluyarov and Elizaveta P. Topolskaya
J. Imaging 2026, 12(1), 36; https://doi.org/10.3390/jimaging12010036 - 8 Jan 2026
Viewed by 252
Abstract
The paper focuses on automated diagnosis of retinal diseases, particularly Age-related Macular Degeneration (AMD) and diabetic retinopathy (DR), using optical coherence tomography (OCT), while addressing three key challenges: disease comorbidity, severe class imbalance, and the lack of strictly paired OCT and fundus data. We propose a hierarchical modular deep learning system designed for multi-label OCT screening with conditional routing to specialized staging modules. To enable DR staging when fundus images are unavailable, we use cross-modal alignment between OCT and fundus representations. This approach involves training a latent bridge that projects OCT embeddings into the fundus feature space. We enhance clinical reliability through per-class threshold calibration and implement quality control checks for OCT-only DR staging. Experiments demonstrate robust multi-label performance (macro-F1 = 0.989 ± 0.006 after per-class threshold calibration) and reliable calibration (ECE = 2.1 ± 0.4%), and OCT-only DR staging is feasible in 96.1% of cases that meet the quality control criterion. Full article
(This article belongs to the Section Medical Imaging)
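Per-class threshold calibration, which the abstract credits for the reported macro-F1, can be sketched as tuning each label's decision threshold on a validation split; the F1-maximizing criterion below is an assumption, since the paper's exact rule is not stated here.

```python
# Hedged sketch of per-class threshold calibration for multi-label screening;
# the synthetic validation data and the F1 criterion are assumptions.
import numpy as np
from sklearn.metrics import f1_score

def calibrate_thresholds(probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """probs, labels: (n_samples, n_classes) validation probabilities and 0/1 labels."""
    grid = np.linspace(0.05, 0.95, 19)
    return np.array([
        grid[np.argmax([f1_score(labels[:, c], probs[:, c] >= t) for t in grid])]
        for c in range(probs.shape[1])
    ])

rng = np.random.default_rng(0)
val_labels = rng.integers(0, 2, size=(200, 4))
val_probs = np.clip(val_labels * 0.6 + rng.uniform(0.0, 0.4, size=(200, 4)), 0.0, 1.0)
print(calibrate_thresholds(val_probs, val_labels))   # one decision threshold per class
```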

13 pages, 3572 KB  
Article
Diagnostic Performance of Ring Aperture Retro Mode Imaging for Detecting Pigment Migration in Age-Related Macular Degeneration
by Thomas Desmettre, Gerardo Ledesma-Gil and Michel Paques
Diagnostics 2026, 16(1), 42; https://doi.org/10.3390/diagnostics16010042 - 23 Dec 2025
Viewed by 329
Abstract
Background/Objectives: Pigment migration is a key biomarker of progression in age-related macular degeneration (AMD). This study assessed the diagnostic performance of ring aperture Retro mode (RAR) imaging for detecting pigment migration and compared its performance with established multimodal imaging techniques. Methods: This retrospective study included 80 eyes from 61 consecutive patients with AMD who underwent multimodal imaging with color fundus images (CFIs), fundus autofluorescence (FAF), RAR imaging (Mirante, NIDEK), and en face optical coherence tomography (OCT) with B-scans (Cirrus HD-OCT 5000, Zeiss). Two independent retina specialists graded the AMD stage and the presence of pigment migration across modalities. Sensitivity and positive predictive value (PPV) of RAR were calculated using en face OCT as the reference standard. Results: RAR demonstrated high diagnostic performance, with a sensitivity of 94.7% and a PPV of 93.4% relative to en face OCT. RAR frequently identified pigment migration that was not visible on CFI or FAF, particularly in early AMD and in eyes with media opacity. Distinct morphologic patterns—including hyperreflective foci, thickened retinal pigment epithelium, refractile drusen, and cuticular drusen—were consistently identifiable on RAR. In four eyes with geographic atrophy, RAR detected perifoveal pigment redistribution at least six months before foveal involvement was confirmed by OCT and FAF. Conclusions: RAR imaging is a rapid, sensitive, and clinically practical technique for detecting pigment migration in AMD. By complementing en face OCT and enhancing visualization in cases where standard imaging is limited, RAR may strengthen early disease surveillance, support prognostic assessment, and improve multimodal diagnostic workflows in routine practice. Full article
(This article belongs to the Special Issue Diagnosis and Management of Ophthalmic Disorders)
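The two metrics reported for RAR, sensitivity and positive predictive value against an en face OCT reference, reduce to simple ratios over a confusion matrix; the counts in the sketch below are hypothetical, not the study's data.

```python
# Illustrative computation of sensitivity and PPV against a reference standard.
tp, fp, fn = 72, 5, 4          # made-up true-positive / false-positive / false-negative counts

sensitivity = tp / (tp + fn)   # of eyes with pigment migration on OCT, fraction detected by RAR
ppv = tp / (tp + fp)           # of RAR-positive eyes, fraction confirmed on OCT
print(f"sensitivity = {sensitivity:.1%}, PPV = {ppv:.1%}")
```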

9 pages, 939 KB  
Article
Clinical Utility of Ultra-Widefield Fundus Photography with SS-OCT Images in Justifying Prophylactic Laser Photocoagulation of Peripheral Retinal Lesions
by Joanna Żuk, Krzysztof Safranow and Anna Machalińska
Bioengineering 2025, 12(12), 1367; https://doi.org/10.3390/bioengineering12121367 - 16 Dec 2025
Viewed by 557
Abstract
We aimed to validate the feasibility of combining ultra-widefield (UWF) fundus photography with targeted swept-source optical coherence tomography (SS-OCT) for clinical decision-making regarding prophylactic laser therapy. For this purpose, we enrolled 119 patients (135 eyes) who, based on fundus examination, were eligible for prophylactic photocoagulation of degenerative retinal lesions. Eyes were classified into two groups: (1) justified laser, when SS-OCT confirmed vitreoretinal traction and/or subretinal fluid beneath the neurosensory retina; and (2) non-justified laser, when SS-OCT did not confirm these criteria. Using this SS-OCT-guided UWF approach, we found that 25.1% of eyes that initially qualified for laser based on clinical examination did not meet the SS-OCT criteria. Patients in the justified laser group were significantly younger than those in the non-justified group. Horseshoe retinal tears, lattice degeneration and snail-track degenerations, multiple lesions, and lesions located in the far and mid-periphery were significantly more frequent in the justified laser group than in the non-justified group. By contrast, the prevalence of operculated holes, bilateral lesions, and degenerative lesions in patients with a retinal detachment in the fellow eye did not differ between groups. Our findings suggest that SS-OCT-guided UWF imaging may refine patient selection for prophylactic laser therapy. Full article

17 pages, 4965 KB  
Article
Expanding the Genetic Spectrum in IMPG1 and IMPG2 Retinopathy
by Saoud Al-Khuzaei, Ahmed K. Shalaby, Jing Yu, Morag Shanks, Penny Clouston, Robert E. MacLaren, Stephanie Halford, Samantha R. De Silva and Susan M. Downes
Genes 2025, 16(12), 1474; https://doi.org/10.3390/genes16121474 - 9 Dec 2025
Viewed by 493
Abstract
Background: Pathogenic variants in interphotoreceptor matrix proteoglycan 1 (IMPG1) have been associated with autosomal dominant and recessive retinitis pigmentosa (RP) and autosomal dominant adult vitelliform macular dystrophy (AVMD). Monoallelic pathogenic variants in IMPG2 have been linked to maculopathy and biallelic variants to RP with early onset macular atrophy. Herein we characterise the phenotypic and genotypic features of patients with IMPG1/IMPG2 retinopathy and report novel variants. Methods: Patients with IMPG1 and IMPG2 variants and compatible phenotypes were retrospectively identified. Clinical data were obtained from reviewing the medical records. Phenotypic data included visual acuity; imaging included ultra-widefield pseudo-colour, fundus autofluorescence, and optical coherence tomography (OCT). Genetic testing was performed using next generation sequencing (NGS). Variant pathogenicity was investigated using in silico analysis (SIFT, PolyPhen-2, MutationTaster, SpliceAI). The evolutionary conservation of novel missense variants was also investigated. Results: A total of 13 unrelated patients were identified: 2 (1 male; 1 female) with IMPG1 retinopathy and 11 (7 male; 4 female) with IMPG2 retinopathy. Both IMPG1 retinopathy patients were monoallelic: one patient had adult vitelliform macular dystrophy (AVMD) with drusenoid changes while the other had pattern dystrophy (PD), and they presented to clinic at ages 81 and 72 years, respectively. There were 5 monoallelic IMPG2 retinopathy patients with a maculopathy phenotype, of whom 1 had PD and 4 had AVMD. The mean age of symptom onset of this group was 54.2 ± 11.8 years, mean age at presentation was 54.8 ± 11.5 years, and mean BCVAs were 0.15 ± 0.12 logMAR OD and −0.01 ± 0.12 logMAR OS. Six biallelic IMPG2 patients had RP with maculopathy, where the mean age of symptom onset was 18.4 years, mean age at examination was 68.7 years, and mean BCVAs were 1.90 logMAR OD and 1.82 logMAR OS. Variants in IMPG1 included one missense and one exon deletion. A total of 11 different IMPG2 variants were identified (4 missense, 7 truncating). A splicing defect was predicted for the c.871C>A p.(Arg291Ser) missense IMPG2 variant. One IMPG1 and five IMPG2 variants were novel. Conclusions: This study describes the phenotypic spectrum of IMPG1/IMPG2 retinopathy and reports six novel variants. The phenotypes of PD and AVMD in monoallelic IMPG2 patients may result from haploinsufficiency, supported by the presence of truncating variants in both monoallelic and biallelic cases. The identification of novel variants expands the known genetic landscape of IMPG1 and IMPG2 retinopathies. These findings contribute to diagnostic accuracy, informed patient counselling regarding inheritance pattern, and may help guide recruitment for future therapeutic interventions. Full article
(This article belongs to the Section Human Genomics and Genetic Diseases)

12 pages, 10042 KB  
Article
Optical Coherence Tomography Angiography Features and Flow-Based Classification of Retinal Artery Macroaneurysms
by Mohamed Oshallah, Anastasios E. Sepetis, Antonio Valastro, Eslam Ahmed, Sara Vaz-Pereira, Luca Ventre and Gabriella De Salvo
J. Clin. Med. 2025, 14(24), 8686; https://doi.org/10.3390/jcm14248686 - 8 Dec 2025
Viewed by 495
Abstract
Objectives: We propose a flow-signal-based classification of retinal artery macroaneurysms (RAMs) using Optical Coherence Tomography Angiography (OCTA) and compare the findings with fundus fluorescein angiography (FFA). Methods: A retrospective review of 49 RAM cases observed over 6 years (October 2017–March 2023) at a medical retina clinic at the University Hospital Southampton, UK. Electronic clinical records, FFA, and OCTA images (en face and B-scan) were reviewed to identify pathology and assess RAM flow profiles. Results: In total, 30 eyes from 30 patients were included. The mean age of the patients was 76 years (range 49–91), with 17 females and 13 males. All eyes underwent OCTA, enabling classification of RAMs into three flow signal types: high (9 eyes), low (10 eyes), and absent (9 eyes), while 2 eyes had haemorrhage-related artefacts. A subgroup of 13 eyes also underwent FFA, allowing direct comparison, which showed flow profiles similar to those of OCTA: high (4 eyes), low (6 eyes), and absent (2 eyes), with 1 ungradable case due to subretinal haemorrhage masking. A discrepancy in flow was observed in one case where FFA indicated flow, but OCTA did not. Despite this, FFA and OCTA generally agreed on the flow levels, with a Spearman correlation of r = 0.79 (p = 0.004). Conclusions: OCTA flow profiles were directly comparable to FFA. OCTA effectively identified different levels of blood flow signal behaviour in RAMs. The proposed flow-based RAM classification may aid in prognosis, treatment indications, follow-up, and safe repeat imaging in clinical practice without systemic risk to the patient. Full article
(This article belongs to the Special Issue Macular Diseases: From Diagnosis to Treatment)
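The agreement between FFA and OCTA flow grades was summarized with a Spearman correlation (r = 0.79 in 13 eyes); a minimal sketch with hypothetical ordinal grades (0 = absent, 1 = low, 2 = high) is shown below, not the study's actual gradings.

```python
# Illustrative sketch: Spearman correlation between ordinal flow grades from
# FFA and OCTA. The 13 paired grades are hypothetical placeholders.
from scipy.stats import spearmanr

ffa_grade  = [2, 2, 1, 1, 1, 0, 2, 1, 1, 0, 2, 1, 1]
octa_grade = [2, 2, 1, 1, 0, 0, 2, 1, 1, 0, 2, 1, 1]
rho, p = spearmanr(ffa_grade, octa_grade)
print(f"Spearman r = {rho:.2f}, p = {p:.4f}")
```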

12 pages, 795 KB  
Article
Intraocular Cytokine Level Prediction from Fundus Images and Optical Coherence Tomography
by Hidenori Takahashi, Taiki Tsuge, Yusuke Kondo, Yasuo Yanagi, Satoru Inoda, Shohei Morikawa, Yuki Senoo, Toshikatsu Kaburaki, Tetsuro Oshika and Toshihiko Yamasaki
Sensors 2025, 25(23), 7382; https://doi.org/10.3390/s25237382 - 4 Dec 2025
Viewed by 496
Abstract
The relationship between retinal images and intraocular cytokine profiles remains largely unexplored, and no prior work has systematically compared fundus- and OCT-based deep learning models for cytokine prediction. We aimed to predict intraocular cytokine concentrations using color fundus photographs (CFP) and retinal optical coherence tomography (OCT) with deep learning. Our pipeline consisted of image preprocessing, convolutional neural network–based feature extraction, and regression modeling for each cytokine. Deep learning was implemented using AutoGluon, which automatically explored multiple architectures and converged on ResNet18, reflecting the small dataset size. Four approaches were tested: (1) CFP alone, (2) CFP plus demographic/clinical features, (3) OCT alone, and (4) OCT plus these features. Prediction performance was defined as the mean coefficient of determination (R2) across 34 cytokines, and differences were evaluated using paired two-tailed t-tests. We used data from 139 patients (152 eyes) and 176 aqueous humor samples. The cohort consisted of 85 males (61%) with a mean age of 73 (SD 9.8). Diseases included 64 exudative age-related macular degeneration, 29 brolucizumab-associated endophthalmitis, 19 cataract surgeries, 15 retinal vein occlusion, and 8 diabetic macular edema. Prediction performance was generally poor, with mean R2 values below zero across all approaches. The CFP-only model (–0.19) outperformed CFP plus demographics (–24.1; p = 0.0373), and the OCT-only model (–0.18) outperformed OCT plus demographics (–14.7; p = 0.0080). No significant difference was observed between CFP and OCT (p = 0.9281). Notably, VEGF showed low predictability (31st with CFP, 12th with OCT). Full article
(This article belongs to the Section Optical Sensors)
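Model comparisons in the abstract rest on paired two-tailed t-tests over per-cytokine R2 values; the sketch below uses random placeholder R2 arrays for 34 cytokines, not the study's results.

```python
# Hedged sketch of the evaluation described in the abstract: per-cytokine R^2
# from two modelling approaches compared with a paired two-tailed t-test.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
r2_cfp_only = rng.normal(-0.2, 0.3, size=34)   # 34 cytokines, CFP-only model (placeholder values)
r2_cfp_clin = rng.normal(-1.0, 0.8, size=34)   # CFP + demographic/clinical features (placeholder values)
t, p = ttest_rel(r2_cfp_only, r2_cfp_clin)     # paired, two-tailed by default
print(f"mean R^2: {r2_cfp_only.mean():.2f} vs {r2_cfp_clin.mean():.2f}, p = {p:.4f}")
```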

10 pages, 782 KB  
Article
Development of an Algorithm to Assist in the Diagnosis of Combined Retinal Vein Occlusion and Glaucoma
by Hiroshi Kasai, Kazuyoshi Kitamura, Yuka Hasebe, Junya Mizutani, Kengo Utsunomiya, Shiori Sato, Kohei Murao, Yoichiro Ninomiya, Kensaku Mori, Kazuhide Kawase, Masaki Tanito, Toru Nakazawa, Atsuya Miki, Kazuhiko Mori, Takeshi Yoshitomi and Kenji Kashiwagi
J. Clin. Med. 2025, 14(23), 8547; https://doi.org/10.3390/jcm14238547 - 2 Dec 2025
Viewed by 407
Abstract
Objectives: To develop an algorithm to assist in the diagnosis of glaucoma with concomitant retinal vein occlusion (RVO) and to compare its diagnostic accuracy with that of ophthalmology residents and specialists. Methods: Fundus photographs of eyes with RVO and those with both RVO and glaucoma were obtained from patients who visited the University of Yamanashi Hospital. All images were preprocessed through normalization and resized to 512 × 512 pixels to ensure uniformity before model training. The diagnostic accuracy of two algorithms—the Comprehensive Fundus Disease Diagnostic Artificial Intelligence Algorithm (CD-AI) and the Glaucoma Concomitant RVO Artificial Intelligence Algorithm (RVO-GLA AI)—was evaluated. CD-AI is a clinical decision support algorithm originally developed to detect eleven common fundus diseases, including glaucoma and RVO. RVO-GLA AI is a fine-tuned version of CD-AI that is specifically adapted to detect glaucoma with or without RVO. Fine-tuning was performed using 1234 images of glaucoma, 1233 images of nonglaucomatous conditions, including RVO, and 15 images of cases with both glaucoma and RVO. The number of comorbid cases was determined empirically by gradually adding glaucomatous eyes with concomitant RVO to the training set, and 15 images provided the best balance between sensitivity and specificity. Because the available number of such cases was limited, this small sample size may have influenced the stability of the performance estimates. For the final evaluation, both algorithms and all ophthalmologists assessed the same independent test dataset comprising 66 fundus images (16 eyes with glaucoma and RVO and 50 eyes with RVO alone). The diagnostic performance of both algorithms was compared with that of three first-year ophthalmology residents and three board-certified ophthalmologists. Results: CD-AI demonstrated high diagnostic accuracy (92.5%) in eyes with glaucoma alone. However, its sensitivity and specificity decreased to 0.375 and 1.0, respectively, in patients with concomitant RVO. In contrast, the RVO-GLA AI achieved an area under the curve (AUC) of 0.875, with a sensitivity of 0.87 and a specificity of 0.71. Across all the ophthalmologists, the average sensitivity was 0.63, and the specificity was 0.87. Specialists achieved a sensitivity of 0.80 and a specificity of 0.89, while residents had a sensitivity of 0.46 and a specificity of 0.85. Conclusions: An AI-based clinical decision support system specifically designed for glaucoma detection significantly improved diagnostic performance in eyes with combined RVO and glaucoma, achieving an accuracy comparable to that of ophthalmologists, even with a limited number of training cases. Full article
(This article belongs to the Special Issue Advances in the Diagnosis and Treatment of Glaucoma)
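The abstract states that all fundus photographs were normalized and resized to 512 × 512 pixels before training; a minimal torchvision-style sketch of such a step is below, with ImageNet normalization statistics assumed rather than taken from the paper.

```python
# Minimal sketch of the stated preprocessing (resize to 512 x 512 plus
# normalization); the mean/std values are assumed ImageNet statistics.
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),                               # scales pixel values to [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],     # assumed ImageNet stats
                         std=[0.229, 0.224, 0.225]),
])

img = Image.new("RGB", (1600, 1200))    # placeholder image standing in for a fundus photograph
tensor = preprocess(img)
print(tensor.shape)                      # torch.Size([3, 512, 512])
```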

11 pages, 360 KB  
Article
AI-Augmented Fundus Disease Screening by Non-Ophthalmologist Physicians: A Paired Before–After Study
by EunAh Kim and Su Jeong Song
Bioengineering 2025, 12(12), 1304; https://doi.org/10.3390/bioengineering12121304 - 27 Nov 2025
Viewed by 646
Abstract
Screening for retinal disease is increasingly performed by general practitioners and other non-ophthalmologist clinicians in primary care, especially where access to ophthalmology is limited and diagnostic accuracy may be suboptimal. To investigate the role of an automated fundus-interpretation support solution in improving general physicians’ screening accuracy and referral decisions, we conducted a paired before–after study evaluating an AI-based decision support tool. Four non-ophthalmologists who have been involved in screening fundus images in clinical practice reviewed 500 de-identified color fundus photographs twice: first unaided and, after a washout period, with AI assistance. With AI support, diagnostic accuracy improved significantly from 82.8% to 91.1% (p < 0.0001), with the greatest benefit observed in glaucoma-suspect and multi-pathology cases. Clinicians retained final diagnostic authority, and a favorable safety profile was observed. These results demonstrate that an AI-assisted diagnostic aid can meaningfully augment non-ophthalmologist screening and referral decision-making in real-world primary care, while underscoring the need for broader validation and implementation studies. Full article

11 pages, 2253 KB  
Case Report
Longitudinal Multimodal Assessment of Structure and Function in INPP5E-Related Retinopathy
by Andrea Cusumano, Marco Lombardo, Benedetto Falsini, Michele D’Ambrosio, Jacopo Sebastiani, Enrica Marchionni, Maria Rosaria D’Apice, Barbara Rizzacasa, Francesco Martelli and Giuseppe Novelli
Genes 2025, 16(12), 1407; https://doi.org/10.3390/genes16121407 - 26 Nov 2025
Viewed by 405
Abstract
Background: INPP5E-related retinopathy (INPP5E-RR) is a rare genetic disorder caused by biallelic pathogenic variants in the INPP5E gene, which encodes an enzyme critical for phosphoinositide signaling. While early-onset rod–cone dystrophy is a hallmark feature, detailed longitudinal data on the phenotype are scarce. This study aims to report a 6-year longitudinal assessment of retinal structure and function in a case of non-syndromic INPP5E-RR. Methods: A 42-year-old female proband with compound heterozygous pathogenic missense variants in INPP5E (p.Arg486Cys and p.Arg378Cys) was monitored from 2019 to 2025. She underwent serial comprehensive ophthalmologic evaluations, including optical coherence tomography (OCT), fundus autofluorescence, adaptive optics transscleral flood illumination, full-field 30Hz flicker electroretinography (ERG), and macular frequency-doubling technology perimetry. Results: Over the 6-year follow-up, OCT imaging revealed a progressive decline in the ellipsoid zone (EZ) width, from 1220 µm to 720 µm (~80 µm/year), and in the inner nuclear layer (INL) thickness. The central outer nuclear layer (ONL) thickness was preserved, but intraretinal cysts developed. Functional testing revealed a progressive decline in cone flicker ERG amplitudes, while visual acuity and macular perimetry remained stable. Conclusions: In this genotypically confirmed case, the longitudinal data identify EZ width, INL thickness, and cone flicker ERG as robust biomarkers of disease progression in INPP5E-RR. These parameters are ideal candidates for monitoring therapeutic outcomes in future clinical trials. Full article
(This article belongs to the Special Issue Current Advances in Inherited Retinal Disease)

15 pages, 1093 KB  
Article
AI-Based Retinal Image Analysis for the Detection of Choroidal Neovascular Age-Related Macular Degeneration (AMD) and Its Association with Brain Health
by Chuying Shi, Jack Lee, Di Shi, Gechun Wang, Fei Yuan, Timothy Y. Y. Lai, Jingwen Liu, Yijie Lu, Dongcheng Liu, Bo Qin and Benny Chung-Ying Zee
Brain Sci. 2025, 15(11), 1249; https://doi.org/10.3390/brainsci15111249 - 20 Nov 2025
Viewed by 660
Abstract
Purpose: This study aims to develop a method for detecting referable (intermediate and advanced) age-related macular degeneration (AMD) and neovascular AMD, as well as providing an automatic segmentation of choroidal neovascularisation (CNV) on colour fundus retinal images. We also demonstrated that brain health risk scores estimated by AI-based Retinal Image Analysis (ARIA), such as white matter hyperintensities (WMH) and depression, are significantly associated with AMD and neovascular AMD. Methods: A primary dataset of 1480 retinal images was collected from Zhongshan Hospital of Fudan University for training and 10-fold cross-validation. Additionally, two validation sub-datasets comprising 238 images (retinal images and wide-field images) were used. Using fluorescein angiography-based labels, we applied the InceptionResNetV2 deep network with the ARIA method to detect AMD, and a transfer ResNet50_Unet was used to segment CNV. The risks of cerebral white matter hyperintensities and depression were estimated using an AI-based Retinal Image Analysis approach. Results: In a 10-fold cross-validation, we achieved sensitivities of 97.4% and 98.1%, specificities of 96.8% and 96.1%, and accuracies of 97.0% and 96.4% in detecting referable AMD and neovascular AMD, respectively. In the external validation, we achieved accuracies of 92.9% and 93.7% and AUCs of 0.967 and 0.967, respectively. The performances on the two validation sub-datasets show no statistically significant difference in detecting referable AMD (p = 0.704) and neovascular AMD (p = 0.213). In the segmentation of CNV, we achieved a global accuracy of 93.03%, a mean accuracy of 91.83%, a mean intersection over union (IoU) of 68.7%, a weighted IoU of 89.63%, and a mean boundary F1 (BF) of 67.77%. Conclusions: The proposed method shows promising results as a highly efficient and cost-effective screening tool for detecting neovascular and referable AMD on both retinal and wide-field images, and providing critical insights into CNV. Its implementation could be particularly valuable in resource-limited settings, enabling timely referrals, enhancing patient care, and supporting decision-making across AMD classifications. In addition, we demonstrated that AMD and neovascular AMD are significantly associated with increased risks of WMH and depression. Full article
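The CNV segmentation results are reported partly as intersection over union (mean IoU 68.7%); a minimal sketch of binary-mask IoU on placeholder masks is shown below.

```python
# Hedged sketch of the IoU metric reported for CNV segmentation; the masks
# below are random placeholders, not CNV segmentations from the study.
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Binary IoU between two boolean masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(inter) / float(union) if union else 1.0

rng = np.random.default_rng(0)
pred_mask = rng.random((512, 512)) > 0.5
target_mask = rng.random((512, 512)) > 0.5
print(f"IoU = {iou(pred_mask, target_mask):.3f}")
```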
