Search Results (25)

Search Parameters:
Keywords = color fundus photographs

12 pages, 2353 KiB  
Article
Intergrader Agreement on Qualitative and Quantitative Assessment of Diabetic Retinopathy Severity Using Ultra-Widefield Imaging: INSPIRED Study Report 1
by Eleonora Riotto, Wei-Shan Tsai, Hagar Khalid, Francesca Lamanna, Louise Roch, Medha Manoj and Sobha Sivaprasad
Diagnostics 2025, 15(14), 1831; https://doi.org/10.3390/diagnostics15141831 - 21 Jul 2025
Viewed by 337
Abstract
Background/Objectives: Discrepancies in diabetic retinopathy (DR) grading are well-documented, with retinal non-perfusion (RNP) quantification posing greater challenges. This study assessed intergrader agreement in DR evaluation, focusing on qualitative severity grading and quantitative RNP measurement. We aimed to improve agreement through structured consensus meetings. Methods: A retrospective analysis of 100 comparisons from 50 eyes (36 patients) was conducted. Two paired medical retina fellows graded ultra-widefield color fundus photographs (CFP) and fundus fluorescein angiography (FFA) images. CFP assessments included DR severity using the International Clinical Diabetic Retinopathy (ICDR) grading system, DR Severity Scale (DRSS), and predominantly peripheral lesions (PPL). FFA-based RNP was defined as capillary loss with grayscale matching the foveal avascular zone. Weekly adjudication by a senior specialist resolved discrepancies. Intergrader agreement was evaluated using Cohen’s kappa (qualitative DRSS) and intraclass correlation coefficients (ICC) (quantitative RNP). Bland–Altman analysis assessed bias and variability. Results: After eight consensus meetings, CFP grading agreement improved to excellent: kappa = 91% (ICDR DR severity), 89% (DRSS), and 89% (PPL). FFA-based PPL agreement reached 100%. For RNP, the non-perfusion index (NPI) showed moderate overall ICC (0.49), with regional ICCs ranging from 0.40 to 0.57 (highest in the nasal region, ICC = 0.57). Bland–Altman analysis revealed a mean NPI difference of 0.12 (limits: −0.11 to 0.35), indicating acceptable variability despite outliers. Conclusions: Structured consensus training achieved excellent intergrader agreement for DR severity and PPL grading, supporting the clinical reliability of ultra-widefield imaging. However, RNP measurement variability underscores the need for standardized protocols and automated tools to enhance reproducibility. This process is critical for developing robust AI-based screening systems. Full article
(This article belongs to the Special Issue New Advances in Retinal Imaging)
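
For readers who want to reproduce these agreement statistics, the sketch below computes Cohen's kappa, a two-way ICC, and Bland–Altman limits of agreement on synthetic two-grader data; the scikit-learn/pingouin calls and all numbers are illustrative assumptions, not the study's code or data.

```python
# Agreement statistics on hypothetical two-grader data (not the study's data).
import numpy as np
import pandas as pd
import pingouin as pg
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n_eyes = 50

# Qualitative grading (e.g., ICDR severity 0-4): Cohen's kappa.
grader_a = rng.integers(0, 5, n_eyes)
grader_b = np.clip(grader_a + rng.integers(-1, 2, n_eyes), 0, 4)  # mostly agrees
print("Cohen's kappa:", cohen_kappa_score(grader_a, grader_b))

# Quantitative non-perfusion index (NPI): intraclass correlation coefficient.
npi_a = rng.uniform(0, 1, n_eyes)
npi_b = npi_a + rng.normal(0, 0.1, n_eyes)
long = pd.DataFrame({
    "eye": np.tile(np.arange(n_eyes), 2),
    "grader": ["A"] * n_eyes + ["B"] * n_eyes,
    "npi": np.concatenate([npi_a, npi_b]),
})
icc = pg.intraclass_corr(data=long, targets="eye", raters="grader", ratings="npi")
print(icc[["Type", "ICC"]])

# Bland-Altman: mean difference (bias) and 95% limits of agreement.
diff = npi_a - npi_b
bias, sd = diff.mean(), diff.std(ddof=1)
print(f"bias={bias:.3f}, LoA=({bias - 1.96*sd:.3f}, {bias + 1.96*sd:.3f})")
```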

17 pages, 8265 KiB  
Article
Automated Foveal Avascular Zone Segmentation in Optical Coherence Tomography Angiography Across Multiple Eye Diseases Using Knowledge Distillation
by Peter Racioppo, Aya Alhasany, Nhuan Vu Pham, Ziyuan Wang, Giulia Corradetti, Gary Mikaelian, Yannis M. Paulus, SriniVas R. Sadda and Zhihong Hu
Bioengineering 2025, 12(4), 334; https://doi.org/10.3390/bioengineering12040334 - 23 Mar 2025
Cited by 2 | Viewed by 1076
Abstract
Optical coherence tomography angiography (OCTA) is a noninvasive imaging technique used to visualize retinal blood flow and identify changes in vascular density and enlargement or distortion of the foveal avascular zone (FAZ), which are indicators of various eye diseases. Although several automated FAZ detection and segmentation algorithms have been developed for use with OCTA, their performance can vary significantly due to differences in data accessibility of OCTA in different retinal pathologies, and differences in image quality in different subjects and/or different OCTA devices. For example, data from subjects with direct macular damage, such as in age-related macular degeneration (AMD), are more readily available in eye clinics, while data on macular damage due to systemic diseases like Alzheimer’s disease are often less accessible; data from healthy subjects may have better OCTA quality than subjects with ophthalmic pathologies. Typically, segmentation algorithms make use of convolutional neural networks and, more recently, vision transformers, which make use of both long-range context and fine-grained detail. However, transformers are known to be data-hungry, and may overfit small datasets, such as those common for FAZ segmentation in OCTA, to which there is limited access in clinical practice. To improve model generalization in low-data or imbalanced settings, we propose a multi-condition transformer-based architecture that uses four teacher encoders to distill knowledge into a shared base model, enabling the transfer of learned features across multiple datasets. These include intra-modality distillation using OCTA datasets from four ocular conditions: healthy aging eyes, Alzheimer’s disease, AMD, and diabetic retinopathy; and inter-modality distillation incorporating color fundus photographs of subjects undergoing laser photocoagulation therapy. Our multi-condition model achieved a mean Dice Index of 83.8% with pretraining, outperforming single-condition models (mean of 83.1%) across all conditions. Pretraining on color fundus photocoagulation images improved the average Dice Index by a small margin on all conditions except AMD (1.1% on single-condition models, and 0.1% on multi-condition models). Our architecture demonstrates potential for broader applications in detecting and analyzing ophthalmic and systemic diseases across diverse imaging datasets and settings. Full article
(This article belongs to the Special Issue AI in OCT (Optical Coherence Tomography) Image Analysis)
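
A minimal sketch of the multi-teacher distillation idea described above, assuming two-channel segmentation logits (background vs. FAZ) and equal teacher weighting; the loss form (temperature-scaled KL toward each teacher plus a supervised term) is a generic formulation, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits_list, target, T=2.0, alpha=0.5):
    """student/teacher logits: (N, 2, H, W); target: (N, H, W) integer mask.
    Returns supervised cross-entropy plus the mean temperature-scaled
    KL divergence pulling the student toward each teacher's soft output."""
    sup = F.cross_entropy(student_logits, target)
    log_p = F.log_softmax(student_logits / T, dim=1)
    kd = sum(F.kl_div(log_p, F.softmax(t / T, dim=1), reduction="batchmean")
             for t in teacher_logits_list) * (T * T) / len(teacher_logits_list)
    return alpha * sup + (1 - alpha) * kd

def dice_index(pred, target, eps=1e-7):
    """Dice overlap between binary masks, the metric reported in the abstract."""
    inter = (pred & target).sum().float()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy shapes: batch of 2, 64x64 masks, four condition-specific teachers.
s = torch.randn(2, 2, 64, 64)
teachers = [torch.randn(2, 2, 64, 64) for _ in range(4)]
y = torch.randint(0, 2, (2, 64, 64))
print(distillation_loss(s, teachers, y))
```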

14 pages, 2016 KiB  
Article
DNA Methyltransferase Expression (DNMT1, DNMT3a, and DNMT3b) as a Potential Biomarker in Age-Related Macular Degeneration
by Pedro Camacho, Edna Ribeiro, Bruno Pereira, João Nascimento, Paulo Caldeira Rosa, José Henriques, Sandra Barrão, Silvia Sadio, Bruno Quendera, Mariana Delgadinho, Catarina Ginete, Carina Silva and Miguel Brito
J. Clin. Med. 2025, 14(2), 559; https://doi.org/10.3390/jcm14020559 - 16 Jan 2025
Cited by 3 | Viewed by 1297
Abstract
Background/Objectives: Age-related macular degeneration (AMD) is a global cause of vision loss, with limited therapeutic options highlighting the need for effective biomarkers. This study aimed to characterize plasma DNA methyltransferase expression (DNMT1, DNMT3A, and DNMT3B) in AMD patients and explore divergent expression patterns across different stages of AMD. Methods: Thirty-eight AMD patients were prospectively enrolled and stratified by disease severity: eAMD, iAMD, nAMD, and aAMD. Comprehensive ophthalmological assessments were performed, including best-corrected visual acuity, digital color fundus photographs, and Spectral Domain Optical Coherence Tomography. Peripheral blood samples were collected for RNA extraction and qRT-PCR to assess the transcriptional expression of epigenetic effectors, namely the DNMT1, DNMT3A, and DNMT3B genes. The collected data were analyzed using IBM SPSS 29. Results: DNMT1 expression was significantly downregulated in late AMD (−0.186 ± 0.341) compared to early/intermediate AMD (0.026 ± 0.246). Within late AMD, aAMD exhibited a marked downregulation of DNMT1 (−0.375 ± 0.047) compared to nAMD (0.129 ± 0.392). DNMT3A and DNMT3B showed similar divergent expression patterns, correlating with disease stage. Conclusions: This study identified stage-specific transcriptional differences in DNMT expression, emphasizing its potential as a biomarker for AMD progression and a target for future research into personalized therapeutic strategies. Full article
(This article belongs to the Section Ophthalmology)
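
The abstract does not state the qRT-PCR quantification scheme, but relative expression of this kind is commonly computed with the Livak 2^-ΔΔCt method; the sketch below shows that generic calculation with hypothetical Ct values and a hypothetical reference gene.

```python
# Generic Livak 2^-ddCt relative expression; all values are hypothetical.

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Fold change of a target gene (e.g., DNMT1) in a patient sample
    relative to a control sample, normalized to a reference gene."""
    delta_ct_sample = ct_target - ct_reference              # normalize sample
    delta_ct_control = ct_target_ctrl - ct_reference_ctrl   # normalize control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Example: DNMT1 Ct 26.1 vs. a hypothetical GAPDH reference 18.0 in an aAMD
# sample, against 24.9 vs. 18.2 in a control -> fold change ~0.38, i.e.,
# downregulation, as the abstract reports for DNMT1 in late AMD.
print(relative_expression(26.1, 18.0, 24.9, 18.2))
```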

8 pages, 229 KiB  
Article
Evaluating Diagnostic Concordance in Primary Open-Angle Glaucoma Among Academic Glaucoma Subspecialists
by Chenmin Wang, De-Fu Chen, Xiao Shang, Xiaoyan Wang, Xizhong Chu, Chengju Hu, Qiangjie Huang, Gangwei Cheng, Jianjun Li, Ruiyi Ren and Yuanbo Liang
Diagnostics 2024, 14(21), 2460; https://doi.org/10.3390/diagnostics14212460 - 3 Nov 2024
Cited by 1 | Viewed by 1454
Abstract
Objective: The study aimed to evaluate the interobserver agreement among glaucoma subspecialists in diagnosing glaucoma and to explore the causes of diagnostic discrepancies. Methods: Three experienced glaucoma subspecialists independently assessed frequency domain optical coherence tomography, fundus color photographs, and static perimetry results from 464 eyes of 275 participants, adhering to unified glaucoma diagnostic criteria. All data were collected from the Wenzhou Glaucoma Progression Study between August 2014 and June 2021. Results: The overall interobserver agreement among the three experts was poor, with a Fleiss’ kappa value of 0.149. The kappa values for interobserver agreement between pairs of experts ranged from 0.133 to 0.282. In 50 cases, or approximately 10.8%, the three experts reached completely different diagnoses. Agreement was more likely in cases involving larger average cup-to-disc ratios, greater vertical cup-to-disc ratios, more severe visual field defects, and thicker retinal nerve fiber layer measurements, particularly in the temporal and inferior quadrants. High myopia also negatively impacted interobserver agreement. Conclusions: Despite using unified diagnostic criteria for glaucoma, significant differences in interobserver consistency persist among glaucoma subspecialists. To improve interobserver agreement, it is recommended to provide additional training on standardized diagnostic criteria. Furthermore, for cases with inconsistent diagnoses, long-term follow-up is essential to confirm the diagnosis of glaucoma. Full article
(This article belongs to the Special Issue Advances in the Diagnosis of Eye Diseases)
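
A minimal sketch of the Fleiss' kappa computation for three raters, on simulated diagnoses rather than the study's data; random ratings yield a kappa near 0, consistent in spirit with the poor agreement reported.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(1)
# 464 eyes x 3 subspecialists; 0 = no glaucoma, 1 = glaucoma, 2 = suspect.
ratings = rng.integers(0, 3, size=(464, 3))

# aggregate_raters converts (subjects x raters) labels into per-subject
# category counts, the input format fleiss_kappa expects.
counts, _ = aggregate_raters(ratings)
print("Fleiss' kappa:", fleiss_kappa(counts))  # ~0 for random ratings
```
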
12 pages, 3133 KiB  
Article
Using Deep Learning to Distinguish Highly Malignant Uveal Melanoma from Benign Choroidal Nevi
by Laura Hoffmann, Constance B. Runkel, Steffen Künzel, Payam Kabiri, Anne Rübsam, Theresa Bonaventura, Philipp Marquardt, Valentin Haas, Nathalie Biniaminov, Sergey Biniaminov, Antonia M. Joussen and Oliver Zeitz
J. Clin. Med. 2024, 13(14), 4141; https://doi.org/10.3390/jcm13144141 - 16 Jul 2024
Cited by 7 | Viewed by 2279
Abstract
Background: This study aimed to evaluate the potential of human–machine interaction (HMI) in deep learning software for discerning the malignancy of choroidal melanocytic lesions based on fundus photographs. Methods: The study enrolled individuals diagnosed with a choroidal melanocytic lesion at a tertiary clinic between 2011 and 2023, resulting in a cohort of 762 eligible cases. A deep learning-based assistant integrated into the software underwent training using a dataset comprising 762 color fundus photographs (CFPs) of choroidal lesions captured by various fundus cameras. The dataset was categorized into benign nevi, untreated choroidal melanomas, and irradiated choroidal melanomas. The reference standard for evaluation was established by retinal specialists using multimodal imaging. Trinary and binary models were trained, and their classification performance was evaluated on a test set consisting of 100 independent images. The discriminative performance of deep learning models was evaluated based on accuracy, recall, and specificity. Results: The final accuracy rates on the independent test set for multi-class and binary (benign vs. malignant) classification were 84.8% and 90.9%, respectively. Recall and specificity ranged from 0.85 to 0.90 and 0.91 to 0.92, respectively. The mean area under the curve (AUC) values were 0.96 and 0.99, respectively. Optimal discriminative performance was observed in binary classification with the incorporation of a single imaging modality, achieving an accuracy of 95.8%. Conclusions: The deep learning models demonstrated commendable performance in distinguishing the malignancy of choroidal lesions. The software exhibits promise for resource-efficient and cost-effective pre-stratification. Full article
(This article belongs to the Section Ophthalmology)
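
The reported metrics can be reproduced from predictions as follows; this is a generic scikit-learn sketch on made-up labels and probabilities, not the study's evaluation code.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             recall_score, roc_auc_score)

y_true = np.array([0, 0, 0, 1, 1, 1, 1, 0, 1, 0])  # 1 = malignant
y_prob = np.array([0.1, 0.3, 0.2, 0.9, 0.8, 0.6, 0.7, 0.4, 0.35, 0.15])
y_pred = (y_prob >= 0.5).astype(int)                # threshold at 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy:", accuracy_score(y_true, y_pred))
print("recall (sensitivity):", recall_score(y_true, y_pred))  # tp / (tp + fn)
print("specificity:", tn / (tn + fp))
print("AUC:", roc_auc_score(y_true, y_prob))
```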

13 pages, 1280 KiB  
Article
Automated Classification of Physiologic, Glaucomatous, and Glaucoma-Suspected Optic Discs Using Machine Learning
by Raphael Diener, Alexander W. Renz, Florian Eckhard, Helmar Segbert, Nicole Eter, Arnim Malcherek and Julia Biermann
Diagnostics 2024, 14(11), 1073; https://doi.org/10.3390/diagnostics14111073 - 22 May 2024
Cited by 1 | Viewed by 1477
Abstract
In order to generate a machine learning algorithm (MLA) that can support ophthalmologists with the diagnosis of glaucoma, a carefully selected dataset that is based on clinically confirmed glaucoma patients as well as borderline cases (e.g., patients with suspected glaucoma) is required. The clinical annotation of datasets is usually performed at the expense of the data volume, which results in poorer algorithm performance. This study aimed to evaluate the application of an MLA for the automated classification of physiological optic discs (PODs), glaucomatous optic discs (GODs), and glaucoma-suspected optic discs (GSODs). Annotation of the data to the three groups was based on the diagnosis made in clinical practice by a glaucoma specialist. Color fundus photographs and 14 types of metadata (including visual field testing, retinal nerve fiber layer thickness, and cup–disc ratio) of 1168 eyes from 584 patients (POD = 321, GOD = 336, GSOD = 310) were used for the study. Machine learning (ML) was performed in the first step with the color fundus photographs only and in the second step with the images and metadata. Sensitivity, specificity, and accuracy of the classification of GSOD vs. GOD and POD vs. GOD were evaluated. Classification of GOD vs. GSOD and GOD vs. POD performed in the first step had AUCs of 0.84 and 0.88, respectively. By combining the images and metadata, the AUCs increased to 0.92 and 0.99, respectively. By combining images and metadata, excellent performance of the MLA can be achieved despite having only a small amount of data, thus supporting ophthalmologists with glaucoma diagnosis. Full article
(This article belongs to the Special Issue Artificial Intelligence in Ophthalmology)
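
One common way to combine fundus images with tabular metadata, as the study's second step does, is late fusion: concatenating CNN features with the metadata vector before the classifier. The sketch below assumes a ResNet-18 backbone and placeholder layer sizes, not the authors' architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class FundusWithMetadata(nn.Module):
    def __init__(self, n_metadata=14, n_classes=3):  # POD / GSOD / GOD
        super().__init__()
        backbone = models.resnet18(weights="IMAGENET1K_V1")
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop fc
        self.head = nn.Sequential(
            nn.Linear(512 + n_metadata, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, image, metadata):
        x = self.features(image).flatten(1)   # (N, 512) image features
        x = torch.cat([x, metadata], dim=1)   # append the 14 metadata values
        return self.head(x)

model = FundusWithMetadata()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 14))
print(logits.shape)  # torch.Size([2, 3])
```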

14 pages, 663 KiB  
Review
Clinical Perspectives on the Use of Computer Vision in Glaucoma Screening
by José Camara and Antonio Cunha
Medicina 2024, 60(3), 428; https://doi.org/10.3390/medicina60030428 - 2 Mar 2024
Viewed by 2217
Abstract
Glaucoma is one of the leading causes of irreversible blindness in the world. Early diagnosis and treatment increase the chances of preserving vision. However, despite advances in techniques for the functional and structural assessment of the retina, specialists still encounter many challenges, in part due to the different presentations of the standard optic nerve head (ONH) in the population, the lack of explicit references that define the limits of glaucomatous optic neuropathy (GON), specialist experience, and the quality of patients’ responses to some ancillary exams. Computer vision uses deep learning (DL) methodologies, successfully applied to assist in the diagnosis and progression of GON, with the potential to provide objective references for classification, avoiding possible biases in experts’ decisions. To this end, studies have used color fundus photographs (CFPs), functional exams such as visual field (VF), and structural exams such as optical coherence tomography (OCT). However, it is still necessary to know the minimum limits of detection of GON characteristics performed through these methodologies. This study analyzes the use of deep learning (DL) methodologies in the various stages of glaucoma screening compared to the clinic to reduce the costs of GON assessment and the work carried out by specialists, to improve the speed of diagnosis, and to homogenize opinions. It concludes that the DL methodologies used in automated glaucoma screening can bring more robust results closer to reality. Full article
(This article belongs to the Section Ophthalmology)

17 pages, 5836 KiB  
Article
Interpretable Detection of Diabetic Retinopathy, Retinal Vein Occlusion, Age-Related Macular Degeneration, and Other Fundus Conditions
by Wenlong Li, Linbo Bian, Baikai Ma, Tong Sun, Yiyun Liu, Zhengze Sun, Lin Zhao, Kang Feng, Fan Yang, Xiaona Wang, Szyyann Chan, Hongliang Dou and Hong Qi
Diagnostics 2024, 14(2), 121; https://doi.org/10.3390/diagnostics14020121 - 5 Jan 2024
Cited by 6 | Viewed by 3073
Abstract
Diabetic retinopathy (DR), retinal vein occlusion (RVO), and age-related macular degeneration (AMD) pose significant global health challenges, often resulting in vision impairment and blindness. Automatic detection of these conditions is crucial, particularly in underserved rural areas with limited access to ophthalmic services. Despite remarkable advancements in artificial intelligence, especially convolutional neural networks (CNNs), their complexity can make interpretation difficult. In this study, we curated a dataset consisting of 15,089 color fundus photographs (CFPs) obtained from 8110 patients who underwent fundus fluorescein angiography (FFA) examination. The primary objective was to construct integrated models that merge CNNs with an attention mechanism. These models were designed for a hierarchical multilabel classification task, focusing on the detection of DR, RVO, AMD, and other fundus conditions. Furthermore, our approach extended to the detailed classification of DR, RVO, and AMD according to their respective subclasses. We employed a methodology that entails the translation of diagnostic information obtained from FFA results into CFPs. Our investigation focused on evaluating the models’ ability to achieve precise diagnoses solely based on CFPs. Remarkably, our models showcased improvements across diverse fundus conditions, with the ConvNeXt-base + attention model standing out for its exceptional performance. The ConvNeXt-base + attention model achieved remarkable metrics, including an area under the receiver operating characteristic curve (AUC) of 0.943, a referable F1 score of 0.870, and a Cohen’s kappa of 0.778 for DR detection. For RVO, it attained an AUC of 0.960, a referable F1 score of 0.854, and a Cohen’s kappa of 0.819. Furthermore, in AMD detection, the model achieved an AUC of 0.959, an F1 score of 0.727, and a Cohen’s kappa of 0.686. Impressively, the model demonstrated proficiency in subclassifying RVO and AMD, showcasing commendable sensitivity and specificity. Moreover, our models enhanced interpretability by visualizing attention weights on fundus images, aiding in the identification of disease findings. These outcomes underscore the substantial impact of our models in advancing the detection of DR, RVO, and AMD, offering the potential for improved patient outcomes and positively influencing the healthcare landscape. Full article
(This article belongs to the Special Issue Artificial Intelligence in Ophthalmology)
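
A sketch of a ConvNeXt backbone with a simple attention-pooling head for multilabel classification, in the spirit of the "ConvNeXt-base + attention" model above; the attention design and sizes are illustrative assumptions, not the paper's architecture. A side benefit of this layout is that the attention map can be visualized on the fundus image, as the authors do with their attention weights.

```python
import torch
import torch.nn as nn
from torchvision import models

class ConvNeXtAttention(nn.Module):
    def __init__(self, n_labels=4):  # e.g., DR, RVO, AMD, other
        super().__init__()
        base = models.convnext_base(weights="IMAGENET1K_V1")
        self.backbone = base.features                   # (N, 1024, H/32, W/32)
        self.attn = nn.Conv2d(1024, 1, kernel_size=1)   # per-location score
        self.classifier = nn.Linear(1024, n_labels)

    def forward(self, x):
        f = self.backbone(x)                                 # spatial features
        w = torch.softmax(self.attn(f).flatten(2), dim=-1)   # (N, 1, HW)
        pooled = (f.flatten(2) * w).sum(-1)                  # attention pooling
        return self.classifier(pooled)   # logits; sigmoid gives independent
                                         # per-label probabilities (multilabel)

model = ConvNeXtAttention()
probs = torch.sigmoid(model(torch.randn(1, 3, 224, 224)))
```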

19 pages, 5352 KiB  
Article
Classification of Color Fundus Photographs Using Fusion Extracted Features and Customized CNN Models
by Jing-Zhe Wang, Nan-Han Lu, Wei-Chang Du, Kuo-Ying Liu, Shih-Yen Hsu, Chi-Yuan Wang, Yun-Ju Chen, Li-Ching Chang, Wen-Hung Twan, Tai-Been Chen and Yung-Hui Huang
Healthcare 2023, 11(15), 2228; https://doi.org/10.3390/healthcare11152228 - 7 Aug 2023
Cited by 5 | Viewed by 1955
Abstract
This study focuses on overcoming challenges in classifying eye diseases using color fundus photographs by leveraging deep learning techniques, aiming to enhance early detection and diagnosis accuracy. We utilized a dataset of 6392 color fundus photographs across eight disease categories, which was later augmented to 17,766 images. Five well-known convolutional neural networks (CNNs)—efficientnetb0, mobilenetv2, shufflenet, resnet50, and resnet101—and a custom-built CNN were integrated and trained on this dataset. Image sizes were standardized, and model performance was evaluated via accuracy, Kappa coefficient, and precision metrics. Shufflenet and efficientnetb0 demonstrated strong performance, while our custom 17-layer CNN outperformed all with an accuracy of 0.930 and a Kappa coefficient of 0.920. Furthermore, we found that the fusion of image features with classical machine learning classifiers increased performance, with Logistic Regression showcasing the best results. Our study highlights the potential of AI and deep learning models in accurately classifying eye diseases and demonstrates the efficacy of custom-built models and the fusion of deep learning and classical methods. Future work should focus on validating these methods across larger datasets and assessing their real-world applicability. Full article
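
Assuming "fusion of image features with classical machine learning classifiers" means feeding CNN-extracted feature vectors to, e.g., logistic regression, a minimal version looks like this; the pretrained backbone and the random stand-in data are assumptions, not the authors' pipeline.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import LogisticRegression

backbone = models.resnet50(weights="IMAGENET1K_V2")
extractor = nn.Sequential(*list(backbone.children())[:-1]).eval()  # 2048-d

@torch.no_grad()
def extract(images):                      # images: (N, 3, 224, 224)
    return extractor(images).flatten(1).numpy()

# Hypothetical stand-ins for the fundus dataset (eight disease categories).
X = extract(torch.randn(64, 3, 224, 224))
y = np.random.randint(0, 8, 64)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", clf.score(X, y))
```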

17 pages, 4788 KiB  
Review
New Concepts for the Diagnosis of Polypoidal Choroidal Vasculopathy
by Jinzhi Zhao, Priya R Chandrasekaran, Kai Xiong Cheong, Mark Wong and Kelvin Teo
Diagnostics 2023, 13(10), 1680; https://doi.org/10.3390/diagnostics13101680 - 9 May 2023
Cited by 10 | Viewed by 3464
Abstract
Polypoidal choroidal vasculopathy (PCV) is a subtype of neovascular age-related macular degeneration (nAMD) that is characterized by a branching neovascular network and polypoidal lesions. It is important to differentiate PCV from typical nAMD as there are differences in treatment response between subtypes. Indocyanine green angiography (ICGA) is the gold standard for diagnosing PCV; however, ICGA is an invasive detection method and impractical for extensive use for regular long-term monitoring. In addition, access to ICGA may be limited in some settings. The purpose of this review is to summarize the utilization of multimodal imaging modalities (color fundus photography, optical coherence tomography (OCT), OCT angiography (OCTA), and fundus autofluorescence (FAF)) in differentiating PCV from typical nAMD and predicting disease activity and prognosis. In particular, OCT shows tremendous potential in diagnosing PCV. Characteristics such as subretinal pigment epithelium (RPE) ring-like lesion, en face OCT-complex RPE elevation, and sharp-peaked pigment epithelial detachment provide high sensitivity and specificity for differentiating PCV from nAMD. With the use of more practical, non-ICGA imaging modalities, the diagnosis of PCV can be more easily made and treatment tailored as necessary for optimal outcomes. Full article

11 pages, 494 KiB  
Article
A Pilot Study of Implementing Diabetic Retinopathy Screening in the Oslo Region, Norway: Baseline Results
by Ellen Steffenssen Sauesund, Øystein Kalsnes Jørstad, Cathrine Brunborg, Morten Carstens Moe, Maja Gran Erke, Dag Sigurd Fosmark and Goran Petrovski
Biomedicines 2023, 11(4), 1222; https://doi.org/10.3390/biomedicines11041222 - 19 Apr 2023
Cited by 4 | Viewed by 2550
Abstract
Purpose: To gain insight into the baseline parameters of a population with diabetes mellitus (DM) included in a pilot diabetic retinopathy (DR) screening program at Oslo University Hospital (OUH), Norway. Methods: This was a cross-sectional study of a cohort of adult patients (≥18 years) with type 1 or 2 DM (T1D and T2D). We measured the best-corrected visual acuity (BCVA), blood pressure (BP), heart rate (HR), intraocular pressure (IOP), height and weight. We also collected HbA1c, total serum cholesterol and urine-albumin, -creatinine and -albumin-to-creatinine ratio (ACR), as well as socio-demographic parameters, medications and previous screening history. We obtained color fundus photographs, which were graded by two experienced ophthalmologists according to the International Clinical Disease Severity Scale for DR. Results: The study included 180 eyes of 90 patients: 12 patients (13.3%) had T1D and 78 (86.7%) had T2D. In the T1D group, 5 patients (41.7%) had no DR, and 7 (58.3%) had some degree of DR. In the T2D group, 60 patients (76.9%) had no DR, and 18 (23.1%) had some degree of DR. None of the patients had proliferative DR. Of the 43 patients not newly diagnosed (time of diagnosis >5 years for T1D and >1 year for T2D), 37.5% of the T1D patients and 5.7% of the T2D patients had previously undergone regular screening. Univariate analyses for the whole cohort found significant associations between DR and age, HbA1c, urine albumin-to-creatinine ratio, body mass index (BMI) and duration of DM. For the T2D group alone, there were significant associations between DR and HbA1c, BMI, urine creatinine, urine albumin-to-creatinine ratio and duration of DM. The analysis also showed three times higher odds for DR in the T1D group than in the T2D group. Conclusions: This study underscores the need for implementing a systematic DR screening program in the Oslo region, Norway, to better reach out to patients with DM and improve their screening adherence. Timely and proper treatment can prevent or mitigate vision loss and improve the prognosis. A considerable number of patients were referred from general practitioners for not being followed by an ophthalmologist. Among patients not newly diagnosed with DM, 62.8% had never had an eye exam, and the duration of DM for these patients was up to 18 years (median: 8 years). Full article
(This article belongs to the Special Issue Molecular Research and Recent Advances in Diabetic Retinopathy)
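
Odds ratios of the kind quoted above (roughly three times higher odds of DR for T1D) come from exponentiated logistic-regression coefficients; here is a minimal statsmodels sketch on simulated data, not the cohort's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
t1d = rng.integers(0, 2, 90)                  # 1 = type 1 diabetes
p = 1 / (1 + np.exp(-(-1.5 + 1.1 * t1d)))     # true OR = e^1.1, about 3
dr = rng.binomial(1, p)                       # simulated DR outcomes

model = sm.Logit(dr, sm.add_constant(t1d)).fit(disp=0)
print("odds ratio:", np.exp(model.params[1]))  # roughly 3 on average
```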

9 pages, 1537 KiB  
Article
Discriminating Healthy Optic Discs and Visible Optic Disc Drusen on Fundus Autofluorescence and Color Fundus Photography Using Deep Learning—A Pilot Study
by Raphael Diener, Jost Lennart Lauermann, Nicole Eter and Maximilian Treder
J. Clin. Med. 2023, 12(5), 1951; https://doi.org/10.3390/jcm12051951 - 1 Mar 2023
Cited by 2 | Viewed by 4682
Abstract
The aim of this study was to use deep learning based on a deep convolutional neural network (DCNN) for automated image classification of healthy optic discs (OD) and visible optic disc drusen (ODD) on fundus autofluorescence (FAF) and color fundus photography (CFP). In this study, a total of 400 FAF and CFP images of patients with ODD and healthy controls were used. A pre-trained multi-layer DCNN was trained and validated independently on FAF and CFP images. Training and validation accuracy and cross-entropy were recorded. Both generated DCNN classifiers were tested with 40 FAF and CFP images (20 ODD and 20 controls). After 1000 training cycles, the training accuracy was 100%, and the validation accuracy was 92% (CFP) and 96% (FAF). The cross-entropy was 0.04 (CFP) and 0.15 (FAF). The sensitivity, specificity, and accuracy of the DCNN for classification of FAF images were 100%. For the DCNN used to identify ODD on color fundus photographs, sensitivity was 85%, specificity 100%, and accuracy 92.5%. Differentiation between healthy controls and ODD on CFP and FAF images was possible with high specificity and sensitivity using a deep learning approach. Full article
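
A minimal transfer-learning sketch in the spirit of the pre-trained DCNN above; the backbone choice, optimizer, and sizes are placeholders, not the study's setup.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")    # pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, 2)       # healthy OD vs. ODD

criterion = nn.CrossEntropyLoss()                   # the cross-entropy recorded
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(images, labels):                  # images: (N, 3, 224, 224)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# One step on random stand-in data, just to show the loop shape.
print(training_step(torch.randn(4, 3, 224, 224), torch.randint(0, 2, (4,))))
```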

26 pages, 8130 KiB  
Article
Artificial Intelligence for Personalised Ophthalmology Residency Training
by George Adrian Muntean, Adrian Groza, Anca Marginean, Radu Razvan Slavescu, Mihnea Gabriel Steiu, Valentin Muntean and Simona Delia Nicoara
J. Clin. Med. 2023, 12(5), 1825; https://doi.org/10.3390/jcm12051825 - 24 Feb 2023
Cited by 7 | Viewed by 3374
Abstract
Residency training in medicine lays the foundation for future medical doctors. In real-world settings, training centers face challenges in trying to create balanced residency programs, with cases encountered by residents not always being fairly distributed among them. In recent years, there has been a tremendous advancement in developing artificial intelligence (AI)-based algorithms with human expert guidance for medical imaging segmentation, classification, and prediction. In this paper, we turned our attention from training machines to letting them train us and developed an AI framework for personalised case-based ophthalmology residency training. The framework is built on two components: (1) a deep learning (DL) model and (2) an expert-system-powered case allocation algorithm. The DL model is trained on publicly available datasets by means of contrastive learning and can classify retinal diseases from color fundus photographs (CFPs). Patients visiting the retina clinic will have a CFP performed and afterward, the image will be interpreted by the DL model, which will give a presumptive diagnosis. This diagnosis is then passed to a case allocation algorithm which selects the resident who would most benefit from the specific case, based on their case history and performance. At the end of each case, the attending expert physician assesses the resident’s performance based on standardised examination files, and the results are immediately updated in their portfolio. Our approach provides a structure for future precision medical education in ophthalmology. Full article
(This article belongs to the Special Issue Big Data and Artificial Intelligence-Driven Research in Ophthalmology)
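
The case-allocation component could be sketched as a scoring rule over residents' case histories; the rule below (favor residents with few prior cases of the predicted diagnosis and weaker performance on it) is hypothetical, not the paper's expert-system logic.

```python
from dataclasses import dataclass, field

@dataclass
class Resident:
    name: str
    cases_seen: dict = field(default_factory=dict)   # diagnosis -> count
    performance: dict = field(default_factory=dict)  # diagnosis -> score (0-1)

def allocate(residents, diagnosis):
    """Pick the resident who would benefit most from this case."""
    def need(r):
        seen = r.cases_seen.get(diagnosis, 0)
        score = r.performance.get(diagnosis, 0.0)
        return seen + 2 * score       # lower value = greater training need
    return min(residents, key=need)

residents = [
    Resident("A", {"AMD": 5}, {"AMD": 0.9}),
    Resident("B", {"AMD": 1}, {"AMD": 0.6}),
]
print(allocate(residents, "AMD").name)  # "B": fewer AMD cases, lower score
```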

25 pages, 8475 KiB  
Article
Neural Networks Application for Accurate Retina Vessel Segmentation from OCT Fundus Reconstruction
by Tomasz Marciniak, Agnieszka Stankiewicz and Przemyslaw Zaradzki
Sensors 2023, 23(4), 1870; https://doi.org/10.3390/s23041870 - 7 Feb 2023
Cited by 5 | Viewed by 3100
Abstract
The use of neural networks for retinal vessel segmentation has gained significant attention in recent years. Most of the research related to the segmentation of retinal blood vessels is based on fundus images. In this study, we examine five neural network architectures to accurately segment vessels in fundus images reconstructed from 3D OCT scan data. OCT-based fundus reconstructions are of much lower quality compared to color fundus photographs due to noise and lower and disproportionate resolutions. The fundus image reconstruction process was performed based on the segmentation of the retinal layers in B-scans. Three reconstruction variants were proposed, which were then used in the process of detecting blood vessels using neural networks. We evaluated performance using a custom dataset of 24 3D OCT scans (with manual annotations performed by an ophthalmologist) using 6-fold cross-validation and demonstrated segmentation accuracy up to 98%. Our results indicate that the use of neural networks is a promising approach to segmenting the retinal vessel from a properly reconstructed fundus. Full article
(This article belongs to the Special Issue Vision- and Image-Based Biomedical Diagnostics)
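
A minimal sketch of the 6-fold cross-validation protocol over 24 scans described above; the training and evaluation functions are hypothetical stubs, not the authors' code.

```python
import numpy as np
from sklearn.model_selection import KFold

def train_segmentation_model(train_ids):
    """Placeholder: train a vessel-segmentation network on these scans."""
    return {"trained_on": list(train_ids)}

def evaluate(model, test_ids):
    """Placeholder: return pixel accuracy on the held-out scans."""
    return 0.98  # stand-in for a real metric

scan_ids = np.arange(24)                              # 24 OCT volumes
kf = KFold(n_splits=6, shuffle=True, random_state=0)  # 6-fold CV
scores = [evaluate(train_segmentation_model(scan_ids[tr]), scan_ids[te])
          for tr, te in kf.split(scan_ids)]
print("mean accuracy over 6 folds:", np.mean(scores))
```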

9 pages, 269 KiB  
Review
Application of Deep Learning to Retinal-Image-Based Oculomics for Evaluation of Systemic Health: A Review
by Jo-Hsuan Wu and Tin Yan Alvin Liu
J. Clin. Med. 2023, 12(1), 152; https://doi.org/10.3390/jcm12010152 - 24 Dec 2022
Cited by 26 | Viewed by 4464
Abstract
The retina is a window to the human body. Oculomics is the study of the correlations between ophthalmic biomarkers and systemic health or disease states. Deep learning (DL) is currently the cutting-edge machine learning technique for medical image analysis, and in recent years, DL techniques have been applied to analyze retinal images in oculomics studies. In this review, we summarized oculomics studies that used DL models to analyze retinal images—most of the published studies to date involved color fundus photographs, while others focused on optical coherence tomography images. These studies showed that some systemic variables, such as age, sex and cardiovascular disease events, could be consistently and robustly predicted, while other variables, such as thyroid function and blood cell count, could not be. DL-based oculomics has demonstrated fascinating, “super-human” predictive capabilities in certain contexts, but it remains to be seen how these models will be incorporated into clinical care and whether management decisions influenced by these models will lead to improved clinical outcomes. Full article