Search Results (53)

Search Parameters:
Keywords = retinal OCT classification

16 pages, 2784 KiB  
Article
Development of Stacked Neural Networks for Application with OCT Data, to Improve Diabetic Retinal Health Care Management
by Pedro Rebolo, Guilherme Barbosa, Eduardo Carvalho, Bruno Areias, Ana Guerra, Sónia Torres-Costa, Nilza Ramião, Manuel Falcão and Marco Parente
Information 2025, 16(8), 649; https://doi.org/10.3390/info16080649 - 30 Jul 2025
Viewed by 24
Abstract
Background: Retinal diseases are becoming an important public health issue, with early diagnosis and timely intervention playing a key role in preventing vision loss. Optical coherence tomography (OCT) remains the leading non-invasive imaging technique for identifying retinal conditions. However, distinguishing between diabetic macular edema (DME) and macular edema resulting from retinal vein occlusion (RVO) can be particularly challenging, especially for clinicians without specialized training in retinal disorders, as both conditions manifest through increased retinal thickness. Given the limited research on deep learning methods, particularly for RVO detection using OCT scans, this study proposes a novel diagnostic approach based on stacked convolutional neural networks. This architecture aims to enhance classification accuracy by integrating multiple neural networks, enabling more robust feature extraction and improved differentiation between retinal pathologies. Methods: The VGG-16, VGG-19, and ResNet50 models were fine-tuned on the Kermany dataset to classify OCT images and were afterwards trained on a private OCT dataset. Four stacked models were then developed from these networks: VGG-16 with VGG-19, VGG-16 with ResNet50, VGG-19 with ResNet50, and all three networks combined. The performance metrics include accuracy, precision, recall, F2-score, and area under the receiver operating characteristic curve (AUROC). Results: The stacked neural network using all three models achieved the best results, with an accuracy of 90.7%, precision of 99.2%, recall of 90.7%, and F2-score of 92.3%. Conclusions: This study presents a novel method for distinguishing retinal diseases using stacked neural networks. This research aims to provide ophthalmologists with a reliable tool to improve diagnostic accuracy and speed.
(This article belongs to the Special Issue AI-Based Biomedical Signal Processing)
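The stacking idea described above can be sketched as soft voting over the base networks' class probabilities. This is a simplification: the authors' stacked architecture may use a trained meta-model rather than a plain average, and the probability values below are purely illustrative.

```python
def stack_predict(prob_lists):
    """Soft-voting ensemble: average the per-class probabilities of
    several base classifiers and return the most likely class index."""
    n_classes = len(prob_lists[0])
    avg = [sum(p[i] for p in prob_lists) / len(prob_lists)
           for i in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)

# hypothetical per-class probabilities from three fine-tuned CNNs
vgg16    = [0.70, 0.20, 0.10]
vgg19    = [0.60, 0.30, 0.10]
resnet50 = [0.55, 0.25, 0.20]
print(stack_predict([vgg16, vgg19, resnet50]))  # → 0
```

A meta-learner variant would instead feed the concatenated probability vectors into a small classifier trained on held-out data.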

27 pages, 2478 KiB  
Article
Early Diabetic Retinopathy Detection from OCT Images Using Multifractal Analysis and Multi-Layer Perceptron Classification
by Ahlem Aziz, Necmi Serkan Tezel, Seydi Kaçmaz and Youcef Attallah
Diagnostics 2025, 15(13), 1616; https://doi.org/10.3390/diagnostics15131616 - 25 Jun 2025
Viewed by 538
Abstract
Background/Objectives: Diabetic retinopathy (DR) remains one of the primary causes of preventable vision impairment worldwide, particularly among individuals with long-standing diabetes. The progressive damage of retinal microvasculature can lead to irreversible blindness if not detected and managed at an early stage. Therefore, the development of reliable, non-invasive, and automated screening tools has become increasingly vital in modern ophthalmology. With the evolution of medical imaging technologies, Optical Coherence Tomography (OCT) has emerged as a valuable modality for capturing high-resolution cross-sectional images of retinal structures. In parallel, machine learning has shown considerable promise in supporting early disease recognition by uncovering complex and often imperceptible patterns in image data. Methods: This study introduces a novel framework for the early detection of DR through multifractal analysis of OCT images. Multifractal features, extracted using a box-counting approach, provide quantitative descriptors that reflect the structural irregularities of retinal tissue associated with pathological changes. Results: A comparative evaluation of several machine learning algorithms was conducted to assess classification performance. Among them, the Multi-Layer Perceptron (MLP) achieved the highest predictive accuracy, with a score of 98.02%, along with precision, recall, and F1-score values of 98.24%, 97.80%, and 98.01%, respectively. Conclusions: These results highlight the strength of combining OCT imaging with multifractal geometry and deep learning methods to build robust and scalable systems for DR screening. The proposed approach could contribute significantly to improving early diagnosis, clinical decision-making, and patient outcomes in diabetic eye care.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
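The box-counting step behind the multifractal features can be illustrated on a small binary mask. This sketch computes only the plain (monofractal) box-counting dimension; the paper derives a full multifractal spectrum, which generalizes this idea across moment orders.

```python
import math

def box_count(image, box):
    """Count boxes of side `box` containing at least one foreground pixel."""
    h, w = len(image), len(image[0])
    count = 0
    for y in range(0, h, box):
        for x in range(0, w, box):
            if any(image[yy][xx]
                   for yy in range(y, min(y + box, h))
                   for xx in range(x, min(x + box, w))):
                count += 1
    return count

def fractal_dimension(image, boxes=(1, 2, 4, 8)):
    """Least-squares slope of log N(s) versus log(1/s)."""
    xs = [math.log(1.0 / b) for b in boxes]
    ys = [math.log(box_count(image, b)) for b in boxes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# a filled 16x16 square behaves as a 2-dimensional set
img = [[1] * 16 for _ in range(16)]
print(round(fractal_dimension(img), 2))  # → 2.0
```

Irregular retinal structures yield non-integer values between these extremes, which is what makes the descriptor discriminative.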

45 pages, 14000 KiB  
Article
Automated Eye Disease Diagnosis Using a 2D CNN with Grad-CAM: High-Accuracy Detection of Retinal Asymmetries for Multiclass Classification
by Sameh Abd El-Ghany, Mahmood A. Mahmood and A. A. Abd El-Aziz
Symmetry 2025, 17(5), 768; https://doi.org/10.3390/sym17050768 - 15 May 2025
Viewed by 801
Abstract
Eye diseases (EDs), including glaucoma, diabetic retinopathy, and cataracts, are major contributors to vision loss and reduced quality of life worldwide. These conditions not only affect millions of individuals but also impose a significant burden on global healthcare systems. As the population ages and lifestyle changes increase the prevalence of conditions like diabetes, the incidence of EDs is expected to rise, further straining diagnostic and treatment resources. Timely and accurate diagnosis is critical for effective management and prevention of vision loss, as early intervention can significantly slow disease progression and improve patient outcomes. However, traditional diagnostic methods rely heavily on manual analysis of fundus imaging, which is labor-intensive, time-consuming, and subject to human error. This underscores the urgent need for automated, efficient, and accurate diagnostic systems that can handle the growing demand while maintaining high diagnostic standards. Current approaches, while advancing, still face challenges such as inefficiency, susceptibility to errors, and limited ability to detect subtle retinal asymmetries, which are critical early indicators of disease. Effective solutions must address these issues while ensuring high accuracy, interpretability, and scalability. This research introduces a 2D single-channel convolutional neural network (CNN) based on the ResNet101-V2 architecture. The model integrates gradient-weighted class activation mapping (Grad-CAM) to highlight retinal asymmetries linked to EDs, thereby enhancing interpretability and detection precision. Evaluated on retinal Optical Coherence Tomography (OCT) datasets for multiclass classification tasks, the model demonstrated exceptional performance, achieving accuracy rates of 99.90% for four-class tasks and 99.27% for eight-class tasks. By leveraging patterns of retinal symmetry and asymmetry, the proposed model improves early detection and simplifies the diagnostic workflow, offering a promising advancement in automated eye disease diagnosis.

23 pages, 345 KiB  
Article
Stratified Multisource Optical Coherence Tomography Integration and Cross-Pathology Validation Framework for Automated Retinal Diagnostics
by Michael Sher, Riah Sharma, David Remyes, Daniel Nasef, Demarcus Nasef and Milan Toma
Appl. Sci. 2025, 15(9), 4985; https://doi.org/10.3390/app15094985 - 30 Apr 2025
Cited by 1 | Viewed by 771
Abstract
This study presents a clinical utility-driven machine learning framework for retinal Optical Coherence Tomography classification, addressing challenges posed by manual interpretation variability and dataset heterogeneity. The methodology integrates biomimetic data partitioning, deep biomarker extraction via pretrained VGG16 networks, and automated model selection optimized for clinical decision-making. Stratified data curation preserved pathological distributions across training, validation, and testing subsets, while SMOTE optimization mitigated class imbalance. Cross-pathology testing evaluated generalizability on anatomically distinct retinal conditions excluded from training, assessing the framework’s robustness to unseen pathologies. Clinical utility metrics prioritized alignment with ophthalmological imperatives, emphasizing negative predictive value to minimize false negatives and enhance diagnostic reliability. The framework advances AI-driven Optical Coherence Tomography diagnostics by harmonizing computational performance with patient-centered outcomes, enabling standardized disease detection across diverse clinical datasets through robust feature generalization.
(This article belongs to the Collection Machine Learning for Biomedical Application)
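The SMOTE step mentioned above oversamples the minority class by interpolating between a minority sample and one of its minority-class neighbours. The sketch below shows only that interpolation, not the full k-nearest-neighbour pipeline of the original algorithm; the feature vectors are made up.

```python
import random

def smote_sample(a, b, rng):
    """One synthetic minority sample: a random point on the line
    segment between minority sample `a` and its neighbour `b`."""
    t = rng.random()  # interpolation factor in [0, 1)
    return [ai + t * (bi - ai) for ai, bi in zip(a, b)]

rng = random.Random(42)
synthetic = smote_sample([0.0, 1.0], [2.0, 3.0], rng)
# every coordinate lies between the two parent samples
print(all(lo <= s <= hi for s, lo, hi in zip(synthetic, [0.0, 1.0], [2.0, 3.0])))  # → True
```

Repeating this for many minority pairs grows the minority class until the training distribution is balanced.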

13 pages, 1246 KiB  
Article
Comparing Auto-Machine Learning and Expert-Designed Models in Diagnosing Vitreomacular Interface Disorders
by Ceren Durmaz Engin, Mahmut Ozan Gokkan, Seher Koksaldi, Mustafa Kayabasi, Ufuk Besenk, Mustafa Alper Selver and Andrzej Grzybowski
J. Clin. Med. 2025, 14(8), 2774; https://doi.org/10.3390/jcm14082774 - 17 Apr 2025
Viewed by 925
Abstract
Background: The vitreomacular interface (VMI) encompasses a group of retinal disorders that significantly impact vision, requiring accurate classification for effective management. This study compares the effectiveness of an expert-designed custom deep learning (DL) model and a code-free Auto Machine Learning (AutoML) model in classifying optical coherence tomography (OCT) images of VMI disorders. Materials and Methods: A balanced dataset of OCT images across five classes—normal, epiretinal membrane (ERM), idiopathic full-thickness macular hole (FTMH), lamellar macular hole (LMH), and vitreomacular traction (VMT)—was used. The expert-designed model combined ResNet-50 and EfficientNet-B0 architectures with Monte Carlo cross-validation. The AutoML model was created on Google Vertex AI, which handled data processing, model selection, and hyperparameter tuning automatically. Performance was evaluated using average precision, precision, and recall. Results: The expert-designed model achieved an overall balanced accuracy of 95.97% and a Matthews Correlation Coefficient (MCC) of 94.65%. Both models attained 100% precision and recall for normal cases. For FTMH, the expert model reached perfect precision and recall, while the AutoML model scored 97.8% average precision and 97.4% recall. In VMT detection, the AutoML model showed 99.5% average precision with a slightly lower recall of 94.7% compared to the expert model's 95%. For ERM, the expert model achieved 95% recall, while the AutoML model had higher precision at 93.9% but a lower recall of 79.5%. In LMH classification, the expert model exhibited 95% precision, compared to 72.3% for the AutoML model, with similar recall for both (88% and 87.2%, respectively). Conclusions: While the AutoML model demonstrated strong performance, the expert-designed model achieved superior accuracy in several classes. Although AutoML platforms are accessible to healthcare professionals, they may require further advancements to match the performance of expert-designed models in clinical applications.
(This article belongs to the Special Issue Artificial Intelligence and Eye Disease)
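The Matthews Correlation Coefficient reported above summarizes an entire confusion matrix in one value. For the binary case it reduces to the formula below; the paper's 94.65% comes from a multiclass generalization, and the confusion counts here are made up for illustration.

```python
import math

def mcc(tp, tn, fp, fn):
    """Binary Matthews Correlation Coefficient from confusion counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# hypothetical counts for one class vs. the rest
print(mcc(tp=45, tn=45, fp=5, fn=5))  # → 0.8
```

Unlike plain accuracy, MCC stays near zero for a classifier that ignores a rare class, which is why it suits imbalanced medical datasets.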

15 pages, 3318 KiB  
Article
Deep Learning to Distinguish Edema Secondary to Retinal Vein Occlusion and Diabetic Macular Edema: A Multimodal Approach Using OCT and Infrared Imaging
by Guilherme Barbosa, Eduardo Carvalho, Ana Guerra, Sónia Torres-Costa, Nilza Ramião, Marco L. P. Parente and Manuel Falcão
J. Clin. Med. 2025, 14(3), 1008; https://doi.org/10.3390/jcm14031008 - 5 Feb 2025
Cited by 1 | Viewed by 1159
Abstract
Background: Retinal diseases are emerging as a significant health concern, making early detection and prompt treatment crucial to prevent visual impairment. Optical coherence tomography (OCT) is the preferred imaging modality for non-invasive diagnosis. Both diabetic macular edema (DME) and macular edema secondary to retinal vein occlusion (RVO) present an increase in retinal thickness, posing etiologic diagnostic challenges for non-specialists in retinal diseases. The lack of research on deep learning classification of macular edema secondary to RVO using OCT images motivated us to propose a convolutional neural network model for this task. Methods: The VGG-19 network was fine-tuned on a public dataset to classify OCT images. This network was then used to develop three models: unimodal, whose input is only the OCT B-scan; multimodal, whose inputs are the OCT B-scan and diabetes information; and multi-image, whose inputs are the OCT B-scan, the infrared image, and the diabetes information. Seven hundred sixty-six patients from ULS São João were selected, comprising 208 healthy eyes, 207 with macular edema secondary to RVO, 218 with DME, and 200 with other pathologies. The performance metrics include accuracy, precision, recall, F0.5-score, and area under the receiver operating characteristic curve (AUROC). Results: The multi-image model achieved the best results on the four-class classification task, with an accuracy of 95.20%, precision of 95.43%, recall of 95.20%, F0.5-score of 95.32%, F1-score of 95.21%, and AUROC of 99.59%. Conclusions: This study presents a novel method to distinguish macular edema secondary to RVO from DME using diabetes diagnosis, OCT, and infrared images. This research aims to provide a reliable tool for ophthalmologists, improving the accuracy and speed of diagnoses.
(This article belongs to the Section Ophthalmology)
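The F0.5-score above is the beta = 0.5 member of the F-beta family, which weights precision more heavily than recall. A quick check of the formula follows; because the paper averages per class, its reported 95.32% is not reproduced exactly by plugging in the aggregate precision and recall.

```python
def f_beta(precision, recall, beta):
    """F-beta score; beta < 1 favours precision, beta > 1 favours recall."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# aggregate precision and recall reported in the abstract above
print(round(f_beta(0.9543, 0.9520, 0.5), 4))  # → 0.9538
```

Choosing beta = 0.5 is a deliberate design choice when false positives (spurious referrals) are costlier than missed borderline cases.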

11 pages, 2296 KiB  
Communication
SVM-Based Optical Detection of Retinal Ganglion Cell Apoptosis
by Mukhit Kulmaganbetov, Ryan Bevan, Andrew Want, Nantheera Anantrasirichai, Alin Achim, Julie Albon and James Morgan
Photonics 2025, 12(2), 128; https://doi.org/10.3390/photonics12020128 - 31 Jan 2025
Cited by 1 | Viewed by 815
Abstract
Background: Retinal ganglion cell (RGC) loss is central to eye diseases such as glaucoma. Axon damage and dendritic degeneration precede cell death and are detectable within optical coherence tomography (OCT) resolution, indicating their correlation with neuronal degeneration. The purpose of this study is to evaluate the optical changes of early retinal degeneration. Methods: Optical changes were detected in axotomised retinal explants from six C57BL/6J mice. OCT images were acquired up to 120 min from enucleation. A grey-level co-occurrence-based texture analysis was performed on the inner plexiform layer (IPL) to monitor changes in the optical speckles using principal component analysis (PCA) and a support vector machine (SVM). In parallel tests, retinal transparency was confirmed by comparing the modulation transfer functions (MTFs) at 0 and 120 min. Results: Quantitative analysis of the MTFs confirmed that optical transparency did not degrade during the imaging period: MTF (fx) = 0.267 ± 0.02. Textural features in the IPL could discriminate between the optical signals of RGC degeneration. The mean accuracy of the SVM classification was 86.3%; discrimination was not enhanced by combining the SVM with PCA (81.9%). Conclusions: Optical changes in the IPL can be detected using OCT following RGC axotomy. High-resolution OCT can provide an index of retinal neuronal integrity and its degeneration.
(This article belongs to the Special Issue Biomedical Optics: Imaging, Sensing and Therapy)
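A grey-level co-occurrence matrix of the kind used for the IPL texture analysis can be sketched as follows; contrast is one of the classic statistics derived from it. The offset, number of grey levels, and toy image are illustrative, not the study's actual settings.

```python
def glcm(image, levels, dx=1, dy=0):
    """Normalized grey-level co-occurrence matrix for one pixel offset."""
    m = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(len(image) - dy):
        for x in range(len(image[0]) - dx):
            m[image[y][x]][image[y + dy][x + dx]] += 1
            total += 1
    return [[v / total for v in row] for row in m]

def contrast(m):
    """Haralick contrast: expected squared grey-level difference."""
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m)))

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
g = glcm(img, levels=4)
print(round(contrast(g), 3))  # → 0.333
```

Speckle changes in degenerating tissue shift such statistics, which is what the SVM then classifies.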

19 pages, 3563 KiB  
Article
Impact of Histogram Equalization on the Classification of Retina Lesions from OCT B-Scans
by Tomasz Marciniak and Agnieszka Stankiewicz
Electronics 2024, 13(24), 4996; https://doi.org/10.3390/electronics13244996 - 19 Dec 2024
Cited by 1 | Viewed by 970
Abstract
Deep learning solutions can be used to classify pathological changes of the human retina visualized in OCT images. Available datasets for training neural network models include OCT images (B-scans) of classes with selected pathological changes and images of the healthy retina. These images often require correction due to improper acquisition or intensity variations related to the type of OCT device. This article provides a detailed assessment of the impact of preprocessing on classification efficiency. The histograms of OCT images were examined and, depending on the histogram distribution, incorrect image fragments were removed. At the same time, the impact of histogram equalization using the standard method and the Contrast-Limited Adaptive Histogram Equalization (CLAHE) method was analyzed. The most extensive dataset of Labeled Optical Coherence Tomography (LOCT) images was used for the experimental studies. The impact of the changes was assessed for different neural network architectures and various learning parameters, assuming classes of equal size. Comprehensive studies showed that removing unnecessary white parts from the input image, combined with CLAHE, improves classification accuracy by as much as 4.75%, depending on the network architecture and optimizer used.
(This article belongs to the Special Issue Biometrics and Pattern Recognition)
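Plain (global) histogram equalization, the baseline the article compares CLAHE against, remaps intensities through the cumulative distribution; CLAHE additionally clips the histogram and equalizes per local tile. A minimal sketch of the global method on a flat list of pixel values:

```python
def equalize(pixels, levels=256):
    """Global histogram equalization via the cumulative distribution.
    (CLAHE additionally clips the histogram and works on local tiles.)"""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run)
    cdf_min = next(c for c in cdf if c)  # first non-zero CDF value
    n = len(pixels)
    return [round((cdf[p] - cdf_min) * (levels - 1) / (n - cdf_min))
            for p in pixels]

# a dark, low-contrast strip is stretched across the full range
print(equalize([52, 52, 60, 60, 60, 200], levels=256))  # → [0, 0, 191, 191, 191, 255]
```

The clip limit in CLAHE exists precisely because this unclipped remapping can over-amplify noise in near-uniform regions such as the vitreous.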

13 pages, 3041 KiB  
Article
Detection of Disease Features on Retinal OCT Scans Using RETFound
by Katherine Du, Atharv Ramesh Nair, Stavan Shah, Adarsh Gadari, Sharat Chandra Vupparaboina, Sandeep Chandra Bollepalli, Shan Sutharahan, José-Alain Sahel, Soumya Jana, Jay Chhablani and Kiran Kumar Vupparaboina
Bioengineering 2024, 11(12), 1186; https://doi.org/10.3390/bioengineering11121186 - 25 Nov 2024
Cited by 1 | Viewed by 2157
Abstract
Eye diseases such as age-related macular degeneration (AMD) are major causes of irreversible vision loss. Early and accurate detection of these diseases is essential for effective management. Optical coherence tomography (OCT) imaging provides clinicians with in vivo, cross-sectional views of the retina, enabling the identification of key pathological features. However, manual interpretation of OCT scans is labor-intensive and prone to variability, often leading to diagnostic inconsistencies. To address this, we leveraged the RETFound model, a foundation model pretrained on 1.6 million unlabeled retinal OCT images, to automate the classification of key disease signatures on OCT. We fine-tuned RETFound and compared its performance with the widely used ResNet-50 model in single-task and multitask modes. The dataset included 1770 labeled B-scans with various disease features, including subretinal fluid (SRF), intraretinal fluid (IRF), drusen, and pigment epithelial detachment (PED). Performance was evaluated using accuracy and AUC-ROC values, which ranged across models from 0.75 to 0.77 and 0.75 to 0.80, respectively. RETFound models display specificity and sensitivity comparable to ResNet-50 models overall, making RETFound a promising tool for retinal disease diagnosis. These findings suggest that RETFound may offer improved diagnostic accuracy and interpretability for specific tasks, potentially aiding clinicians in more efficient and reliable OCT image analysis.
(This article belongs to the Special Issue AI in OCT (Optical Coherence Tomography) Image Analysis)

15 pages, 2365 KiB  
Article
Session-by-Session Prediction of Anti-Endothelial Growth Factor Injection Needs in Neovascular Age-Related Macular Degeneration Using Optical-Coherence-Tomography-Derived Features and Machine Learning
by Flavio Ragni, Stefano Bovo, Andrea Zen, Diego Sona, Katia De Nadai, Ginevra Giovanna Adamo, Marco Pellegrini, Francesco Nasini, Chiara Vivarelli, Marco Tavolato, Marco Mura, Francesco Parmeggiani and Giuseppe Jurman
Diagnostics 2024, 14(23), 2609; https://doi.org/10.3390/diagnostics14232609 - 21 Nov 2024
Viewed by 1059
Abstract
Background/Objectives: Neovascular age-related macular degeneration (nAMD) is a retinal disorder leading to irreversible central vision loss. The pro-re-nata (PRN) treatment for nAMD involves frequent intravitreal injections of anti-VEGF medications, placing a burden on patients and healthcare systems. Predicting injection needs at each monitoring session could optimize treatment outcomes and reduce unnecessary interventions. Methods: To achieve these aims, machine learning (ML) models were evaluated using different combinations of clinical variables, including retinal thickness and volume, best-corrected visual acuity, and features derived from macular optical coherence tomography (OCT). A "Leave Some Subjects Out" (LSSO) nested cross-validation approach ensured robust evaluation. Moreover, SHapley Additive exPlanations (SHAP) analysis was employed to quantify the contribution of each feature to model predictions. Results: Models incorporating both structural and functional features achieved high classification accuracy in predicting injection necessity (AUC = 0.747 ± 0.046, MCC = 0.541 ± 0.073). Moreover, the explainability analysis identified subretinal and intraretinal fluid, alongside central retinal thickness, as key predictors. Conclusions: These findings suggest that session-by-session prediction of injection needs in nAMD patients is feasible, even without processing the entire OCT image. The proposed ML framework has the potential to be integrated into routine clinical workflows, thereby optimizing nAMD therapeutic management.
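The "Leave Some Subjects Out" idea splits data by subject rather than by scan, so no patient contributes sessions to both the training and test sides. A minimal sketch of one such split (record field names are made up):

```python
def leave_subjects_out(records, test_subjects):
    """Group-wise split: every record of a held-out subject goes to the
    test side, so the same patient never appears in both partitions."""
    train = [r for r in records if r["subject"] not in test_subjects]
    test = [r for r in records if r["subject"] in test_subjects]
    return train, test

# two monitoring sessions for each of three hypothetical patients
records = [{"subject": s, "session": k} for s in "ABC" for k in (1, 2)]
train, test = leave_subjects_out(records, {"C"})
print(len(train), len(test))  # → 4 2
```

Nesting this split (an inner loop for model selection inside an outer loop for evaluation) prevents the optimistic bias that record-level shuffling would introduce for repeated-visit data.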

19 pages, 48904 KiB  
Article
OCTNet: A Modified Multi-Scale Attention Feature Fusion Network with InceptionV3 for Retinal OCT Image Classification
by Irshad Khalil, Asif Mehmood, Hyunchul Kim and Jungsuk Kim
Mathematics 2024, 12(19), 3003; https://doi.org/10.3390/math12193003 - 26 Sep 2024
Cited by 8 | Viewed by 2022
Abstract
Classification and identification of eye diseases using Optical Coherence Tomography (OCT) have been a challenging task and a trending research area in recent years. Accurate classification and detection of different diseases are crucial for effective care management and improving vision outcomes. Current detection methods fall into two main categories: traditional methods and deep learning-based approaches. Traditional approaches rely on machine learning for feature extraction, while deep learning methods utilize data-driven classification model training. In recent years, Deep Learning (DL) and Machine Learning (ML) algorithms have become essential tools, particularly in medical image classification, and are widely used to classify and identify various diseases. However, due to the high spatial similarities in OCT images, accurate classification remains a challenging task. In this paper, we introduce a novel model called “OCTNet” that combines InceptionV3 with a modified multi-scale spatial attention block to enhance model performance. OCTNet employs an InceptionV3 backbone with a fusion of dual attention modules to construct the proposed architecture. The InceptionV3 model generates rich features from images, capturing both local and global aspects, which are then enhanced by the modified multi-scale spatial attention block, resulting in a significantly improved feature map. To evaluate the model’s performance, we utilized two state-of-the-art (SOTA) datasets that include images of normal cases, Choroidal Neovascularization (CNV), Drusen, and Diabetic Macular Edema (DME). Through experimentation and simulation, the proposed OCTNet improves the classification accuracy of the InceptionV3 model by 1.3%, yielding higher accuracy than other SOTA models. We also performed an ablation study to demonstrate the effectiveness of the proposed method. The model achieved overall average accuracies of 99.50% and 99.65% on the two OCT datasets.

12 pages, 1532 KiB  
Review
Classification of Hydroxychloroquine Retinopathy: A Literature Review and Proposal for Revision
by Seong Joon Ahn
Diagnostics 2024, 14(16), 1803; https://doi.org/10.3390/diagnostics14161803 - 19 Aug 2024
Cited by 1 | Viewed by 2085
Abstract
Establishing universal standards for the nomenclature and classification of hydroxychloroquine retinopathy is essential. This review summarizes the classifications used for categorizing the patterns of hydroxychloroquine retinopathy and grading its severity in the literature, highlighting the limitations of these classifications based on recent findings. To overcome these limitations, I propose categorizing hydroxychloroquine retinopathy into four categories based on optical coherence tomography (OCT) findings: parafoveal (parafoveal damage only), pericentral (pericentral damage only), combined parafoveal and pericentral (both parafoveal and pericentral damage), and posterior polar (widespread damage over parafoveal, pericentral, and more peripheral areas), with or without foveal involvement. Alternatively, eyes can be categorized simply into parafoveal and pericentral retinopathy based on the most dominant area of damage, rather than the topographic distribution of overall retinal damage. Furthermore, I suggest a five-stage modified version of the current three-stage grading system of disease severity based on fundus autofluorescence (FAF) as follows: 0, no hyperautofluorescence (normal); 1, localized parafoveal or pericentral hyperautofluorescence on FAF; 2, hyperautofluorescence extending greater than 180° around the fovea; 3, combined retinal pigment epithelium (RPE) defects (hypoautofluorescence on FAF) without foveal involvement; and 4, fovea-involving hypoautofluorescence. These classification systems can better address the topographic characteristics of hydroxychloroquine retinopathy using disease patterns and assess the risk of vision-threatening retinopathy by stage, particularly with foveal involvement.
(This article belongs to the Special Issue Eye Disease: Diagnosis, Management, and Prognosis)

18 pages, 5196 KiB  
Article
The Framework of Quantifying Biomarkers of OCT and OCTA Images in Retinal Diseases
by Xiaoli Liu, Haogang Zhu, Hanji Zhang and Shaoyan Xia
Sensors 2024, 24(16), 5227; https://doi.org/10.3390/s24165227 - 13 Aug 2024
Cited by 1 | Viewed by 2310
Abstract
Despite the significant advancements facilitated by previous research in introducing a plethora of retinal biomarkers, there is a lack of research addressing the clinical need to quantify different biomarkers and prioritize their importance for guiding clinical decision making in the context of retinal diseases. To address this issue, our study introduces a novel framework for quantifying biomarkers derived from optical coherence tomography (OCT) and optical coherence tomography angiography (OCTA) images in retinal diseases. We extract 452 feature parameters from five feature types, including local binary pattern (LBP) features of OCT and OCTA, capillary and large-vessel features, and the foveal avascular zone (FAZ) feature. Leveraging this extensive feature set, we construct a classification model, using a statistically relevant p-value for feature selection, to predict retinal diseases. We obtain a high accuracy of 0.912 and an F1-score of 0.906 in the disease classification task using this framework. We find that the LBP features of OCT and OCTA contribute 77.12% of the biomarker significance in predicting retinal diseases, suggesting their potential as latent indicators for clinical diagnosis. This study employs a quantitative analysis framework to identify potential biomarkers for retinal diseases in OCT and OCTA images. Our findings suggest that the LBP parameters, the skewness and kurtosis of capillaries, the maximum, mean, median, and standard deviation of large vessels, as well as the eccentricity, compactness, flatness, and anisotropy index of the FAZ, may serve as significant indicators of retinal conditions.
(This article belongs to the Section Biomedical Sensors)
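The LBP texture features central to this framework can be sketched in a few lines. The following minimal NumPy implementation is an illustration only, not the authors' code; the 8-neighbour LBP variant and the 256-bin histogram are assumptions. It turns a 2-D OCT/OCTA scan into a normalised LBP code histogram usable as a texture feature vector.

```python
import numpy as np

def lbp_8(image):
    """Basic 8-neighbour local binary pattern map for a 2-D image.

    Each interior pixel gets an 8-bit code: bit k is set when the k-th
    clockwise neighbour is >= the centre pixel.
    """
    img = np.asarray(image, dtype=float)
    center = img[1:-1, 1:-1]
    # Offsets of the 8 neighbours, clockwise from top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    return codes

def lbp_histogram(image, bins=256):
    """Normalised LBP code histogram, usable as a feature vector."""
    hist, _ = np.histogram(lbp_8(image), bins=bins, range=(0, bins))
    return hist / hist.sum()
```

In a pipeline like the one described above, such histograms computed from OCT and OCTA images would be concatenated with the vascular and FAZ parameters before feature selection.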

19 pages, 3542 KiB  
Article
Advancing Ocular Imaging: A Hybrid Attention Mechanism-Based U-Net Model for Precise Segmentation of Sub-Retinal Layers in OCT Images
by Prakash Kumar Karn and Waleed H. Abdulla
Bioengineering 2024, 11(3), 240; https://doi.org/10.3390/bioengineering11030240 - 28 Feb 2024
Cited by 12 | Viewed by 3651
Abstract
This paper presents a novel U-Net model incorporating a hybrid attention mechanism for automating the segmentation of sub-retinal layers in Optical Coherence Tomography (OCT) images. OCT is an ophthalmic imaging modality that provides detailed insights into retinal structures. Manual segmentation of these layers is time-consuming and subjective, calling for automated solutions. Our proposed model combines edge and spatial attention mechanisms with the U-Net architecture to improve segmentation accuracy. By leveraging these attention mechanisms, the U-Net focuses selectively on the most informative image features. Extensive evaluations on multiple datasets demonstrate that our model outperforms existing approaches, making it a valuable tool for medical professionals. The study also highlights the model’s robustness through performance metrics such as an average Dice score of 94.99%, an Adjusted Rand Index (ARI) of 97.00%, and Strength of Agreement (SOA) classifications such as “Almost Perfect”, “Excellent”, and “Very Strong”. This advanced predictive model shows promise in expediting workflows and enhancing the precision of ocular imaging in real-world applications. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Diagnostics and Biomedical Analytics)
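The Dice score reported above is simple to compute from predicted and reference layer maps. The sketch below is illustrative only, not the paper's evaluation code; the multi-class averaging scheme is an assumption. It computes per-class Dice for each sub-retinal layer and averages across classes.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    eps guards against division by zero when both masks are empty
    (an absent class then scores ~1, i.e. perfect agreement).
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def mean_dice(pred_labels, target_labels, num_classes):
    """Average Dice over all layer classes of a multi-class segmentation."""
    scores = [dice_score(pred_labels == c, target_labels == c)
              for c in range(num_classes)]
    return float(np.mean(scores))
```

A reported "average Dice of 94.99%" corresponds to `mean_dice` values around 0.95 over the test set under this kind of per-class averaging.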

16 pages, 4072 KiB  
Article
OCT Retinopathy Classification via a Semi-Supervised Pseudo-Label Sub-Domain Adaptation and Fine-Tuning Method
by Zhicong Tan, Qinqin Zhang, Gongpu Lan, Jingjiang Xu, Chubin Ou, Lin An, Jia Qin and Yanping Huang
Mathematics 2024, 12(2), 347; https://doi.org/10.3390/math12020347 - 21 Jan 2024
Cited by 2 | Viewed by 1927
Abstract
Conventional OCT retinal disease classification methods rely primarily on fully supervised learning, which requires a large number of labeled images. Sometimes, however, only a few labeled images are available in a private domain while a large annotated open dataset exists in the public domain. In response to this scenario, this study proposes a new transfer learning method based on sub-domain adaptation (TLSDA), which performs sub-domain adaptation first and fine-tuning afterwards. Firstly, a modified deep sub-domain adaptation network with pseudo-labels (DSAN-PL) was proposed to align the feature spaces of a labeled public domain and an unlabeled private domain. The DSAN-PL model was then fine-tuned using a small amount of labeled OCT data from the private domain. We tested our method on three open OCT datasets, using one as the public domain and the other two as the private domains. Remarkably, with only 10% of the OCT images labeled (~100 images per category), TLSDA achieved classification accuracies of 93.63% and 96.59% on the two private datasets, significantly outperforming conventional transfer learning approaches. Using the Gradient-weighted Class Activation Map (Grad-CAM) technique, we observed that the proposed method localizes subtle lesion regions more precisely for OCT image classification. TLSDA is therefore a promising technique for applications in which only a small number of images is labeled in a private domain while a large labeled public database with a domain difference is available. Full article
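The pseudo-labelling step at the heart of DSAN-PL can be illustrated with a minimal confidence-threshold filter. The function below is a hypothetical sketch, not taken from the paper; the interface and the 0.9 threshold are assumptions. It shows how hard pseudo-labels might be selected for unlabeled private-domain images from a model's softmax outputs.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Select confident pseudo-labels from softmax outputs.

    probs: (n_samples, n_classes) array of class probabilities for
           unlabeled target-domain images.
    Returns the indices of samples whose top-class confidence meets
    the threshold, together with their hard pseudo-labels.
    """
    probs = np.asarray(probs)
    confidence = probs.max(axis=1)
    keep = np.where(confidence >= threshold)[0]
    return keep, probs.argmax(axis=1)[keep]
```

In a sub-domain adaptation setting, only the retained samples would contribute to the class-conditional alignment loss, so that uncertain predictions do not corrupt the per-class feature alignment.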
