Search Results (25)

Search Parameters:
Keywords = APTOS dataset

30 pages, 6302 KB  
Article
Pixel-Attention W-Shaped Network for Joint Lesion Segmentation and Diabetic Retinopathy Severity Staging
by Archana Singh, Sushma Jain and Vinay Arora
Diagnostics 2025, 15(20), 2619; https://doi.org/10.3390/diagnostics15202619 - 17 Oct 2025
Viewed by 238
Abstract
Background: Visual impairment remains a critical public health challenge, and diabetic retinopathy (DR) is a leading cause of preventable blindness worldwide. Early stages of the disease are particularly difficult to identify, as lesions are subtle, expert review is time-consuming, and conventional diagnostic workflows remain subjective. Methods: To address these challenges, we propose a novel Pixel-Attention W-shaped (PAW-Net) deep learning framework that integrates a Lesion-Prior Cross Attention (LPCA) module with a W-shaped encoder–decoder architecture. The LPCA module enhances pixel-level representation of microaneurysms, hemorrhages, and exudates, while the dual-branch W-shaped design jointly performs lesion segmentation and disease severity grading in a single, clinically interpretable pass. The framework has been trained and validated using DDR and a preprocessed Messidor + EyePACS dataset, with APTOS-2019 reserved for external, out-of-distribution evaluation. Results: The proposed PAW-Net framework achieved robust performance across severity levels, with an accuracy of 98.65%, precision of 98.42%, recall (sensitivity) of 98.83%, specificity of 99.12%, F1-score of 98.61%, and a Dice coefficient of 98.61%. Comparative analyses demonstrate consistent improvements over contemporary architectures, particularly in accuracy and F1-score. Conclusions: The PAW-Net framework generates interpretable lesion overlays that facilitate rapid triage and follow-up, exhibits resilience under domain shift, and maintains an efficient computational footprint suitable for telemedicine and mobile deployment. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
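The Dice coefficient reported for PAW-Net measures the overlap between a predicted lesion mask and the ground-truth annotation. A minimal sketch of the metric (the function name and toy masks are illustrative, not taken from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) over two binary masks.

    eps keeps the ratio defined when both masks are empty.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: masks share 1 of their 2+1 foreground pixels → Dice = 2/3.
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
```

Identical masks score 1.0 and disjoint masks score near 0, which is why Dice is the standard figure of merit for lesion segmentation.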

31 pages, 8445 KB  
Article
HIRD-Net: An Explainable CNN-Based Framework with Attention Mechanism for Diabetic Retinopathy Diagnosis Using CLAHE-D-DoG Enhanced Fundus Images
by Muhammad Hassaan Ashraf, Muhammad Nabeel Mehmood, Musharif Ahmed, Dildar Hussain, Jawad Khan, Younhyun Jung, Mohammed Zakariah and Deema Mohammed AlSekait
Life 2025, 15(9), 1411; https://doi.org/10.3390/life15091411 - 8 Sep 2025
Viewed by 898
Abstract
Diabetic Retinopathy (DR) is a leading cause of vision impairment globally, underscoring the need for accurate and early diagnosis to prevent disease progression. Although fundus imaging serves as a cornerstone of Computer-Aided Diagnosis (CAD) systems, several challenges persist, including lesion scale variability, blurry morphological patterns, inter-class imbalance, limited labeled datasets, and computational inefficiencies. To address these issues, this study proposes an end-to-end diagnostic framework that integrates an enhanced preprocessing pipeline with a novel deep learning architecture, Hierarchical-Inception-Residual-Dense Network (HIRD-Net). The preprocessing stage combines Contrast Limited Adaptive Histogram Equalization (CLAHE) with Dilated Difference of Gaussian (D-DoG) filtering to improve image contrast and highlight fine-grained retinal structures. HIRD-Net features a hierarchical feature fusion stem alongside multiscale, multilevel inception-residual-dense blocks for robust representation learning. The Squeeze-and-Excitation Channel Attention (SECA) is introduced before each Global Average Pooling (GAP) layer to refine the Feature Maps (FMs). It further incorporates four GAP layers for multi-scale semantic aggregation, employs the Hard-Swish activation to enhance gradient flow, and utilizes the Focal Loss function to mitigate class imbalance issues. Experimental results on the IDRiD-APTOS2019, DDR, and EyePACS datasets demonstrate that the proposed framework achieves 93.46%, 82.45% and 79.94% overall classification accuracy using only 4.8 million parameters, highlighting its strong generalization capability and computational efficiency. Furthermore, to ensure transparent predictions, an Explainable AI (XAI) approach known as Gradient-weighted Class Activation Mapping (Grad-CAM) is employed to visualize HIRD-Net’s decision-making process. Full article
(This article belongs to the Special Issue Advanced Machine Learning for Disease Prediction and Prevention)
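The focal loss used in HIRD-Net counters class imbalance by down-weighting well-classified examples. A minimal binary sketch, assuming the usual formulation FL(p_t) = -α(1 − p_t)^γ log(p_t) (the paper's exact α and γ settings are not given here):

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0, alpha=0.25, eps=1e-9):
    """Binary focal loss: FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t),
    where p_t is the predicted probability of the true class."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    p_t = np.where(labels == 1, probs, 1.0 - probs)
    return float(np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t + eps)))
```

With γ = 0 this reduces to α-weighted cross-entropy; raising γ shrinks the loss contribution of easy examples so the gradient is dominated by hard, often minority-class, samples.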

22 pages, 12983 KB  
Article
A Hybrid Model for Fluorescein Funduscopy Image Classification by Fusing Multi-Scale Context-Aware Features
by Yawen Wang, Chao Chen, Zhuo Chen and Lingling Wu
Technologies 2025, 13(8), 323; https://doi.org/10.3390/technologies13080323 - 30 Jul 2025
Viewed by 353
Abstract
With the growing use of deep learning in medical image analysis, automated classification of fundus images is crucial for the early detection of fundus diseases. However, the complexity of fluorescein fundus angiography (FFA) images poses challenges in the accurate identification of lesions. To address these issues, we propose the Enhanced Feature Fusion ConvNeXt (EFF-ConvNeXt) model, a novel architecture combining VGG16 and an enhanced ConvNeXt for FFA image classification. VGG16 is employed to extract edge features, while an improved ConvNeXt incorporates the Context-Aware Feature Fusion (CAFF) strategy to enhance global contextual understanding. CAFF integrates an Improved Global Context (IGC) module with multi-scale feature fusion to jointly capture local and global features. Furthermore, an SKNet module is used in the final stages to adaptively recalibrate channel-wise features. The model demonstrates improved classification accuracy and robustness, achieving 92.50% accuracy and 92.30% F1 score on the APTOS2023 dataset—surpassing the baseline ConvNeXt-T by 3.12% in accuracy and 4.01% in F1 score. These results highlight the model’s ability to better recognize complex disease features, providing significant support for more accurate diagnosis of fundus diseases. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Medical Image Analysis)
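The SKNet stage in EFF-ConvNeXt adaptively recalibrates channel-wise features. A simplified squeeze-and-excitation-style sketch of that idea (weights and shapes are illustrative; SKNet additionally selects among kernel branches, which is omitted here):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_recalibrate(fmap, w1, w2):
    """Squeeze: global-average-pool each channel of a (C, H, W) map.
    Excite: pass the pooled vector through a small two-layer
    bottleneck (w1, w2), then rescale each channel by its
    sigmoid gate in (0, 1)."""
    squeeze = fmap.mean(axis=(1, 2))                     # (C,)
    gates = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # (C,)
    return fmap * gates[:, None, None]
```

Because the gates depend on the pooled global statistics, informative channels are preserved while weakly responding ones are suppressed.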

21 pages, 733 KB  
Article
A Secure and Privacy-Preserving Approach to Healthcare Data Collaboration
by Amna Adnan, Firdous Kausar, Muhammad Shoaib, Faiza Iqbal, Ayesha Altaf and Hafiz M. Asif
Symmetry 2025, 17(7), 1139; https://doi.org/10.3390/sym17071139 - 16 Jul 2025
Cited by 1 | Viewed by 2360
Abstract
Combining large collections of patient data with advanced technology, healthcare organizations can excel in medical research and increase the quality of patient care. At the same time, health records present serious privacy and security challenges because they are confidential and can be breached over networks. Even when traditional federated learning methods are used to share data, patient information might still be at risk of inference while the model is updated. This paper proposes the Privacy-Preserving Federated Learning with Homomorphic Encryption (PPFLHE) framework, which supports secure cooperation in healthcare while providing symmetric privacy protection among participating institutions. Every participant in the collaboration used the same EfficientNet-B0 architecture and training conditions, keeping the model symmetrical throughout the network to achieve a balanced learning process and fairness. All institutions applied CKKS encryption symmetrically to their models to keep data concealed and stop any attempts at inference. Our federated learning process uses FedAvg on the server to symmetrically aggregate encrypted model updates and reduce server communication delays. We attained classification accuracies of 83.19% and 81.27% on the APTOS 2019 Blindness Detection dataset and the MosMedData CT scan dataset, respectively. These findings confirm that the PPFLHE framework generalizes across a broad range of medical imaging modalities. In this way, patient data are kept secure while medical research and treatment move forward, helping healthcare systems cooperate more effectively. Full article
(This article belongs to the Special Issue Exploring Symmetry in Wireless Communication)
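The FedAvg aggregation used in PPFLHE averages client updates weighted by each client's local dataset size. A plaintext sketch of that weighted average (in the actual framework the updates are CKKS-encrypted and the sum is computed homomorphically; names here are illustrative):

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """FedAvg: return the sample-size-weighted average of the
    clients' parameter arrays (all arrays share one shape)."""
    coeffs = np.array(client_sizes, dtype=float)
    coeffs /= coeffs.sum()                      # normalize to weights
    stacked = np.stack(client_params)           # (n_clients, ...)
    return np.tensordot(coeffs, stacked, axes=1)

# Two clients: the one holding 3x the data pulls the average toward it.
merged = fed_avg([np.array([0.0, 0.0]), np.array([2.0, 2.0])], [1, 3])
```

Weighting by sample count keeps a small clinic's noisy update from dominating a large hospital's.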

27 pages, 20364 KB  
Article
A Comparative Study of Lesion-Centered and Severity-Based Approaches to Diabetic Retinopathy Classification: Improving Interpretability and Performance
by Gang-Min Park, Ji-Hoon Moon and Ho-Gil Jung
Biomedicines 2025, 13(6), 1446; https://doi.org/10.3390/biomedicines13061446 - 12 Jun 2025
Viewed by 807
Abstract
Background: Despite advances in artificial intelligence (AI) for Diabetic Retinopathy (DR) classification, traditional severity-based approaches often lack interpretability and fail to capture specific lesion-centered characteristics. To address these limitations, we constructed the National Medical Center (NMC) dataset, independently annotated by medical professionals with detailed labels of major DR lesions, including retinal hemorrhages, microaneurysms, and exudates. Methods: This study explores four critical research questions. First, we assess the analytical advantages of lesion-centered labeling compared to traditional severity-based labeling. Second, we investigate the potential complementarity between these labeling approaches through integration experiments. Third, we analyze how various model architectures and classification strategies perform under different labeling schemes. Finally, we evaluate decision-making differences between labeling methods using visualization techniques. We benchmarked the lesion-centered NMC dataset against the severity-based public Asia Pacific Tele-Ophthalmology Society (APTOS) dataset, conducting experiments with EfficientNet—a convolutional neural network architecture—and diverse classification strategies. Results: Our results demonstrate that binary classification effectively identifies severe non-proliferative Diabetic Retinopathy (Severe NPDR) exhibiting complex lesion patterns, while relationship-based learning enhances performance for underrepresented classes. Transfer learning from NMC to APTOS notably improved severity classification, achieving performance gains of 15.2% in mild cases and 66.3% in severe cases through feature fusion using Bidirectional Feature Pyramid Network (BiFPN) and Feature Pyramid Network (FPN). Visualization results confirmed that lesion-centered models focus more precisely on pathological features. 
Conclusions: Our findings highlight the benefits of integrating lesion-centered and severity-based information to enhance both accuracy and interpretability in DR classification. Future research directions include spatial lesion mapping and the development of clinically grounded learning methodologies. Full article
(This article belongs to the Section Endocrinology and Metabolism Research)

32 pages, 1448 KB  
Article
Early Detection and Classification of Diabetic Retinopathy: A Deep Learning Approach
by Mustafa Youldash, Atta Rahman, Manar Alsayed, Abrar Sebiany, Joury Alzayat, Noor Aljishi, Ghaida Alshammari and Mona Alqahtani
AI 2024, 5(4), 2586-2617; https://doi.org/10.3390/ai5040125 - 29 Nov 2024
Cited by 8 | Viewed by 6150
Abstract
Background—Diabetes is a rapidly spreading chronic disease that poses a significant risk to individual health as the population grows. This increase is largely attributed to busy lifestyles, unhealthy eating habits, and a lack of awareness about the disease. Diabetes impacts the human body in various ways, one of the most serious being diabetic retinopathy (DR), which can result in severely reduced vision or even blindness if left untreated. Therefore, an effective early detection and diagnosis system is essential. As part of the Kingdom of Saudi Arabia’s Vision 2030 initiative, which emphasizes the importance of digital transformation in the healthcare sector, it is vital to equip healthcare professionals with effective tools for diagnosing DR. This not only ensures high-quality patient care but also results in cost savings and contributes to the kingdom’s economic growth, as the traditional process of diagnosing diabetic retinopathy can be both time-consuming and expensive. Methods—Artificial intelligence (AI), particularly deep learning, has played an important role in various areas of human life, especially in healthcare. This study leverages deep learning to achieve two primary objectives: binary classification to determine whether a patient has DR, and multi-class classification to identify the stage of DR accurately and in a timely manner. The proposed model utilizes six pre-trained convolutional neural networks (CNNs): EfficientNetB3, EfficientNetV2B1, RegNetX008, RegNetX080, RegNetY006, and RegNetY008. We conducted two experiments. In the first, we trained and evaluated the models using fundus images from the publicly available APTOS dataset. Results—The RegNetX080 model achieved 98.6% accuracy in binary classification, while the EfficientNetB3 model achieved 85.1% accuracy in multi-class classification. For the second experiment, we trained the models using the APTOS dataset and evaluated them using fundus images from Al-Saif Medical Center in Saudi Arabia. Here, EfficientNetB3 achieved 98.2% accuracy in binary classification and EfficientNetV2B1 achieved 84.4% accuracy in multi-class classification. Conclusions—These results indicate the potential of AI technology for early and accurate detection and classification of DR. The study is a potential contribution towards improved healthcare and clinical decision support for the early detection of DR in Saudi Arabia. Full article

17 pages, 56471 KB  
Article
Attention-Enhanced Guided Multimodal and Semi-Supervised Networks for Visual Acuity (VA) Prediction after Anti-VEGF Therapy
by Yizhen Wang, Yaqi Wang, Xianwen Liu, Weiwei Cui, Peng Jin, Yuxia Cheng and Gangyong Jia
Electronics 2024, 13(18), 3701; https://doi.org/10.3390/electronics13183701 - 18 Sep 2024
Viewed by 1245
Abstract
The development of telemedicine technology has provided new avenues for the diagnosis and treatment of patients with DME, especially after anti-vascular endothelial growth factor (VEGF) therapy, and accurate prediction of patients’ visual acuity (VA) is important for optimizing follow-up treatment plans. However, current automated prediction methods often require human intervention and have poor interpretability, making it difficult to be widely applied in telemedicine scenarios. Therefore, an efficient, automated prediction model with good interpretability is urgently needed to improve the treatment outcomes of DME patients in telemedicine settings. In this study, we propose a multimodal algorithm based on a semi-supervised learning framework, which aims to combine optical coherence tomography (OCT) images and clinical data to automatically predict the VA values of patients after anti-VEGF treatment. Our approach first performs retinal segmentation of OCT images via a semi-supervised learning framework, which in turn extracts key biomarkers such as central retinal thickness (CST). Subsequently, these features are combined with the patient’s clinical data and fed into a multimodal learning algorithm for VA prediction. Our model performed well in the Asia Pacific Tele-Ophthalmology Society (APTOS) Big Data Competition, earning fifth place in the overall score and third place in VA prediction accuracy. Retinal segmentation achieved an accuracy of 99.03 ± 0.19% on the HZO dataset. This multimodal algorithmic framework is important in the context of telemedicine, especially for the treatment of DME patients. Full article

24 pages, 13431 KB  
Article
Toward Lightweight Diabetic Retinopathy Classification: A Knowledge Distillation Approach for Resource-Constrained Settings
by Niful Islam, Md. Mehedi Hasan Jony, Emam Hasan, Sunny Sutradhar, Atikur Rahman and Md. Motaharul Islam
Appl. Sci. 2023, 13(22), 12397; https://doi.org/10.3390/app132212397 - 16 Nov 2023
Cited by 7 | Viewed by 3673
Abstract
Diabetic retinopathy (DR), a consequence of diabetes, is one of the prominent contributors to blindness. Effective intervention necessitates accurate classification of DR; this is a need that computer vision-based technologies address. However, using large-scale deep learning models for DR classification presents difficulties, especially when integrating them into devices with limited resources, particularly in places with poor technological infrastructure. In order to address this, our research presents a knowledge distillation-based approach, where we train a fusion model, composed of ResNet152V2 and Swin Transformer, as the teacher model. The knowledge learned from the heavy teacher model is transferred to the lightweight student model of 102 megabytes, which consists of Xception with a customized convolutional block attention module (CBAM). The system also integrates a four-stage image enhancement technique to improve the image quality. We compared the model against eight state-of-the-art classifiers on five evaluation metrics; the experiments show superior performance of the model over other methods on two datasets (APTOS and IDRiD). The model performed exceptionally well on the APTOS dataset, achieving 100% accuracy in binary classification and 99.04% accuracy in multi-class classification. On the IDRiD dataset, the results were 98.05% for binary classification accuracy and 94.17% for multi-class accuracy. The proposed approach shows promise for practical applications, enabling accessible DR assessment even in technologically underdeveloped environments. Full article
(This article belongs to the Special Issue AI Technologies in Biomedical Image Processing and Analysis)
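The teacher-to-student transfer above follows standard knowledge distillation: the student is trained to match the teacher's temperature-softened output distribution. A minimal sketch of the distillation term (the temperature and logits are illustrative; the paper's exact loss mix is not given here):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-T softmax; higher T flattens the distribution."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) over temperature-T softened
    distributions, scaled by T^2 so gradient magnitudes stay
    comparable across temperatures (Hinton et al.'s convention)."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T ** 2 * np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))
```

In practice this term is mixed with ordinary cross-entropy on the hard labels, so the lightweight student learns both the ground truth and the teacher's inter-class similarity structure.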

18 pages, 2832 KB  
Article
A Lightweight Diabetic Retinopathy Detection Model Using a Deep-Learning Technique
by Abdul Rahaman Wahab Sait
Diagnostics 2023, 13(19), 3120; https://doi.org/10.3390/diagnostics13193120 - 3 Oct 2023
Cited by 28 | Viewed by 6141
Abstract
Diabetic retinopathy (DR) is a severe complication of diabetes. It affects a large portion of the population of the Kingdom of Saudi Arabia. Existing systems assist clinicians in treating DR patients. However, these systems entail significantly high computational costs. In addition, dataset imbalances may lead existing DR detection systems to produce false positive outcomes. Therefore, the author intended to develop a lightweight deep-learning (DL)-based DR-severity grading system that could be used with limited computational resources. The proposed model followed an image pre-processing approach to overcome the noise and artifacts found in fundus images. A feature extraction process using the You Only Look Once (Yolo) V7 technique was suggested. It was used to provide feature sets. The author employed a tailored quantum marine predator algorithm (QMPA) for selecting appropriate features. A hyperparameter-optimized MobileNet V3 model was utilized for predicting severity levels using images. The author generalized the proposed model using the APTOS and EyePacs datasets. The APTOS dataset contained 5590 fundus images, whereas the EyePacs dataset included 35,100 images. The outcome of the comparative analysis revealed that the proposed model achieved an accuracy of 98.0 and 98.4 and an F1 Score of 93.7 and 93.1 in the APTOS and EyePacs datasets, respectively. In terms of computational complexity, the proposed DR model required fewer parameters, fewer floating-point operations (FLOPs), a lower learning rate, and less training time to learn the key patterns of the fundus images. The lightweight nature of the proposed model can allow healthcare centers to serve patients in remote locations. The proposed model can be implemented as a mobile application to support clinicians in treating DR patients. In the future, the author will focus on improving the proposed model’s efficiency to detect DR from low-quality fundus images. Full article
(This article belongs to the Special Issue Artificial Intelligence in Eye Disease, 3rd Edition)

19 pages, 4549 KB  
Article
Automatic Detection and Classification of Diabetic Retinopathy Using the Improved Pooling Function in the Convolution Neural Network
by Usharani Bhimavarapu, Nalini Chintalapudi and Gopi Battineni
Diagnostics 2023, 13(15), 2606; https://doi.org/10.3390/diagnostics13152606 - 5 Aug 2023
Cited by 18 | Viewed by 4553
Abstract
Diabetic retinopathy (DR) is an eye disease associated with diabetes that can lead to blindness. Early diagnosis is critical to ensure that patients with diabetes are not affected by blindness. Deep learning plays an important role in diagnosing diabetes, reducing the human effort needed to diagnose and classify diabetic and non-diabetic patients. The main objective of this study was to provide an improved convolution neural network (CNN) model for automatic DR diagnosis from fundus images. The pooling function increases the receptive field of convolution kernels over layers. It reduces computational complexity and memory requirements because it reduces the resolution of feature maps while preserving the essential characteristics required for subsequent layer processing. In this study, an improved pooling function combined with an activation function in the ResNet-50 model was applied to retina images for autonomous lesion detection with reduced loss and processing time. The improved ResNet-50 model was trained and tested on two datasets (APTOS and Kaggle). The proposed model achieved an accuracy of 98.32% on the APTOS dataset and 98.71% on the Kaggle dataset, greater than that of state-of-the-art works in diagnosing DR from retinal fundus images. Full article
(This article belongs to the Special Issue Artificial Intelligence in Clinical Medical Imaging)
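The improved pooling function itself is not detailed in the abstract; the baseline behavior it builds on is ordinary pooling, which halves feature-map resolution while summarizing each local neighborhood. A standard 2×2 average-pooling sketch for a single-channel map (names illustrative):

```python
import numpy as np

def avg_pool2x2(fmap):
    """2x2 average pooling with stride 2: halves each spatial
    dimension of a single-channel (H, W) map (H and W even),
    replacing each 2x2 block by its mean."""
    h, w = fmap.shape
    return fmap.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A 4x4 map pools down to 2x2, one mean per 2x2 block.
pooled = avg_pool2x2(np.arange(16, dtype=float).reshape(4, 4))
```

This resolution reduction is what cuts the computational and memory cost the abstract mentions, at the price of spatial detail, which is the trade-off an improved pooling function tries to soften.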

23 pages, 9869 KB  
Article
Enhancement of Diabetic Retinopathy Prognostication Using Deep Learning, CLAHE, and ESRGAN
by Ghadah Alwakid, Walaa Gouda and Mamoona Humayun
Diagnostics 2023, 13(14), 2375; https://doi.org/10.3390/diagnostics13142375 - 14 Jul 2023
Cited by 19 | Viewed by 3518
Abstract
One of the primary causes of blindness in the diabetic population is diabetic retinopathy (DR). Many people could have their sight saved if only DR were detected and treated in time. Numerous Deep Learning (DL)-based methods have been presented to improve human analysis. Using a DL model with three scenarios, this research classified DR and its severity stages from fundus images using the “APTOS 2019 Blindness Detection” dataset. Following the adoption of the DL model, augmentation methods were implemented to generate a balanced dataset with consistent input parameters across all test scenarios. As a last step in the categorization process, the DenseNet-121 model was employed. Several methods, including Enhanced Super-resolution Generative Adversarial Networks (ESRGAN), Histogram Equalization (HIST), and Contrast Limited Adaptive HIST (CLAHE), have been used to enhance image quality in a variety of contexts. The suggested model detected the DR across all five APTOS 2019 grading process phases with the highest test accuracy of 98.36%, top-2 accuracy of 100%, and top-3 accuracy of 100%. Further evaluation criteria (precision, recall, and F1-score) for gauging the efficacy of the proposed model were established with the help of APTOS 2019. Furthermore, comparing CLAHE + ESRGAN against both state-of-the-art technology and other recommended methods, it was found that its use was more effective in DR classification. Full article

18 pages, 3601 KB  
Article
Using Deep Learning Architectures for Detection and Classification of Diabetic Retinopathy
by Cheena Mohanty, Sakuntala Mahapatra, Biswaranjan Acharya, Fotis Kokkoras, Vassilis C. Gerogiannis, Ioannis Karamitsos and Andreas Kanavos
Sensors 2023, 23(12), 5726; https://doi.org/10.3390/s23125726 - 19 Jun 2023
Cited by 120 | Viewed by 12622
Abstract
Diabetic retinopathy (DR) is a common complication of long-term diabetes, affecting the human eye and potentially leading to permanent blindness. The early detection of DR is crucial for effective treatment, as symptoms often manifest in later stages. The manual grading of retinal images is time-consuming, prone to errors, and lacks patient-friendliness. In this study, we propose two deep learning (DL) architectures, a hybrid network combining VGG16 and XGBoost Classifier, and the DenseNet 121 network, for DR detection and classification. To evaluate the two DL models, we preprocessed a collection of retinal images obtained from the APTOS 2019 Blindness Detection Kaggle Dataset. This dataset exhibits an imbalanced image class distribution, which we addressed through appropriate balancing techniques. The performance of the considered models was assessed in terms of accuracy. The results showed that the hybrid network achieved an accuracy of 79.50%, while the DenseNet 121 model achieved an accuracy of 97.30%. Furthermore, a comparative analysis with existing methods utilizing the same dataset revealed the superior performance of the DenseNet 121 network. The findings of this study demonstrate the potential of DL architectures for the early detection and classification of DR. The superior performance of the DenseNet 121 model highlights its effectiveness in this domain. The implementation of such automated methods can significantly improve the efficiency and accuracy of DR diagnosis, benefiting both healthcare providers and patients. Full article
(This article belongs to the Special Issue Machine and Deep Learning in Sensing and Imaging)
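The class-balancing step mentioned above can be as simple as random oversampling of the minority grades. A sketch under that assumption (the abstract does not specify the exact balancing technique used):

```python
import random

def oversample(samples, labels, seed=0):
    """Random oversampling: duplicate randomly chosen minority-class
    samples until every class matches the majority-class count."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out_s, out_y = [], []
    for y, group in by_class.items():
        padded = group + [rng.choice(group) for _ in range(target - len(group))]
        out_s.extend(padded)
        out_y.extend([y] * target)
    return out_s, out_y

# Two grade-0 images and one grade-1 image → grade 1 is duplicated once.
xs, ys = oversample(["img_a", "img_b", "img_c"], [0, 0, 1])
```

In image pipelines the duplicates are usually combined with augmentation (flips, rotations) so the repeated samples are not byte-identical.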

17 pages, 5641 KB  
Article
Deep Learning-Based Prediction of Diabetic Retinopathy Using CLAHE and ESRGAN for Enhancement
by Ghadah Alwakid, Walaa Gouda and Mamoona Humayun
Healthcare 2023, 11(6), 863; https://doi.org/10.3390/healthcare11060863 - 15 Mar 2023
Cited by 70 | Viewed by 10740
Abstract
Vision loss can be avoided if diabetic retinopathy (DR) is diagnosed and treated promptly. The five main DR stages are none, mild, moderate, severe, and proliferative. In this study, a deep learning (DL) model is presented that diagnoses all five stages of DR with more accuracy than previous methods. The suggested method presents two scenarios: case 1 with image enhancement using a contrast limited adaptive histogram equalization (CLAHE) filtering algorithm in conjunction with an enhanced super-resolution generative adversarial network (ESRGAN), and case 2 without image enhancement. Augmentation techniques were then applied to generate a balanced dataset utilizing the same parameters for both cases. Using Inception-V3 applied to the Asia Pacific Tele-Ophthalmology Society (APTOS) dataset, the developed model achieved an accuracy of 98.7% for case 1 and 80.87% for case 2, which is greater than existing methods for detecting the five stages of DR. It was demonstrated that using CLAHE and ESRGAN improves a model’s performance and learning ability. Full article
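CLAHE builds on plain histogram equalization: it applies the remapping per tile and clips the histogram to limit contrast amplification. A sketch of the underlying global equalization step it extends (array names illustrative; assumes the image has at least two distinct intensities):

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization of an 8-bit grayscale image:
    remap each intensity through the normalized cumulative histogram,
    stretching the used range to 0..255. CLAHE does this per tile,
    with a clip limit on the histogram before the cumulative sum."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / float(cdf[-1] - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)   # lookup table 0..255
    return lut[img]

# A low-contrast image (values 100 and 150) stretches to full range.
out = hist_equalize(np.array([[100, 100], [150, 150]], dtype=np.uint8))
```

The tiling and clip limit are what keep CLAHE from over-amplifying noise in the dark retinal background, which plain equalization tends to do.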

17 pages, 9655 KB  
Article
A Regression-Based Approach to Diabetic Retinopathy Diagnosis Using Efficientnet
by Midhula Vijayan and Venkatakrishnan S
Diagnostics 2023, 13(4), 774; https://doi.org/10.3390/diagnostics13040774 - 17 Feb 2023
Cited by 23 | Viewed by 3981
Abstract
The aim of this study is to develop a computer-assisted solution for the efficient and effective detection of diabetic retinopathy (DR), a complication of diabetes that can damage the retina and cause vision loss if not treated in a timely manner. Manually diagnosing [...] Read more.
The aim of this study is to develop a computer-assisted solution for the efficient and effective detection of diabetic retinopathy (DR), a complication of diabetes that can damage the retina and cause vision loss if not treated in a timely manner. Manually diagnosing DR from color fundus images requires a skilled clinician to spot lesions, which can be challenging, especially in areas with a shortage of trained experts. As a result, there is a push to create computer-aided diagnosis systems for DR to reduce the time it takes to diagnose the condition. Automated detection of diabetic retinopathy is challenging, but convolutional neural networks (CNNs) play a vital role in achieving success, having proven more effective in image classification than methods based on handcrafted features. This study proposes a CNN-based approach for the automated detection of DR using EfficientNet-B0 as the backbone network. The authors take the distinctive approach of treating DR detection as a regression problem rather than a traditional multi-class classification problem, because the severity of DR is often rated on a continuous scale, such as the International Clinical Diabetic Retinopathy (ICDR) scale. This continuous representation provides a more nuanced understanding of the condition, making regression a more suitable approach for DR detection. The approach has two benefits: first, it allows finer-grained predictions, as the model can assign a value that falls between the traditional discrete labels; second, it generalizes better. The model was tested on the APTOS and DDR datasets and demonstrated improved efficiency and accuracy in detecting DR compared to traditional methods. 
The model thus has the potential to aid in the rapid and accurate diagnosis of DR, leading to improved early detection and management of the disease. Full article
(This article belongs to the Special Issue Data Analysis in Ophthalmic Diagnostics)
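At inference time, a regression model like the one described above emits a continuous severity score that must be discretized back to the five ICDR grades before reporting. Rounding to the nearest grade and clipping to [0, 4] is the simplest choice (learned thresholds are another); the sketch below illustrates that post-processing step and is an assumption about the pipeline, not the paper's exact method.

```python
import numpy as np

def to_icdr_grade(scores):
    # Map continuous severity predictions onto the discrete ICDR scale
    # (0 = none ... 4 = proliferative DR) by rounding to the nearest grade,
    # then clipping out-of-range values that an unconstrained regression
    # head can produce.
    return np.clip(np.rint(scores), 0, 4).astype(int)

preds = np.array([-0.3, 0.4, 1.7, 2.6, 4.9])
grades = to_icdr_grade(preds)  # → [0, 0, 2, 3, 4]
```

The continuous `preds` retain the fine-grained ordering information (0.4 is "closer to mild" than -0.3) that a softmax over five classes would discard, which is the generalization benefit the abstract points to.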

16 pages, 4313 KB  
Article
A Novel Approach for Diabetic Retinopathy Screening Using Asymmetric Deep Learning Features
by Pradeep Kumar Jena, Bonomali Khuntia, Charulata Palai, Manjushree Nayak, Tapas Kumar Mishra and Sachi Nandan Mohanty
Big Data Cogn. Comput. 2023, 7(1), 25; https://doi.org/10.3390/bdcc7010025 - 29 Jan 2023
Cited by 87 | Viewed by 6474
Abstract
Automatic screening of diabetic retinopathy (DR) is a well-identified area of research in the domain of computer vision. It is challenging due to structural complexity and a marginal contrast difference between the retinal vessels and the background of the fundus image. As bright [...] Read more.
Automatic screening of diabetic retinopathy (DR) is a well-established area of research in computer vision. It is challenging due to structural complexity and the marginal contrast difference between the retinal vessels and the background of the fundus image. As bright lesions are most prominent in the green channel, we applied contrast-limited adaptive histogram equalization (CLAHE) to the green channel for image enhancement. This work proposes a novel diabetic retinopathy screening technique using asymmetric deep learning features, extracted with U-Net for segmentation of the optic disc and blood vessels. A convolutional neural network (CNN) with a support vector machine (SVM) is then used to classify DR lesions into four classes: normal, microaneurysms, hemorrhages, and exudates. The proposed method is tested on two publicly available retinal image datasets, APTOS and MESSIDOR. The accuracy achieved for non-diabetic-retinopathy detection is 98.6% and 91.9% for the APTOS and MESSIDOR datasets, respectively, and the accuracies of exudate detection for these two datasets are 96.9% and 98.3%. The accuracy of the DR screening system is improved by the precise retinal image segmentation. Full article
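The green-channel preprocessing described above exploits the fact that bright lesions and vessels contrast most strongly against the background in the green plane of an RGB fundus image; CLAHE is then applied to that single channel. A minimal sketch of the extraction step, on a synthetic image rather than real fundus data:

```python
import numpy as np

def green_channel(fundus_rgb):
    # Bright lesions such as exudates show the strongest contrast in the
    # green channel of an RGB fundus image, so it is isolated before
    # CLAHE enhancement and segmentation.
    assert fundus_rgb.ndim == 3 and fundus_rgb.shape[2] == 3
    return fundus_rgb[:, :, 1]

rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[..., 1] = 200  # synthetic image with signal only in green
g = green_channel(rgb)
```

The red channel of fundus photographs tends to be saturated and the blue channel noisy, which is why single-channel pipelines like this one conventionally keep green.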
