RetinoDeep: Leveraging Deep Learning Models for Advanced Retinopathy Diagnostics
Abstract
1. Introduction
2. Related Works
3. Methodology
3.1. Dataset Description
- Training split: 4000 images per class (20,000 total) obtained via random under-sampling of majority classes and mild data augmentation (horizontal/vertical flips, ±15° rotations).
- Test split: 500 untouched images per class (2500 in total), strictly disjoint from the training set at the patient level.
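The random under-sampling step described above can be sketched as follows. This is an illustrative numpy-only sketch (the helper name `undersample` and the toy labels are ours, not from the paper); in the paper the per-class cap is 4000 images.

```python
import numpy as np

def undersample(labels, per_class, rng):
    """Randomly keep at most `per_class` sample indices from each class label."""
    keep = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        take = min(per_class, idx.size)          # majority classes are capped
        keep.extend(rng.choice(idx, size=take, replace=False))
    return np.sort(np.array(keep))

rng = np.random.default_rng(0)
labels = np.array([0] * 10 + [1] * 3)            # toy imbalanced label vector
kept = undersample(labels, 4, rng)
print(kept.size)  # 7  (4 kept from class 0, all 3 from class 1)
```

Note that patient-level disjointness of the test split must still be enforced separately, before any per-class sampling.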
3.2. Class Balance Strategy
3.3. Pre-Processing
3.4. Data Augmentation
- Horizontal flip (p = 0.5);
- Vertical flip (p = 0.2);
- Rotation ± 15° (p = 0.5);
- Brightness/contrast jitter ± 10% (p = 0.3);
- Gaussian blur (p = 0.2);
- Additive Gaussian noise (p = 0.2).
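The augmentation list above can be sketched as a stochastic pipeline. The sketch below is numpy-only and illustrative (the function name `augment` is ours): it covers the flips, brightness jitter, and additive noise with the stated probabilities; rotation, contrast jitter, and Gaussian blur would in practice come from a library such as albumentations or torchvision and are only noted in comments.

```python
import numpy as np

def augment(img, rng):
    """Apply the stochastic augmentations listed above to one fundus image.

    img: float array in [0, 1], shape (H, W, 3).
    Rotation (+/-15 deg, p = 0.5) and Gaussian blur (p = 0.2) are omitted here;
    a real pipeline would use a library transform for those.
    """
    if rng.random() < 0.5:                                   # horizontal flip, p = 0.5
        img = img[:, ::-1, :]
    if rng.random() < 0.2:                                   # vertical flip, p = 0.2
        img = img[::-1, :, :]
    if rng.random() < 0.3:                                   # brightness jitter +/-10%, p = 0.3
        img = np.clip(img * rng.uniform(0.9, 1.1), 0.0, 1.0)
    if rng.random() < 0.2:                                   # additive Gaussian noise, p = 0.2
        img = np.clip(img + rng.normal(0.0, 0.02, img.shape), 0.0, 1.0)
    return img.astype(np.float32)

rng = np.random.default_rng(0)
out = augment(np.full((64, 64, 3), 0.5, dtype=np.float32), rng)
print(out.shape)  # (64, 64, 3)
```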
3.5. Proposed Models for Diabetic Retinopathy Detection
3.5.1. The Proposed Models’ Novelty
- Hybrid Architectures: By seamlessly integrating CNNs [44], transformers, and Bi-LSTMs, these models combine the strengths of multiple deep learning paradigms to handle spatial, sequential, and global information efficiently.
- Clinical Trustworthiness and Explainability: By incorporating SHAP explainability, these models offer clear insight into their decision-making process, satisfying the crucial requirement for interpretability in clinical contexts.
- Efficiency Optimization: Genetic algorithms provide effective hyperparameter tuning, yielding higher performance while preserving adaptability to a variety of datasets.
- Attention to Detail: Sophisticated components such as Bi-LSTM structures and SPCL transformers improve the capacity to capture progressive and localized patterns, which are essential for recognizing the complex stages of diabetic retinopathy.
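The genetic-algorithm tuning mentioned above (detailed in Section 3.5.5) can be sketched in pure Python. Everything here is a toy stand-in: the search space `SPACE` is hypothetical, and `fitness` is a synthetic surrogate, whereas in the paper the fitness of a candidate would be the validation accuracy of a Bi-LSTM trained with those hyperparameters.

```python
import random

# Hypothetical hyperparameter search space for the Bi-LSTM.
SPACE = {"units": [64, 128, 256], "lr": [1e-4, 5e-4, 1e-3], "dropout": [0.2, 0.3, 0.5]}

def fitness(ind):
    # Synthetic surrogate; in practice: train the Bi-LSTM and return val accuracy.
    return -abs(ind["units"] - 128) / 256 - abs(ind["lr"] - 5e-4) - ind["dropout"] * 0.1

def random_ind():
    return {k: random.choice(v) for k, v in SPACE.items()}

def crossover(a, b):
    # Uniform crossover: each gene comes from either parent.
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(ind, p=0.2):
    # With probability p, resample a gene from its allowed values.
    return {k: (random.choice(SPACE[k]) if random.random() < p else v)
            for k, v in ind.items()}

def evolve(pop_size=10, generations=15):
    random.seed(0)
    pop = [random_ind() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # elitist selection: keep top half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best)
```

Because parents are carried over each generation, the best candidate never regresses, which is the usual elitism choice when each fitness evaluation (a full training run) is expensive.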
3.5.2. Hybrid Bi-LSTM Model with SHAP Explainability
3.5.3. Explainability with SHAP
3.5.4. EfficientNetB0 with SPCL Transformer
3.5.5. Genetic Algorithms for Bi-LSTM Hyperparameter Optimization
3.5.6. Ensembled Classification Using ResNet50 and Bi-LSTM
4. Experimental Results and Discussion
4.1. Evaluation Metrics
- TP = true positive;
- TN = true negative;
- FP = false positive;
- FN = false negative.
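The standard metrics reported in the tables below follow directly from these counts; as a reference implementation (the function name is ours):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1 score from confusion-matrix counts."""
    accuracy  = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall    = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, precision, recall, f1

# Toy confusion-matrix counts, purely for illustration.
print([round(m, 3) for m in classification_metrics(90, 85, 10, 15)])
# [0.875, 0.9, 0.857, 0.878]
```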
4.2. Results and Discussion
4.2.1. Contrast Models
4.2.2. Proposed RetinoDeep Models
4.2.3. SHAP Explainability and Performance
4.2.4. Graphs of SHAP Feature Importance
- Image 1—Label 0 (healthy). The highest mean |SHAP| values (<0.06) belong to latent features 1199, 1058, and 1164, indicating that uniform background texture and intact vascular geometry drive the model’s “no-DR” prediction.
- Image 2—Label 0 (healthy). A nearly identical importance profile confirms that color homogeneity and optic-disc morphology consistently dominate the decision; mid-tier shifts reflect minor illumination differences without affecting the healthy label.
- Image 3—Label 0 (healthy). Core features remain predominant, with a modest rise in attributions for features 1229 and 1155, attributed to disc–fovea contrast variation. All contributions stay below the pathological threshold, supporting a non-diseased classification.
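The per-image rankings above come from aggregating attributions into mean |SHAP| values per latent feature. A minimal numpy sketch of that aggregation (the array `shap_vals` is a random toy stand-in; in the paper it would come from a SHAP explainer applied to the model's latent features, e.g., the 1280-dimensional EfficientNetB0 embedding):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for per-sample, per-feature SHAP attributions: (n_samples, n_features).
shap_vals = rng.normal(0.0, 0.02, size=(3, 1280))

mean_abs = np.abs(shap_vals).mean(axis=0)    # mean |SHAP| per latent feature
top5 = np.argsort(mean_abs)[::-1][:5]        # most influential features, descending
print(top5)
```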
4.2.5. SHAP Heatmaps
- Row 1—Label 0 (healthy). A uniformly blue overlay yields negative SHAP values across the fundus, signaling that lesion-free regions lower the model’s DR probability. The thin red band at the superior rim is an illumination artifact and does not influence the final score.
- Row 2—Label 1 (mild NPDR). High positive SHAP values (red) cluster around the macula and major vessels, coinciding with micro-aneurysms and punctate hemorrhages. These features raise the predicted likelihood of disease and align precisely with the ground-truth label.
- Row 3—Label 0 (healthy). Predominantly blue shading once again supports a healthy classification; only faint red near the optic disc appears, indicating minimal contribution from normal anatomical structures.
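The red/blue overlays described above map signed attributions onto the fundus image: positive SHAP values (raising DR probability) tint pixels red, negative values tint them blue. A minimal numpy sketch, assuming a per-pixel attribution map `attr` (hypothetical here; the paper derives it from the trained classifier):

```python
import numpy as np

def shap_overlay(img, attr):
    """img: (H, W, 3) float in [0, 1]; attr: (H, W) signed SHAP attributions."""
    scale = np.abs(attr).max() or 1.0
    a = attr / scale                                                   # normalize to [-1, 1]
    overlay = img.copy()
    overlay[..., 0] = np.clip(img[..., 0] + np.maximum(a, 0), 0, 1)    # red:  positive SHAP
    overlay[..., 2] = np.clip(img[..., 2] + np.maximum(-a, 0), 0, 1)   # blue: negative SHAP
    return overlay

img = np.full((8, 8, 3), 0.5)                 # toy gray "fundus"
attr = np.zeros((8, 8))
attr[0, 0], attr[7, 7] = 0.3, -0.3            # one positive, one negative pixel
out = shap_overlay(img, attr)
print(out[0, 0, 0] > out[7, 7, 0])  # True: the positive-SHAP pixel is redder
```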
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Grzybowski, A.; Singhanetr, P.; Nanegrungsunk, O.; Ruamviboonsuk, P. Artificial Intelligence for Diabetic Retinopathy Screening Using Color Retinal Photographs: From Development to Deployment. Ophthalmol. Ther. 2023, 12, 1419–1437.
- Venkatesh, R.; Gandhi, P.; Choudhary, A.; Kathare, R.; Chhablani, J.; Prabhu, V.; Bavaskar, S.; Hande, P.; Shetty, R.; Reddy, N.G.; et al. Evaluation of Systemic Risk Factors in Patients with Diabetes Mellitus for Detecting Diabetic Retinopathy with Random Forest Classification Model. Diagnostics 2024, 14, 1765.
- Abushawish, I.Y.; Modak, S.; Abdel-Raheem, E.; Mahmoud, S.A.; Hussain, A.J. Deep Learning in Automatic Diabetic Retinopathy Detection and Grading Systems: A Comprehensive Survey and Comparison of Methods. IEEE Access 2024, 12, 84785–84802.
- Paranjpe, M.J.; Kakatkar, M.N. Review of methods for diabetic retinopathy detection and severity classification. Int. J. Res. Eng. Technol. 2014, 3, 619–624.
- van der Heijden, A.A.; Nijpels, G.; Badloe, F.; Lovejoy, H.L.; Peelen, L.M.; Feenstra, T.L.; Moons, K.G.; Slieker, R.C.; Herings, R.M.; Elders, P.J.; et al. Prediction models for development of retinopathy in people with type 2 diabetes: Systematic review and external validation in a Dutch primary care setting. Diabetologia 2020, 63, 1110–1119.
- Tan, Y.Y.; Kang, H.G.; Lee, C.J.; Kim, S.S.; Park, S.; Thakur, S.; Da Soh, Z.; Cho, Y.; Peng, Q.; Lee, K.; et al. Prognostic potentials of AI in ophthalmology: Systemic disease forecasting via retinal imaging. Eye Vis. 2024, 11, 17.
- Das, R.; Spence, G.; Hogg, R.E.; Stevenson, M.; Chakravarthy, U. Disorganization of Inner Retina and Outer Retinal Morphology in Diabetic Macular Edema. JAMA Ophthalmol. 2018, 136, 202–208.
- Bansal, V.; Jain, A.; Walia, N.K. Diabetic retinopathy detection through generative AI techniques: A review. Results Opt. 2024, 16, 100700.
- Aspinall, P.A.; Kinnear, P.R.; Duncan, L.J.; Clarke, B.F. Prediction of diabetic retinopathy from clinical variables and color vision data. Diabetes Care 1983, 6, 144–148.
- Kong, M.; Song, S.J. Artificial Intelligence Applications in Diabetic Retinopathy: What We Have Now and What to Expect in the Future. Endocrinol. Metab. 2024, 39, 416–424.
- Zhang, Z.; Deng, C.; Paulus, Y.M. Advances in Structural and Functional Retinal Imaging and Biomarkers for Early Detection of Diabetic Retinopathy. Biomedicines 2024, 12, 1405.
- Tan, T.E.; Wong, T.Y. Diabetic retinopathy: Looking forward to 2030. Front. Endocrinol. 2023, 13, 1077669.
- Das, D.; Biswas, S.K.; Bandyopadhyay, S. A critical review on diagnosis of diabetic retinopathy using machine learning and deep learning. Multimed. Tools Appl. 2022, 81, 25613–25655.
- Micheletti, J.M.; Hendrick, A.M.; Khan, F.N.; Ziemer, D.C.; Pasquel, F.J. Current and Next Generation Portable Screening Devices for Diabetic Retinopathy. J. Diabetes Sci. Technol. 2016, 10, 295–300.
- Steffy, R.A.; Evelin, D. Implementation and prediction of diabetic retinopathy types based on deep convolutional neural networks. Int. J. Adv. Trends Eng. Manag. 2023, 2, 16–28.
- Muthusamy, D.; Palani, P. Deep learning model using classification for diabetic retinopathy detection: An overview. Artif. Intell. Rev. 2024, 57, 185.
- Senapati, A.; Tripathy, H.K.; Sharma, V.; Gandomi, A.H. Artificial intelligence for diabetic retinopathy detection: A systematic review. Inform. Med. Unlocked 2023, 45, 101445.
- Alsadoun, L.; Ali, H.; Mushtaq, M.M.; Mushtaq, M.; Burhanuddin, M.; Anwar, R.; Liaqat, M.; Bokhari, S.F.H.; Hasan, A.H.; Ahmed, F. Artificial Intelligence (AI)-Enhanced Detection of Diabetic Retinopathy From Fundus Images: The Current Landscape and Future Directions. Cureus 2024, 16, e67844.
- Bellemo, V.; Lim, Z.W.; Lim, G.; Nguyen, Q.D.; Xie, Y.; Yip, M.Y.; Hamzah, H.; Ho, J.; Lee, X.Q.; Hsu, W.; et al. Artificial intelligence using deep learning to screen for referable and vision-threatening diabetic retinopathy in Africa: A clinical validation study. Lancet Digit. Health 2019, 1, e35–e44.
- Huang, X.; Zhang, L.; Chen, W. Optimized ResNet for Diabetic Retinopathy Grading on EyePACS Dataset. arXiv 2021, arXiv:2110.14160.
- Sadek, N.A.; Al-Dahan, Z.T.; Rattan, S.A.; Hussein, A.F.; Geraghty, B.; Kazaili, A. Advanced CNN Deep Learning Model for Diabetic Retinopathy Classification. J. Biomed. Phys. Eng. 2025, 2025, 1–14.
- Somasundaram, S.K.; Alli, P. A Machine Learning Ensemble Classifier for Early Prediction of Diabetic Retinopathy. J. Med. Syst. 2017, 41, 201.
- Dai, L.; Sheng, B.; Chen, T.; Wu, Q.; Liu, R.; Cai, C.; Wu, L.; Yang, D.; Hamzah, H.; Liu, Y.; et al. A deep learning system for predicting time to progression of diabetic retinopathy. Nat. Med. 2024, 30, 584–594.
- Arora, L.; Singh, S.K.; Kumar, S.; Gupta, H.; Alhalabi, W.; Arya, V.; Bansal, S.; Chui, K.T.; Gupta, B.B. Ensemble deep learning and EfficientNet for accurate diagnosis of diabetic retinopathy. Sci. Rep. 2024, 14, 30554.
- Shen, Z.; Wu, Q.; Wang, Z.; Chen, G.; Lin, B. Diabetic Retinopathy Prediction by Ensemble Learning Based on Biochemical and Physical Data. Sensors 2021, 21, 3663.
- Yao, J.; Lim, J.; Lim, G.Y.S.; Ong, J.C.L.; Ke, Y.; Tan, T.F.; Tan, T.E.; Vujosevic, S.; Ting, D.S.W. Novel artificial intelligence algorithms for diabetic retinopathy and diabetic macular edema. Eye Vis. 2024, 11, 23.
- Oulhadj, M.; Riffi, J.; Chaimae, K.; Mahraz, A.M.; Ahmed, B.; Yahyaouy, A.; Fouad, C.; Meriem, A.; Idriss, B.A.; Tairi, H. Diabetic retinopathy prediction based on deep learning and deformable registration. Multimed. Tools Appl. 2022, 81, 28709–28727.
- Gupta, S.; Thakur, S.; Gupta, A. Optimized hybrid machine learning approach for smartphone based diabetic retinopathy detection. Multimed. Tools Appl. 2022, 81, 14475–14501.
- Gadekallu, T.R.; Khare, N.; Bhattacharya, S.; Singh, S.; Maddikunta, P.K.R.; Srivastava, G. Deep neural networks to predict diabetic retinopathy. J. Ambient. Intell. Humaniz. Comput. 2020, 14, 5407–5420.
- Bodapati, J.D.; Balaji, B.B. Self-adaptive stacking ensemble approach with attention based deep neural network models for diabetic retinopathy severity prediction. Multimed. Tools Appl. 2023, 83, 1083–1102.
- Bora, A.; Balasubramanian, S.; Babenko, B.; Virmani, S.; Venugopalan, S.; Mitani, A.; de Oliveira Marinho, G.; Cuadros, J.; Ruamviboonsuk, P.; Corrado, G.S.; et al. Predicting the risk of developing diabetic retinopathy using deep learning. Lancet Digit. Health 2021, 3, e10–e19.
- Majaw, E.A.; Sundar, G.N.; Narmadha, D.; Thangavel, S.K.; Ajibesin, A.A. EfficientNetB0-based Automated Diabetic Retinopathy Classification in Fundus Images. In Proceedings of the 2024 3rd International Conference on Automation, Computing and Renewable Systems (ICACRS), Pudukkottai, India, 4–6 December 2024; pp. 1752–1757.
- Albelaihi, A.; Ibrahim, D.M. DeepDiabetic: An identification system of diabetic eye diseases using deep neural networks. IEEE Access 2024, 12, 10769–10789.
- Balakrishnan, U.; Venkatachalapathy, K.; Marimuthu, G. A hybrid PSO-DEFS based feature selection for the identification of diabetic retinopathy. Curr. Diabetes Rev. 2015, 11, 182–190.
- Bhardwaj, P.; Gupta, P.; Guhan, T.; Srinivasan, K. Early diagnosis of retinal blood vessel damage via Deep Learning-Powered Collective Intelligence models. Comput. Math. Methods Med. 2022, 2022, 3571364.
- Hayati, M.; Muchtar, K.; Maulina, N.; Syamsuddin, I.; Elwirehardja, G.N.; Pardamean, B. Impact of CLAHE-based image enhancement for diabetic retinopathy classification through deep learning. Procedia Comput. Sci. 2022, 216, 57–66.
- Mane, D.; Sangve, S.; Kumbharkar, P.; Ratnaparkhi, S.; Upadhye, G.; Borde, S. A diabetic retinopathy detection using customized convolutional neural network. Int. J. Electr. Electron. Res. 2023, 11, 609–615.
- Nofriansyah, D.; Anwar, B.; Ramadhan, M. Biometric and Data Secure Application for Eye Iris’s Recognition Using Hopfield Discrete Algorithm and Rivest Shamir Adleman Algorithm. In Proceedings of the 2016 1st International Conference on Technology, Innovation and Society (ICTIS), Padang, Indonesia, 20–21 July 2016; pp. 257–263.
- Nneji, G.U.; Cai, J.; Deng, J.; Monday, H.N.; Hossin, M.A.; Nahar, S. Identification of diabetic retinopathy using weighted fusion deep learning based on dual-channel fundus scans. Diagnostics 2022, 12, 540.
- Benítez, V.E.C.; Matto, I.C.; Román, J.C.M.; Noguera, J.L.V.; García-Torres, M.; Ayala, J.; Pinto-Roa, D.P.; Gardel-Sotomayor, P.E.; Facon, J.; Grillo, S.A. Dataset from fundus images for the study of diabetic retinopathy. Data Brief 2021, 36, 107068.
- Eyepacs, Aptos, Messidor Diabetic Retinopathy. Kaggle. Available online: https://www.kaggle.com/datasets/ascanipek/eyepacs-aptos-messidor-diabetic-retinopathy (accessed on 6 January 2024).
- Shamrat, F.J.M.; Shakil, R.; Akter, B.; Ahmed, M.Z.; Ahmed, K.; Bui, F.M.; Moni, M.A. An advanced deep neural network for fundus image analysis and enhancing diabetic retinopathy detection. Healthc. Anal. 2024, 5, 100303.
- Wu, L.; Fernandez-Loaiza, P.; Sauma, J.; Hernandez-Bogantes, E.; Masis, M. Classification of diabetic retinopathy and diabetic macular edema. World J. Diabetes 2013, 4, 290–294.
- Bhulakshmi, D.; Rajput, D.S. A systematic review on diabetic retinopathy detection and classification based on deep learning techniques using fundus images. PeerJ Comput. Sci. 2024, 10, e1947.
- Rajalakshmi, R.; Prathiba, V.; Arulmalar, S.; Usha, M. Review of retinal cameras for global coverage of diabetic retinopathy screening. Eye 2021, 35, 162–172.
- Akshita, L.; Singhal, H.; Dwivedi, I.; Ghuli, P. Diabetic retinopathy classification using deep convolutional neural network. Indones. J. Electr. Eng. Comput. Sci. 2021, 24, 208–216.
- Balaji, S.; Karthik, B.; Gokulakrishnan, D. Prediction of Diabetic Retinopathy using Deep Learning with Preprocessing. EAI Endorsed Trans. Pervasive Health Technol. 2024, 10, 1.
Reference | Datasets Used | Models Considered | Key Results |
---|---|---|---|
[15] Steffy (2023) | APTOS-2019 (3662 images) | ResNet50; DenseNet121; InceptionV3 | DenseNet121 achieved AUC = 0.965; extensive augmentation mitigated class imbalance. |
[16] Muthusamy and Palani (2024) | Kaggle DR (EyePACS, APTOS; 40,000 images) | InceptionV3; Xception | 87.12% (InceptionV3) and 74.49% (Xception) accuracy; InceptionV3 excelled with augmentation. |
[17] Senapati et al. (2023) | EyePACS, Messidor, IDRiD (50,000 images) | Survey of >30 CNN pipelines | Transfer-learning + attention ensembles identified as best practice, with top sensitivities >95%. |
[18] Vani Ashok et al. (2024) | APTOS-2019 (3662 images) | ResNet50 | Automated DR staging with 82% accuracy—first end-to-end staging pipeline using only fundus inputs. |
[19] Bellemo et al. (2019) | Messidor (1200 images) + African screening cohort (50,000 images) | Inception-V3 | 94% sensitivity and 92% specificity for referable DR across diverse populations. |
[20] Huang et al. (2021) | EyePACS (42k) | ResNet-50 with optimized training | Vanilla ResNet-50 initially achieved a Quadratically Weighted Kappa of 0.7435. |
[21] Sadek et al. (2025) | EyePACS (35,000) + IDRiD (413) + Iraqi dataset (700) | CNN, Decision Tree, Logistic Regression | Logistic Regression achieved highest accuracy/sensitivity: EyePACS 99.4%/99.4%. |
[22] Somasundaram (2017) | Proprietary fundus set: 75 images (13 normal, 62 DR) | Ensemble (SVM, k-NN, Decision Tree) | Achieved 91% overall accuracy; balanced sensitivity (89%) and specificity (92%) for early DR detection. |
[23] Dai et al. (2024) | Longitudinal fundus series (10,000 eyes, 3-yr follow-up) | Temporal CNN + self-attention RNN | MAE = 3.5 months for time-to-progression prediction, enabling personalized monitoring. |
[24] Arora et al. (2024) | APTOS + DDR public sets (10,000 images) | Ensemble (EfficientNet-B0, DenseNet161, ResNet50) | Referable DR AUC = 0.981 via multi-model aggregation, showing consistent gains. |
[25] Shen et al. (2021) | Clinical and biochemical records (n ≈ 2000) | Random Forest + SVM ensemble | Achieved 87.2% accuracy for DR risk stratification using non-imaging predictors alone. |
[26] Yao et al. (2024) | Multimodal OCT + fundus (n ≈ 5000) | CNN–Transformer fusion | 93% combined DR/DME detection accuracy; fusion outperformed single-modality baselines. |
[27] Oulhadj et al. (2022) | Paired OCTA + color fundus images (n ≈ 1500) | Deformable-registration DNN | 95.3% sensitivity for early microaneurysm detection via learned registration. |
[28] Gupta et al. (2022) | Smartphone fundus captures (n = 500) | Hybrid feature-selection + DNN | 91.8% accuracy on low-resolution mobile images, validating point-of-care screening. |
[29] Gadekallu et al. (2020) | APTOS-2019 (3662 images) + Messidor (1200 images) | 6-layer CNN | Combined accuracy of 94.1% on merged dataset; robust cross-source feature learning. |
[30] Bodapati and Balaji (2023) | Messidor (1200) + EyePACS (35,126) | Stacked ensemble (CNNs + SVM) | 96.2% accuracy and Cohen’s κ = 0.92 for five-level DR severity grading. |
[31] Bora et al. (2021) | UK Biobank fundus (65,000 images + 5-yr clinical metadata) | Inception-based CNN + metadata fusion | Predicted 5-yr DR risk with AUC = 0.87, outperforming logistic regression (AUC = 0.78). |
[32] Majaw et al. (2024) | APTOS (3662 retinal fundus images) | EfficientNetB0 | 98.78% accuracy on five-class DR grading, demonstrating state-of-the-art performance with EfficientNetB0. |
[33] Albelaihi and Ibrahim (2024) | Six public datasets (DIARETDB0, DIARETDB1, Messidor, HEI-MED, Ocular, Retina; total 1228 images) | VGG16; EfficientNetB0; ResNet152V2; ResNet152V2 + GRU; ResNet152V2 + Bi-GRU | EfficientNetB0 led all variants with 98.76% accuracy, 98.76% recall, 98.76% precision, and AUC 0.9977. |
[34] Balakrishnan et al. (2015) | Custom fundus set (75 images: 13 normal, 62 DR) | PSO-DEFS feature selection + Multi-Relevance Vector Machine (M-RVM) | PSO-DEFS + M-RVM achieved 99.12% accuracy, 98.2% sensitivity, and 98.7% specificity, outperforming SVM and PNN baselines. |
[35] Bhardwaj et al. (2022) | Severity-graded fundus images (APTOS) | TDCN-PSO and TDCN-ACO (swarm-optimized CNNs) | TDCN-PSO yielded 90.3% accuracy, AUC 0.956, and Cohen’s κ = 0.967; TDCN-ACO found architectures faster with only a marginal performance drop. |
[36] Hayati et al. (2022) | Public fundus repositories (IDRiD, APTOS) | ResNet-34; VGG16; EfficientNet (on original, CLAHE- and Unsharp-masked images) | CLAHE preprocessing improved accuracy: EfficientNet from 95 → 97%, VGG16 from 87 → 91%, InceptionV3 from 90 → 95%, all surpassing original baselines. |
[37] Mane et al. (2023) | MESSIDOR (560 train, 163 test images) | Customized CNN (CCNN) | CCNN achieved 97.24% test accuracy, outperforming prior methods on the same test split. |
[38] Nofriansyah et al. (2016) | In-house iris database (50 patterns from 10 subjects) | Hopfield Discrete Algorithm + RSA encryption + ANN | Demonstrated >90% biometric recognition accuracy with secure iris localization and classification. |
[39] Nneji et al. (2022b) | Messidor (2000 images) + EyePACS (2000 selected images) | CLAHE + InceptionV3 channel; CECED + VGG-16 channel; Weighted Fusion DL Network (WFDLN) | WFDLN outperformed each single-channel model: Messidor: 98.5% ACC, 98.9% SEN, 98.0% SPE; EyePACS: 98.0% ACC, 98.7% SEN, 97.8% SPE; AUC = 0.991 on Messidor. |
Severity Class | Image Count | Ratio (%) |
---|---|---|
No DR | 187 | 24.4 |
Mild NPDR | 4 | 0.6 |
Moderate NPDR | 80 | 10.6 |
Severe NPDR | 176 | 23.4 |
Very Severe NPDR | 108 | 14.3 |
PDR | 88 | 11.6 |
Advanced PDR | 114 | 15.1 |
Total | 757 | 100 |
Class Label | Image Count | Training Sample Size |
---|---|---|
0 | 55,200 | 4000 |
1 | 18,500 | 4000 |
2 | 24,200 | 4000 |
3 | 7936 | 4000 |
4 | 9475 | 4000 |
Total | 115,311 | 20,000 |
S.N. | Name | Training Accuracy (%) | Val Accuracy (%) | F1 Score (%) | Recall (%) | Precision (%) |
---|---|---|---|---|---|---|
Baseline Models: | ||||||
1 | EfficientNetB0 [32] | 80.37 | 72.28 | 80 | 80 | 81 |
2 | Hybrid Bi-LSTM [33] with EfficientNetB0 | 91.88 | 78.58 | 90 | 90 | 91 |
3 | Hybrid Bi-GRU [33] with EfficientNetB0 | 90.76 | 74.12 | 90 | 90 | 91 |
4 | Bi-LSTM Optimized Using RSA [38] | 88.79 | 71.56 | 88 | 88 | 89 |
5 | Bi-LSTM Model with PSO [34] | 93.97 | 76.64 | 93 | 94 | 94 |
6 | Bi-LSTM Model with ACO [35] | 68.86 | 52.69 | 68 | 68 | 69 |
7 | RESNET with Filters [36] | 90.27 | 74.52 | 89 | 87 | 87 |
8 | CNN [37] | 80.22 | 64.17 | 79 | 79 | 80 |
Proposed Models: | ||||||
1 | Bi-LSTM with SHAP Explainability | 97.80 | 81.79 | 97 | 96 | 98 |
2 | EfficientNetB0 with SPCL Transformer | 94.84 | 79.80 | 93 | 94 | 95 |
3 | Genetic Algorithms for Bi-LSTM Optimization | 93.56 | 80.64 | 93 | 93 | 93 |
4 | RESNET Ensembled with Bi-LSTM | 97.48 | 82.65 | 97 | 97 | 98 |
S.N. | Name | Training Accuracy (%) | Val Accuracy (%) | F1 Score (%) | Recall (%) | Precision (%) |
---|---|---|---|---|---|---|
Baseline Models: | ||||||
1 | EfficientNetB0 [32] | 89.87 | 74.12 | 89 | 89 | 90 |
2 | Hybrid Bi-LSTM [33] with EfficientNetB0 | 92.49 | 79.32 | 91 | 92 | 92 |
3 | Hybrid Bi-GRU [33] with EfficientNetB0 | 91.66 | 75.02 | 92 | 91 | 92 |
4 | Bi-LSTM Optimized Using RSA [38] | 89.11 | 71.91 | 89 | 89 | 89 |
5 | Bi-LSTM Model with PSO [34] | 94.60 | 77.04 | 95 | 94 | 95 |
6 | Bi-LSTM Model with ACO [35] | 73.92 | 59.90 | 74 | 73 | 74 |
7 | RESNET with Filters [36] | 91.47 | 75.73 | 91 | 91 | 91 |
8 | CNN [37] | 81.52 | 68.69 | 81 | 81 | 82 |
Proposed Models: | ||||||
1 | Bi-LSTM with SHAP Explainability | 98.30 | 82.17 | 92 | 92 | 92 |
2 | EfficientNetB0 with SPCL Transformer | 96.23 | 79.80 | 91 | 91 | 92 |
3 | Genetic Algorithms for Bi-LSTM Optimization | 94 | 81.09 | 94 | 94 | 95 |
4 | RESNET Ensembled with Bi-LSTM | 97.48 | 82.65 | 92 | 93 | 93 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kansal, S.; Mishra, B.K.; Sethi, S.; Vinayak, K.; Kansal, P.; Narayan, J. RetinoDeep: Leveraging Deep Learning Models for Advanced Retinopathy Diagnostics. Sensors 2025, 25, 5019. https://doi.org/10.3390/s25165019