Artificial Intelligence Application in Cornea and External Diseases
Abstract
1. Introduction
2. Method
3. Background of Artificial Intelligence Tools
4. Keratoconus
5. Dry Eye Disease
6. Infectious Keratitis
7. Pterygium
8. Fuchs Endothelial Corneal Dystrophy
9. Corneal Transplantation
10. Discussion
11. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| AI | Artificial intelligence |
| DED | Dry eye disease |
| IK | Infectious keratitis |
| MeSH | Medical Subject Headings |
| ML | Machine learning |
| DL | Deep learning |
| ANN | Artificial neural network |
| CNN | Convolutional neural network |
| DNN | Deep neural network |
| AS-OCT | Anterior segment optical coherence tomography |
| KC | Keratoconus |
| FFKC | Forme fruste keratoconus |
| SVM | Support vector machine |
| LDA | Linear discriminant analysis |
| RF | Random forest |
| DT | Decision tree |
| NNs | Neural networks |
| TVST | Translational Vision Science and Technology |
| AUC | Area Under the Curve |
| AUROC | Area Under the Receiver Operating Characteristic Curve |
| CDSSs | Clinical Decision Support Systems |
| XAI | eXplainable Artificial Intelligence |
| EHRs | Electronic Health Records |
| MGD | Meibomian gland dysfunction |
| TMH | Tear meniscus height |
| TBUT | Tear film break-up time |
| OCT | Optical coherence tomography |
| ADES | Asia Dry Eye Society |
| OSDI | Ocular surface disease index |
| IVCM | In vivo confocal microscopy |
| FK | Fungal keratitis |
| BK | Bacterial keratitis |
| AK | Acanthamoeba keratitis |
| MK | Microbial keratitis |
| VGG | Visual Geometry Group |
| DSLR | Digital single-lens reflex |
| BCVA | Best-corrected visual acuity |
| LLM | Large language model |
| FECD | Fuchs endothelial corneal dystrophy |
| SM | Specular microscopy |
| WFSM | Widefield specular microscopy |
| ECD | Endothelial cell density |
| fNLA | Feedback non-local attention |
| CV | Coefficient of variation |
| HEX | Hexagonality |
| MAE | Mean absolute error |
| ECCT | Enhanced compact convolutional transformer |
| CEDs | Corneal endothelium diseases |
| APL | Average perimeter length |
| PK | Penetrating keratoplasty |
| DALK | Deep anterior lamellar keratoplasty |
| DSAEK | Descemet stripping automated endothelial keratoplasty |
| DMEK | Descemet membrane endothelial keratoplasty |
| LASSO | Least absolute shrinkage and selection operator |
| CTA | Classification tree analysis |
| RFC | Random forest classification |
| FL | Federated Learning |
| RSFs | Random survival forests |
| RCTs | Randomized controlled trials |
| HIPAA | Health Insurance Portability and Accountability Act |
| GDPR | General Data Protection Regulation |
| FDA | Food and Drug Administration |
| Sens | Sensitivity |
| Spec | Specificity |
| NCS-Ophs | Non-consultant specialist ophthalmologists |
References
- Maehara, H.; Ueno, Y.; Yamaguchi, T.; Kitaguchi, Y.; Miyazaki, D.; Nejima, R.; Inomata, T.; Kato, N.; Chikama, T.-i.; Ominato, J.; et al. Artificial Intelligence Support Improves Diagnosis Accuracy in Anterior Segment Eye Diseases. Sci. Rep. 2025, 15, 5117. [Google Scholar] [CrossRef]
- Pagano, L.; Posarelli, M.; Giannaccare, G.; Coco, G.; Scorcia, V.; Romano, V.; Borgia, A. Artificial Intelligence in Cornea and Ocular Surface Diseases. Saudi J. Ophthalmol. 2023, 37, 179–184. [Google Scholar] [CrossRef]
- Ji, Y.; Liu, S.; Hong, X.; Lu, Y.; Wu, X.; Li, K.; Li, K.; Liu, Y. Advances in Artificial Intelligence Applications for Ocular Surface Diseases Diagnosis. Front. Cell Dev. Biol. 2022, 10, 1107689. [Google Scholar] [CrossRef] [PubMed]
- Chen, X.; Zhao, J.; Iselin, K.C.; Borroni, D.; Romano, D.; Gokul, A.; McGhee, C.N.J.; Zhao, Y.; Sedaghat, M.R.; Momeni-Moghaddam, H.; et al. Keratoconus Detection of Changes Using Deep Learning of Colour-Coded Maps. BMJ Open Ophthalmol. 2021, 6, e000824. [Google Scholar] [CrossRef] [PubMed]
- Rabinowitz, Y.S. Keratoconus. Surv. Ophthalmol. 1998, 42, 297–319. [Google Scholar] [CrossRef]
- Zhang, X.; Munir, S.Z.; Sami Karim, S.A.; Munir, W.M. A Review of Imaging Modalities for Detecting Early Keratoconus. Eye 2021, 35, 173–187. [Google Scholar] [CrossRef]
- Maeda, N.; Klyce, S.D.; Smolek, M.K.; Thompson, H.W. Automated Keratoconus Screening with Corneal Topography Analysis. Investig. Ophthalmol. Vis. Sci. 1994, 35, 2749–2757. [Google Scholar]
- Maeda, N. Comparison of Methods for Detecting Keratoconus Using Videokeratography. Arch. Ophthalmol. 1995, 113, 870. [Google Scholar] [CrossRef]
- Arbelaez, M.C.; Versaci, F.; Vestri, G.; Barboni, P.; Savini, G. Use of a Support Vector Machine for Keratoconus and Subclinical Keratoconus Detection by Topographic and Tomographic Data. Ophthalmology 2012, 119, 2231–2238. [Google Scholar] [CrossRef] [PubMed]
- Smadja, D.; Touboul, D.; Cohen, A.; Doveh, E.; Santhiago, M.R.; Mello, G.R.; Krueger, R.R.; Colin, J. Detection of Subclinical Keratoconus Using an Automated Decision Tree Classification. Am. J. Ophthalmol. 2013, 156, 237–246.e1. [Google Scholar] [CrossRef]
- Ruiz Hidalgo, I.; Rodriguez, P.; Rozema, J.J.; Ní Dhubhghaill, S.; Zakaria, N.; Tassignon, M.J.; Koppen, C. Evaluation of a Machine-Learning Classifier for Keratoconus Detection Based on Scheimpflug Tomography. Cornea 2016, 35, 827–832. [Google Scholar] [CrossRef]
- Herber, R.; Pillunat, L.E.; Raiskup, F. Development of a Classification System Based on Corneal Biomechanical Properties Using Artificial Intelligence Predicting Keratoconus Severity. Eye Vis. 2021, 8, 21. [Google Scholar] [CrossRef]
- Maeda, N.; Klyce, S.D.; Smolek, M.K. Neural Network Classification of Corneal Topography. Preliminary Demonstration. Investig. Ophthalmol. Vis. Sci. 1995, 36, 1327–1335. [Google Scholar]
- Smolek, M.K.; Klyce, S.D. Current Keratoconus Detection Methods Compared With a Neural Network Approach. Investig. Ophthalmol. Vis. Sci. 1997, 38, 2290–2299. [Google Scholar]
- Accardo, P.; Pensiero, S. Neural Network-Based System for Early Keratoconus Detection from Corneal Topography. J. Biomed. Inform. 2002, 35, 151–159. [Google Scholar] [CrossRef] [PubMed]
- Lavric, A.; Valentin, P. KeratoDetect: Keratoconus Detection Algorithm Using Convolutional Neural Networks. Comput. Intell. Neurosci. 2019, 2019, 1–9. [Google Scholar] [CrossRef]
- Kamiya, K.; Ayatsuka, Y.; Kato, Y.; Fujimura, F.; Takahashi, M.; Shoji, N.; Mori, Y.; Miyata, K. Keratoconus Detection Using Deep Learning of Colour-Coded Maps with Anterior Segment Optical Coherence Tomography: A Diagnostic Accuracy Study. BMJ Open 2019, 9, e031313. [Google Scholar] [CrossRef]
- Elsawy, A.; Eleiwa, T.; Chase, C.; Ozcan, E.; Tolba, M.; Feuer, W.; Abdel-Mottaleb, M.; Abou Shousha, M. Multidisease Deep Learning Neural Network for the Diagnosis of Corneal Diseases. Am. J. Ophthalmol. 2021, 226, 252–261. [Google Scholar] [CrossRef]
- Lu, N.; Koppen, C.; Hafezi, F.; Ní Dhubhghaill, S.; Aslanides, I.M.; Wang, Q.; Cui, L.; Rozema, J.J. Combinations of Scheimpflug Tomography, Ocular Coherence Tomography and Air-Puff Tonometry Improve the Detection of Keratoconus. Cont. Lens Anterior Eye 2023, 46, 101840. [Google Scholar] [CrossRef]
- Alió Del Barrio, J.L.; Eldanasoury, A.M.; Arbelaez, J.; Faini, S.; Versaci, F. Artificial Neural Network for Automated Keratoconus Detection Using a Combined Placido Disc and Anterior Segment Optical Coherence Tomography Topographer. Transl. Vis. Sci. Technol. 2024, 13, 13. [Google Scholar] [CrossRef]
- Abdelmotaal, H.; Hazarbassanov, R.M.; Salouti, R.; Nowroozzadeh, M.H.; Taneri, S.; Al-Timemy, A.H.; Lavric, A.; Yousefi, S. Keratoconus Detection-based on Dynamic Corneal Deformation Videos Using Deep Learning. Ophthalmol. Sci. 2024, 4, 100380. [Google Scholar] [CrossRef]
- Nusair, O.; Asadigandomani, H.; Farrokhpour, H.; Moosaie, F.; Bibak-Bejandi, Z.; Razavi, A.; Daneshvar, K.; Soleimani, M. Clinical Applications of Artificial Intelligence in Corneal Diseases. Vision 2025, 9, 71. [Google Scholar] [CrossRef]
- Hartmann, L.M.; Langhans, D.S.; Eggarter, V.; Freisenich, T.J.; Hillenmayer, A.; König, S.F.; Vounotrypidis, E.; Wolf, A.; Wertheimer, C.M. Keratoconus Progression Determined at the First Visit: A Deep Learning Approach with Fusion of Imaging and Numerical Clinical Data. Transl. Vis. Sci. Technol. 2024, 13, 7. [Google Scholar] [CrossRef]
- Maile, H.P.; Li, J.P.O.; Fortune, M.D.; Royston, P.; Leucci, M.T.; Moghul, I.; Szabo, A.; Balaskas, K.; Allan, B.D.; Hardcastle, A.J.; et al. Personalized Model to Predict Keratoconus Progression From Demographic, Topographic, and Genetic Data. Am. J. Ophthalmol. 2022, 240, 321–329. [Google Scholar] [CrossRef] [PubMed]
- Ambrósio, R.; Machado, A.P.; Leão, E.; Lyra, J.M.G.; Salomão, M.Q.; Esporcatte, L.G.P.; Da Fonseca Filho, J.B.; Ferreira-Meneses, E.; Sena, N.B.; Haddad, J.S.; et al. Optimized Artificial Intelligence for Enhanced Ectasia Detection Using Scheimpflug-Based Corneal Tomography and Biomechanical Data. Am. J. Ophthalmol. 2023, 251, 126–142. [Google Scholar] [CrossRef] [PubMed]
- Ghasedi, S.; Abdel-Dayem, R. Hybrid Deep Learning and Genetic Algorithm Approach for Detecting Keratoconus Using Corneal Tomography. Int. J. Mach. Learn. 2025, 15, 1–12. [Google Scholar] [CrossRef]
- Hakim, F.E.; Farooq, A.V. Dry Eye Disease: An Update in 2022. JAMA 2022, 327, 478–479. [Google Scholar] [CrossRef]
- Nguyen, T.; Ong, J.; Masalkhi, M.; Waisberg, E.; Zaman, N.; Sarker, P.; Aman, S.; Lin, H.; Luo, M.; Ambrosio, R.; et al. Artificial Intelligence in Corneal Diseases: A Narrative Review. Cont. Lens Anterior Eye 2024, 47, 102284. [Google Scholar] [CrossRef]
- Wang, J.; Yeh, T.N.; Chakraborty, R.; Yu, S.X.; Lin, M.C. A Deep Learning Approach for Meibomian Gland Atrophy Evaluation in Meibography Images. Transl. Vis. Sci. Technol. 2019, 8, 37. [Google Scholar] [CrossRef]
- Yeh, C.H.; Yu, S.X.; Lin, M.C. Meibography Phenotyping and Classification From Unsupervised Discriminative Feature Learning. Transl. Vis. Sci. Technol. 2021, 10, 4. [Google Scholar] [CrossRef]
- Li, S.; Wang, Y.; Yu, C.; Li, Q.; Chang, P.; Wang, D.; Li, Z.; Zhao, Y.; Zhang, H.; Tang, N.; et al. Unsupervised Learning Based on Meibography Enables Subtyping of Dry Eye Disease and Reveals Ocular Surface Features. Investig. Ophthalmol. Vis. Sci. 2023, 64, 43. [Google Scholar] [CrossRef] [PubMed]
- Su, T.Y.; Liu, Z.Y.; Chen, D.Y. Tear Film Break-Up Time Measurement Using Deep Convolutional Neural Networks for Screening Dry Eye Disease. IEEE Sens. J. 2018, 18, 6857–6862. [Google Scholar] [CrossRef]
- Shimizu, E.; Ishikawa, T.; Tanji, M.; Agata, N.; Nakayama, S.; Nakahara, Y.; Yokoiwa, R.; Sato, S.; Hanyuda, A.; Ogawa, Y.; et al. Artificial Intelligence to Estimate the Tear Film Breakup Time and Diagnose Dry Eye Disease. Sci. Rep. 2023, 13, 5822. [Google Scholar] [CrossRef] [PubMed]
- Chase, C.; Elsawy, A.; Eleiwa, T.; Ozcan, E.; Tolba, M.; Abou Shousha, M. Comparison of Autonomous AS-OCT Deep Learning Algorithm and Clinical Dry Eye Tests in Diagnosis of Dry Eye Disease. Clin. Ophthalmol. 2021, 15, 4281–4289. [Google Scholar] [CrossRef]
- Edorh, N.A.; El Maftouhi, A.; Djerada, Z.; Arndt, C.; Denoyer, A. New Model to Better Diagnose Dry Eye Disease Integrating OCT Corneal Epithelial Mapping. Br. J. Ophthalmol. 2022, 106, 1488–1495. [Google Scholar] [CrossRef]
- Zhang, W.; Rong, H.; Hei, K.; Liu, G.; He, M.; Du, B.; Wei, R.; Zhang, Y. A Deep Learning-Assisted Automatic Measurement of Tear Meniscus Height on Ocular Surface Images and Its Application in Myopia Control. Front. Bioeng. Biotechnol. 2025, 13, 1554432. [Google Scholar] [CrossRef]
- Wang, K.; Xu, K.; Chen, X.; He, C.; Zhang, J.; Li, F.; Xiao, C.; Zhang, Y.; Wang, Y.; Yang, W.; et al. Artificial Intelligence-Assisted Tear Meniscus Height Measurement: A Multicenter Study. Quant. Imaging Med. Surg. 2025, 15, 4071–4084. [Google Scholar] [CrossRef]
- Qu, J.H.; Qin, X.R.; Li, C.D.; Peng, R.M.; Xiao, G.G.; Cheng, J.; Gu, S.F.; Wang, H.K.; Hong, J. Fully Automated Grading System for the Evaluation of Punctate Epithelial Erosions Using Deep Neural Networks. Br. J. Ophthalmol. 2023, 107, 453–460. [Google Scholar] [CrossRef]
- Nejat, F.; Eghtedari, S.; Alimoradi, F. Next-Generation Tear Meniscus Height Detecting and Measuring Smartphone-Based Deep Learning Algorithm Leads in Dry Eye Management. Ophthalmol. Sci. 2024, 4, 100546. [Google Scholar] [CrossRef]
- Nair, P.P.; Keskar, M.; Borghare, P.T.; Methwani, D.A.; Nasre, Y.; Chaudhary, M. Artificial Intelligence in Dry Eye Disease: A Narrative Review. Cureus 2024, 16, e70056. [Google Scholar] [CrossRef]
- Heidari, Z.; Hashemi, H.; Sotude, D.; Ebrahimi-Besheli, K.; Khabazkhoob, M.; Soleimani, M.; Djalilian, A.R.; Yousefi, S. Applications of Artificial Intelligence in Diagnosis of Dry Eye Disease: A Systematic Review and Meta-Analysis. Cornea 2024, 43, 1310–1318. [Google Scholar] [CrossRef]
- Essalat, M.; Abolhosseini, M.; Le, T.H.; Moshtaghion, S.M.; Kanavi, M.R. Interpretable Deep Learning for Diagnosis of Fungal and Acanthamoeba Keratitis Using in Vivo Confocal Microscopy Images. Sci. Rep. 2023, 13, 8953. [Google Scholar] [CrossRef]
- Wei, Z.; Wang, S.; Wang, Z.; Zhang, Y.; Chen, K.; Gong, L.; Li, G.; Zheng, Q.; Zhang, Q.; He, Y.; et al. Development and Multi-Center Validation of Machine Learning Model for Early Detection of Fungal Keratitis. eBioMedicine 2023, 88, 104438. [Google Scholar] [CrossRef]
- Kuo, M.T.; Hsu, B.W.Y.; Lin, Y.S.; Fang, P.C.; Yu, H.J.; Chen, A.; Yu, M.S.; Tseng, V.S. Comparisons of Deep Learning Algorithms for Diagnosing Bacterial Keratitis via External Eye Photographs. Sci. Rep. 2021, 11, 24227. [Google Scholar] [CrossRef]
- Soleimani, M.; Esmaili, K.; Rahdar, A.; Aminizadeh, M.; Cheraqpour, K.; Tabatabaei, S.A.; Mirshahi, R.; Bibak-Bejandi, Z.; Mohammadi, S.F.; Koganti, R.; et al. From the Diagnosis of Infectious Keratitis to Discriminating Fungal Subtypes; a Deep Learning-Based Study. Sci. Rep. 2023, 13, 22200. [Google Scholar] [CrossRef] [PubMed]
- Li, Z.; Jiang, J.; Chen, K.; Chen, Q.; Zheng, Q.; Liu, X.; Weng, H.; Wu, S.; Chen, W. Preventing Corneal Blindness Caused by Keratitis Using Artificial Intelligence. Nat. Commun. 2021, 12, 3738. [Google Scholar] [CrossRef]
- Satitpitakul, V.; Puangsricharern, A.; Yuktiratna, S.; Jaisarn, Y.; Sangsao, K.; Puangsricharern, V.; Kasetsuwan, N.; Reinprayoon, U.; Kittipibul, T. A Convolutional Neural Network Using Anterior Segment Photos for Infectious Keratitis Identification. Clin. Ophthalmol. 2025, 19, 73–81. [Google Scholar] [CrossRef]
- Tang, N.; Huang, G.; Lei, D.; Jiang, L.; Chen, Q.; He, W.; Tang, F.; Hong, Y.; Lv, J.; Qin, Y.; et al. An Artificial Intelligence Approach to Classify Pathogenic Fungal Genera of Fungal Keratitis Using Corneal Confocal Microscopy Images. Int. Ophthalmol. 2023, 43, 2203–2214. [Google Scholar] [CrossRef] [PubMed]
- Liang, S.; Zhong, J.; Zeng, H.; Zhong, P.; Li, S.; Liu, H.; Yuan, J. A Structure-Aware Convolutional Neural Network for Automatic Diagnosis of Fungal Keratitis with In Vivo Confocal Microscopy Images. J. Digit. Imaging 2023, 36, 1624–1632. [Google Scholar] [CrossRef] [PubMed]
- Li, C.P.; Dai, W.; Xiao, Y.P.; Qi, M.; Zhang, L.X.; Gao, L.; Zhang, F.L.; Lai, Y.K.; Liu, C.; Lu, J.; et al. Two-Stage Deep Neural Network for Diagnosing Fungal Keratitis via in Vivo Confocal Microscopy Images. Sci. Rep. 2024, 14, 18432. [Google Scholar] [CrossRef] [PubMed]
- Erukulla, R.; Esmaili, K.; Rahdar, A.; Aminizade, M.; Cheraqpour, K.; Tabatabaei, S.A.; Bibak-Bejandi, Z.; Mohammadi, S.F.; Yousefi, S.; Soleimani, M. Deep Learning-Based Classification of Fungal and Acanthamoeba Keratitis Using Confocal Microscopy. Ocul. Surf. 2025, 38, 203–208. [Google Scholar] [CrossRef]
- Soleimani, M.; Cheung, A.Y.; Rahdar, A.; Kirakosyan, A.; Tomaras, N.; Lee, I.; De Alba, M.; Aminizade, M.; Esmaili, K.; Quiroz-Casian, N.; et al. Diagnosis of microbial keratitis using smartphone-captured images; a deep-learning model. J. Ophthalmic Inflamm. Infect. 2025, 15, 8. [Google Scholar] [CrossRef]
- Li, Z.; Xie, H.; Wang, Z.; Li, D.; Chen, K.; Zong, X.; Qiang, W.; Wen, F.; Deng, Z.; Chen, L.; et al. Deep Learning for Multi-Type Infectious Keratitis Diagnosis: A Nationwide, Cross-Sectional, Multicenter Study. npj Digit. Med. 2024, 7, 181. [Google Scholar] [CrossRef]
- Kuo, M.T.; Hsu, B.W.Y.; Yin, Y.K.; Fang, P.C.; Lai, H.Y.; Chen, A.; Yu, M.S.; Tseng, V.S. A Deep Learning Approach in Diagnosing Fungal Keratitis Based on Corneal Photographs. Sci. Rep. 2020, 10, 14424. [Google Scholar] [CrossRef]
- Liu, Z.; Cao, Y.; Li, Y.; Xiao, X.; Qiu, Q.; Yang, M.; Zhao, Y.; Cui, L. Automatic Diagnosis of Fungal Keratitis Using Data Augmentation and Image Fusion with Deep Convolutional Neural Network. Comput. Methods Programs Biomed. 2020, 187, 105019. [Google Scholar] [CrossRef]
- Hanif, A.; Prajna, N.V.; Lalitha, P.; NaPier, E.; Parker, M.; Steinkamp, P.; Keenan, J.D.; Campbell, J.P.; Song, X.; Redd, T.K. Assessing the Impact of Image Quality on Deep Learning Classification of Infectious Keratitis. Ophthalmol. Sci. 2023, 3, 100331. [Google Scholar] [CrossRef] [PubMed]
- Sarayar, R.; Lestari, Y.D.; Setio, A.A.A.; Sitompul, R. Accuracy of Artificial Intelligence Model for Infectious Keratitis Classification: A Systematic Review and Meta-Analysis. Front. Public Health 2023, 11, 1239231. [Google Scholar] [CrossRef] [PubMed]
- Tey, K.Y.; Cheong, E.Z.K.; Ang, M. Potential Applications of Artificial Intelligence in Image Analysis in Cornea Diseases: A Review. Eye Vis. 2024, 11, 10. [Google Scholar] [CrossRef] [PubMed]
- Chen, B.; Fang, X.W.; Wu, M.N.; Zhu, S.J.; Zheng, B.; Liu, B.Q.; Wu, T.; Hong, X.Q.; Wang, J.T.; Yang, W.H. Artificial Intelligence Assisted Pterygium Diagnosis: Current Status and Perspectives. Int. J. Ophthalmol. 2023, 16, 1386–1394. [Google Scholar] [CrossRef]
- Gan, F.; Chen, W.Y.; Liu, H.; Zhong, Y.L. Application of Artificial Intelligence Models for Detecting the Pterygium That Requires Surgical Treatment Based on Anterior Segment Images. Front. Neurosci. 2022, 16, 1084118. [Google Scholar] [CrossRef]
- Zhu, S.; Fang, X.; Qian, Y.; He, K.; Wu, M.; Zheng, B.; Song, J. Pterygium Screening and Lesion Area Segmentation Based on Deep Learning. J. Healthc. Eng. 2022, 2022, 1–9. [Google Scholar] [CrossRef]
- Wan, C.; Shao, Y.; Wang, C.; Jing, J.; Yang, W. A Novel System for Measuring Pterygium’s Progress Using Deep Learning. Front. Med. 2022, 9, 819971. [Google Scholar] [CrossRef]
- Liu, Y.; Xu, C.; Wang, S.; Chen, Y.; Lin, X.; Guo, S.; Liu, Z.; Wang, Y.; Zhang, H.; Guo, Y.; et al. Accurate Detection and Grading of Pterygium through Smartphone by a Fusion Training Model. Br. J. Ophthalmol. 2024, 108, 336–342. [Google Scholar] [CrossRef]
- Fang, X.; Deshmukh, M.; Chee, M.L.; Soh, Z.D.; Teo, Z.L.; Thakur, S.; Goh, J.H.L.; Liu, Y.C.; Husain, R.; Mehta, J.S.; et al. Deep Learning Algorithms for Automatic Detection of Pterygium Using Anterior Segment Photographs from Slit-Lamp and Hand-Held Cameras. Br. J. Ophthalmol. 2022, 106, 1642–1647. [Google Scholar] [CrossRef]
- Zhang, L.W.; Yang, J.; Jiang, H.W.; Yang, X.Q.; Chen, Y.N.; Ying, W.D.; Deng, Y.L.; Zhang, M.h.; Liu, H.; Zhang, H.L. Identification of Biomarkers and Immune Microenvironment Associated with Pterygium through Bioinformatics and Machine Learning. Front. Mol. Biosci. 2024, 11, 1524517. [Google Scholar] [CrossRef] [PubMed]
- Hung, K.H.; Lin, C.; Roan, J.; Kuo, C.F.; Hsiao, C.H.; Tan, H.Y.; Chen, H.C.; Ma, D.H.K.; Yeh, L.K.; Lee, O.K.S. Application of a Deep Learning System in Pterygium Grading and Further Prediction of Recurrence with Slit Lamp Photographs. Diagnostics 2022, 12, 888. [Google Scholar] [CrossRef] [PubMed]
- Jais, F.N.; Che Azemin, M.Z.; Hilmi, M.R.; Mohd Tamrin, M.I.; Kamal, K.M. Postsurgery Classification of Best-Corrected Visual Acuity Changes Based on Pterygium Characteristics Using the Machine Learning Technique. Sci. World J. 2021, 2021, 6211006. [Google Scholar] [CrossRef]
- Li, Z.; Wang, Z.; Xiu, L.; Zhang, P.; Wang, W.; Wang, Y.; Chen, G.; Yang, W.; Chen, W. Large Language Model-Based Multimodal System for Detecting and Grading Ocular Surface Diseases from Smartphone Images. Front. Cell Dev. Biol. 2025, 13, 1600202. [Google Scholar] [CrossRef]
- Tiong, E.W.; Soon, C.Y.; Ong, Z.Z.; Liu, S.H.; Qureshi, R.; Rauz, S.; Ting, D.S. Deep Learning for Diagnosing and Grading Pterygium: A Systematic Review and Meta-Analysis. Comput. Biol. Med. 2025, 196, 110743. [Google Scholar] [CrossRef]
- Liu, S.; Kandakji, L.; Stupnicki, A.; Sumodhee, D.; Leucci, M.T.; Hau, S.; Balal, S.; Okonkwo, A.; Moghul, I.; Kanda, S.P.; et al. Current Applications of Artificial Intelligence for Fuchs Endothelial Corneal Dystrophy: A Systematic Review. Transl. Vis. Sci. Technol. 2025, 14, 12. [Google Scholar] [CrossRef] [PubMed]
- Vigueras-Guillén, J.P.; Van Rooij, J.; Van Dooren, B.T.H.; Lemij, H.G.; Islamaj, E.; Van Vliet, L.J.; Vermeer, K.A. DenseUNets with Feedback Non-Local Attention for the Segmentation of Specular Microscopy Images of the Corneal Endothelium with Guttae. Sci. Rep. 2022, 12, 14035. [Google Scholar] [CrossRef]
- Sierra, J.S.; Pineda, J.; Rueda, D.; Tello, A.; Prada, A.M.; Galvis, V.; Volpe, G.; Millan, M.S.; Romero, L.A.; Marrugo, A.G. Corneal Endothelium Assessment in Specular Microscopy Images with Fuchs’ Dystrophy via Deep Regression of Signed Distance Maps. Biomed. Opt. Express 2023, 14, 335–351. [Google Scholar] [CrossRef]
- Karmakar, R.; Nooshabadi, S.V.; Eghrari, A.O. Mobile-CellNet: Automatic Segmentation of Corneal Endothelium Using an Efficient Hybrid Deep Learning Model. Cornea 2023, 42, 456–463. [Google Scholar] [CrossRef]
- Qu, J.H.; Qin, X.R.; Peng, R.M.; Xiao, G.G.; Cheng, J.; Gu, S.F.; Wang, H.K.; Hong, J. A Fully Automated Segmentation and Morphometric Parameter Estimation System for Assessing Corneal Endothelial Cell Images. Am. J. Ophthalmol. 2022, 239, 142–153. [Google Scholar] [CrossRef]
- Tey, K.Y.; Hsein Lee, B.J.; Ng, C.; Wong, Q.Y.; Panda, S.K.; Dash, A.; Wong, J.; Ken Cheong, E.Z.; Mehta, J.S.; Schmetterer, L.; et al. Deep Learning Analysis of Widefield Cornea Endothelial Imaging in Fuchs Dystrophy. Ophthalmol. Sci. 2026, 6, 100914. [Google Scholar] [CrossRef]
- Eleiwa, T.; Elsawy, A.; Özcan, E.; Abou Shousha, M. Automated Diagnosis and Staging of Fuchs’ Endothelial Cell Corneal Dystrophy Using Deep Learning. Eye Vis. 2020, 7, 44. [Google Scholar] [CrossRef]
- Prada, A.M.; Quintero, F.; Mendoza, K.; Galvis, V.; Tello, A.; Romero, L.A.; Marrugo, A.G. Assessing Fuchs Corneal Endothelial Dystrophy Using Artificial Intelligence–Derived Morphometric Parameters From Specular Microscopy Images. Cornea 2024, 43, 1080–1087. [Google Scholar] [CrossRef]
- Qu, J.H.; Qin, X.R.; Xie, Z.J.; Qian, J.H.; Zhang, Y.; Sun, X.N.; Sun, Y.Z.; Peng, R.M.; Xiao, G.G.; Lin, J.; et al. Establishment of an Automatic Diagnosis System for Corneal Endothelium Diseases Using Artificial Intelligence. J. Big Data 2024, 11, 67. [Google Scholar] [CrossRef]
- Foo, V.H.X.; Lim, G.Y.S.; Liu, Y.C.; Ong, H.S.; Wong, E.; Chan, S.; Wong, J.; Mehta, J.S.; Ting, D.S.W.; Ang, M. Deep Learning for Detection of Fuchs Endothelial Dystrophy from Widefield Specular Microscopy Imaging: A Pilot Study. Eye Vis. 2024, 11, 11. [Google Scholar] [CrossRef] [PubMed]
- Shilpashree, P.S.; Suresh, K.V.; Sudhir, R.R.; Srinivas, S.P. Automated Image Segmentation of the Corneal Endothelium in Patients With Fuchs Dystrophy. Transl. Vis. Sci. Technol. 2021, 10, 27. [Google Scholar] [CrossRef] [PubMed]
- Fitoussi, L.; Zéboulon, P.; Rizk, M.; Ghazal, W.; Rouger, H.; Saad, A.; Elahi, S.; Gatinel, D. Deep Learning Versus Corneal Tomography Features to Detect Subclinical Corneal Edema in Fuchs Endothelial Corneal Dystrophy. Cornea Open 2024, 3, e0038. [Google Scholar] [CrossRef]
- Arjmandmazidi, S.; Heidari, H.R.; Ghasemnejad, T.; Mori, Z.; Molavi, L.; Meraji, A.; Kaghazchi, S.; Mehdizadeh Aghdam, E.; Montazersaheb, S. An In-depth Overview of Artificial Intelligence (AI) Tool Utilization across Diverse Phases of Organ Transplantation. J. Transl. Med. 2025, 23, 678. [Google Scholar] [CrossRef] [PubMed]
- Tay, C.; Reddy, H.; Mehta, J.S. Advances in Corneal Transplantation. Eye 2025, 39, 2497–2508. [Google Scholar] [CrossRef]
- Patefield, A.; Meng, Y.; Airaldi, M.; Coco, G.; Vaccaro, S.; Parekh, M.; Semeraro, F.; Gadhvi, K.A.; Kaye, S.B.; Zheng, Y.; et al. Deep Learning Using Preoperative AS-OCT Predicts Graft Detachment in DMEK. Transl. Vis. Sci. Technol. 2023, 12, 14. [Google Scholar] [CrossRef] [PubMed]
- Muijzer, M.B.; Hoven, C.M.W.; Frank, L.E.; Vink, G.; Wisse, R.P.L.; The Netherlands Corneal Transplant Network (NCTN); Bartels, M.C.; Cheng, Y.Y.; Dhooge, M.R.P.; Dickman, M.; et al. A Machine Learning Approach to Explore Predictors of Graft Detachment Following Posterior Lamellar Keratoplasty: A Nationwide Registry Study. Sci. Rep. 2022, 12, 17705. [Google Scholar] [CrossRef]
- Heslinga, F.G.; Alberti, M.; Pluim, J.P.W.; Cabrerizo, J.; Veta, M. Quantifying Graft Detachment after Descemet’s Membrane Endothelial Keratoplasty with Deep Convolutional Neural Networks. Transl. Vis. Sci. Technol. 2020, 9, 48. [Google Scholar] [CrossRef]
- Ang, M.; He, F.; Lang, S.; Sabanayagam, C.; Cheng, C.Y.; Arundhati, A.; Mehta, J.S. Machine Learning to Analyze Factors Associated With Ten-Year Graft Survival of Keratoplasty for Cornea Endothelial Disease. Front. Med. 2022, 9, 831352. [Google Scholar] [CrossRef] [PubMed]
- Gómez-Fernández, H.; Alhakim-Khalak, F.; Ruiz-Alonso, S.; Díaz, A.; Tamayo, J.; Ramalingam, M.; Larra, E.; Pedraz, J.L. Comprehensive Review of the State-of-the-Art in Corneal 3D Bioprinting, Including Regulatory Aspects. Int. J. Pharm. 2024, 662, 124510. [Google Scholar] [CrossRef]
- Nuliqiman, M.; Xu, M.; Sun, Y.; Cao, J.; Chen, P.; Gao, Q.; Xu, P.; Ye, J. Artificial Intelligence in Ophthalmic Surgery: Current Applications and Expectations. Clin. Ophthalmol. 2023, 17, 3499–3511. [Google Scholar] [CrossRef]
- Feizi, S.; Javadi, M.A.; Bayat, K.; Arzaghi, M.; Rahdar, A.; Ahmadi, M.J. Machine Learning Methods to Identify Risk Factors for Corneal Graft Rejection in Keratoconus. Sci. Rep. 2024, 14, 29131. [Google Scholar] [CrossRef]
- Leiderman, Y.I.; Gerber, M.J.; Hubschman, J.P.; Yi, D. Artificial Intelligence Applications in Ophthalmic Surgery. Curr. Opin. Ophthalmol. 2024, 35, 526–532. [Google Scholar] [CrossRef] [PubMed]
- Lin, H.; Li, R.; Liu, Z.; Chen, J.; Yang, Y.; Chen, H.; Lin, Z.; Lai, W.; Long, E.; Wu, X.; et al. Diagnostic Efficacy and Therapeutic Decision-making Capacity of an Artificial Intelligence Platform for Childhood Cataracts in Eye Clinics: A Multicentre Randomized Controlled Trial. eClinicalMedicine 2019, 9, 52–59. [Google Scholar] [CrossRef] [PubMed]
- Wolf, R.M.; Channa, R.; Liu, T.Y.A.; Zehra, A.; Bromberger, L.; Patel, D.; Ananthakrishnan, A.; Brown, E.A.; Prichett, L.; Lehmann, H.P.; et al. Autonomous Artificial Intelligence Increases Screening and Follow-up for Diabetic Retinopathy in Youth: The ACCESS Randomized Control Trial. Nat. Commun. 2024, 15, 421. [Google Scholar] [CrossRef]
- Noriega, A.; Meizner, D.; Camacho, D.; Enciso, J.; Quiroz-Mercado, H.; Morales-Canton, V.; Almaatouq, A.; Pentland, A. Screening Diabetic Retinopathy Using an Automated Retinal Image Analysis System in Independent and Assistive Use Cases in Mexico: Randomized Controlled Trial. JMIR Form. Res. 2021, 5, e25290. [Google Scholar] [CrossRef]
- Mathenge, W.; Whitestone, N.; Nkurikiye, J.; Patnaik, J.L.; Piyasena, P.; Uwaliraye, P.; Lanouette, G.; Kahook, M.Y.; Cherwek, D.H.; Congdon, N.; et al. Impact of Artificial Intelligence Assessment of Diabetic Retinopathy on Referral Service Uptake in a Low-Resource Setting. Ophthalmol. Sci. 2022, 2, 100168. [Google Scholar] [CrossRef]
- Li, B.; Chen, H.; Yu, W.; Zhang, M.; Lu, F.; Ma, J.; Hao, Y.; Li, X.; Hu, B.; Shen, L.; et al. The Performance of a Deep Learning System in Assisting Junior Ophthalmologists in Diagnosing 13 Major Fundus Diseases: A Prospective Multi-Center Clinical Trial. npj Digit. Med. 2024, 7, 8. [Google Scholar] [CrossRef] [PubMed]
- Ma, X.; Fang, J.; Wang, Y.; Hu, Z.; Xu, Z.; Zhu, S.; Yan, W.; Chu, M.; Xu, J.; Sheng, S.; et al. MCOA: A Comprehensive Multimodal Dataset for Advancing Deep Learning in Corneal Opacity Assessment. Sci. Data 2025, 12, 911. [Google Scholar] [CrossRef]
- Chen, Q.; Keenan, T.D.L.; Agron, E.; Allot, A.; Guan, E.; Duong, B.; Elsawy, A.; Hou, B.; Xue, C.; Bhandari, S.; et al. AI Workflow, External Validation, and Development in Eye Disease Diagnosis. JAMA Netw. Open 2025, 8, e2517204. [Google Scholar] [CrossRef]
- Mumuni, A.; Mumuni, F. Data Augmentation: A Comprehensive Survey of Modern Approaches. Array 2022, 16, 100258. [Google Scholar] [CrossRef]
- Wang, D.; Sklar, B.; Tian, J.; Gabriel, R.; Engelhard, M.; McNabb, R.P.; Kuo, A.N. Improving Artificial Intelligence–Based Microbial Keratitis Screening Tools Constrained by Limited Data Using Synthetic Generation of Slit-Lamp Photos. Ophthalmol. Sci. 2025, 5, 100676. [Google Scholar] [CrossRef]
- Li, J.P.O.; Liu, H.; Ting, D.S.; Jeon, S.; Chan, R.P.; Kim, J.E.; Sim, D.A.; Thomas, P.B.; Lin, H.; Chen, Y.; et al. Digital Technology, Tele-Medicine and Artificial Intelligence in Ophthalmology: A Global Perspective. Prog. Retin. Eye Res. 2021, 82, 100900. [Google Scholar] [CrossRef]
- The CONSORT-AI and SPIRIT-AI Steering Group. Reporting Guidelines for Clinical Trials Evaluating Artificial Intelligence Interventions Are Needed. Nat. Med. 2019, 25, 1467–1468. [Google Scholar] [CrossRef]
- Xu, Z.; Liu, A.; Su, B.; Wu, M.; Zhang, B.; Chen, G.; Lu, F.; Hu, L.; Mao, X. Standardized Corneal Topography-Driven AI for Orthokeratology Fitting: A Hybrid Deep/Machine Learning Approach With Enhanced Generalizability. Transl. Vis. Sci. Technol. 2025, 14, 16. [Google Scholar] [CrossRef]
- Peek, N.; Capurro, D.; Rozova, V.; Van Der Veer, S.N. Bridging the Gap: Challenges and Strategies for the Implementation of Artificial Intelligence-based Clinical Decision Support Systems in Clinical Practice. Yearb. Med. Inform. 2024, 33, 103–114. [Google Scholar] [CrossRef]
- Hassija, V.; Chamola, V.; Mahapatra, A.; Singal, A.; Goel, D.; Huang, K.; Scardapane, S.; Spinelli, I.; Mahmud, M.; Hussain, A. Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence. Cogn. Comput. 2024, 16, 45–74. [Google Scholar] [CrossRef]
- De Vries, B.M.; Zwezerijnen, G.J.C.; Burchell, G.L.; Van Velden, F.H.P.; Menke-van Der Houven Van Oordt, C.W.; Boellaard, R. Explainable Artificial Intelligence (XAI) in Radiology and Nuclear Medicine: A Literature Review. Front. Med. 2023, 10, 1180773. [Google Scholar] [CrossRef] [PubMed]
- Rudin, C. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nat. Mach. Intell. 2019, 1, 206–215. [Google Scholar] [CrossRef] [PubMed]
- Yadav, N.; Pandey, S.; Gupta, A.; Dudani, P.; Gupta, S.; Rangarajan, K. Data Privacy in Healthcare: In the Era of Artificial Intelligence. Indian Dermatol. Online J. 2023, 14, 788–792. [Google Scholar] [CrossRef]
- Price, W.N.; Cohen, I.G. Privacy in the Age of Medical Big Data. Nat. Med. 2019, 25, 37–43. [Google Scholar] [CrossRef]
- Linardatos, P.; Papastefanopoulos, V.; Kotsiantis, S. Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy 2020, 23, 18. [Google Scholar] [CrossRef]
- Yang, W.H.; Zheng, B.; Wu, M.N.; Zhu, S.J.; Fei, F.Q.; Weng, M.; Zhang, X.; Lu, P.R. An Evaluation System of Fundus Photograph-Based Intelligent Diagnostic Technology for Diabetic Retinopathy and Applicability for Research. Diabetes Ther. 2019, 10, 1811–1822. [Google Scholar] [CrossRef] [PubMed]

| Study | AI Model | Objective | Data (N) | Dataset Availability | Test Split (N) | Main Outcomes |
|---|---|---|---|---|---|---|
| Ghasedi et al. (2025) [26] | DL + Genetic Algorithm | KC and suspect detection | Pentacam (N = 1288 eyes) | Private | 3-fold cross-validation | Optimized feature selection using Genetic Algorithms (GAs) significantly improved diagnostic accuracy by focusing on relevant features. (ANN + GA: Acc = 98.63%; SVM + GA: Acc = 98.13%; ANN alone: 96.9%.) |
| Alió Del Barrio et al. (2024) [20] | ANN (MLP) | KC and suspect KC detection | AS-OCT and Placido topography (N = 6677 eyes) | Private | 30% of dataset | Early detection and automated clinical support of KC; the model outperformed in KC, but the Recall of suspect KC was low. (KC: Prec = 96.0%, Rec = 97.9%, F1 = 96.9%; Suspect: Prec = 83.6%, Rec = 69.7%, F1 = 76.0%) |
| Abdelmotaal et al. (2024) [21] | CNN (DenseNet121) | KC detection | Corneal deformation videos (N = 734 eyes) | Private | Dataset 2: 30%; dataset 1: external validation | AUROC = 0.93 with good generalization on the external test set; a quick, objective tool because it requires only the Corvis ST |
| Hartmann et al. (2024) [23] | DL | Predict KC progression | Pentacam clinical data (N = 570 KC eyes) | Private | 10% | Predicts progression at the first visit, helping early CXL decisions; fusion of imaging and numerical data is better. (Fusion Model Acc = 0.83; Imaging only Acc = 0.77) |
| Ambrósio et al. (2023) [25] | Random forest (TBIv1/v2) | Ectasia detection | Pentacam + Corvis (N = 3886 eyes) | Private | 10-fold cross-validation | Improved detection of subclinical ectasia. (AUC for very asymmetric ectasia with normal topography, VAE-NT, improved from 0.899 to 0.945) |
| Lu et al. (2023) [19] | Random forest and NN | FFKC detection | Pentacam + OCT + Corvis ST (N = 599 eyes) | Private | 40% validation | Multiple devices are better; found that combining biomechanics (Corvis) with OCT was sufficient (AUC 0.902), with no added benefit from a 3rd device. (Corvis + OCT AUC = 0.902; Three-device AUC = 0.871) |
| Maile et al. (2022) [24] | Royston–Parmar | Predict KC progression | Keratometry and pachymetry (N = 8701 eyes) Genetic data (N = 926 patients) | Private | Internal–external cross-validation by region | First integration of genetic scores with topography; addresses biological risk factors. (Explained variation 33%; age was the most significant predictor) |
| Chen et al. (2021) [4] | Deep learning (VGG16) | KC detection and staging | Colour-coded maps (N = 1926 scans) | Private | N = 532 | Validated a CNN model on multicenter datasets using colour-coded maps; achieved high accuracy in external validation. (Testing Acc = 97.85%; external Val AUC = 0.9737) |
| Elsawy et al. (2021) [18] | Multi-disease CNN | Multidisease diagnosis (KC, FED, DES) | AS-OCT (N = 158,220) | Private | N = 132 eyes | MDDN enables automated screening for KC alongside other corneal diseases (FED, DES) using only AS-OCT, demonstrating high clinical utility for comprehensive triage. (Eye-level AUROC > 0.99 for KCN; F1 score > 0.90) |
| Herber et al. (2021) [12] | ML–linear discriminant analysis (LDA), random forest (RF) | Classifying the stage of KC | Dynamic Scheimpflug tonometry (Corvis ST) (N = 434 eyes) | Private | 30% Validation | The CST can predict the stage of KC without keratometric data; classification based on biomechanical properties. (Overall KC detection Acc = 93%) |
| Kamiya et al. (2019) [17] | Deep learning (VGG16) | Evaluate accuracy of KC diagnosis | AS-OCT (N = 543 eyes) | Private | LOOCV (5-fold) | AS-OCT maps effectively discriminate KC stages; increase accuracy in daily practice. (Normal vs. KC Acc = 0.991; stage classification Acc = 0.874) |
| Lavric et al. (2019) [16] | CNN (KeratoDetect) | Automatic detection of KC | Placido topography (N = 3000 images) | Private | N = 400 | Rapid screening tool that assists ophthalmologists in KC detection. (Accuracy 99.33% on test set) |
| Ruiz et al. (2016) [11] | ML-SVM | Evaluate the performance of SVM in detection | Pentacam (N = 856 eyes) | Private | Not specified | Comparison with single-parameter methods; objective KC detection. (5-group accuracy 88.8%, Sens = 90%, Spec = 95.2%) |
| Smadja et al. (2013) [10] | ML-Regression tree | Detection of subclinical KC | Placido and Scheimpflug (N = 372 eyes) | Private | Not specified | Help in surgical decision and detect early KC. (Normal vs. FFKC: Sens = 93.6%, Spec = 97.2%) |
| Arbelaez et al. (2012) [9] | ML-SVM | Define ML-based classification | Placido tomography (N = 3502 eyes) | Private | Validation set | Classification of subclinical KC detection. (Subclinical KC Sens: 75.2% improved to 92%) |
| Accardo et al. (2002) [15] | Neural network | Screening for KC detection in both eyes | Videokeratoscope (EyeSys, N = 396 eyes) | Private | Not specified | Using parameters from both eyes improves KC detection; supports early detection and screening. (Test set: Global Sens 94.1%, Global Spec 97.6%) |
| Smolek et al. (1997) [14] | Neural network | Detection and grading stage of KC or KCS | Videokeratoscope TMS-1 (N = 300 eyes) | Private | N = 150 | Distinguish KC and KCS; improves accuracy and specificity over conventional tests. (Classification network: 100% accuracy, sensitivity, specificity) |
| Maeda et al. (1995) [8] | Neural network | Automatically distinguishing topography | Videokeratoscope TMS-1 (N = 183 eyes) | Private | N = 75 | Reduces subjectivity in interpretation; supports diagnosis of corneal shape abnormalities. (Test: 80% accuracy; accuracy and specificity >90% for all categories) |
| Maeda et al. (1994) [7] | Expert system | Differentiate keratoconus | TMS-1 (N = 200 eyes) | Private | N = 100 | Reduce subjective opinion of topography interpretation. (Validation: Acc = 96%, Spec = 99%, Sens = 89%) |
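The keratoconus studies above report their results mainly as accuracy, sensitivity (recall), and specificity. As a reminder of how these figures relate, the following is a minimal illustrative sketch, with an invented confusion matrix rather than data from any cited study:

```python
# Hypothetical confusion-matrix arithmetic behind the Acc/Sens/Spec values
# in the table above; the counts here are illustrative, not from any study.

def binary_metrics(tp, fp, tn, fn):
    """Return (accuracy, sensitivity, specificity) for a binary classifier."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # fraction of true KC eyes detected
    specificity = tn / (tn + fp)   # fraction of normal eyes correctly cleared
    return accuracy, sensitivity, specificity

# e.g. 90 true positives, 4 false positives, 96 true negatives, 10 false negatives
acc, sens, spec = binary_metrics(tp=90, fp=4, tn=96, fn=10)
```

A high accuracy can mask a low sensitivity for the rarer class, which is why rows such as Alió Del Barrio et al. [20] report per-class precision and recall for suspect KC separately.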
| Study | AI Model | Objective | Data (N) | Dataset Availability | Test Split (N) | Main Outcomes |
|---|---|---|---|---|---|---|
| Zhang et al. (2025) [36] | Mask R-CNN (ResNet-101) | Tear meniscus height (TMH) measurement | Ocular surface images (N = 1300) | private | N = 220 (internal = 120, external = 100) | Achieved high precision with IoU 0.928 and R2 0.92, with external validation AUC reaching 0.975; limited sample size led to potential selection bias and unequal distribution of dry eye severity |
| Wang et al. (2025) [37] | ALNN (Attention-Limiting NN based on U-Net) | Automatic TMH measurement | Ocular surface images (N = 1300) | private | N = 2536 | Demonstrated superior segmentation (MIoU > 0.92) across multicenter data, with color images performing better than infrared (r = 0.957 vs. 0.803); scarcity of data with high TMH (>0.4 mm) reduces robustness for severe cases; method has device-specific dependency on Keratograph 5M |
| Nejat et al. (2024) [39] | Deep learning (SimCLR NN) | Tear meniscus height measurement | Smartphone images (N = 1021) | private | not specified | Achieved Dice coefficient of 0.9868 and accuracy of 95.39% using accessible smartphone imagery; limited to specific smartphone optics; lack of large-scale external validation |
| Li et al. (2023) [31] | Deep learning (SimCLR NN) | Clustering of dry eye subtypes | Meibography images (N = 82,236) | private | no | Identified six distinct image-based subtypes (unsupervised) that differed in TBUT and TMH; detailed clinical validation was limited to a small subset of 280 patients |
| Shimizu et al. (2023) [33] | CNN–Swin Transformer | Tear film break-up time (TFBUT) estimation for DED | Smart Eye Camera videos (slit-lamp video frame) (N = 22,172 frames) | private | 1599 frames | Achieved TFBUT estimation accuracy of 78.9% and DED diagnosis AUC of 0.813, enabling portable screening. Annotation relied on a single specialist, causing potential bias; strict image quality filtering limits real-world utility; lacked validation across different races. (TFBUT: Acc = 78.9%, AUROC = 0.877, F1 = 0.74; DED (ADES): Sens = 77.8%, Spec = 85.7%, AUROC = 0.813) |
| Qu et al. (2023) [38] | ResNet-34 + U-Net | Punctate Epithelial Erosion (PEE) grading | Slit-lamp corneal fluorescein staining images (N = 763) | private | 20% split | Achieved grading accuracy of 76.5% (AUC 0.940) and segmentation IoU of 0.937, showing high correlation with clinical grades (r = 0.908). (Gap: single-center study with single ethnic background; non-linear NEI scale limits objective comparison; required high-quality images with 283 excluded) |
| Chase et al. (2021) [34] | DL-VGG19 | DED diagnosis using AS-OCT | AS-OCT images (N = 27,180) | private | N = 7020 | Diagnostic agreement significantly better than Schirmer’s test and staining (84.6% accuracy); lack of age-matching standard for DED definition affects ground truth; model cannot quantify severity |
| Yeh et al. (2021) [30] | NPID (Unsupervised ResNet-50) | Meibography phenotyping | Meiboscore and infrared meibography images (N = 706) | private | N = 209 | Unsupervised learning grouped meibography images into clinically meaningful phenotypes without manual annotation; used the same infrared meibography dataset (N = 706) as Wang et al. [29] |
| Wang et al. (2019) [29] | PSPNet/DeepLab | Meibomian Gland (MG) atrophy segmentation | Meiboscore and infrared meibography images (N = 706) | private | N = 209 | Achieved high segmentation accuracy (eyelid 97.6%, atrophy 95.4%) and Meiboscore grading accuracy of 95.6%. Cannot analyze individual gland morphology such as tortuosity or thickness; limited dataset size compared to later studies |
| Su et al. (2018) [32] | CNN | TFBUT measurement | Slit-lamp fluorescein tear film videos (N = 80 patients) | private | N = 30 | Demonstrated high correlation with manual TFBUT (r = 0.9) and achieved sensitivity 0.83/specificity 0.95 for screening; training set is small; only right eyes analyzed; manual ROI selection required; sensitive to eye movements and blinking |
| Study | AI Model | Objective | Data (N) | Dataset Availability | Test Split (N) | Main Outcomes |
|---|---|---|---|---|---|---|
| Li et al. (2024) [50] | Stage 1: DL-CNN (Swin Transformer); Stage 2: Multi-instance learning + attention | DL diagnoses FK | IVCM (N = 96,632) | private | Stage 1: IVCM (N = 8568); stage 2: 37 patients | Spec = 96.65%, Sens = 97.57%; the two-stage design improves specificity and sensitivity; the second stage mimics the clinical workflow and reduces misdiagnosis of FK cases |
| Erukulla et al. (2025) [51] | DL-ResNet50 | Model 1: Classify FK, AK, NSK; Model 2: FK is filamentous or non-filamentous | IVCM (N = 1975) | private | Formed through rotating folds in cross-validation | Accurately classified IK and subtyped FK (multi-class diagnosis); accuracy > 85% in both models; Grad-CAM visualizes the regions that influence predictions |
| Soleimani et al. (2025) [52] | DL-CNNs | DL diagnoses different kinds of MK by smartphone | Smartphone images (N = 602) | private | 20% of dataset | Accuracy 83.8%; discrimination accuracy for AK, BK, and FK exceeded 0.80; the smartphone plus slit-lamp adaptor setup helps close the diagnostic gap in remote settings |
| Satitpitakul et al. (2025) [47] | CNN (DenseNet121, ResNet50, VGG19) | Differentiate IK | Slit-lamp images (N = 6478) | private | N = 1307 | DenseNet121’s accuracy in four-class classification = 0.80; ensemble algorithm’s accuracy = 0.83, better than any single model; potential in resource-limited countries |
| Li et al. (2024) [53] | DeepIK (DenseNet121, InceptionResNetV2, Swin Transformer) | Identify BK, FK, AK, NSK | Slit-lamp images (N = 23,055) | private | (N = 12,463) | AUC: internal test 0.95–0.99, external test 0.88–0.93, prospective 0.87–0.97; achieve high AUC in external dataset, which ensures the generalizability |
| Essalat et al. (2023) [42] | CNN (DenseNet161, DenseNet121, etc.) | Automated diagnosis of FK and AK | IVCM images (N = 4001) | public (Figshare) | N = 1001 | Densenet161 achieved 93.55% accuracy, potential in FK and AK early detection with eXplainable Artificial Intelligence (XAI) |
| Wei et al. (2023) [43] | ML (Logistic, RF, DT) | Differentiate FK | Slit-lamp images (N = 1916) | private | internal N = 449; external N = 420 | First machine learning model for FK diagnosis; AUC of internal and external is over 0.90, which supports clinical decision-making |
| Kuo et al. (2021) [44] | DL (ResNet, DenseNet, ResNeXt, SE-ResNet, and EfficientNets) | Different DL algorithms for classifying BK | Slit-lamp (N = 1512) | private | 20% of each fold in 5-fold cross-validation | Comparing DL models, the best AUROC was achieved by EfficientNet B3; EfficientNets were lesion-focused without segmentation or preprocessing |
| Soleimani et al. (2023) [45] | CNN | Diagnose IK (model 1), differentiate BK and FK (model 2), discriminate filamentous type from yeast type of fungal (model 3) | Slit-lamp images (N = 9329) | private | Testing split used (N not specified) | Accuracy: Model 1: 99.3%; Model 2: 84%; Model 3: 77.5%; assists experts in distinguishing IK species; first DL model to distinguish yeast from filamentous fungi |
| Tang et al. (2023) [48] | DL and DT | Classification of FK and AK | IVCM (N = 3364) | private | N = 334 | DL outperformed DT in classifying Fusarium and Aspergillus; real-time, non-invasive test that can guide treatment |
| Liang et al. (2023) [49] | DL-CNN (GoogLeNet and VGGNet) | Diagnosis of FK | IVCM (N = 7278) | private | (N = 1455) | Accuracy = 97.73%; support faster FK diagnosis; incorporate simple prior knowledge of hyphae-like structures to enhance accuracy |
| Hanif et al. (2023) [56] | DL-CNN | Evaluate how corneal image quality affects CNN predictions | DSLR | private | N = 3307 (external test) | AUROC: FK = 0.85, BK = 0.79, micro-averaged = 0.83; supports AI diagnosis in resource-limited settings |
| Li et al. (2021) [46] | DL-DenseNet121, Inception-v3, ResNet50 | Classify keratitis | Slit-lamp and smartphone (N = 13,557) | private | Testing split used (N not specified) | DenseNet121 performed best on every dataset; comparable to specialists in early detection |
| Kuo et al. (2020) [54] | DL-DenseNet | Comparison of FK diagnosis between AI models, experts and NCS-Oph | DSLR (N = 288) | private | not specified | In primary care, the DL model was more sensitive but less specific than NCS-Oph (Sens: DL 71% > NCS-Oph 52%; Spec: DL 68% < NCS-Oph 83%) |
| Liu et al. (2020) [55] | DL-CNN (AlexNet, VGG16) | Detection of FK | IVCM (N = 1213) | private | 1/11 of the dataset | Improves real-time diagnostic performance; reduces reliance on expert subjective judgment (accuracy: AlexNet 99.95%, VGGNet 99.89%) |
| Study | AI Model | Objective | Data (N) | Dataset Availability | Test Split (N) | Main Outcomes |
|---|---|---|---|---|---|---|
| Li et al. (2025) [68] | Multimodal Ocular Surface Assessment and Interpretation Copilot (MOSAIC) | Detection and grading of ocular surface diseases (OSDs) | Smartphone images (N = 375) | private | N = 375 | Improve the capability of image comprehension (ROUGE-L F1 scores of 0.70–0.78); limited dataset diversity; risk of LLM hallucinations; gap remains between research and product implementation |
| Zhang et al. (2024) [65] | Weighted gene co-expression network analysis (WGCNA), RF, SVM | Classifying molecular mechanisms for detecting pterygium | RNA-seq (N = 68) | public (NCBI’s Sequence Read Archive and Gene Expression Omnibus) | Validation on external datasets (GSE2513, GSE51995) | Modest sample size limits generalizability; lack of long-term clinical follow-up for prognosis; potential batch effects in sequencing; need for experimental validation of causative relationships |
| Liu et al. (2024) [63] | Faster R-CNN (ResNet101) + U-Net (SE-ResNeXt50) | Pterygium detection and grading | Slit-lamp images (N = 20,987) + smartphone (N = 1094) | private | slit-lamp (N = 6296) + smartphone (N = 329) | Fusion model on smartphone images can be comparable to slit-lamp (accuracy of 92.38%, F1 = 0.931); image lacks medical history; need auto-QC to select eligible images; cannot identify other diseases |
| Gan et al. (2022) [60] | DL-ResNet18, AlexNet, GoogleNet, VGG11 | Diagnosis and treatment of pterygium | Anterior segment images (N = 172) | private | N = 34 | Ensemble model performs better than single models (AUC = 0.98); triage for surgery; proposes use in underserved settings; small size of data limits generalizability; black-box problem |
| Fang et al. (2022) [64] | VGG16 (ImageNet) and MLP | Referable pterygium | Slit-lamp/hand-held images (N = 2503), internal and external sets | private | internal (N = 629); external 1 (N = 2610), external 2 (N = 3701) | AUC of referable pterygium reached 98.5%; community screening feasibility; limited positive referable cases in external tests |
| Hung et al. (2022) [66] | Deep learning system (DLS) | The efficacy of DL in grading and prediction | Slit-lamp images (N = 237 eyes) | private | N = 48 eyes (grading), N = 25 (recurrence) | Early evidence; needs larger, prospective validation |
| Wan et al. (2022) [62] | U-Net++ | Diagnose and measure pterygium | AST (N = 489) | private | N = 239 | Objective tracking of lesion progression for follow-up; relied on doctors’ visual inspection |
| Zhu et al. (2022) [61] | VGG16 (screening) and PSPNet (segmentation) | Screening and lesion segmentation | Slit-lamp images (N = 734) | private | screening (N = 300), segmentation (N = 150) | Masks for corneal encroachment; aids pre-op planning (screening accuracy = 99%, segmentation MIOU = 0.86, best among models); need larger datasets to achieve better sensitivity |
| Jais et al. (2021) [67] | SVM, DT, Logistic Regression | Prediction of BCVA changes | Clinical dataset (N = 93) | private | 10-fold cross-validation | Decision support for postoperative counseling (specificity = 100%); small sample size limits the results; black-box nature |
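Two rows above (Gan et al. [60]; Satitpitakul et al. [47] in the keratitis table) report ensembles outperforming single CNNs. A common way to combine classifiers is soft voting: average the per-class probability vectors of several models and take the argmax. The sketch below uses invented probabilities, not the cited models' outputs:

```python
# Hedged sketch of soft-voting ensembling; probability vectors are made up.

def soft_vote(prob_lists):
    """Average per-class probabilities across models; return (class, averages)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg

# Three hypothetical models scoring the classes [normal, pterygium]:
pred, avg = soft_vote([[0.6, 0.4], [0.3, 0.7], [0.2, 0.8]])
```

Averaging can correct a single model's confident mistake when the other models disagree, which is one plausible reason the ensembles above beat their best individual network.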
| Study | AI Model | Objective | Data (N) | Dataset Availability | Test Split (N) | Main Outcomes |
|---|---|---|---|---|---|---|
| Tey et al. (2024) [58] | DL (U-Net) | Evaluate DL in FECD analysis | Widefield specular microscopy (N = 1839) | private | external dataset (N = 354) | Novel application on widefield imaging to assess peripheral endothelium (Dice coefficient, DL vs. manual = 0.86; central ECD by DL higher than manual); analysis limited to eyes without clinically definite edema |
| Fitoussi et al. (2024) [81] | DL | Detect subclinical corneal edema in FECD | Optical coherence tomography (OCT) (N = 151) | private | N = 151 (validation cohort) | Can detect subclinical edema in atypical cases; small sample size and false positives in the dataset (model and tomography features agreed in 80% of cases) |
| Qu et al. (2024) [78] | DL-Enhanced Compact Convolutional Transformer (ECCT) | Establish automatic diagnosis system for corneal endothelium diseases (CEDs) | IVCM (N = 3723) | private | multicentre (N = 449) | First AI system for multiple CEDs using IVCM (accuracy = 89.53%, AUC = 0.958); sensitivity for the “others” class is low, so larger datasets are needed for rare diseases |
| Prada et al. (2024) [77] | DL-CNN (U-Net) | Classification of FECD severity | Specular microscopy and slit-lamp images (N = 1371) | public (Open Science Framework) | not specified | Significant difference in cell density between AI-based and built-in software |
| Foo et al. (2024) [79] | DL(+RF) | Diagnosis of FECD | Specular microscopy (internal and external sets) | private | external (N = 180), peripheral (N = 557) | First DL model for the peripheral endothelium; performance drop in external (1st model: AUC = 0.96, Sens. = 0.91, Spec. = 0.91 (internal); AUC = 0.77, Sens. = 0.69, Spec. = 0.68 (external)) |
| Sierra et al. (2023) [72] | Deep learning (U-Net) | Segment corneal endothelium and guttae in FECD | Specular microscopy (N = 90) | private | N = 23 | Cast segmentation as a regression task (mean ECD difference = −41.9 cells/mm2; mean cell area difference = 14.8 µm2); requires larger datasets |
| Karmakar et al. (2023) [73] | Mobile-CellNet, U-Net, U-Net++ | Automatic segmentation and ECD estimation | Specular microscopy (N = 612) | public | holdout set (N = 124) | A lightweight model suitable for remote settings (mean absolute error: ECD = 4.06%, U-Net = 3.80% and FLOPs less); comparison is limited to visual inspection for some failure cases |
| Vigueras et al. (2022) [71] | Deep learning (U-Net, ResUNeXt, DenseUNets) | Automated segmentation and morphometric analysis of corneal guttae | Specular microscopy (N = 1203) | private | 10-fold cross-validation | Novel feedback non-local attention to infer cell edges; inference of edges inside large guttae is probabilistic (MAE: ECD = 23.16 cells/mm2; CV = 1.28%; HEX = 3.13%) |
| Qu et al. (2022) [74] | DL (ResNet) | Automated segmentation and morphometric parameter estimation system | IVCM (N = 283) | private | N = 184 | First fully automated DL system with IVCM; longitudinal studies not possible with IVCM; (ECD: 2592 cells/mm2; CV: 32.14%; HEX: 54.16%) |
| Shilpashree et al. (2021) [80] | DL (U-Net) and Watershed algorithm | Segment FECD and healthy patients | Specular microscopy images (N = 246) | private | N = 246 | U-Net plus Watershed resolves merged cells; proposes average perimeter length (APL) as a new biomarker; APL changes only become apparent above 5% guttae coverage |
| Eleiwa et al. (2020) [76] | DL-VGG19 | Automatic diagnosis of early-FECD, late-FECD, and healthy cornea | AS-OCT (N = 18,720) | private | N = 7380 | First diagnosis by AS-OCT; despite the large number of images, only 81 patients (early-FECD: AUC 0.997; healthy vs. all FECD: AUC 0.998) |
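The endothelial cell density (ECD, cells/mm²) figures that the segmentation studies above compare are simple arithmetic once the cells are segmented: count cells and divide by the analyzed area. A sketch with invented numbers (not taken from any cited study):

```python
# Illustrative ECD arithmetic; cell count and frame area are made up.

def ecd_cells_per_mm2(cell_count, area_um2):
    """Convert a cell count over an area in square microns to cells/mm^2."""
    area_mm2 = area_um2 / 1_000_000  # 1 mm^2 = 1e6 um^2
    return cell_count / area_mm2

# e.g. 260 segmented cells in a 0.1 mm^2 (100,000 um^2) specular frame
ecd = ecd_cells_per_mm2(260, 100_000)
```

This is why segmentation errors such as merged cells or guttae counted as cells (the failure modes Shilpashree et al. [80] and Vigueras et al. [71] address) propagate directly into the reported ECD.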
| Study | AI Model | Objective | Data (N) | Dataset Availability | Test Split (N) | Main Outcomes |
|---|---|---|---|---|---|---|
| Patefield et al. (2023) [84] | DL-ResNet (MIL-AI) | Distinguish graft detachment on pre-DMEK scans | AS-OCT (N = 9466) | private | N = 24 (eyes) | First AI model to predict eyes with post-DMEK detachment; opportunity for DMEK screening; small dataset (N = 74 eyes) and black-box behavior limit interpretability. (AUROC = 0.6; Sens = 92%, Spec = 45%) |
| Muijzer et al. (2022) [85] | ML–LASSO, classification tree analysis (CTA), random forest classification (RFC) | Prediction of graft detachment after posterior lamellar keratoplasty | Multimodal clinical data (N = 3647) | private | 30% split | Identified key risk and protective factors; the models cannot explain all variance (AUROC: LASSO = 0.7, CTA = 0.65, RFC = 0.72) |
| Ang et al. (2022) [87] | ML–random survival forests (RSFs) and Cox regression | Analyze factors affecting 10-year graft survival in DSAEK and PK | Multimodal clinical data (N = 1335) | private | out-of-bag validation | Confirmed long-term superiority of DSAEK over PK in Asian eyes (10-year graft survival: DSAEK 73.6% vs. PK 50.9%); findings specific to an Asian population |
| Heslinga et al. (2020) [86] | DL (U-Net and ResNet) | Automatically locate and quantify graft detachment after DMEK | AS-OCT (N = 1280) | private | N = 320 | Quantify detachment length and visualize on 2D maps; support DMEK research; help clinical decision-making; limit generalization (Dice score: segmentation model = 0.896 vs. experts = 0.880) |
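The 10-year graft-survival percentages reported by Ang et al. [87] come from time-to-event analysis, where censored eyes (lost to follow-up with the graft still clear) must not be counted as failures. A minimal Kaplan–Meier sketch with invented follow-up data, simplified to distinct event times:

```python
# Toy Kaplan-Meier estimator; follow-up times and events are invented.

def kaplan_meier(times, events):
    """times: follow-up in years; events: 1 = graft failure, 0 = censored.
    Returns the survival probability after the last observation,
    assuming distinct event times (no ties)."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, surv = len(times), 1.0
    for i in order:
        if events[i]:                        # failure: survival drops
            surv *= (at_risk - 1) / at_risk
        at_risk -= 1                         # failure or censoring leaves the risk set
    return surv

# 5 grafts: failures at years 1 and 4, censoring at years 3, 6, and 10
s = kaplan_meier([1, 3, 4, 6, 10], [1, 0, 1, 0, 0])
```

Note how the censored graft at year 3 shrinks the risk set without reducing survival, so the estimate (about 53%) is higher than the naive 3/5 = 60% "still clear" fraction would suggest once later failures are weighted correctly.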
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Lu, T.-C.; Huang, C.-H.; Lin, I.-C. Artificial Intelligence Application in Cornea and External Diseases. Diagnostics 2025, 15, 3199. https://doi.org/10.3390/diagnostics15243199

