Prediction of Cataract Severity Using Slit Lamp Images from a Portable Smartphone Device: A Pilot Study
Abstract
1. Introduction
2. Materials and Methods
2.1. Patient Recruitment and Image Capture
- Willing and able to participate in the study, with mental capacity to consent;
- At least 40 years old;
- No prior intraocular surgery or laser procedures to the eye, including laser refractive surgery;
- Able to keep the eyes open long enough for adequate image acquisition;
- No evidence of active intraocular inflammation;
- No concurrent external or anterior segment pathology (e.g., corneal opacity, significant blepharoptosis) that would obscure imaging of the eye.
2.2. Image Capture Protocol
2.3. Deep Learning Technique
2.4. Experiment Setup for Deep Learning Model
3. Results
3.1. Patient Characteristics
3.2. Deep Learning Model on Automated Cataract Analysis
3.3. Comparison with Other Baseline Architectures
3.4. Heat Maps
4. Discussion
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Lee, C.M.; Afshari, N.A. The Global State of Cataract Blindness. Curr. Opin. Ophthalmol. 2017, 28, 98–103.
- Chylack, L.T.; Leske, M.C.; McCarthy, D.; Khu, P.; Kashiwagi, T.; Sperduto, R. Lens Opacities Classification System II (LOCS II). Arch. Ophthalmol. 1989, 107, 991–997.
- Chylack, L.T.; Wolfe, J.K.; Singer, D.M.; Leske, M.C.; Bullimore, M.A.; Bailey, I.L.; Friend, J.; McCarthy, D.; Wu, S.Y. The Lens Opacities Classification System III. The Longitudinal Study of Cataract Study Group. Arch. Ophthalmol. 1993, 111, 831–836.
- Klein, B.E.K.; Klein, R.; Linton, K.L.P.; Magli, Y.L.; Neider, M.W. Assessment of Cataracts from Photographs in the Beaver Dam Eye Study. Ophthalmology 1990, 97, 1428–1433.
- Sparrow, J.M.; Bron, A.J.; Brown, N.A.; Ayliffe, W.; Hill, A.R. The Oxford Clinical Cataract Classification and Grading System. Int. Ophthalmol. 1986, 9, 207–225.
- Wong, W.L.; Li, X.; Li, J.; Cheng, C.-Y.; Lamoureux, E.L.; Wang, J.J.; Cheung, C.Y.; Wong, T.Y. Cataract Conversion Assessment Using Lens Opacity Classification System III and Wisconsin Cataract Grading System. Investig. Ophthalmol. Vis. Sci. 2013, 54, 280–287.
- Xu, Y.; Gao, X.; Lin, S.; Wong, D.W.K.; Liu, J.; Xu, D.; Cheng, C.-Y.; Cheung, C.Y.; Wong, T.Y. Automatic Grading of Nuclear Cataracts from Slit-Lamp Lens Images Using Group Sparsity Regression. Med. Image Comput. Comput. Assist. Interv. 2013, 16, 468–475.
- Huang, W.; Chan, K.L.; Li, H.; Lim, J.H.; Liu, J.; Wong, T.Y. A Computer Assisted Method for Nuclear Cataract Grading from Slit-Lamp Images Using Ranking. IEEE Trans. Med. Imaging 2011, 30, 94–107.
- Cheung, C.Y.; Li, H.; Lamoureux, E.L.; Mitchell, P.; Wang, J.J.; Tan, A.G.; Johari, L.K.; Liu, J.; Lim, J.H.; Aung, T.; et al. Validity of a New Computer-Aided Diagnosis Imaging Program to Quantify Nuclear Cataract from Slit-Lamp Photographs. Investig. Ophthalmol. Vis. Sci. 2011, 52, 1314–1319.
- Li, H.; Lim, J.H.; Liu, J.; Wong, D.W.K.; Tan, N.M.; Lu, S.; Zhang, Z.; Wong, T.Y. An Automatic Diagnosis System of Nuclear Cataract Using Slit-Lamp Images. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2009, 2009, 3693–3696.
- Ignatowicz, A.A.; Marciniak, T.; Marciniak, E. AI-Powered Mobile App for Nuclear Cataract Detection. Sensors 2025, 25, 3954.
- Gao, X.; Lin, S.; Wong, T.Y. Automatic Feature Learning to Grade Nuclear Cataracts Based on Deep Learning. IEEE Trans. Biomed. Eng. 2015, 62, 2693–2701.
- Foong, A.W.P.; Saw, S.-M.; Loo, J.-L.; Shen, S.; Loon, S.-C.; Rosman, M.; Aung, T.; Tan, D.T.H.; Tai, E.S.; Wong, T.Y. Rationale and Methodology for a Population-Based Study of Eye Diseases in Malay People: The Singapore Malay Eye Study (SiMES). Ophthalmic Epidemiol. 2007, 14, 25–35.
- Yu, Y.; Chen, D.; Tan, C.W.T.; Cheng, C.Y.; Yong, S.S. Mobile Eye-Imaging Device for Detecting Eye Pathologies. Patent WO2020060486A1, 2020. Available online: https://patents.google.com/patent/WO2020060486A1/en?oq=WO2020%2f060486+A1 (accessed on 23 January 2026).
- Chen, D.; Ho, Y.; Sasa, Y.; Lee, J.; Yen, C.C.; Tan, C. Machine Learning-Guided Prediction of Central Anterior Chamber Depth Using Slit Lamp Images from a Portable Smartphone Device. Biosensors 2021, 11, 182.
- Ambrosio, R. Oculus Pentacam Interpretation Guide, 3rd ed.; Oculus: Menlo Park, CA, USA. Available online: https://www.pentacam.com/fileadmin/user_upload/pentacam.de/downloads/interpretations-leitfaden/interpretation_guideline_3rd_edition_0915.pdf (accessed on 23 January 2026).
- Panthier, C.; de Wazieres, A.; Rouger, H.; Moran, S.; Saad, A.; Gatinel, D. Average Lens Density Quantification with Swept-Source Optical Coherence Tomography: Optimized, Automated Cataract Grading Technique. J. Cataract. Refract. Surg. 2019, 45, 1746–1752.
- Pan, A.-P.; Wang, Q.-M.; Huang, F.; Huang, J.-H.; Bao, F.-J.; Yu, A.-Y. Correlation Among Lens Opacities Classification System III Grading, Visual Function Index-14, Pentacam Nucleus Staging, and Objective Scatter Index for Cataract Assessment. Am. J. Ophthalmol. 2015, 159, 241–247.e2.
- Pei, X.; Bao, Y.; Chen, Y.; Li, X. Correlation of Lens Density Measured Using the Pentacam Scheimpflug System with the Lens Opacities Classification System III Grading Score and Visual Acuity in Age-Related Nuclear Cataract. Br. J. Ophthalmol. 2008, 92, 1471–1475.
- Mayer, W.J.; Klaproth, O.K.; Hengerer, F.H.; Kohnen, T. Impact of Crystalline Lens Opacification on Effective Phacoemulsification Time in Femtosecond Laser-Assisted Cataract Surgery. Am. J. Ophthalmol. 2014, 157, 426–432.e1.
- Nixon, D.R. Preoperative Cataract Grading by Scheimpflug Imaging and Effect on Operative Fluidics and Phacoemulsification Energy. J. Cataract. Refract. Surg. 2010, 36, 242–246.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Curran Associates Inc.: Red Hook, NY, USA, 2017; pp. 6000–6010.
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer Using Shifted Windows. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV); IEEE: New York, NY, USA, 2021; pp. 9992–10002.
- Tang, Y.; Yang, D.; Li, W.; Roth, H.R.; Landman, B.; Xu, D.; Nath, V.; Hatamizadeh, A. Self-Supervised Pre-Training of Swin Transformers for 3D Medical Image Analysis. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); IEEE: New York, NY, USA, 2022; pp. 20698–20708.
- Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; Wang, M. Swin-Unet: Unet-Like Pure Transformer for Medical Image Segmentation; Karlinsky, L., Michaeli, T., Nishino, K., Eds.; Springer Nature: Cham, Switzerland, 2023; Volume 13803, pp. 205–218.
- Ooi, B.C.; Tan, K.-L.; Wang, S.; Wang, W.; Cai, Q.; Chen, G.; Gao, J.; Luo, Z.; Tung, A.K.H.; Wang, Y.; et al. SINGA: A Distributed Deep Learning Platform. In Proceedings of the 23rd ACM International Conference on Multimedia, Brisbane, Australia, 26–30 October 2015; ACM: New York, NY, USA, 2015; pp. 685–688.
- Luo, Z.; Yeung, S.H.; Zhang, M.; Zheng, K.; Zhu, L.; Chen, G.; Fan, F.; Lin, Q.; Ngiam, K.Y.; Chin Ooi, B. MLCask: Efficient Management of Component Evolution in Collaborative Data Analytics Pipelines. In Proceedings of the 2021 IEEE 37th International Conference on Data Engineering (ICDE); IEEE: New York, NY, USA, 2021; pp. 1655–1666.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); IEEE: New York, NY, USA, 2016; pp. 770–778.
- Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019.
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image Is Worth 16 × 16 Words: Transformers for Image Recognition at Scale. arXiv 2021, arXiv:2010.11929.
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; IEEE: New York, NY, USA, 2017; pp. 618–626.
- Dimock, J.; Robman, L.D.; McCarty, C.A.; Taylor, H.R. Cost-Effectiveness of Digital Cataract Assessment. Aust. N. Zealand J. Ophthalmol. 1999, 27, 208–210.
- Chew, E.Y.; Kim, J.; Sperduto, R.D.; Datiles, M.B.; Coleman, H.R.; Thompson, D.J.S.; Milton, R.C.; Clayton, J.A.; Hubbard, L.D.; Danis, R.P.; et al. Evaluation of the Age-Related Eye Disease Study Clinical Lens Grading System AREDS Report No. 31. Ophthalmology 2010, 117, 2112–2119.e3.
- Ganokratanaa, T.; Ketcham, M.; Pramkeaw, P. Advancements in Cataract Detection: The Systematic Development of LeNet-Convolutional Neural Network Models. J. Imaging 2023, 9, 197.
- Pathak, S.; Raj, R.; Singh, K.; Verma, P.K.; Kumar, B. Development of Portable and Robust Cataract Detection and Grading System by Analyzing Multiple Texture Features for Tele-Ophthalmology. Multimed. Tools Appl. 2022, 81, 23355–23371.
- Janti, S.S.; Saluja, R.; Tiwari, N.; Kolavai, R.R.; Mali, K.; Arora, A.J.; Johar, A.; Sahoo, D.P.; Sahithi, E. Evaluation of the Clinical Impact of a Smartphone Application for Cataract Detection. Cureus 2024, 16, e71467.
- Goh, J.H.L.; Lei, X.; Chee, M.-L.; Qian, Y.; Yu, M.; Rim, T.H.; Nusinovici, S.; Chen, D.Z.; Koh, K.H.; Yew, S.M.E.; et al. Multi-Comparison of Different Ocular Imaging Modality-Based Deep Learning Models for Visually Significant Cataract Detection. Ophthalmol. Sci. 2025, 5, 100837.
- Wu, X.; Huang, Y.; Liu, Z.; Lai, W.; Long, E.; Zhang, K.; Jiang, J.; Lin, D.; Chen, K.; Yu, T.; et al. Universal Artificial Intelligence Platform for Collaborative Management of Cataracts. Br. J. Ophthalmol. 2019, 103, 1553–1560.
- Son, K.Y.; Ko, J.; Kim, E.; Lee, S.Y.; Kim, M.-J.; Han, J.; Shin, E.; Chung, T.-Y.; Lim, D.H. Deep Learning-Based Cataract Detection and Grading from Slit-Lamp and Retro-Illumination Photographs: Model Development and Validation Study. Ophthalmol. Sci. 2022, 2, 100147.
- Lu, Q.; Wei, L.; He, W.; Zhang, K.; Wang, J.; Zhang, Y.; Rong, X.; Zhao, Z.; Cai, L.; He, X.; et al. Lens Opacities Classification System III-Based Artificial Intelligence Program for Automatic Cataract Grading. J. Cataract. Refract. Surg. 2022, 48, 528–534.
- Keenan, T.D.L.; Chen, Q.; Agrón, E.; Tham, Y.-C.; Goh, J.H.L.; Lei, X.; Ng, Y.P.; Liu, Y.; Xu, X.; Cheng, C.-Y.; et al. DeepLensNet: Deep Learning Automated Diagnosis and Quantitative Classification of Cataract Type and Severity. Ophthalmology 2022, 129, 571–584.
- The Age-Related Eye Disease Study Research Group. The Age-Related Eye Disease Study (AREDS) System for Classifying Cataracts from Photographs: AREDS Report No. 4. Am. J. Ophthalmol. 2001, 131, 167–175.
- Gali, H.E.; Sella, R.; Afshari, N.A. Cataract Grading Systems: A Review of Past and Present. Curr. Opin. Ophthalmol. 2019, 30, 13–18.
- Tan, A.C.S.; Wang, J.J.; Lamoureux, E.L.; Wong, W.; Mitchell, P.; Li, J.; Tan, A.G.; Wong, T.Y. Cataract Prevalence Varies Substantially with Assessment Systems: Comparison of Clinical and Photographic Grading in a Population-Based Study. Ophthalmic Epidemiol. 2011, 18, 164–170.
- Wan, Z.; Zhang, J.; Wang, Y.; Lin, H.; Wang, Y.; Mi, Z.; Yang, X.; Fu, X.; Wang, H. Eye-Based Emotion Recognition via Event-Driven Sparse Transformers. In Proceedings of the 33rd ACM International Conference on Multimedia, Dublin, Ireland, 27–31 October 2025; ACM: New York, NY, USA, 2025; pp. 4659–4668.
- Iqbal, M.A.; Kim, J.; Han, I.; Kyun Kim, S. Attention-Driven Feature Fusion Integrating Swin Transformer and CNN Models for Improved Ocular Disease Classification. In Proceedings of the 2024 International Conference on Engineering and Emerging Technologies (ICEET); IEEE: New York, NY, USA, 2024; pp. 1–6.
- Zhang, H.; Niu, K.; Xiong, Y.; Yang, W.; He, Z.; Song, H. Automatic Cataract Grading Methods Based on Deep Learning. Comput. Methods Programs Biomed. 2019, 182, 104978.






| Dataset Split | Group 1 (PNS Score < 2) | Group 2 (PNS Score ≥ 2) | Total |
|---|---|---|---|
| Training | 320 | 390 | 710 |
| Validation | 40 | 40 | 80 |
| Test | 80 | 80 | 160 |
| Total | 440 | 510 | 950 |
Undilated eyes (test set, n = 160):

| | True Label: Group 1 (PNS Score < 2) | True Label: Group 2 (PNS Score ≥ 2) | Average Accuracy (%) |
|---|---|---|---|
| Predicted: Group 1 (PNS Score < 2) | 50 | 0 | |
| Predicted: Group 2 (PNS Score ≥ 2) | 30 | 80 | |
| Model Accuracy (%) | 62.50 | 100.00 | 81.25 |
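As a sanity check, the per-class and overall accuracies in the undilated-eye confusion matrix can be recomputed from the raw counts. The counts and percentages come from the table; the array layout (rows = predicted label, columns = true label) is our reading of it.

```python
# Recompute accuracies from the undilated-eye confusion-matrix counts.
# Layout assumption: rows = predicted group, columns = true group.
cm = [
    [50, 0],   # predicted Group 1 (PNS < 2): 50 true Group 1, 0 true Group 2
    [30, 80],  # predicted Group 2 (PNS >= 2): 30 true Group 1, 80 true Group 2
]

total = sum(sum(row) for row in cm)            # 160 test eyes
correct = cm[0][0] + cm[1][1]                  # diagonal = correct predictions

# Per-class accuracy = correct predictions for that true class / class support.
acc_group1 = cm[0][0] / (cm[0][0] + cm[1][0])  # 50 / 80 = 62.50%
acc_group2 = cm[1][1] / (cm[0][1] + cm[1][1])  # 80 / 80 = 100.00%
overall = correct / total                      # 130 / 160 = 81.25%

print(f"Group 1: {acc_group1:.2%}, Group 2: {acc_group2:.2%}, overall: {overall:.2%}")
```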
Dilated eyes (test set, n = 160):

| | True Label: Group 1 (PNS Score < 2) | True Label: Group 2 (PNS Score ≥ 2) | Average Accuracy (%) |
|---|---|---|---|
| Predicted: Group 1 (PNS Score < 2) | 44 | 5 | |
| Predicted: Group 2 (PNS Score ≥ 2) | 36 | 75 | |
| Model Accuracy (%) | 55.00 | 93.75 | 74.38 |
Per-group test-set performance (Group 1: PNS Score < 2; Group 2: PNS Score ≥ 2):

| | Uncertainty | Accuracy (%) | Group 2 Precision (%) | Group 2 Recall (%) | Group 2 F1 (%) | Group 1 Precision (%) | Group 1 Recall (%) | Group 1 F1 (%) |
|---|---|---|---|---|---|---|---|---|
| Undilated Eyes | 0.3163 | 81.25 | 72.73 | 100.00 | 84.21 | 100.00 | 62.50 | 76.92 |
| Dilated Eyes | 0.3376 | 74.38 | 67.57 | 93.75 | 78.53 | 89.80 | 55.00 | 68.22 |
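The per-group precision, recall, and F1 figures follow directly from the confusion-matrix counts. A minimal sketch using the undilated-eye counts (50/0/30/80 from the table), with the convention that the group being scored is treated as the positive class:

```python
def prf(tp, fp, fn):
    """Precision, recall, F1 from true-positive, false-positive, false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Undilated-eye counts: predicted G1 -> 50 true G1, 0 true G2;
#                       predicted G2 -> 30 true G1, 80 true G2.
# Group 2 (PNS >= 2) as positive class: TP=80, FP=30, FN=0.
p2, r2, f2 = prf(tp=80, fp=30, fn=0)   # ~72.73%, 100.00%, ~84.21%
# Group 1 (PNS < 2) as positive class: TP=50, FP=0, FN=30.
p1, r1, f1 = prf(tp=50, fp=0, fn=30)   # 100.00%, 62.50%, ~76.92%

print(f"Group 2: {p2:.2%} / {r2:.2%} / {f2:.2%}")
print(f"Group 1: {p1:.2%} / {r1:.2%} / {f1:.2%}")
```

These reproduce the undilated-eye row of the table above, which confirms the per-group metrics are computed in the standard one-vs-rest fashion.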
| | Sensitivity (%) | Specificity (%) | ROC–AUC (%) | Brier Score (%) | ECE (Bin 10) (%) |
|---|---|---|---|---|---|
| Undilated Eyes | 100.00 | 62.50 | 84.30 | 16.39 | 15.81 |
| Dilated Eyes | 93.75 | 55.00 | 76.55 | 20.15 | 17.78 |
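The 10-bin expected calibration error (ECE) reported above requires per-sample predicted probabilities, which are not reproduced here. The sketch below shows the standard equal-width binning scheme on hypothetical probabilities; the 0.5 decision threshold and binning rule are our assumptions about the setup, not details confirmed by the paper.

```python
def ece_bin10(probs, labels, n_bins=10):
    """Expected calibration error with equal-width confidence bins.

    probs: predicted probability of the positive class, one per sample.
    labels: true binary labels (0/1).
    """
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        pred = 1 if p >= 0.5 else 0
        conf = p if pred == 1 else 1 - p            # confidence in the prediction
        idx = min(int(conf * n_bins), n_bins - 1)   # assign to confidence bin
        bins[idx].append((conf, 1.0 if pred == y else 0.0))

    n = len(probs)
    ece = 0.0
    for b in bins:
        if b:
            avg_conf = sum(c for c, _ in b) / len(b)
            acc = sum(a for _, a in b) / len(b)
            ece += (len(b) / n) * abs(acc - avg_conf)
    return ece

# Hypothetical example: two confident, correct predictions -> ECE = |1 - 0.95| = 0.05
print(ece_bin10([0.95, 0.95], [1, 1]))
```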
ResNet50 (Group 1: PNS Score < 2; Group 2: PNS Score ≥ 2):

| | Accuracy (%) | Group 2 Precision (%) | Group 2 Recall (%) | Group 2 F1 (%) | Group 1 Precision (%) | Group 1 Recall (%) | Group 1 F1 (%) |
|---|---|---|---|---|---|---|---|
| Undilated Eyes | 78.60 | 48.39 | 25.00 | 32.97 | 82.28 | 92.89 | 87.27 |
| Dilated Eyes | 72.42 | 88.57 | 51.67 | 65.26 | 78.40 | 81.22 | 79.79 |

EfficientNet:

| | Accuracy (%) | Group 2 Precision (%) | Group 2 Recall (%) | Group 2 F1 (%) | Group 1 Precision (%) | Group 1 Recall (%) | Group 1 F1 (%) |
|---|---|---|---|---|---|---|---|
| Undilated Eyes | 79.65 | 53.57 | 25.00 | 34.09 | 82.49 | 94.22 | 87.97 |
| Dilated Eyes | 69.82 | 34.88 | 50.00 | 41.10 | 84.92 | 75.11 | 79.72 |

ViT-B/16:

| | Accuracy (%) | Group 2 Precision (%) | Group 2 Recall (%) | Group 2 F1 (%) | Group 1 Precision (%) | Group 1 Recall (%) | Group 1 F1 (%) |
|---|---|---|---|---|---|---|---|
| Undilated Eyes | 80.26 | 71.43 | 50.00 | 58.82 | 87.65 | 94.67 | 91.03 |
| Dilated Eyes | 82.11 | 71.43 | 25.00 | 37.04 | 82.95 | 97.33 | 89.57 |
| Author (Year) | Dataset | Clinical Reference | Performance |
|---|---|---|---|
| Gao et al. (2015) [12] | 5378 slit lamp images of dilated eyes | Wisconsin cataract grading system [4] | 70.7% exact agreement ratio, 88.4% decimal grading error ≤ 1.0 |
| Wu et al. (2019) [38] | 37,638 slit lamp images of dilated and undilated eyes | LOCS II [2] (sub-divided into severe or mild) | AUC > 91% |
| Son et al. (2022) [39] | 1355 slit lamp images of dilated eyes | LOCS III [3] | AUC 95.7% |
| Lu et al. (2022) [40] | 1039 slit lamp images of dilated eyes | LOCS III [3] (sub-divided into severe or mild) | AUC between 97.7% and 98.3% |
| Keenan et al. (2022) [41] | 6333 slit lamp images of dilated eyes | AREDS system [42] | MSE = 0.23 |
| Goh et al. (2025) [37] | 12,067 slit lamp images of dilated eyes | Wisconsin cataract grading system [4] | AUC between 92.3% and 93.4% |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Chen, D.Z.; Liu, C.; Wu, J.; Zhu, L.; Ooi, B.C. Prediction of Cataract Severity Using Slit Lamp Images from a Portable Smartphone Device: A Pilot Study. Sensors 2026, 26, 1954. https://doi.org/10.3390/s26061954

