Article

Prediction of Cataract Severity Using Slit Lamp Images from a Portable Smartphone Device: A Pilot Study

1 Department of Ophthalmology, National University Hospital, Singapore 119024, Singapore
2 Centre for Innovational and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 119228, Singapore
3 Department of Computer Science, National University of Singapore, Singapore 117417, Singapore
4 School of Software Technology, Zhejiang University, Ningbo 315100, China
* Author to whom correspondence should be addressed.
Work done while at the National University of Singapore.
Sensors 2026, 26(6), 1954; https://doi.org/10.3390/s26061954
Submission received: 24 January 2026 / Revised: 16 March 2026 / Accepted: 18 March 2026 / Published: 20 March 2026
(This article belongs to the Special Issue Smartphone Sensors and Their Applications)

Abstract

Cataract diagnosis requires a comprehensive dilated examination by an ophthalmologist using a slit lamp; there is currently no effective means of objectively screening for cataracts in the community using portable devices without dilation. We hypothesized that cataract severity could be predicted by applying deep learning to images taken with a portable smartphone-based slit lamp prototype, with and without dilation. In this prospective cross-sectional pilot study, slit lamp images were captured from eligible patients with cataracts in a tertiary clinic using a portable slit lamp prototype attached to a smartphone. The Pentacam nuclear staging (PNS, Pentacam®, Oculus, Inc., Arlington, WA, USA) score, obtained through dilated pupils, served as the ground truth. A transformer prototypical network built on the Swin transformer was trained to assign each image the class label corresponding to the highest predicted probability. Heat maps were generated from attribution masks to identify the anatomical areas of concern. A total of 1900 images from 198 eyes of 99 patients were captured. The average age was 65.3 ± 10.4 years (range, 41.0 to 88.0 years) and the average PNS score was 1.57 ± 0.81 (range, 0 to 4). The model achieved average accuracies of 81.25% and 74.38% for undilated and dilated eyes, respectively. Heat map visualization using the integrated gradients method successfully identified the anatomical area of interest in certain images. This study suggests the possibility of estimating cataract density using a portable smartphone slit lamp device without dilation. Further work is under way to validate this technique in a larger and more diverse group of eyes with cataracts.

1. Introduction

A cataract is the result of age-related degeneration of the human crystalline lens. It is the leading cause of reversible blindness worldwide, affecting 35 million people [1]. The current gold standard of cataract assessment is through a slit lamp by an ophthalmologist. There have been several clinical grading scales created for the standardization of this grading, including the Lens Opacities Classification System II (LOCS II) [2], the Lens Opacities Classification System III (LOCS III) [3], the photographic Wisconsin Cataract Grading System [4], and the Oxford Clinical Grading System [5]. Of these, the LOCS III is by far the most popular clinical grading system used in population studies [6].
However, the manual grading of cataracts by trained ophthalmologists is costly and subjective. Multiple studies have attempted to automate nuclear cataract grading with machine learning [7,8,9,10], including an AI-powered mobile app [11]. In 2015, Gao et al. described a method using a convolutional–recursive neural network to grade nuclear cataracts from standardized images [12]. This method was validated on a population-based dataset of 5378 slit lamp images, with 99.0% of predictions falling within a decimal grading error of ≤1.0. Despite these impressive results, the dataset required dilation, standardized lighting conditions, and photography with a specialized digital slit lamp camera [13]; such conditions are not always available outside of a tertiary setting. In particular, the need for a bulky photographic device and standardized photographic conditions limits the practicality of community-based eye screening, especially in rural areas. In addition, ocular dilation is time-consuming and carries a remote risk of precipitating an acute angle closure attack, especially in eyes with pre-existing narrow angles. For community-based automated nuclear cataract grading to be scalable and accessible, it should be performed with portable devices under physiologic conditions. However, the ability to autonomously grade nuclear cataracts without dilation and under heterogeneous conditions is still lacking.
In this pilot study, we describe the results of using deep learning to grade nuclear cataracts with a portable smartphone-based slit lamp (Figure 1), with and without dilation. We aim to determine the feasibility of grading nuclear cataracts in undilated eyes, which may serve as the basis for large-scale population-based screening.

2. Materials and Methods

2.1. Patient Recruitment and Image Capture

This was a prospective, single-institution, cross-sectional study conducted between 28 April 2022 and 11 July 2023 (both dates inclusive). Consecutive eligible patients were recruited from the tertiary eye clinic of the National University Hospital, Singapore. The inclusion criteria were as follows:
  • Willing and able to participate in the study, with the mental capacity to consent;
  • At least 40 years old;
  • No prior intraocular surgery or laser procedures to the eye, including laser refractive surgery;
  • Fit enough to keep the eyes open for adequate image acquisition;
  • No evidence of active intraocular inflammation;
  • No concurrent external or anterior segment pathologies (e.g., corneal opacities, significant blepharoptosis) that would obscure photography of the eye.
The research followed the tenets of the Declaration of Helsinki and was approved by the institutional domain-specific review board. Informed consent was obtained from all subjects after an explanation of the purpose and possible consequences of the study. If both eyes were eligible, both were included in the study.

2.2. Image Capture Protocol

After informed consent, all patients underwent a comprehensive ophthalmic examination by a single ophthalmologist (D.C.). Slit lamp photos were then captured with a portable device (the ‘Device’, Figure 1, also described previously elsewhere [14]). The technique for image capture was modified from a previous publication [15]. Briefly, a second-generation iPhone SE (Apple, Cupertino, CA, USA) with the Device was used for anterior segment photos in a dimmed room to simulate mesopic conditions. The Device comprises a light-emitting diode (LED) module fitted behind an optical slit and achromatic condensing lenses to produce an incident light of 45°, with the beam measuring 1 mm wide by 15 mm tall. The Device is held ‘en face’ at a working distance of approximately 2 cm from the eye, with the slit beam originating temporally and aimed at the pupil center to produce a cross-sectional image of the crystalline lens (Figure 2). Five images of each eye were captured. The same slit lamp photo capture sequence was repeated after pharmacological dilation with one drop of tropicamide 1% (1% Mydriacyl®, Alcon Inc., Fort Worth, TX, USA) into the inferior fornix.
To reflect real-world usage of smartphone applications, we captured the images using the default camera application (“Camera”) on the smartphone with automatic settings. We saved the images as digital negative (DNG) files at 4032 × 3024 resolution to preserve maximum image fidelity and minimize software post-processing. The full uncompressed DNG images were used for automatic cataract analysis.
Scheimpflug imaging with Pentacam® (Oculus, Inc., Arlington, WA, USA) was performed on the dilated eye. Pentacam grading was performed using the Pentacam Nuclear Staging (PNS) software (Version 1.21r24). The PNS classification evaluates the optical density of a central, 4 mm three-dimensional reference block of the lens by analyzing the backward scatter when the eye is dilated, grading it from 0 to 5 [16]. PNS classification has been demonstrated to accurately estimate nuclear cataracts [17,18,19]; higher PNS scores are associated with increased energy use during cataract surgery [20,21]. As a pilot study exploring the feasibility of using a portable device to identify cataracts that may be visually significant, we classified the attained PNS scores into two groups: Group 1 (PNS < 2) and Group 2 (PNS ≥ 2). The latter is correlated with significantly greater cataract density and energy use during phacoemulsification [20].
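The binarization of PNS scores described above can be sketched as a small helper; the function name `pns_group` is ours, for illustration only:

```python
def pns_group(pns_score: float) -> str:
    """Binarize a Pentacam Nuclear Staging (PNS) score (0-5) into the
    two study groups: Group 1 (PNS < 2) vs. Group 2 (PNS >= 2)."""
    if not 0 <= pns_score <= 5:
        raise ValueError("PNS scores range from 0 to 5")
    return "Group 2" if pns_score >= 2 else "Group 1"
```

For example, `pns_group(1)` falls into Group 1, while `pns_group(2)` and above fall into Group 2, the visually significant group.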

2.3. Deep Learning Technique

With our preliminary dataset, we implemented a transformer-based network [22] for automated cataract analysis (Figure 3). The Swin transformer [23] is a cutting-edge architecture that employs hierarchical self-attention mechanisms to efficiently capture both local and global features. We selected the Swin transformer as the primary backbone because it is a widely validated state-of-the-art architecture for image classification and medical image analysis. By partitioning the input image into non-overlapping patches and progressively merging neighboring patches, the network performs multi-scale feature extraction, making it well-suited to the nuanced demands of medical image analysis, where both fine-grained details and broader structural patterns are critical [24,25]. Unlike traditional convolutional networks, which depend on fixed-size kernels, the transformer-based approach adaptively models relationships between image patches. This flexibility allows the network to effectively capture varying degrees of cataract severity through both local and global image features, providing a promising tool for analyzing ocular images from portable smartphone-based slit lamp prototypes.
In this study, we trained the Swin transformer (Swin small variant pretrained on ImageNet-1K) to project images into a high-dimensional embedding space, leveraging the output of the global average pooling layer following the stacked self-attention blocks as the feature representation for each image. The model was optimized using cross-entropy loss, where prediction probabilities were computed via a softmax function applied to the class logits. The AdamW optimizer, a stochastic gradient descent variant with decoupled weight decay, was employed; the weight decay term improves generalization and prevents overfitting (training details are given in Section 2.4). The model was trained end-to-end without freezing backbone layers.
During inference, image labels were predicted by projecting the input into the embedding space and assigning the class label corresponding to the highest predicted probability. All models were implemented using the Apache SINGA platform [26], which integrates with the PyTorch 1.8 backend for model training and distributed optimization, while MLCask [27], an efficient pipeline management system, was employed to handle versioning and management of our deep learning pipelines.
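The inference step described above (softmax over the class logits, then argmax) can be sketched as follows; the function name `predict_label` is ours, not from the study code:

```python
import math

def predict_label(logits, class_names=("Group 1", "Group 2")):
    """Convert class logits to probabilities with a numerically stable
    softmax, then return the label with the highest probability."""
    m = max(logits)                      # subtract max to avoid overflow
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return class_names[best], probs
```

For a two-class problem, a logit pair such as `[0.2, 1.4]` is mapped to probabilities that sum to one, and the second class is selected.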

2.4. Experiment Setup for Deep Learning Model

The input images were captured at a native resolution of 4032 × 3024. Before being fed into the model, all images were resized to 224 × 224 pixels using bicubic interpolation to maintain consistent input dimensions and a manageable computational cost. Although the model operates on the resized inputs, acquiring images at high resolution remains beneficial during data collection, as it helps preserve overall image quality, reduces the impact of sensor noise, and provides reliable source images for subsequent preprocessing and annotation. The resized images were then normalized using the standard ImageNet mean and standard deviation to match the input distribution expected by the pretrained model. During training, we adopted a comprehensive data augmentation pipeline via the create_transform function from the TIMM package: input images were resized to 224 × 224 pixels and subjected to color jittering with a factor of 0.4, the ‘rand-m9-mstd0.5-inc1’ AutoAugment policy to enhance sample diversity, and random erasing (pixel-level mode, probability 0.25, single erasing count) to improve generalization. Bicubic interpolation was used consistently for image rescaling. The model was trained for 200 epochs using the AdamW optimizer (weight decay = 0.05) with an initial learning rate of 9.735 × 10−5 and cosine learning rate scheduling. Training was performed using distributed data parallelism across three NVIDIA RTX 2080 Ti GPUs, with a batch size of 32 per GPU and gradient accumulation over 2 steps, resulting in an effective global batch size of 192. All experiments were conducted on a workstation equipped with 64 GB RAM and an Intel® Xeon® W-2133 CPU @ 3.60 GHz, with a total training time of approximately 30 min. Training loss curves (Figure 4) demonstrate that convergence was achieved before completion of the 200 epochs, supporting the adequacy of this training schedule.
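Two of the quantities above can be made concrete. The effective global batch size is the per-GPU batch multiplied by the number of GPUs and the gradient-accumulation steps, and cosine scheduling anneals the learning rate from its initial value toward zero over the training run. A minimal sketch under the reported settings (the helper name `cosine_lr` is ours):

```python
import math

BASE_LR = 9.735e-5   # initial learning rate reported above
TOTAL_EPOCHS = 200

def cosine_lr(epoch, total_epochs=TOTAL_EPOCHS, base_lr=BASE_LR, min_lr=0.0):
    """Cosine-annealed learning rate, decaying from base_lr to min_lr."""
    return min_lr + 0.5 * (base_lr - min_lr) * (
        1 + math.cos(math.pi * epoch / total_epochs))

# 32 images per GPU x 3 GPUs x 2 gradient-accumulation steps = 192
effective_batch = 32 * 3 * 2
```

The schedule starts at the base learning rate, decays slowly at first, and approaches zero near epoch 200, which matches the early convergence observed in the loss curves.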

3. Results

3.1. Patient Characteristics

In total, 198 eyes of 99 subjects were included in this study. The mean age of included subjects was 65.3 ± 10.4 years (range, 41.0 to 88.0 years). Of the 198 eyes included in the study, the average PNS score was 1.57 ± 0.81 (range, 0 to 4). Figure 5 illustrates the clinical slit lamp photographs and corresponding PNS scores.

3.2. Deep Learning Model on Automated Cataract Analysis

After excluding 80 images (4.0%) deemed to be of poor quality, a total of 1900 images (950 dilated, 950 undilated) of 198 eyes were included. Table 1 shows the detailed data statistics (note that the undilated and dilated images share the same PNS statistics). Following standard machine learning practice, we randomly split the dataset at the patient level into training, validation, and test sets in a ratio of approximately 9:1:2. The split ratio was applied to the overall dataset rather than enforced separately for each class. Because the numbers of Group 1 and Group 2 samples were not equal and patient-level partitioning was preserved to prevent data leakage, per-class sample counts in each subset do not exactly follow the 9:1:2 proportion. We used the training set to learn the model and the validation set to select the hyperparameters of our deep learning method. Finally, we report the performance of our model on the test set.
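A patient-level split of the kind described can be sketched as follows; the helper name and seed are illustrative, not taken from the study code:

```python
import random

def patient_level_split(records, ratios=(9, 1, 2), seed=0):
    """Partition (patient_id, image) records into train/val/test so that
    all images of a patient fall into the same subset (no leakage)."""
    patients = sorted({pid for pid, _ in records})
    random.Random(seed).shuffle(patients)
    total = sum(ratios)
    n_train = round(len(patients) * ratios[0] / total)
    n_val = round(len(patients) * ratios[1] / total)
    train_p = set(patients[:n_train])
    val_p = set(patients[n_train:n_train + n_val])
    split = {"train": [], "val": [], "test": []}
    for pid, img in records:
        subset = "train" if pid in train_p else ("val" if pid in val_p else "test")
        split[subset].append((pid, img))
    return split
```

Shuffling patient identifiers rather than individual images is what prevents two photographs of the same eye from landing on both sides of the train/test boundary.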
Table 2 and Table 3 show the experimental results of our deep learning model for automated cataract analysis in undilated and dilated eyes, respectively. For evaluation, we used class average accuracy to measure the performance of the deep learning model. With the preliminary dataset, the deep learning model achieved an average accuracy of 81.25% for undilated images and 74.38% for dilated images. Although the overall accuracy was moderate, it is important to note that the images were acquired using a portable smartphone-based slit lamp under heterogeneous real-world conditions, including undilated eyes. This setting is substantially more challenging than standardized, clinic-based imaging protocols. As a pilot feasibility study, these results demonstrate the practical potential of AI-assisted cataract grading in portable screening scenarios. The confusion matrices also reveal asymmetric error patterns. In both the undilated and dilated eye experiments, false positives (i.e., Group 1 predicted as Group 2) occurred more frequently than false negatives. Consequently, the model demonstrates high sensitivity but relatively lower specificity. From a clinical screening perspective, this asymmetry may be acceptable because missing clinically significant cataracts (false negatives) may delay diagnosis and referral, whereas false positives mainly result in additional confirmatory examination.
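Class average accuracy is the mean of the per-class recalls computed from the confusion matrix. The sketch below uses illustrative counts chosen only to reproduce the reported undilated recalls (62.50% for Group 1, 100% for Group 2); they are not the actual test-set counts:

```python
def class_average_accuracy(confusion):
    """Mean per-class recall; confusion[true][pred] holds the counts."""
    recalls = []
    for c, row in enumerate(confusion):
        if sum(row):
            recalls.append(row[c] / sum(row))
    return sum(recalls) / len(recalls)

# Rows: true class (Group 1, Group 2); columns: predicted class.
confusion = [[5, 3],   # Group 1: recall 5/8 = 62.5%
             [0, 8]]   # Group 2: recall 8/8 = 100%
```

Averaging the two recalls gives (0.625 + 1.0) / 2 = 0.8125, i.e., the 81.25% class average accuracy reported for undilated eyes.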
In particular, performance was slightly lower in dilated eyes compared to undilated eyes. While this may initially seem counterintuitive given that clinical ground truth is established using dilated examinations, it likely stems from differences in how deep learning models process visual information compared to human clinicians. Clinicians rely on dilation to evaluate the entire crystalline lens, particularly peripheral cortical features. For the Swin transformer, however, the undilated pupil effectively acts as a natural anatomical crop. The constricted iris masks out peripheral regions, inadvertently guiding the model’s attention directly to the central lens, which contains consistent, high-yield diagnostic features of opacity. Conversely, pharmacological dilation exposes a significantly larger area of the anterior segment. This introduces complex peripheral light scattering, iris shadows, and a wider surface area for specular reflections. Because the model was trained on whole-image classification without explicit pixel-level segmentation to restrict its focus, this newly exposed peripheral area acts as visual noise. The model may become distracted by these peripheral optical artifacts in dilated images, leading to greater feature variability and slightly lower classification accuracy compared to the naturally constrained field of view in undilated eyes.
In addition, Table 4 presents the model performance on both undilated and dilated eye images, reporting predictive uncertainty (measured via entropy) alongside standard classification metrics for each class label. For undilated eyes, the model achieved a high overall accuracy of 81.25%. Specifically, for Group 2, it attained perfect recall (100%) and a precision of 72.73%, yielding an F1 score of 84.21%. For Group 1, the model demonstrated 100% precision, 62.50% recall, and an F1 score of 76.92%. In the case of dilated eyes, performance declined moderately, with an overall accuracy of 74.38%. Notably, the predictive entropy was 0.3163 for undilated and 0.3376 for dilated eyes, indicating generally low uncertainty in both cases, although slightly higher ambiguity in the dilated condition, which is consistent with its comparatively reduced predictive performance. However, entropy was not used for threshold-based rejection or selective prediction in this pilot study, and, therefore, should be interpreted as a descriptive confidence measure rather than a validated reliability guarantee. Future work will investigate uncertainty calibration and selective prediction strategies to enhance clinical robustness.
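Predictive entropy, as reported above, is the Shannon entropy of the predicted class distribution, averaged over the test set; a minimal sketch:

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (natural log) of one predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def mean_entropy(prob_list):
    """Average predictive entropy over a set of predictions."""
    return sum(predictive_entropy(p) for p in prob_list) / len(prob_list)
```

For a two-class problem, entropy ranges from 0 (a fully confident prediction) to ln 2 ≈ 0.693 (maximal uncertainty), so the reported values of 0.3163 and 0.3376 sit in the lower half of that range.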
Beyond overall accuracy, Table 5 presents the sensitivity, specificity, receiver operating characteristic–area under the curve (ROC–AUC), and calibration metrics (Brier score and expected calibration error with 10 bins). The ROC–AUC was 84.30% (undilated) and 76.55% (dilated), indicating good discriminative ability. Calibration performance was assessed using the Brier score (16.39 and 20.15) and expected calibration error (ECE-BIN10: 15.81 and 17.78), demonstrating reasonable probability calibration in this pilot setting.
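For a binary task, the Brier score is the mean squared error between the predicted probability of the positive class and the binary label, and ECE with 10 bins compares average confidence against average accuracy within equal-width confidence bins. A sketch of both metrics (variable names are ours):

```python
def brier_score(probs, labels):
    """Mean squared error between P(class=1) and the binary label."""
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(labels)

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE over equal-width confidence bins; confidence = max(p, 1-p)."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        conf = max(p, 1.0 - p)
        pred = 1 if p >= 0.5 else 0
        idx = min(int(conf * n_bins), n_bins - 1)  # conf == 1.0 -> last bin
        bins[idx].append((conf, 1.0 if pred == y else 0.0))
    n = len(labels)
    ece = 0.0
    for b in bins:
        if b:
            avg_conf = sum(c for c, _ in b) / len(b)
            avg_acc = sum(a for _, a in b) / len(b)
            ece += (len(b) / n) * abs(avg_conf - avg_acc)
    return ece
```

Both metrics are zero for a perfectly calibrated, perfectly accurate classifier; the percentages in Table 5 correspond to these quantities multiplied by 100.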

3.3. Comparison with Other Baseline Architectures

To strengthen the validity of our findings, in Table 6, we compared the Swin transformer backbone with three widely used image classification architectures: ResNet50 [28], EfficientNet [29], and Vision Transformer (ViT-B/16) [30]. All baseline models were trained and evaluated under the same data split, preprocessing pipeline, and optimization settings to ensure fair comparison.
For undilated eyes, ViT-B/16 achieved the highest overall accuracy (80.26%), followed by EfficientNet (79.65%) and ResNet50 (78.60%). Notably, ViT demonstrated a more balanced performance between the two classes, achieving a Group 2 F1 score of 58.82%, compared to 34.09% for EfficientNet and 32.97% for ResNet50. Both EfficientNet and ResNet50 showed high recall for Group 1 (>92%), but substantially lower recall for Group 2 (25%), indicating a bias toward the majority class.
For dilated eyes, ViT-B/16 again achieved the highest overall accuracy (82.11%), followed by ResNet50 (72.42%) and EfficientNet (69.82%). ResNet50 showed relatively strong performance for Group 2 in dilated eyes (F1 = 65.26%), while EfficientNet demonstrated lower precision for Group 2 (34.88%). ViT achieved high Group 1 recall (97.33%) but comparatively lower recall for Group 2 (25%), suggesting potential class imbalance sensitivity.
Overall, transformer-based architectures (ViT and Swin transformer) demonstrated competitive or superior performance compared with conventional convolutional networks (EfficientNet and ResNet50), particularly in maintaining better balance across class-specific metrics. These results support the suitability of transformer-based backbones for cataract severity classification in portable slit lamp imaging scenarios.

3.4. Heat Maps

Based on the transformer prototypical network, heat maps were generated to visualize regions contributing to cataract severity prediction. The integrated gradients method [31] was adopted for visualization, which is an attribution method for deep networks. As shown in Figure 6, in correctly predicted cases, the attribution maps generally highlight the central crystalline lens region corresponding to cataract opacity. In contrast, misclassified examples tend to exhibit more diffuse or misplaced attention patterns, often associated with specular reflections or peripheral illumination artifacts.
Because pixel-level annotations of cataract regions were not available in this pilot dataset, automated spatial localization metrics could not be computed. To partially compensate, we conducted a manual review of the attribution maps for the test set. Among correctly predicted cases, all attribution maps accurately localized the primary attention to the central crystalline lens. Conversely, in misclassified examples, the model’s attention was frequently misplaced: 57.75% of these errors were driven by heavy focus on specular corneal reflections, 35.21% were misdirected by peripheral illumination artifacts, and 7.04% exhibited a highly diffuse attention pattern lacking any distinct focal region. Therefore, while the attribution maps are presented primarily as qualitative visualizations, this semi-quantitative breakdown illustrates that the model generally learns the correct anatomical features for accurate prediction, whereas misclassifications are strongly linked to identifiable imaging artifacts.
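Integrated gradients attribute a prediction to input features by integrating the model's gradient along a straight path from a baseline to the input [31]. The sketch below applies the method to a toy differentiable function rather than the Swin transformer; in practice the same computation is run against the network's gradient with respect to the image pixels:

```python
def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Approximate integrated gradients along the straight path from
    `baseline` to `x` using a midpoint Riemann sum of the gradient."""
    n = len(x)
    acc = [0.0] * n
    for k in range(steps):
        alpha = (k + 0.5) / steps  # midpoint of the k-th sub-interval
        point = [baseline[i] + alpha * (x[i] - baseline[i]) for i in range(n)]
        g = grad_fn(point)
        for i in range(n):
            acc[i] += g[i]
    return [(x[i] - baseline[i]) * acc[i] / steps for i in range(n)]

# Toy "model": F(x) = (w . x)^2 with analytic gradient 2 (w . x) w.
w = [0.5, -1.0, 2.0]
def F(x):
    return sum(wi * xi for wi, xi in zip(w, x)) ** 2
def grad_F(x):
    s = sum(wi * xi for wi, xi in zip(w, x))
    return [2.0 * s * wi for wi in w]
```

A useful sanity check is the completeness axiom: the attributions sum to F(input) minus F(baseline), which also holds for the pixel-level heat maps overlaid on the slit lamp images.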

4. Discussion

In this pilot study, we describe a novel method to grade nuclear cataracts using slit lamp images taken from a smartphone-based portable slit lamp device in undilated eyes. Our preliminary results suggest that AI was able to grade cataracts in both dilated and undilated eyes, although the accuracy of AI on the undilated eyes was higher. The heat maps generated based on attribution masks provide preliminary evidence that the integrated gradient could correctly identify the anatomical areas of concern.
The benefit of an automated digital grading system for cataracts is clear. Cataract is a universal, age-related opacification of the crystalline lens that affects virtually everyone from around 60 years of age. A digital cataract grading system was shown to be more cost-effective than human grading as early as 1999 [32]. However, such systems have not been fully implemented; challenges include the routine need for dilation and strict requirements for image standardization. While severe cataracts can be diagnosed with low-cost screening methods using a penlight and crude visual acuity assessment, a cost-effective solution for identifying moderate yet visually significant cataracts is still lacking.
Over the past decade, there has been increasing interest in using deep learning to grade nuclear cataracts automatically (Table 7). These studies utilized standardized slit lamp images from existing databases, such as the Singapore Malay Eye Study [13], the Age-Related Eye Disease Study (AREDS) [33], or retrospective datasets of clinical slit lamp images compared against different clinical standards. However, while these studies report impressive performances, they were all trained on existing databases of dilated slit lamp photographs taken under strict lighting conditions, which significantly limit their applicability for use as a community-based screening tool where dedicated slit lamp cameras and exacting lighting requirements are not possible. This set-up—in particular, ocular dilation—is not routinely performed outside of eye clinics. Conversely, alternative approaches for community-based smartphone cataract screening using AI do not yet incorporate slit lamp imaging [34,35,36], even though slit lamp-based photographs are widely regarded as the superior method for assessing nuclear cataracts [37].
To bridge this gap, our work is the first to report automated cataract grading using a portable smartphone slit lamp set-up. In addition, our study was performed on undilated eyes under real-world, pragmatic imaging conditions. While the deep learning backbone itself is not novel, the innovation of this study lies in translating state-of-the-art AI methods to a portable, smartphone-based slit lamp platform and evaluating its feasibility in a non-standardized, undilated clinical setting. If fully deployed, this approach could help overcome key practical barriers to community-based cataract screening and expand access to early detection. Another strength of our study is that a standardized protocol was used for image capture; the ground truth used in this study (PNS) has been shown to be an objective and reproducible way to grade cataracts [18]. Standardized grading scales such as the LOCS III, on the other hand, have been reported to have suboptimal reproducibility [33,43,44]. A further strength of our AI approach lies in its enhanced representation capability, derived from the attention mechanism's ability to model complex patterns specific to the medical domain [24,25]. Recent studies [45,46] have further demonstrated the effectiveness of transformer-based architectures and deep learning models in ophthalmic imaging and medical image classification tasks under real-world conditions, supporting the methodological choices and performance levels observed in our work.
This preliminary study is limited by the small sample size of the images tested; it is likely that the initially developed model has issues with overfitting. In addition, as this study adopts established backbone architectures without introducing new architectural modules, systematic ablation experiments were not performed in the current work. Such analyses will be incorporated in future studies to better quantify the contribution of different model components and training strategies. Likewise, due to the small sample size, the heat maps of typical images were overlaid on other images, leading to issues with attribution mask allocation (Figure 6, bottom row). Larger, multi-center studies with broader demographic representation will be necessary to confirm the robustness and external validity of these findings. Another limitation of our study is the relatively mild cataracts in the captured dataset: all the cataracts identified were below PNS 5, with the largest proportion graded 0 to 1 (88 of 198 eyes, 44.4%). The accuracy of our deep learning model is also lower than that of alternative algorithms published previously [40,47]. However, ours is an initial proof-of-concept study to establish the feasibility of evaluating cataract density in undilated eyes captured on mobile devices, and our initial results are encouraging.
In conclusion, we have demonstrated preliminary results indicating that deep learning could be used to grade cataracts with slit lamp images taken from undilated eyes. Further work is underway to validate this technique in a larger and more diverse group of eyes with cataracts.

Author Contributions

Conceptualization, D.Z.C.; methodology, D.Z.C. and L.Z.; data collection, D.Z.C.; formal analysis, L.Z., C.L. and J.W.; writing—original draft preparation, D.Z.C. and C.L.; writing—review and editing, C.L., J.W. and B.C.O.; supervision, D.Z.C. and B.C.O.; funding acquisition, D.Z.C. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the NUHS Summit Programme in Innovation (NUHSRO/2022/RO5+6/SPIN/02). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National University Health System, Singapore. The APC was funded by National University Hospital, Singapore.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the National Healthcare Group Domain Specific Review Board (DSRB) 2021/00237 on 21 May 2021.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to institutional restrictions.

Acknowledgments

The authors would like to acknowledge the contributions of the following research coordinators: Jiah Ying Goh and Emily Lin.

Conflicts of Interest

The authors declare no competing interests. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Lee, C.M.; Afshari, N.A. The Global State of Cataract Blindness. Curr. Opin. Ophthalmol. 2017, 28, 98–103.
  2. Chylack, L.T.; Leske, M.C.; McCarthy, D.; Khu, P.; Kashiwagi, T.; Sperduto, R. Lens Opacities Classification System II (LOCS II). Arch. Ophthalmol. 1989, 107, 991–997.
  3. Chylack, L.T.; Wolfe, J.K.; Singer, D.M.; Leske, M.C.; Bullimore, M.A.; Bailey, I.L.; Friend, J.; McCarthy, D.; Wu, S.Y. The Lens Opacities Classification System III. The Longitudinal Study of Cataract Study Group. Arch. Ophthalmol. 1993, 111, 831–836.
  4. Klein, B.E.K.; Klein, R.; Linton, K.L.P.; Magli, Y.L.; Neider, M.W. Assessment of Cataracts from Photographs in the Beaver Dam Eye Study. Ophthalmology 1990, 97, 1428–1433.
  5. Sparrow, J.M.; Bron, A.J.; Brown, N.A.; Ayliffe, W.; Hill, A.R. The Oxford Clinical Cataract Classification and Grading System. Int. Ophthalmol. 1986, 9, 207–225.
  6. Wong, W.L.; Li, X.; Li, J.; Cheng, C.-Y.; Lamoureux, E.L.; Wang, J.J.; Cheung, C.Y.; Wong, T.Y. Cataract Conversion Assessment Using Lens Opacity Classification System III and Wisconsin Cataract Grading System. Investig. Ophthalmol. Vis. Sci. 2013, 54, 280–287.
  7. Xu, Y.; Gao, X.; Lin, S.; Wong, D.W.K.; Liu, J.; Xu, D.; Cheng, C.-Y.; Cheung, C.Y.; Wong, T.Y. Automatic Grading of Nuclear Cataracts from Slit-Lamp Lens Images Using Group Sparsity Regression. Med. Image Comput. Comput. Assist. Interv. 2013, 16, 468–475.
  8. Huang, W.; Chan, K.L.; Li, H.; Lim, J.H.; Liu, J.; Wong, T.Y. A Computer Assisted Method for Nuclear Cataract Grading from Slit-Lamp Images Using Ranking. IEEE Trans. Med. Imaging 2011, 30, 94–107.
  9. Cheung, C.Y.; Li, H.; Lamoureux, E.L.; Mitchell, P.; Wang, J.J.; Tan, A.G.; Johari, L.K.; Liu, J.; Lim, J.H.; Aung, T.; et al. Validity of a New Computer-Aided Diagnosis Imaging Program to Quantify Nuclear Cataract from Slit-Lamp Photographs. Investig. Ophthalmol. Vis. Sci. 2011, 52, 1314–1319.
  10. Li, H.; Lim, J.H.; Liu, J.; Wong, D.W.K.; Tan, N.M.; Lu, S.; Zhang, Z.; Wong, T.Y. An Automatic Diagnosis System of Nuclear Cataract Using Slit-Lamp Images. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2009, 2009, 3693–3696.
  11. Ignatowicz, A.A.; Marciniak, T.; Marciniak, E. AI-Powered Mobile App for Nuclear Cataract Detection. Sensors 2025, 25, 3954.
  12. Gao, X.; Lin, S.; Wong, T.Y. Automatic Feature Learning to Grade Nuclear Cataracts Based on Deep Learning. IEEE Trans. Biomed. Eng. 2015, 62, 2693–2701.
  13. Foong, A.W.P.; Saw, S.-M.; Loo, J.-L.; Shen, S.; Loon, S.-C.; Rosman, M.; Aung, T.; Tan, D.T.H.; Tai, E.S.; Wong, T.Y. Rationale and Methodology for a Population-Based Study of Eye Diseases in Malay People: The Singapore Malay Eye Study (SiMES). Ophthalmic Epidemiol. 2007, 14, 25–35.
  14. Yu, Y.; Chen, D.; Tan, C.W.T.; Cheng, C.Y.; Yong, S.S. Mobile Eye-Imaging Device for Detecting Eye Pathologies. 2020. Available online: https://patents.google.com/patent/WO2020060486A1/en?oq=WO2020%2f060486+A1 (accessed on 23 January 2026).
  15. Chen, D.; Ho, Y.; Sasa, Y.; Lee, J.; Yen, C.C.; Tan, C. Machine Learning-Guided Prediction of Central Anterior Chamber Depth Using Slit Lamp Images from a Portable Smartphone Device. Biosensors 2021, 11, 182.
  16. Ambrosio, R. Oculus Pentacam Interpretation Guide, 3rd ed.; Oculus: Menlo Park, CA, USA. Available online: https://www.pentacam.com/fileadmin/user_upload/pentacam.de/downloads/interpretations-leitfaden/interpretation_guideline_3rd_edition_0915.pdf (accessed on 23 January 2026).
  17. Panthier, C.; de Wazieres, A.; Rouger, H.; Moran, S.; Saad, A.; Gatinel, D. Average Lens Density Quantification with Swept-Source Optical Coherence Tomography: Optimized, Automated Cataract Grading Technique. J. Cataract. Refract. Surg. 2019, 45, 1746–1752.
  18. Pan, A.-P.; Wang, Q.-M.; Huang, F.; Huang, J.-H.; Bao, F.-J.; Yu, A.-Y. Correlation Among Lens Opacities Classification System III Grading, Visual Function Index-14, Pentacam Nucleus Staging, and Objective Scatter Index for Cataract Assessment. Am. J. Ophthalmol. 2015, 159, 241–247.e2.
  19. Pei, X.; Bao, Y.; Chen, Y.; Li, X. Correlation of Lens Density Measured Using the Pentacam Scheimpflug System with the Lens Opacities Classification System III Grading Score and Visual Acuity in Age-Related Nuclear Cataract. Br. J. Ophthalmol. 2008, 92, 1471–1475.
  20. Mayer, W.J.; Klaproth, O.K.; Hengerer, F.H.; Kohnen, T. Impact of Crystalline Lens Opacification on Effective Phacoemulsification Time in Femtosecond Laser-Assisted Cataract Surgery. Am. J. Ophthalmol. 2014, 157, 426–432.e1.
  21. Nixon, D.R. Preoperative Cataract Grading by Scheimpflug Imaging and Effect on Operative Fluidics and Phacoemulsification Energy. J. Cataract. Refract. Surg. 2010, 36, 242–246.
  22. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Curran Associates Inc.: Red Hook, NY, USA, 2017; pp. 6000–6010.
  23. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer Using Shifted Windows. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV); IEEE: New York, NY, USA, 2021; pp. 9992–10002.
  24. Tang, Y.; Yang, D.; Li, W.; Roth, H.R.; Landman, B.; Xu, D.; Nath, V.; Hatamizadeh, A. Self-Supervised Pre-Training of Swin Transformers for 3D Medical Image Analysis. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); IEEE: New York, NY, USA, 2022; pp. 20698–20708.
  25. Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; Wang, M. Swin-Unet: Unet-Like Pure Transformer for Medical Image Segmentation; Karlinsky, L., Michaeli, T., Nishino, K., Eds.; Springer Nature: Cham, Switzerland, 2023; Volume 13803, pp. 205–218.
  26. Ooi, B.C.; Tan, K.-L.; Wang, S.; Wang, W.; Cai, Q.; Chen, G.; Gao, J.; Luo, Z.; Tung, A.K.H.; Wang, Y.; et al. SINGA: A Distributed Deep Learning Platform. In Proceedings of the 23rd ACM International Conference on Multimedia, Brisbane, Australia, 26–30 October 2015; ACM: New York, NY, USA, 2015; pp. 685–688.
  27. Luo, Z.; Yeung, S.H.; Zhang, M.; Zheng, K.; Zhu, L.; Chen, G.; Fan, F.; Lin, Q.; Ngiam, K.Y.; Chin Ooi, B. MLCask: Efficient Management of Component Evolution in Collaborative Data Analytics Pipelines. In Proceedings of the 2021 IEEE 37th International Conference on Data Engineering (ICDE); IEEE: New York, NY, USA, 2021; pp. 1655–1666.
  28. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); IEEE: New York, NY, USA, 2016; pp. 770–778.
  29. Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019.
  30. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image Is Worth 16 × 16 Words: Transformers for Image Recognition at Scale. arXiv 2021.
  31. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; IEEE: New York, NY, USA, 2017; pp. 618–626.
  32. Dimock, J.; Robman, L.D.; McCarty, C.A.; Taylor, H.R. Cost-Effectiveness of Digital Cataract Assessment. Aust. N. Zealand J. Ophthalmol. 1999, 27, 208–210.
  33. Chew, E.Y.; Kim, J.; Sperduto, R.D.; Datiles, M.B.; Coleman, H.R.; Thompson, D.J.S.; Milton, R.C.; Clayton, J.A.; Hubbard, L.D.; Danis, R.P.; et al. Evaluation of the Age-Related Eye Disease Study Clinical Lens Grading System AREDS Report No. 31. Ophthalmology 2010, 117, 2112–2119.e3.
  34. Ganokratanaa, T.; Ketcham, M.; Pramkeaw, P. Advancements in Cataract Detection: The Systematic Development of LeNet-Convolutional Neural Network Models. J. Imaging 2023, 9, 197.
  35. Pathak, S.; Raj, R.; Singh, K.; Verma, P.K.; Kumar, B. Development of Portable and Robust Cataract Detection and Grading System by Analyzing Multiple Texture Features for Tele-Ophthalmology. Multimed. Tools Appl. 2022, 81, 23355–23371.
  36. Janti, S.S.; Saluja, R.; Tiwari, N.; Kolavai, R.R.; Mali, K.; Arora, A.J.; Johar, A.; Sahoo, D.P.; Sahithi, E. Evaluation of the Clinical Impact of a Smartphone Application for Cataract Detection. Cureus 2024, 16, e71467.
  37. Goh, J.H.L.; Lei, X.; Chee, M.-L.; Qian, Y.; Yu, M.; Rim, T.H.; Nusinovici, S.; Chen, D.Z.; Koh, K.H.; Yew, S.M.E.; et al. Multi-Comparison of Different Ocular Imaging Modality-Based Deep Learning Models for Visually Significant Cataract Detection. Ophthalmol. Sci. 2025, 5, 100837.
  38. Wu, X.; Huang, Y.; Liu, Z.; Lai, W.; Long, E.; Zhang, K.; Jiang, J.; Lin, D.; Chen, K.; Yu, T.; et al. Universal Artificial Intelligence Platform for Collaborative Management of Cataracts. Br. J. Ophthalmol. 2019, 103, 1553–1560.
  39. Son, K.Y.; Ko, J.; Kim, E.; Lee, S.Y.; Kim, M.-J.; Han, J.; Shin, E.; Chung, T.-Y.; Lim, D.H. Deep Learning-Based Cataract Detection and Grading from Slit-Lamp and Retro-Illumination Photographs: Model Development and Validation Study. Ophthalmol. Sci. 2022, 2, 100147.
  40. Lu, Q.; Wei, L.; He, W.; Zhang, K.; Wang, J.; Zhang, Y.; Rong, X.; Zhao, Z.; Cai, L.; He, X.; et al. Lens Opacities Classification System III-Based Artificial Intelligence Program for Automatic Cataract Grading. J. Cataract. Refract. Surg. 2022, 48, 528–534.
  41. Keenan, T.D.L.; Chen, Q.; Agrón, E.; Tham, Y.-C.; Goh, J.H.L.; Lei, X.; Ng, Y.P.; Liu, Y.; Xu, X.; Cheng, C.-Y.; et al. DeepLensNet: Deep Learning Automated Diagnosis and Quantitative Classification of Cataract Type and Severity. Ophthalmology 2022, 129, 571–584.
  42. The Age-Related Eye Disease Study Research Group. The Age-Related Eye Disease Study (AREDS) System for Classifying Cataracts from Photographs: AREDS Report No. 4. Am. J. Ophthalmol. 2001, 131, 167–175.
  43. Gali, H.E.; Sella, R.; Afshari, N.A. Cataract Grading Systems: A Review of Past and Present. Curr. Opin. Ophthalmol. 2019, 30, 13–18.
  44. Tan, A.C.S.; Wang, J.J.; Lamoureux, E.L.; Wong, W.; Mitchell, P.; Li, J.; Tan, A.G.; Wong, T.Y. Cataract Prevalence Varies Substantially with Assessment Systems: Comparison of Clinical and Photographic Grading in a Population-Based Study. Ophthalmic Epidemiol. 2011, 18, 164–170.
  45. Wan, Z.; Zhang, J.; Wang, Y.; Lin, H.; Wang, Y.; Mi, Z.; Yang, X.; Fu, X.; Wang, H. Eye-Based Emotion Recognition via Event-Driven Sparse Transformers. In Proceedings of the 33rd ACM International Conference on Multimedia, Dublin, Ireland, 27–31 October 2025; ACM: New York, NY, USA, 2025; pp. 4659–4668.
  46. Iqbal, M.A.; Kim, J.; Han, I.; Kyun Kim, S. Attention-Driven Feature Fusion Integrating Swin Transformer and CNN Models for Improved Ocular Disease Classification. In Proceedings of the 2024 International Conference on Engineering and Emerging Technologies (ICEET); IEEE: New York, NY, USA, 2024; pp. 1–6.
  47. Zhang, H.; Niu, K.; Xiong, Y.; Yang, W.; He, Z.; Song, H. Automatic Cataract Grading Methods Based on Deep Learning. Comput. Methods Programs Biomed. 2019, 182, 104978.
Figure 1. Photograph of the prototype device set-up and slit lamp image capture in a study participant. This photo was taken in a bright room for illustration purposes; in the study, photos were captured in a dim room to simulate mesopic conditions.
Figure 2. Sample of slit lamp photographs taken with a smartphone camera with the prototype device in: (a) an undilated eye; and (b) the same eye after dilation.
Figure 3. The schematic flow of the adopted deep learning model. PNS, Pentacam nuclear staging.
Figure 4. Training loss curves of the Swin transformer model over 200 epochs: (a) loss curves for undilated eye images; and (b) loss curves for dilated eye images. Both plots demonstrate stable optimization and convergence prior to completion of the full training schedule.
Figure 5. Slit lamp photographs taken: (a) before; and (b) after dilation with the (c) corresponding Pentacam nuclear staging (PNS) scores. A more positive PNS score indicates higher cataract density.
Figure 6. Heat map demonstrating the original image, attribution mask, and overlay of the attribution mask over the image with integrated gradient method: (a) an undilated eye with correct attribution on the top row, and incorrect attribution on the bottom row; and (b) a dilated eye with correct attribution on the top row, and incorrect attribution on the bottom row.
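The attribution masks in Figure 6 were produced with the integrated gradient method. As a minimal, self-contained sketch of the underlying computation, the snippet below approximates the integrated-gradients path integral with a Riemann sum on a toy analytically differentiable function (a linear model), not the study's Swin transformer; the function `integrated_gradients` and the toy weights `w` are illustrative assumptions, and a real pipeline would obtain gradients from the trained network via automatic differentiation.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Approximate IG_i(x) = (x_i - x'_i) * integral_0^1 dF/dx_i(x' + a(x - x')) da
    with a midpoint Riemann sum over `steps` points along the straight path."""
    x = np.asarray(x, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints of [0, 1]
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_fn(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Toy linear "model" F(x) = w . x, whose gradient is constant (= w),
# so IG recovers w * (x - baseline) exactly.
w = np.array([0.5, -1.0, 2.0])
f = lambda x: float(w @ x)
grad = lambda x: w

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)
attr = integrated_gradients(grad, x, baseline)
# Completeness axiom: attributions sum to F(x) - F(baseline).
print(attr, attr.sum(), f(x) - f(baseline))
```

For the linear toy model the attributions are exact; for a deep network the same sum is computed over backpropagated gradients, and the completeness check (attributions summing to the output difference) is a useful sanity test of the step count.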
Table 1. Data statistics of collected undilated dataset.

| PNS Label  | Group 1 (PNS Score < 2) | Group 2 (PNS Score ≥ 2) | Total |
|------------|-------------------------|-------------------------|-------|
| Training   | 320                     | 390                     | 710   |
| Validation | 40                      | 40                      | 80    |
| Test       | 80                      | 80                      | 160   |
| Total      | 440                     | 510                     | 950   |

PNS, Pentacam nuclear staging.
Table 2. Performance of developed deep learning model on automated cataract analysis in undilated eyes.

| Predicted Label         | PNS Label: Group 1 (PNS Score < 2) | PNS Label: Group 2 (PNS Score ≥ 2) | Average Accuracy (%) |
|-------------------------|------------------------------------|------------------------------------|----------------------|
| Group 1 (PNS Score < 2) | 50                                 | 0                                  |                      |
| Group 2 (PNS Score ≥ 2) | 30                                 | 80                                 |                      |
| Model Accuracy (%)      | 62.50                              | 100.00                             | 81.25                |

PNS, Pentacam nuclear staging.
Table 3. Performance of developed deep learning model on automated cataract analysis in dilated eyes.

| Predicted Label         | PNS Label: Group 1 (PNS Score < 2) | PNS Label: Group 2 (PNS Score ≥ 2) | Average Accuracy (%) |
|-------------------------|------------------------------------|------------------------------------|----------------------|
| Group 1 (PNS Score < 2) | 44                                 | 5                                  |                      |
| Group 2 (PNS Score ≥ 2) | 36                                 | 75                                 |                      |
| Model Accuracy (%)      | 55.00                              | 93.75                              | 74.38                |

PNS, Pentacam nuclear staging.
Table 4. Detailed metrics of developed deep learning model on automated cataract analysis.

|                | Uncertainty | Accuracy (%) | Group 2 Precision (%) | Group 2 Recall (%) | Group 2 F1 (%) | Group 1 Precision (%) | Group 1 Recall (%) | Group 1 F1 (%) |
|----------------|-------------|--------------|-----------------------|--------------------|----------------|-----------------------|--------------------|----------------|
| Undilated Eyes | 0.3163      | 81.25        | 72.73                 | 100.00             | 84.21          | 100.00                | 62.50              | 76.92          |
| Dilated Eyes   | 0.3376      | 74.38        | 67.57                 | 93.75              | 78.53          | 89.80                 | 55.00              | 68.22          |

Group 1, PNS score < 2; Group 2, PNS score ≥ 2; PNS, Pentacam nuclear staging.
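The per-class metrics in Table 4 follow directly from the confusion matrices in Tables 2 and 3. As a sketch (the helper `metrics_from_confusion` is not from the paper), the script below treats Group 2 (PNS score ≥ 2) as the positive class and reproduces the Group 2 columns and overall accuracy:

```python
def metrics_from_confusion(tp, fp, fn, tn):
    """Precision, recall, F1, and accuracy in %, with the chosen class as positive."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return tuple(round(100 * v, 2) for v in (precision, recall, f1, accuracy))

# Undilated eyes (Table 2), Group 2 as positive:
# TP = 80 (Group 2 predicted Group 2), FP = 30 (Group 1 predicted Group 2),
# FN = 0 (Group 2 predicted Group 1), TN = 50 (Group 1 predicted Group 1).
print(metrics_from_confusion(tp=80, fp=30, fn=0, tn=50))
# → (72.73, 100.0, 84.21, 81.25), matching the undilated row of Table 4

# Dilated eyes (Table 3), Group 2 as positive: TP = 75, FP = 36, FN = 5, TN = 44.
print(metrics_from_confusion(tp=75, fp=36, fn=5, tn=44))
# → (67.57, 93.75, 78.53, 74.38), matching the dilated row of Table 4
```

Swapping the roles of the classes (Group 1 as positive) reproduces the Group 1 columns in the same way.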
Table 5. Discrimination and calibration performance on automated cataract analysis.

|                | Sensitivity (%) | Specificity (%) | ROC–AUC (%) | Brier Score (%) | ECE (BIN10) (%) |
|----------------|-----------------|-----------------|-------------|-----------------|-----------------|
| Undilated Eyes | 100.00          | 62.50           | 84.30       | 16.39           | 15.81           |
| Dilated Eyes   | 93.75           | 55.00           | 76.55       | 20.15           | 17.78           |

ECE (BIN10), expected calibration error with 10 bins; ROC–AUC, receiver operating characteristic–area under the curve.
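The calibration metrics in Table 5 are computed from predicted probabilities and binary labels. The paper does not spell out its exact conventions, so the sketch below makes two common assumptions: the Brier score is the mean squared difference between the positive-class probability and the binary label, and ECE uses ten equal-width bins over the predicted-class confidence; the function names and the toy data are illustrative, not the study's.

```python
import numpy as np

def brier_score(probs, labels):
    """Mean squared error between positive-class probabilities and binary labels."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    return float(np.mean((probs - labels) ** 2))

def ece(probs, labels, n_bins=10):
    """Expected calibration error over equal-width confidence bins.
    Confidence is the probability assigned to the predicted class."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=int)
    preds = (probs >= 0.5).astype(int)
    conf = np.where(preds == 1, probs, 1.0 - probs)
    correct = (preds == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Last bin is closed on the right so confidence 1.0 is counted.
        mask = (conf >= lo) & ((conf < hi) if hi < 1.0 else (conf <= hi))
        if mask.any():
            total += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
    return float(total)

# Toy example, not study data:
probs = [0.9, 0.1, 0.8, 0.3]
labels = [1, 0, 1, 0]
print(brier_score(probs, labels))  # ≈ 0.0375
print(ece(probs, labels))          # ≈ 0.175 under these assumptions
```

A lower Brier score and lower ECE both indicate better-calibrated probabilities; Table 5 reports them as percentages.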
Table 6. Performance comparison of baseline architectures (EfficientNet, ResNet50, and ViT-B/16) on automated cataract analysis.

ResNet50

|                | Accuracy (%) | Group 2 Precision (%) | Group 2 Recall (%) | Group 2 F1 (%) | Group 1 Precision (%) | Group 1 Recall (%) | Group 1 F1 (%) |
|----------------|--------------|-----------------------|--------------------|----------------|-----------------------|--------------------|----------------|
| Undilated Eyes | 78.60        | 48.39                 | 25.00              | 32.97          | 82.28                 | 92.89              | 87.27          |
| Dilated Eyes   | 72.42        | 88.57                 | 51.67              | 65.26          | 78.40                 | 81.22              | 79.79          |

EfficientNet

|                | Accuracy (%) | Group 2 Precision (%) | Group 2 Recall (%) | Group 2 F1 (%) | Group 1 Precision (%) | Group 1 Recall (%) | Group 1 F1 (%) |
|----------------|--------------|-----------------------|--------------------|----------------|-----------------------|--------------------|----------------|
| Undilated Eyes | 79.65        | 53.57                 | 25.00              | 34.09          | 82.49                 | 94.22              | 87.97          |
| Dilated Eyes   | 69.82        | 34.88                 | 50.00              | 41.10          | 84.92                 | 75.11              | 79.72          |

ViT-B/16

|                | Accuracy (%) | Group 2 Precision (%) | Group 2 Recall (%) | Group 2 F1 (%) | Group 1 Precision (%) | Group 1 Recall (%) | Group 1 F1 (%) |
|----------------|--------------|-----------------------|--------------------|----------------|-----------------------|--------------------|----------------|
| Undilated Eyes | 80.26        | 71.43                 | 50.00              | 58.82          | 87.65                 | 94.67              | 91.03          |
| Dilated Eyes   | 82.11        | 71.43                 | 25.00              | 37.04          | 82.95                 | 97.33              | 89.57          |

Group 1, PNS score < 2; Group 2, PNS score ≥ 2; PNS, Pentacam nuclear staging.
Table 7. Literature review on automatic nuclear cataract grading using slit lamp images from 2015 onwards.

| Author (Year)             | Dataset                                               | Clinical Reference                                   | Performance                                                  |
|---------------------------|-------------------------------------------------------|------------------------------------------------------|--------------------------------------------------------------|
| Gao et al. (2015) [12]    | 5378 slit lamp images of dilated eyes                 | Wisconsin cataract grading system [4]                | 70.7% exact agreement ratio, 88.4% decimal grading error ≤ 1.0 |
| Wu et al. (2019) [38]     | 37,638 slit lamp images of dilated and undilated eyes | LOCS II [2] (sub-divided into severe or mild)        | AUC > 91%                                                    |
| Son et al. (2022) [39]    | 1355 slit lamp images of dilated eyes                 | LOCS III [3]                                         | AUC 95.7%                                                    |
| Lu et al. (2022) [40]     | 1039 slit lamp images of dilated eyes                 | LOCS III [3] (sub-divided into severe or mild)       | AUC between 97.7% and 98.3%                                  |
| Keenan et al. (2022) [41] | 6333 slit lamp images of dilated eyes                 | AREDS system [42]                                    | MSE = 0.23                                                   |
| Goh et al. (2025) [37]    | 12,067 slit lamp images of dilated eyes               | Wisconsin cataract grading system [4]                | AUC between 92.3% and 93.4%                                  |

AUC, area under the receiver operating characteristic curve; LOCS, Lens Opacities Classification System; MSE, mean squared error.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Chen, D.Z.; Liu, C.; Wu, J.; Zhu, L.; Ooi, B.C. Prediction of Cataract Severity Using Slit Lamp Images from a Portable Smartphone Device: A Pilot Study. Sensors 2026, 26, 1954. https://doi.org/10.3390/s26061954


