Article

Deep Learning-Based Analysis of Ocular Anterior Segment Diseases from Patient-Self-Captured Smartphone Images

by Byoungyoung Gu 1, Mark Christopher 2,3, Su-Ho Lim 4 and Sally L. Baxter 2,3,*
1 Department of Ophthalmology, Gimcheon Medical Center, Gimcheon-si 39579, Republic of Korea
2 Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA 92093, USA
3 Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, CA 92093, USA
4 Department of Ophthalmology, Daegu Veterans Health Service Medical Center, Daegu 42835, Republic of Korea
* Author to whom correspondence should be addressed.
Informatics 2025, 12(1), 2; https://doi.org/10.3390/informatics12010002
Submission received: 25 August 2024 / Revised: 11 December 2024 / Accepted: 26 December 2024 / Published: 31 December 2024

Abstract

The goal of this study is to evaluate the Eye Home Clinic app (ver 1.0), which uses deep learning models to assess the quality of self-captured anterior segment images and detect anterior segment diseases using only the patient’s smartphone. Images undergo quality assessment by the ‘DL-Image Eligibility’ model, and usable images are analyzed by the ‘DL-Diagnosis’ model to detect one of several anterior segment diseases. A dataset of 1006 images was used for training, and a dataset of 520 images was used for validation. The ‘DL-Image Eligibility’ model achieved an AUC of 0.87, with an accuracy of 0.83. The ‘DL-Diagnosis’ model had higher specificity (0.97) but lower sensitivity (0.29), with an AUC of 0.62. While the app shows potential for anterior segment telemedicine, improvements are needed in the DL model’s sensitivity for detecting abnormalities. Oversampling techniques, transfer learning, and dataset expansion should be considered to enhance performance in future research. Based on data from users in over 100 countries, significant differences in photo quality among user groups were also identified: iOS users, younger users (21–40 years), and users reporting eye symptoms submitted more usable images. This study underscores the importance of user education and technological advancements to optimize smartphone-based ocular diagnostics.

1. Introduction

Imaging plays a pivotal role in ophthalmology in diagnosing and managing eye disease, and the most commonly used imaging devices in ophthalmology have become essential tools for clinical eye care [1,2,3]. These devices, though, are typically expensive, stationary, and require trained professionals to capture high-quality images [4]. Recently, the wide availability of smartphones, as well as their increasing camera quality and computational power, have given clinicians an opportunity to improve access to care [5]. Smartphones have previously been investigated as a platform for ophthalmic testing and data collection for tasks, including visual acuity testing, color vision testing, and retinal and anterior segment imaging [6,7,8,9,10,11,12,13]. Smartphones as a platform for ophthalmic telemedicine could serve as a cost-effective way to deliver care to remote or underserved areas [5,14,15,16,17]. For anterior segment imaging, smartphone-based image acquisition has been reported on previously, and high-quality images can be captured by patients using their personal devices [14,18,19]. Typically, these solutions involve some kind of optical attachment to improve image quality or aid in capture [15]. However, increasing smartphone camera quality and computational tools to filter images or improve image quality provide the opportunity to capture usable images with no additional devices or attachments.
Artificial intelligence (AI) has emerged as a rapidly advancing technology with a large impact across many fields, including medicine [20]. In particular, deep learning (DL) and convolutional neural networks (CNNs) have seen widespread use in processing, interpreting, and classifying medical images [21]. Compared to most other medical specialties, ophthalmology has seen extensive research and development of DL-based tools to process images, predict visual function, and detect both anterior and posterior segment eye disease [17,22,23,24,25,26,27,28]. One requirement for developing effective medical DL systems, however, is large, diverse datasets on which models can be trained and evaluated. This means that a large number of high-quality images and expert annotations need to be collected from diverse patient populations under varying conditions across the spectrum of severity for the disease of interest [29,30]. Given the ubiquity of smartphones in many places across the globe, a smartphone-based telemedicine tool could be used to deliver DL-based screening recommendations to patients while also providing the opportunity to collect large, diverse datasets for further DL model improvement and validation [31,32].
For this work, we introduce a smartphone application, Eye Home Clinic, where users can obtain both DL and ophthalmologist evaluations of anterior segment eye disease based on self-collected smartphone images. The Eye Home Clinic application is a part of a pilot anterior segment telemedicine project. Through Eye Home Clinic, patients can capture and submit an image of their eye to receive screening recommendations regarding a number of anterior segment diseases. Unlike much previous work, no additional hardware or smartphone attachments are needed [33,34]. This provides a low-cost screening tool with the potential for broad adoption and a large impact. In this study, we explore whether smartphone photographs analyzed using DL models are a suitable approach for detecting a specific set of anterior segment diseases. We aimed to evaluate the feasibility of applying the smartphone as a portable, DL-based anterior segment photography device and investigate limitations and considerations for ongoing implementation and adoption.

2. Materials and Methods

2.1. Program Overview and Ethical Compliance

This study adhered to the principles of the Declaration of Helsinki and was approved by the Institutional Bioethics Committee of the Korea National Institute for Bioethics Policy. Users were required to provide active consent by agreeing to specific terms and privacy policies before accessing the app. Participant privacy was ensured through the de-identification of individual information.
The mobile application was designed to be a user-friendly system, ensuring that even people with limited technical expertise could easily access it. As outlined in Figure 1, the process involves capturing two close-up images of the eyes using the smartphone’s rear camera: one at 3× zoom and another at 5× zoom. For consistent analysis, only the 5× zoom photos are evaluated by the deep learning model. The DL-Image Eligibility model, which assesses the clinical usability of the photos, is applied to the captured images. If the model’s predicted usability falls below a given threshold (0.8 for the analyses presented here), the app informs the user that the photo does not meet quality criteria and prompts them to retake it. Once a 5× zoom image meets the criteria set by the DL-Image Eligibility model, the DL-Diagnosis model, which classifies photos as normal or abnormal, analyzes it to provide a preliminary classification.
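For illustration, a minimal sketch of this two-stage gating logic is shown below. Only the 0.8 usability threshold comes from the text; the function name, return fields, model handles, and the 0.5 diagnosis cutoff are our own assumptions, not the app’s actual implementation.

```python
import numpy as np
import tensorflow as tf

ELIGIBILITY_THRESHOLD = 0.8  # usability cutoff used for the analyses in the text

def analyze_photo(image_5x: np.ndarray,
                  eligibility_model: tf.keras.Model,
                  diagnosis_model: tf.keras.Model) -> dict:
    """Two-stage screening: quality gate first, diagnosis only if the image is usable."""
    batch = image_5x[np.newaxis, ...]  # add a batch dimension
    usability = float(eligibility_model.predict(batch, verbose=0)[0, 0])
    if usability < ELIGIBILITY_THRESHOLD:
        # Below the quality cutoff: ask the user to retake the photo.
        return {"usable": False, "message": "Photo quality too low; please retake."}
    p_abnormal = float(diagnosis_model.predict(batch, verbose=0)[0, 0])
    # The 0.5 decision threshold is an assumption; the paper does not report it.
    return {"usable": True, "abnormal": p_abnormal >= 0.5,
            "abnormal_probability": p_abnormal}
```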
The app leverages deep learning models to swiftly process uploaded images, delivering initial results to users within seconds to a minute. A comprehensive report, including the final classification after an ophthalmologist reviews both the 3× and 5× zoom images, is delivered within 24 h. Users receive a notification alert on their smartphones when this final report, containing a detailed assessment from the ophthalmologist, is ready for review.

2.2. Image Acquisition and Labeling

To ensure high accuracy and reliability in the training data for our deep learning models, rigorous procedures were established to generate ground-truth labels evaluating the clinical usefulness of photographs. The labeling process was overseen by two Korean board-certified ophthalmologists, BG and SL, each with over a decade of clinical experience. These ophthalmologists independently reviewed each submitted image to assess both clinical usefulness and the presence of abnormalities. This review process was conducted independently of the initial assessments by the deep learning models and of the initial clinical judgment by the ophthalmologist, which had been completed within 24 h of the user requesting assistance.
To perform image quality assessment, a detailed grading rubric adapted from a previous study, ranging from 0 (lowest quality) to 5 (highest quality), was used [16]:
0. Fail: The photo does not capture a real human eye (e.g., wearing glasses, a photo of a painted image, or animal eyes);
1. Poor: Significant defocus/blur, exposure issues, or artifacts affecting more than half of the image, rendering it inadequate for interpretation;
2. Bad: Serious defocus/blur or exposure issues or artifacts affecting around half of the image, including omission of the entire cornea, making it insufficient for interpretation;
3. Good: Defocus/blur, exposure issues, or artifacts affecting a third to less than half of the image, allowing for possible interpretation;
4. Very Good: Minor issues affecting less than a third of the image, with no impediment to interpretation;
5. Excellent: No discernible issues with image quality or interpretation.
Images with a score of 3 or higher were categorized as ‘usable’, while those scoring below 3 were considered ‘unusable’. Figure 2 showcases examples of training set images classified based on this scale.
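In code form, this binarization is a simple threshold; the sketch below (names are our own) mirrors the ‘usable if score ≥ 3’ rule described above.

```python
# Rubric labels from the six-step scale above; the binarization rule is from the text.
QUALITY_SCALE = {0: "Fail", 1: "Poor", 2: "Bad", 3: "Good", 4: "Very Good", 5: "Excellent"}

def binarize_quality(score: int) -> str:
    """Scores of 3 or higher are 'usable'; scores below 3 are 'unusable'."""
    return "usable" if score >= 3 else "unusable"

assert binarize_quality(3) == "usable" and binarize_quality(2) == "unusable"
```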
After quality assessment, two ophthalmologists independently reviewed each image previously assessed as ‘usable’ (score ≥ 3), classifying each as ‘normal’ or ‘abnormal’ in one of several categories:
Conjunctival Abnormalities: Including abnormal conjunctival injection, swelling, subconjunctival hemorrhage, pterygium, pinguecula, etc.;
Corneal Abnormalities: Encompassing inflammatory states, opacities, dystrophic lesions, etc.;
Ocular Surface and Eyelid Abnormalities: Covering inflammatory states (e.g., dermatitis), entropion, ectropion, eyelid masses, blepharitis, etc.;
Iris Abnormalities: Including deficits and other detectable issues;
Lens Abnormalities: Cataracts and the pseudophakic state.
Additionally, the ophthalmologists classified a photograph as abnormal if a notable abnormal lesion was present, based on their clinical experience. While multiple pathologies can exist in a single image, for the purposes of this analysis (i.e., determining whether it is normal or abnormal), only the most prominent abnormality was identified per photo. This approach was adopted to maintain clarity and consistency in the classification process. Figure 3 shows examples of training set images classified by an ophthalmologist as abnormal.
During the labeling process, the ophthalmologists primarily reviewed photographs captured at 5× zoom to assess image quality and detect abnormalities, mirroring the deep learning model’s approach. However, if the image quality was unclear or the presence of abnormalities was uncertain, they referred to photographs captured at 3× zoom for additional context. This approach ensured more accurate assessments.

2.3. Software Implementation

The Eye Home Clinic app is a cross-platform mobile application developed using Unity (Unity Technologies, San Francisco, CA, USA). Unity, a game engine software, was chosen for its benefits in enabling integrated development for both iOS and Android platforms [35,36]. Both deep learning models utilize the EfficientNet architecture proposed by Google for feature extraction. This model was enhanced based on the EfficientNet classification and recognition network [37]. TensorFlow and Keras, widely used frameworks in deep learning, were employed to develop the models [38,39].
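As a rough sketch, a binary classifier of this kind can be assembled in Keras from an EfficientNet backbone as below; the specific variant (B0), input size, and head design are assumptions, since the paper does not report them.

```python
import tensorflow as tf

def build_binary_classifier(input_shape=(224, 224, 3)) -> tf.keras.Model:
    """EfficientNet backbone with a single sigmoid output for binary classification."""
    base = tf.keras.applications.EfficientNetB0(
        include_top=False, weights="imagenet", input_shape=input_shape)
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    output = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    return tf.keras.Model(inputs=base.input, outputs=output)
```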

2.4. Dataset Collection

Using the Eye Home Clinic app, a training dataset and a hold-out dataset for model evaluation were collected. The training dataset consisted of 1006 de-identified ocular anterior segment images collected in JPEG format from the application between 7 May 2022 and 22 January 2023. The entire training dataset was used to train the image quality assessment model: the DL-Image Eligibility model. A subset of this dataset consisting of the images graded as ‘usable’ by the ophthalmologists (n = 313 images) was used to train the anterior segment abnormality detection model: the DL-Diagnosis model.
The hold-out dataset consisted of 520 photographs collected between 23 January 2023 and 4 May 2024 and was used to evaluate the performance of both models (DL-Image Eligibility, DL-Diagnosis). These images were collected after the latest update to the deep learning model.

2.5. Model Training

Both models were trained to perform binary classification of the images. The DL-Image Eligibility model was trained to categorize images as ‘usable’ or ‘unusable’ based on the ophthalmologists’ grading of image quality as described above. The DL-Diagnosis model was trained to classify images as ‘normal’ if no anterior segment disease or lesions were apparent in the images or ‘abnormal’ if anterior segment disease could be identified.
The training dataset was randomly divided into 80% training, 10% validation, and 10% test subsets. To increase model robustness, data augmentation in the form of horizontal flipping was applied to mimic fellow eye orientation [40]. The models were trained using the Adam optimizer with an initial learning rate of 0.001 and a binary cross-entropy loss function. Models were trained for 50 epochs using a batch size of 32.
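A minimal Keras training sketch consistent with the reported configuration (Adam at 0.001, binary cross-entropy, 50 epochs, batch size 32, horizontal-flip augmentation) might look like the following; the data here are random stand-ins for the real image splits, and `build_binary_classifier` refers to the illustrative backbone sketch in Section 2.3.

```python
import numpy as np
import tensorflow as tf

model = build_binary_classifier()  # illustrative EfficientNet sketch from Section 2.3

# Stand-in data; real inputs are the 80%/10%/10% image splits (shapes assumed).
x = np.random.rand(64, 224, 224, 3).astype("float32")
y = np.random.randint(0, 2, size=(64, 1)).astype("float32")
train_ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(32)  # batch size 32
val_ds = tf.data.Dataset.from_tensor_slices((x[:16], y[:16])).batch(32)

# Horizontal flipping mimics fellow-eye orientation, as described in the text.
augment = tf.keras.layers.RandomFlip("horizontal")
train_ds = train_ds.map(lambda imgs, labels: (augment(imgs, training=True), labels))

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.fit(train_ds, validation_data=val_ds, epochs=50)
```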
The trained models were exported using the TensorFlowLite format (TensorFlow Version 2.17.0, Google) and ONNX formats using Barracuda (Unity ML-Agents Barracuda Version 2.1.0, Unity) [41,42]. These formats enabled the integration of the model into the Eye Home Clinic app, which was published on both the Apple Store and Google Play Store. The deep learning model has undergone periodic updates approximately every 2–3 months as additional training data have been accumulated, with the most recent update in January 2023.
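For the TensorFlow Lite half of the export, the standard converter call looks like this (the output filename is hypothetical, and `model` is the trained Keras model from the sketch above); exporting to ONNX for Barracuda would typically go through a separate tool such as tf2onnx, which is not shown.

```python
import tensorflow as tf

# Convert the trained Keras model for on-device inference in the mobile app.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()
with open("dl_image_eligibility.tflite", "wb") as f:
    f.write(tflite_bytes)
```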

2.6. Additional Data Acquisition

In addition to eye images, demographic, geographic, and smartphone platform data were also collected. The platform (Apple Store or Google Play Store) from which the app was downloaded and the preferred language for using the app were recorded to assess geographic trends and user demographics across platforms. Users were classified into age groups (21–40, 41–60, and over 60), with the option to not respond, thereby protecting personal information and streamlining the data input process. Gender data (female, male, or prefer not to respond) were collected for gender-based analysis. Users were asked a binary yes/no question about having eye discomfort symptoms. If symptoms were reported, follow-up questions were used to gather detailed descriptions of the symptoms. Users could also disclose pre-existing conditions such as diabetes, eye trauma, or surgical history. This information was collected anonymously to explore potential risk factors associated with eye discomfort without identifying specific individuals.
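A plain-Python sketch of the per-user record implied by this questionnaire is given below; all field names and encodings are our own assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserRecord:
    """Anonymous per-user metadata collected alongside the eye images (illustrative)."""
    platform: str                       # "apple_store" or "google_play"
    language: str                       # preferred app language
    age_group: Optional[str] = None     # "21-40", "41-60", "over_60"; None if undisclosed
    gender: Optional[str] = None        # "female", "male"; None if undisclosed
    has_symptoms: bool = False          # binary yes/no discomfort question
    symptom_description: Optional[str] = None  # free-text follow-up, if provided
    diabetes: bool = False
    eye_trauma: bool = False
    prior_eye_surgery: bool = False
```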

2.7. Statistical Methods

Statistical methods were employed to evaluate DL-Image Eligibility and DL-Diagnosis model predictions captured through the Eye Home Clinic App. A comprehensive assessment of the models’ efficacy was performed using various performance metrics: area under the receiver operating characteristic (ROC) curve (AUC), accuracy, precision, sensitivity, specificity, and F1 score. The app download data from the Apple Store and Google Play Store were used to assess regional adoption trends and user preferences for operating systems. Chi-square tests and Fisher’s exact tests were conducted to identify statistically significant differences in regional distribution and download sources (Apple vs. Google) using a standard significance threshold of p < 0.05. SPSS software (version 29, IBM Corp., Armonk, NY, USA) was utilized for all statistical analyses.
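The authors used SPSS; an equivalent computation in Python, shown only to make the tests concrete and using counts taken from Table 1, would be:

```python
from scipy.stats import chi2_contingency, fisher_exact

# Usable vs. unusable photos by download source (counts from Table 1).
# Rows: Apple Store, Google Store; columns: usable, unusable.
chi2, p, dof, expected = chi2_contingency([[591, 437],
                                           [24, 474]])
print(f"chi-square = {chi2:.1f}, p = {p:.3g}")  # p < 0.001, as reported

# Fisher's exact test is applied when a cell count is below 5, e.g., the
# normal/abnormal comparison by platform (537, 54 vs. 22, 2; reported p = 1).
odds, p_exact = fisher_exact([[537, 54],
                              [22, 2]])
print(f"Fisher exact p = {p_exact:.2f}")
```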

3. Results

The Eye Home Clinic app was downloaded by users in more than 100 countries (Figure 4). The demographic and clinical characteristics of the participants who used the app are described in Table 1. Downloads came from the Apple Store (43.3%) and the Google Play Store (56.7%), but usable photos came overwhelmingly from Apple Store users, who contributed 96.1% of usable images versus 3.9% from Google Play Store users (p < 0.001).
Age distribution differed significantly between the ‘usable’ vs. ‘unusable’ groups and between the ‘normal’ vs. ‘abnormal’ groups (p < 0.001 for both). The largest group of users, those aged 21–40, comprised 41.5% of the total. Users over 60 contributed a disproportionate share of unusable photos, accounting for only 0.3% of ‘usable’ cases but 2.2% of ‘unusable’ cases. Similarly, users who did not disclose their age were over-represented among unusable photos (28.0% of ‘usable’ vs. 49.0% of ‘unusable’ cases). Regarding gender, 37.2% identified as male, 36.9% as female, and 26.0% did not disclose their gender. Some form of discomfort was reported by 19.5% of users, with a detailed description provided by 5.5%. Those reporting any discomfort tended to submit more usable photos (they accounted for 22.4% of usable photos, p = 0.022; those providing detailed descriptions accounted for 7.2%, p = 0.027). Additional health details such as diabetes, traumatic eye incidents, and previous eye surgery were reported by small fractions of users: 1.5%, 5.6%, and 2.1%, respectively.
The performance of the two deep learning models was evaluated using contingency tables to understand their effectiveness in assessing photo eligibility and classifying conditions as normal or abnormal, as summarized in Table 2. The DL-Image Eligibility model demonstrated an accuracy of 0.83, precision of 0.91, sensitivity of 0.73, and specificity of 0.92, with a positive predictive value of 0.91 and a negative predictive value of 0.77. The F1 score for this model was 0.81, and it achieved an AUC of 0.87 (95% CI: 0.83, 0.90). In contrast, the DL-Diagnosis model showed a higher accuracy of 0.92 but lower precision at 0.43 and sensitivity at 0.29. It exhibited a higher specificity of 0.97, a negative predictive value of 0.95, and a positive predictive value of 0.43. The F1 score was 0.34, with an AUC of 0.62 (95% CI: 0.47, 0.77).
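These metrics follow directly from the contingency-table counts in Table 2; the short check below (plain Python, written for this article) reproduces them.

```python
def summarize(tp, fp, tn, fn):
    """Standard binary-classification metrics from contingency-table counts."""
    precision = tp / (tp + fp)      # positive predictive value (PPV)
    sensitivity = tp / (tp + fn)    # recall
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": precision,
        "sensitivity": sensitivity,
        "specificity": tn / (tn + fp),
        "npv": tn / (tn + fn),
        "f1": 2 * precision * sensitivity / (precision + sensitivity),
    }

print(summarize(tp=192, fp=20, tn=238, fn=70))  # DL-Image Eligibility (Table 2)
print(summarize(tp=6, fp=8, tn=273, fn=15))     # DL-Diagnosis (Table 2)
```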
The distribution of ground-truth labels identified as abnormal by the ophthalmologists in both the development and validation sets is detailed in Table 3. In the development set, excessive conjunctival hyperemia and sub-conjunctival hemorrhage were the most frequently identified conditions, constituting 31.4% and 20.0% of the cases, respectively. Similar trends were observed in the validation set, where excessive conjunctival hyperemia remained the most common condition, representing 57.1% of the cases.
A bar graph in Figure 5 illustrates the model’s alignment with ground-truth diagnoses, detailing correct and incorrect predictions for specific eye conditions (Figure 6). Of the 21 abnormal cases, the DL-Diagnosis model correctly identified only 28.5% as abnormal. Notably, excessive conjunctival hyperemia, despite being the most frequently detected condition, was consistently misclassified by the model. Excluding cases of excessive conjunctival hyperemia, the true positive rate was 66%. The effectiveness of the model in classifying photo eligibility and differentiating between normal and abnormal conditions is assessed in Figure 7. The eligibility curve approaches the optimal point, indicating strong photo assessment capability, while the curve for normal vs. abnormal classification shows only moderate effectiveness.

4. Discussion

4.1. Overview

Our study is premised on the concept that smartphones can serve as a bidirectional tool for screening, follow-up, and telemedicine consultations for anterior segment eye diseases. There have been various attempts to use smartphones in ophthalmic clinical practice, with notable results [16,18,43,44,45,46]. Advancements in smartphone functionality, including camera performance, have progressed steadily since initial reports in the early 2010s and, more recently, have combined with developments in artificial intelligence to offer new opportunities for the diagnosis, monitoring, and management of ocular conditions [10,47]. One report noted that a smartphone without any attachment device can capture high-quality images of the anterior segment with the aid of image-enhancing techniques [18]. In contrast to prior studies in which photographic capture relied predominantly on the assistance of trained personnel [18,41,42,43,44], our study delineates an approach that empowers patients to self-capture and transmit images for diagnostic evaluation, thereby establishing a comprehensive bidirectional flow of information.

4.2. User Demographics and Photo Quality Analysis

The app has been downloaded in over 100 countries via the Apple Store and Google App Store, presenting its potential to provide accessible ophthalmic examinations worldwide. iOS devices, known for their standardized camera specifications, produced a higher proportion of usable photographs in this project. This may be attributed to Apple’s controlled hardware environment, which ensures uniform camera quality across its devices [48]. Conversely, Android devices exhibit significant variation in camera capabilities due to the diverse range of manufacturers and models. This variability can impact the quality of images captured, especially as lower-end devices may not always meet the stringent criteria necessary for effective medical image analysis. Addressing this disparity is vital for enhancing the app’s functionality and ensuring reliable diagnostic performance across various hardware platforms.
Our findings revealed a considerable age-related disparity in the eligibility of photographs (p < 0.001). An examination of the age distribution of app users indicated that younger adults (ages 21–40) are not only the most active users but also demonstrate the highest proficiency in submitting photos usable for analysis (54.5% of all usable photos). This trend may suggest greater appeal and adoption among technologically adept younger populations. Conversely, among users over 60, only 9.1% (2 out of 22) of submitted photos were categorized as ‘usable’, while 90.9% (20 out of 22) were ‘unusable’. This underscores the need for targeted educational interventions or app modifications to enhance usability among older populations [49,50].
Those reporting any current symptoms tended to submit more usable photos for analysis (odds ratio 1.36, p = 0.022, for those who reported any symptoms; 1.6, p = 0.027, for those who provided detailed descriptions). These results suggest that users experiencing specific symptoms are more likely to try to submit photographs of adequate quality, likely in pursuit of diagnostic clarification or confirmation of their conditions.
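These odds ratios can be reproduced from the Table 1 counts with the usual cross-product formula; the sketch below is illustrative (the second value computes to ≈1.68, consistent with the ~1.6 reported).

```python
def odds_ratio(a, b, c, d):
    """Cross-product odds ratio for the 2x2 table [[a, b], [c, d]]."""
    return (a * d) / (b * c)

# Usable/unusable counts from Table 1: users reporting any symptoms vs. not.
print(odds_ratio(138, 160, 477, 751))  # ≈ 1.36, matching the reported value
# Users providing detailed symptom descriptions vs. not.
print(odds_ratio(44, 40, 571, 871))    # ≈ 1.68, consistent with the ~1.6 reported
```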
These demographic insights are crucial for future enhancements of the app, indicating a need for personalized user support systems and potential improvements in app interface design to accommodate a broader range of users effectively. Moreover, understanding these patterns helps in refining the algorithms used by our deep learning models, potentially improving their diagnostic accuracy.

4.3. Performance of Deep Learning Models

The DL-Image Eligibility model demonstrated a strong ability to identify clinically useful photos, with an accuracy of 0.827. It showed high precision (PPV) at 0.906 and specificity at 0.922. However, it also had a sensitivity (recall) of 0.733 and an F1 score of 0.812, indicating instances where clinically useful photos were missed. The negative predictive value (NPV) was 0.773. The model’s AUC was 0.866, indicating a high level of discrimination between clinically useful and non-useful photos.
The disease detection model (DL-Diagnosis) struggled with sensitivity, correctly identifying abnormal cases with a recall of 0.286. The precision (PPV) was 0.429, and the F1 score was 0.343, reflecting a lower balance between precision and recall. The negative predictive value (NPV) was 0.971. The model AUC was 0.624, indicating somewhat low effectiveness in distinguishing between normal and abnormal cases.
These results highlight the strengths and limitations of each model in their respective tasks, with the DL-Image Eligibility model performing well in recognizing clinically useful photos and the DL-Diagnosis model needing improvement in detecting abnormalities.

4.4. Photo Quality Assessment

Obtaining adequate quality photographs is essential for effectively identifying disease. In clinical settings, clinicians use a slit lamp exam to evaluate the anterior segment of the eye. However, in screening or telemedicine contexts, relying solely on a skilled operator to perform the exam is impractical. Educating staff to utilize test devices for image capture requires effort, supervision, and additional expense. Here, deep learning can assist. Patients capture their photographs as ‘selfies’, and deep learning algorithms can determine which images are adequate for clinical purposes.
For instance, a previous study developed a deep learning model to grade bulbar conjunctival injection [17]. The model performed well in evaluating the severity of conjunctival injection. Researchers noted that subjective grading in clinics might depend on each clinician’s experience and perception, whereas deep learning provides more consistent and continuous quantitative metrics.
A study evaluating smartphone-attachable devices for capturing anterior segment photographs found that only 16% of photos taken with smartphones alone were suitable for decision-making [14]. This proportion improved to 52.85% with the use of attachable devices. In our study, 40.3% of all obtained images were deemed appropriate without any attachable device. This higher proportion of usable images suggests that integrating deep learning guidance during image capture may help users obtain more clinically useful photographs.
Liu et al. developed a fusion training model integrating slit-lamp and smartphone-acquired images for pterygium detection, achieving a sensitivity of 93.60%, specificity of 96.13%, and accuracy of 92.38% [51]. Their approach effectively leveraged complementary slit-lamp data to address limited smartphone image datasets. In contrast, our study exclusively relies on smartphone-acquired images without auxiliary datasets or external hardware, emphasizing accessibility and scalability in resource-limited environments. Vasan et al. evaluated the capabilities of AI-driven mobile applications for cataract detection in field settings, achieving a sensitivity of 91.2% and a specificity of 92.6% [52]. While their focus was on a single pathology, our study explores a broader diagnostic approach for anterior segment abnormalities using smartphone-captured images. Despite the lower sensitivity and specificity of our DL-Diagnosis model, the expanded diagnostic scope highlights unique challenges and opportunities for future improvements.

4.5. Anterior Segment Disease Detection

The analysis of images reviewed by the ophthalmologists revealed that 9.1% of anterior segment images showed disease or lesions, with the most prevalent abnormality being excessive conjunctival hyperemia. Significant findings such as pterygium-like lesions, conjunctival hyperpigmentation, cystic lesions, and subconjunctival hemorrhages were also observed. The distribution of findings was consistent across the development and validation sets. These observations corroborate previous research indicating that smartphone-captured images possess adequate sensitivity for detecting conjunctival diseases, suggesting the potential of high-quality smartphone images for identifying conjunctival abnormalities without additional optical attachments [53,54].
Corneal opacity lesions were detected in 8.6% of cases in the training set and 9.5% in the validation set, supporting prior research on the reliability of smartphone photographs in identifying corneal scars and opacities [55,56]. However, the anticipated detection of corneal dystrophic patterns and epithelial lesions was not observed, likely due to their low prevalence and the limited sample size. A distinctive outcome of this study was the detection of cataracts without additional devices. Anterior cortical lens opacities and nuclear cataracts were identified in images taken with a retro-illuminated view. This finding suggests the potential of the application to detect various stages of cataracts, extending beyond advanced conditions such as mature or brunescent cataracts. However, there remains significant room for improvement. Eyelid problems, such as ectropion or entropion, were rarely detected. This may be due to the challenges in analyzing eyelid conditions, as users often pull their eyelids to expose other concerns, which might obscure the eyelid problems themselves.
These results show that the application, without the need for additional camera attachments, can be used to identify some anterior segment diseases of the eye, at least by human ophthalmologists. This suggests that, although the model presented here achieved modest performance, it should be possible to train a deep learning model to detect these anterior segment eye conditions using images captured by untrained users with no additional hardware beyond their smartphone.

4.6. Medico-Legal Liability and Applicable Clinical Cases

Regarding potential medico-legal liability, Eye Home Clinic implements several measures to address concerns. The app includes a comprehensive liability release form that patients must agree to before using it. This form clearly states that the app is intended to supplement, not replace, professional medical advice, diagnosis, or treatment and that the company does not assume responsibility for actions taken based on the app’s results. Users are strongly advised to consult a healthcare provider for any health-related concerns before making decisions based on the app’s outputs. This approach is similar to the one used in the ‘Dermatology AI App for Skin Cancer Detection’, where clear disclaimers and liability releases are provided to users, emphasizing the app’s role as a supplementary tool (https://www.firstderm.com/ai-dermatology/ (accessed on 30 December 2024)).
This app may be used for screening tests before accessing real medical services. Moreover, one feasible clinical application of the Eye Home Clinic app is its potential use in emergency departments, especially when ophthalmologists are not immediately available [57,58]. In the future, the app may provide preliminary assessments that help triage patients and prioritize those needing urgent care.
HIPAA compliance is also being explored to ensure the app meets the highest standards of data security and privacy, making it suitable for integration into hospital systems. This includes end-to-end encryption of patient data, access control, process monitoring, and secure storage solutions. The app plans to utilize Google Cloud Platform, which offers built-in HIPAA compliance features, to achieve this.

4.7. Limitations and Future Work

There are several limitations of this study that need to be addressed. First, the variability in image quality across different smartphone devices is an important limitation of this study. This may be the result of the variability in smartphone hardware capabilities, especially in Android-based devices, and may limit model performance and generalizability. It may also be contributing to a second important limitation: the limited performance of the DL-Diagnosis model in detecting anterior segment disease. The relatively small dataset combined with large variability in images makes training a high-performing model challenging. Future work will focus on improving performance in several ways, including gathering additional training data through Eye Home Clinic, additional data augmentation techniques to mimic data capture across different smartphone types, and the use of pretrained foundation models (e.g., RETFound [59]) along with transfer learning to improve performance. Further advancements in user education and app interface design could also minimize user errors and optimize image quality.
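As one concrete example of rebalancing (an alternative to the oversampling mentioned in the abstract), Keras class weights inversely proportional to class frequency could be derived from the prevalence in Table 1; the snippet below is a sketch written for this article, not part of the published pipeline.

```python
# Class weighting to counter the ~9.1% abnormal prevalence (counts from Table 1).
n_normal, n_abnormal = 559, 56
total = n_normal + n_abnormal
class_weight = {0: total / (2 * n_normal),      # ~0.55 for 'normal'
                1: total / (2 * n_abnormal)}    # ~5.49 for 'abnormal'
# Passed alongside the existing training setup:
# model.fit(train_ds, validation_data=val_ds, epochs=50, class_weight=class_weight)
```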
The image quality grading criteria used here primarily relied on the clinical judgment of experienced ophthalmologists and were adapted from the six-step grading criteria outlined in a previous study [16]. These grading criteria focus on clinical usability rather than quantitative metrics for resolution, brightness, or signal-to-noise levels. This reliance on subjective criteria may be considered a limitation, but we believe it is a strength, given that our goal is to capture clinically useful photos rather than those that maximize moderately related quantitative metrics. In addition, we did not implement any image enhancement techniques, such as denoising or contrast adjustments. Instead, we focus here on the impact of automated feedback using our DL-Image Eligibility model to help untrained users collect clinically usable images under real-world conditions. Future studies, however, will explore the integration of image enhancement techniques to further improve image quality and model performance.
This app was developed primarily to serve as a proof of concept for employing deep learning in detecting conditions from photographs, with a strong emphasis on user convenience. Consequently, the collection of personal information, such as detailed age, past medical history, and gender, was not mandatory, leading many users to opt out of providing these data. As a result, the non-disclosure of certain demographic information by participants may have introduced potential bias in analyzing user behavior and diagnostic outcomes. While this approach prioritized participant anonymity and convenience, it also limited the ability to fully assess the influence of demographic factors. Future work that emphasizes the collection and analysis of these factors may help us better understand their influence.
Future work will focus on validating the deep learning model’s performance against clinical diagnoses using prospectively collected data. Such advancements would support more precise and automated assessments for detecting anterior segment eye diseases, benefiting screening, public health surveillance, and ophthalmic telemedicine.

5. Conclusions

Eye Home Clinic has been successfully downloaded in over 100 countries, demonstrating its potential for anterior segment telemedicine. While the deep learning models showed promising performance in evaluating photo eligibility and diagnosing conditions, their current limitations—particularly the low sensitivity of the DL-Diagnosis model and variability in image quality across devices—underscore the need for further research. Nonetheless, this app serves as a real-world example of implementing a deep learning model for anterior segment eye imaging. Our findings highlight the importance of continued advancements in AI and smartphone technology to enhance diagnostic accuracy and accessibility for ocular anterior segment diseases. Future efforts should focus on expanding the dataset, refining the model through transfer learning, and optimizing performance across various devices to improve diagnostic reliability and reproducibility.

Author Contributions

Conceptualization, B.G. and S.L.B.; methodology, B.G.; software, B.G.; validation, B.G., S.L.B. and M.C.; formal analysis, B.G.; investigation, B.G.; resources, B.G.; data curation, B.G. and S.-H.L.; writing—original draft preparation, B.G., M.C. and S.L.B.; writing—review and editing, B.G., S.L.B. and M.C.; visualization, B.G.; supervision, S.L.B.; project administration, B.G. and S.L.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of Korea National Institute for Bioethics Policy (IRB approval number: P01-202303-01-025, date of approval: 27 March 2023).

Informed Consent Statement

Informed consent was obtained from all participants through the app’s workflow. Due to the remote nature of data collection via the smartphone application, consent was collected digitally rather than through traditional paper forms. This process strictly adhered to ethical guidelines to ensure that participants were fully informed and voluntarily consented to participate in this study.

Data Availability Statement

The data presented in this study may be available upon request from the corresponding author. However, the data are not publicly available due to privacy agreements and ethical restrictions.

Acknowledgments

We would like to acknowledge the valuable contributions of participants around the world who used the Eye Home Clinic app and provided the data that made this research possible.

Conflicts of Interest

The research presented in this study was conducted using data from the Eye Home Clinic app developed by GANA Holdings, Inc. The first author, Byoungyoung Gu, is the CEO of GANA Holdings, Inc. and was primarily responsible for the overall development of the app, including its AI processes. He is also one of the ophthalmologists who conducted the data review. This dual role may present a potential conflict of interest. All efforts have been made to ensure the objectivity and integrity of the research.

References

  1. Vizzeri, G.; Kjaergaard, S.M.; Rao, H.L.; Zangwill, L.M. Role of imaging in glaucoma diagnosis and follow-up. Indian J. Ophthalmol. 2011, 59 (Suppl. S1), S59–S68. [Google Scholar] [CrossRef] [PubMed]
  2. Bennett, T.J.; Barry, C.J. Ophthalmic imaging today: An ophthalmic photographer’s viewpoint—A review. Clin. Exp. Ophthalmol. 2009, 37, 2–13. [Google Scholar] [CrossRef]
  3. Weinreb, R.N.; Bowd, C.; Moghimi, S.; Tafreshi, A.; Rausch, S.; Zangwill, L.M. Ophthalmic diagnostic imaging: Glaucoma. In High Resolution Imaging in Microscopy and Ophthalmology: New Frontiers in Biomedical Optics; Springer: Berlin/Heidelberg, Germany, 2019; pp. 107–134. [Google Scholar]
  4. Zvornicanin, E.; Zvornicanin, J.; Hadziefendic, B. The use of smart phones in ophthalmology. Acta Inform. Medica 2014, 22, 206. [Google Scholar] [CrossRef]
  5. Pujari, A.; Saluja, G.; Agarwal, D.; Selvan, H.; Sharma, N. Clinically useful smartphone ophthalmic imaging techniques. Graefe’s Arch. Clin. Exp. Ophthalmol. 2021, 259, 279–287. [Google Scholar] [CrossRef] [PubMed]
  6. Mohan, A.; Kaur, N.; Sharma, V.; Sen, P.; Jain, E.; Gajraj, M. Ophthalmologists on smartphones: Image-based teleconsultation. Br. Ir. Orthopt. J. 2019, 15, 3. [Google Scholar] [CrossRef] [PubMed]
  7. Oliphant, H.; Kennedy, A.; Comyn, O.; Spalton, D.J.; Nanavaty, M.A. Commercial slit lamp anterior segment photography versus digital compact camera mounted on a standard slit lamp with an adapter. Curr. Eye Res. 2018, 43, 1290–1294. [Google Scholar] [CrossRef] [PubMed]
  8. Tahiri Joutei Hassani, R.; El Sanharawi, M.; Dupont-Monod, S.; Baudouin, C. Smartphones in ophthalmology. J. Fr. Ophtalmol. 2013, 36, 499–525. [Google Scholar] [CrossRef] [PubMed]
  9. Bastawrous, A.; Cheeseman, R.; Kumar, A. iPhones for eye surgeons. Eye 2012, 26, 343–354. [Google Scholar] [CrossRef]
  10. Lord, R.K.; Shah, V.A.; San Filippo, A.N.; Krishna, R. Novel uses of smartphones in ophthalmology. Ophthalmology 2010, 117, 1274–1274.e3. [Google Scholar] [CrossRef] [PubMed]
  11. Suto, S.; Hiraoka, T.; Okamoto, Y.; Okamoto, F.; Oshika, T. Photography of anterior eye segment and fundus with smartphone. Nippon. Ganka Gakkai Zasshi 2014, 118, 7–14. [Google Scholar] [PubMed]
  12. Bastawrous, A. Smartphone fundoscopy. Ophthalmology 2012, 119, 432–433.e432. [Google Scholar] [CrossRef]
  13. Askarian, B.; Ho, P.; Chong, J.W. Detecting cataract using smartphones. IEEE J. Transl. Eng. Health Med. 2021, 9, 1–10. [Google Scholar] [CrossRef]
  14. Joshi, V.P.; Jain, A.; Thyagrajan, R.; Vaddavalli, P.K. Anterior segment imaging using a simple universal smartphone attachment for patients. In Seminars in Ophthalmology; Taylor & Francis: Abingdon, UK, 2022; pp. 232–240. [Google Scholar]
  15. Armstrong, G.W.; Kalra, G.; Arrigunaga, S.D.; Friedman, D.S.; Lorch, A.C. Anterior Segment Imaging Devices in Ophthalmic Telemedicine. Semin. Ophthalmol. 2021, 36, 149–156. [Google Scholar] [CrossRef] [PubMed]
  16. Dutt, S.; Vadivel, S.S.; Nagarajan, S.; Galagali, A.; Christy, J.S.; Sivaraman, A.; Rao, D.P. A novel approach to anterior segment imaging with smartphones in the COVID-19 era. Indian J. Ophthalmol. 2021, 69, 1257–1262. [Google Scholar] [CrossRef] [PubMed]
  17. Wei, S.; Wang, Y.; Shi, F.; Sun, S.; Li, X. Developing a Deep Learning Model to Evaluate Bulbar Conjunctival Injection with Color Anterior Segment Photographs. J. Clin. Med. 2023, 12, 715. [Google Scholar] [CrossRef]
  18. Kaya, A. Ophthoselfie: Detailed Self-imaging of Cornea and Anterior Segment by Smartphone. Turk J. Ophthalmol. 2017, 47, 130–132. [Google Scholar] [CrossRef] [PubMed]
  19. Pujari, A.; Mukhija, R.; Singh, A.B.; Chawla, R.; Sharma, N.; Kumar, A. Smartphone-based high definition anterior segment photography. Indian J. Ophthalmol. 2018, 66, 1375–1376. [Google Scholar] [CrossRef]
  20. Thompson, A.C.; Jammal, A.A.; Medeiros, F.A. A Review of Deep Learning for Screening, Diagnosis, and Detection of Glaucoma Progression. Transl. Vis. Sci. Technol. 2020, 9, 42. [Google Scholar] [CrossRef] [PubMed]
  21. Miotto, R.; Wang, F.; Wang, S.; Jiang, X.; Dudley, J.T. Deep learning for healthcare: Review, opportunities and challenges. Brief. Bioinform. 2017, 19, 1236–1246. [Google Scholar] [CrossRef] [PubMed]
  22. Christopher, M.; Belghith, A.; Bowd, C.; Proudfoot, J.A.; Goldbaum, M.H.; Weinreb, R.N.; Girkin, C.A.; Liebmann, J.M.; Zangwill, L.M. Performance of Deep Learning Architectures and Transfer Learning for Detecting Glaucomatous Optic Neuropathy in Fundus Photographs. Sci. Rep. 2018, 8, 16685. [Google Scholar] [CrossRef]
  23. Ahmad, B.U.; Kim, J.E.; Rahimy, E. Fundamentals of artificial intelligence for ophthalmologists. Curr. Opin. Ophthalmol. 2020, 31, 303–311. [Google Scholar] [CrossRef] [PubMed]
  24. Abràmoff, M.D.; Lavin, P.T.; Birch, M.; Shah, N.; Folk, J.C. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit. Med. 2018, 1, 39. [Google Scholar] [CrossRef]
  25. Christopher, M.; Nakahara, K.; Bowd, C.; Proudfoot, J.A.; Belghith, A.; Goldbaum, M.H.; Rezapour, J.; Weinreb, R.N.; Fazio, M.A.; Girkin, C.A.; et al. Effects of Study Population, Labeling and Training on Glaucoma Detection Using Deep Learning Algorithms. Transl. Vis. Sci. Technol. 2020, 9, 27. [Google Scholar] [CrossRef] [PubMed]
  26. Christopher, M.; Bowd, C.; Belghith, A.; Goldbaum, M.H.; Weinreb, R.N.; Fazio, M.A.; Girkin, C.A.; Liebmann, J.M.; Zangwill, L.M. Deep learning approaches predict glaucomatous visual field damage from OCT optic nerve head en face images and retinal nerve fiber layer thickness maps. Ophthalmology 2020, 127, 346–356. [Google Scholar] [CrossRef]
  27. Oh, S.; Park, Y.; Cho, K.J.; Kim, S.J. Explainable Machine Learning Model for Glaucoma Diagnosis and Its Interpretation. Diagnostics 2021, 11, 510. [Google Scholar] [CrossRef] [PubMed]
  28. Ran, A.R.; Cheung, C.Y.; Wang, X.; Chen, H.; Luo, L.-y.; Chan, P.P.; Wong, M.O.M.; Chang, R.T.; Mannil, S.S.; Young, A.L.; et al. Detection of glaucomatous optic neuropathy with spectral-domain optical coherence tomography: A retrospective training and validation deep-learning analysis. Lancet Digit. Health 2019, 1, e172–e182. [Google Scholar] [CrossRef]
  29. Schuman, J.S.; Cadena, M.D.L.A.R.; McGee, R.; Al-Aswad, L.A.; Medeiros, F.A.; Abramoff, M.; Blumenkranz, M.; Chew, E.; Chiang, M.; Eydelman, M. A Case for the Use of Artificial Intelligence in Glaucoma Assessment. Ophthalmol. Glaucoma 2022, 5, e3–e13. [Google Scholar] [CrossRef]
  30. Adlung, L.; Cohen, Y.; Mor, U.; Elinav, E. Machine learning in clinical decision making. Med 2021, 2, 642–665. [Google Scholar] [CrossRef] [PubMed]
  31. O’Dea, S. Smartphone Subscriptions Worldwide 2016–2026; Statista, 2021; retrieved 30 August 2021. [Google Scholar]
  32. Srivastava, O.; Tennant, M.; Grewal, P.; Rubin, U.; Seamone, M. Artificial intelligence and machine learning in ophthalmology: A review. Indian J. Ophthalmol. 2023, 71, 11. [Google Scholar] [CrossRef]
  33. Bhatter, P.; Cao, L.; Crochetiere, A.; Raefsky, S.M.; Cuevas, L.R.; Enendu, K.; Frisch, E.H.; Shumway, C.; Gore, C.; Browne, A.W. Using a macro lens for anterior segment imaging in rural panama. Telemed. e-Health 2020, 26, 1414–1418. [Google Scholar] [CrossRef]
  34. Chen, D.Z.; Tan, C.W. Smartphone imaging in ophthalmology: A comparison with traditional methods on the reproducibility and usability for anterior segment imaging. Ann. Acad. Med. Singap. 2016, 45, 6–11. [Google Scholar] [CrossRef]
  35. Sari, D.M.; Safwan; Ramadhani; Ambri. Design of Augmented Reality Spot in Aceh Polytechnic Information Technology Program Based on Android. J. Inotera 2022, 7, 163–169. [Google Scholar] [CrossRef]
  36. Patil, P.P.; Alvares, R. Cross-platform Application Development using Unity Game Engine. Int. J. Adv. Res. Comput. Sci. Manag. Stud. 2015, 3, 19–27. Available online: https://www.researchgate.net/publication/312591645 (accessed on 24 August 2024).
  37. Qian, Y.; Miao, Y.; Huang, S.; Qiao, X.; Wang, M.; Li, Y.; Luo, L.; Zhao, X.; Cao, L. Real-Time Detection of Eichhornia crassipes Based on Efficient YOLOV5. Machines 2022, 10, 754. [Google Scholar] [CrossRef]
  38. Nguyen, G.; Dlugolinsky, S.; Bobák, M.; Tran, V.; López García, Á.; Heredia, I.; Malík, P.; Hluchý, L. Machine learning and deep learning frameworks and libraries for large-scale data mining: A survey. Artif. Intell. Rev. 2019, 52, 77–124. [Google Scholar] [CrossRef]
  39. Sarkar, D.; Bali, R.; Ghosh, T. Hands-On Transfer Learning with Python: Implement Advanced Deep Learning and Neural Network Models Using TensorFlow and Keras; Packt Publishing Ltd.: Birmingham, UK, 2018. [Google Scholar]
  40. Shorten, C.; Khoshgoftaar, T.M. A survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
  41. Model Conversion Overview. Available online: https://www.tensorflow.org/lite/models/convert (accessed on 11 August 2024).
  42. Unity ML-Agents Toolkit. Available online: https://github.com/Unity-Technologies/ml-agents (accessed on 18 August 2024).
  43. Sanguansak, T.; Morley, K.; Morley, M.; Kusakul, S.; Lee, R.; Shieh, E.; Yospaiboon, Y.; Bhoomibunchoo, C.; Chai-Ear, S.; Joseph, A.; et al. Comparing smartphone camera adapters in imaging post-operative cataract patients. J. Telemed. Telecare 2017, 23, 36–43. [Google Scholar] [CrossRef]
  44. Ludwig, C.A.; Murthy, S.I.; Pappuru, R.R.; Jais, A.; Myung, D.J.; Chang, R.T. A novel smartphone ophthalmic imaging adapter: User feasibility studies in Hyderabad, India. Indian J. Ophthalmol. 2016, 64, 191–200. [Google Scholar] [CrossRef] [PubMed]
  45. Kalra, G.; Ichhpujani, P.; Thakur, S.; Singh, R.B.; Sharma, U.; Kumar, S. A pilot study for smartphone photography to assess bleb morphology and vasculature post-trabeculectomy. Int. Ophthalmol. 2021, 41, 483–490. [Google Scholar] [CrossRef] [PubMed]
  46. Nagino, K.; Sung, J.; Midorikawa-Inomata, A.; Eguchi, A.; Fujimoto, K.; Okumura, Y.; Miura, M.; Yee, A.; Hurramhon, S.; Fujio, K.; et al. Clinical Utility of Smartphone Applications in Ophthalmology: A Systematic Review. Ophthalmol. Sci. 2024, 4, 100342. [Google Scholar] [CrossRef]
  47. Jin, K.; Li, Y.; Wu, H.; Tham, Y.C.; Koh, V.; Zhao, Y.; Kawasaki, R.; Grzybowski, A.; Ye, J. Integration of smartphone technology and artificial intelligence for advanced ophthalmic care: A systematic review. Adv. Ophthalmol. Pract. Res. 2024, 4, 120–127. [Google Scholar] [CrossRef]
  48. Sun, H. The Smartphone Revolution: A Comparative Study of Apple and Samsung. Highlights Bus. Econ. Manag. 2024, 24, 575–580. [Google Scholar] [CrossRef]
  49. Van Deursen, A.J. Digital inequality during a pandemic: Quantitative study of differences in COVID-19–related internet uses and outcomes among the general population. J. Med. Internet Res. 2020, 22, e20073. [Google Scholar] [CrossRef] [PubMed]
  50. Vaportzis, E.; Giatsi Clausen, M.; Gow, A.J. Older adults perceptions of technology and barriers to interacting with tablet computers: A focus group study. Front. Psychol. 2017, 8, 1687. [Google Scholar] [CrossRef] [PubMed]
  51. Liu, Y.; Xu, C.; Wang, S.; Chen, Y.; Lin, X.; Guo, S.; Liu, Z.; Wang, Y.; Zhang, H.; Guo, Y.; et al. Accurate detection and grading of pterygium through smartphone by a fusion training model. Br. J. Ophthalmol. 2024, 108, 336–342. [Google Scholar] [CrossRef]
  52. Vasan, C.S.; Gupta, S.; Shekhar, M.; Nagu, K.; Balakrishnan, L.; Ravindran, R.D.; Ravilla, T.; Subburaman, G.-B.B. Accuracy of an artificial intelligence-based mobile application for detecting cataracts: Results from a field study. Indian J. Ophthalmol. 2023, 71, 2984–2989. [Google Scholar] [CrossRef]
  53. Zhang, Y.; Kao, W.W.; Hayashi, Y.; Zhang, L.; Call, M.; Dong, F.; Yuan, Y.; Zhang, J.; Wang, Y.C.; Yuka, O.; et al. Generation and Characterization of a Novel Mouse Line, Keratocan-rtTA (KeraRT), for Corneal Stroma and Tendon Research. Investig. Ophthalmol. Vis. Sci. 2017, 58, 4800–4808. [Google Scholar] [CrossRef]
  54. Nesemann, J.M.; Seider, M.I.; Snyder, B.M.; Maamari, R.N.; Fletcher, D.A.; Haile, B.A.; Tadesse, Z.; Varnado, N.E.; Cotter, S.Y.; Callahan, E.K. Comparison of smartphone photography, single-lens reflex photography, and field-grading for trachoma. Am. J. Trop. Med. Hyg. 2020, 103, 2488. [Google Scholar] [CrossRef] [PubMed]
  55. Woodward, M.A.; Musch, D.C.; Hood, C.T.; Greene, J.B.; Niziol, L.M.; Jeganathan, V.S.E.; Lee, P.P. Tele-ophthalmic approach for detection of corneal diseases: Accuracy and reliability. Cornea 2017, 36, 1159. [Google Scholar] [CrossRef]
  56. Maamari, R.N.; Ausayakhun, S.; Margolis, T.P.; Fletcher, D.A.; Keenan, J.D. Novel telemedicine device for diagnosis of corneal abrasions and ulcers in resource-poor settings. JAMA Ophthalmol. 2014, 132, 894–895. [Google Scholar] [CrossRef] [PubMed]
  57. Teismann, N.; Neilson, J.; Keenan, J. Quality and feasibility of automated digital retinal imaging in the emergency department. J. Emerg. Med. 2020, 58, 18–24. [Google Scholar] [CrossRef]
  58. De Arrigunaga, S.; Aziz, K.; Lorch, A.C.; Friedman, D.S.; Armstrong, G.W. A review of ophthalmic telemedicine for emergency department settings. In Seminars in Ophthalmology; Springer: Berlin/Heidelberg, Germany, 2022; pp. 83–90. [Google Scholar]
  59. Zhou, Y.; Chia, M.A.; Wagner, S.K.; Ayhan, M.S.; Williamson, D.J.; Struyven, R.R.; Liu, T.; Xu, M.; Lozano, M.G.; Woodward-Court, P.; et al. A foundation model for generalizable disease detection from retinal images. Nature 2023, 622, 156–163. [Google Scholar] [CrossRef]
Figure 1. Workflow and user interface of the Eye Home Clinic app. Step-by-step workflow of the app, illustrating the process from photo capture to final result notification. The steps include photo preparation and capture, deep learning image quality assessment, deep learning abnormality detection, and the combined ophthalmologist review and final result notification. Photo Preparation and Capture: Users capture two close-up images of their eyes using the smartphone’s rear camera: one at 3× zoom and another at 5× zoom. Initial Image Quality Assessment Using Deep Learning: The 5× zoom image is analyzed by a deep learning model to determine if the image meets the quality criteria for further analysis. Feedback is provided to the user in real time. Image Abnormality Detection Using Deep Learning: If the image quality is sufficient, a second deep learning model evaluates the image to detect any abnormalities and classifies it as normal or abnormal. Cross-check by Ophthalmologist and Result Notification: Both 3× and 5× zoom images are sent to a server for review by an ophthalmologist. The final report, including the ophthalmologist’s assessment, is sent back to the user within 24 h, along with a notification alert.
Figure 2. Examples of training images classified by Image Eligibility Assessment. (A) Usable Group: Images usable for analysis, graded as Excellent (a–c), Very Good (d–f), and Good (g–i). (B) Unusable Group: Images unusable for analysis, graded as Bad (j–l), Poor (m–o), and Fail (p–r).
Figure 3. Examples of the training images classified as ‘abnormal’. (a) Excessive conjunctival hyperemia; (b) Sub-conjunctival hemorrhage; (c) Dermatitis-like eyelid lesions; (d) Corneal opacity; (e) Conjunctival hyperpigmentation; (f) Pterygium-like lesion; (g) Conjunctival cystic lesion; (h) Cataract.
Figure 4. Geographic distribution of Eye Home Clinic downloads (Apple Store vs. Google App Store). The app was downloaded 4341 times, with 43.3% from iPhones and 56.7% from Android devices. This figure demonstrates the global reach of the Eye Home Clinic app, showcasing downloads across a wide range of countries.
Figure 5. Summary of DL-Diagnosis predictions versus ground truth classification.
Figure 6. Examples of correct vs. incorrect diagnostic outcomes by the deep learning model. The photos were partially cropped for alignment within the figure.
Figure 7. Receiver operating characteristic curves for the DL-Image Eligibility and DL-Diagnosis models.
Table 1. Demographic and clinical characteristics of users in this study categorized by photo eligibility and disease classification.
Columns 2–4 and the first p-value relate to the photo eligibility analysis (n = 1526); columns 5–6 and the second p-value relate to the analysis of photos categorized as normal or abnormal (n = 615).

Measurement | Total Users (n = 1526) | Usable (615, 40.3%) | Unusable (911, 59.7%) | p | Normal (559, 90.9%) | Abnormal (56, 9.1%) | p
Downloaded the app from
  Apple Store (iOS users) | 1028 (67.4%) | 591 (96.1%) | 437 (48.0%) | <0.001 | 537 (96.1%) | 54 (96.4%) | 1 *
  Google Store (Android users) | 498 (32.6%) | 24 (3.9%) | 474 (52.0%) | | 22 (3.9%) | 2 (3.6%) |
Age
  21–40 | 634 (41.5%) | 335 (54.5%) | 299 (32.8%) | | 307 (54.9%) | 28 (50.0%) |
  41–60 | 252 (16.5%) | 106 (17.2%) | 146 (16.0%) | <0.001 * | 95 (17.0%) | 11 (19.6%) | <0.001 *
  Over 60 | 22 (1.4%) | 2 (0.3%) | 20 (2.2%) | | 1 (0.2%) | 1 (1.8%) |
  Undisclosed | 618 (40.5%) | 172 (28.0%) | 446 (49.0%) | | 156 (27.9%) | 16 (28.6%) |
Sex
  Male | 567 (37.2%) | 231 (37.6%) | 336 (36.9%) | | 213 (38.1%) | 18 (32.1%) |
  Female | 563 (36.9%) | 273 (44.4%) | 290 (31.8%) | <0.001 | 251 (44.9%) | 22 (39.3%) | 0.099
  Undisclosed | 396 (26.0%) | 111 (18.0%) | 285 (31.3%) | | 95 (17.0%) | 16 (28.6%) |
Mentioned the existence of any discomfort symptoms
  Yes | 298 (19.5%) | 138 (22.4%) | 160 (17.6%) | 0.022 | 122 (21.8%) | 16 (28.6%) | 0.32
  No or Undisclosed | 1228 (80.5%) | 477 (77.6%) | 751 (82.4%) | | 437 (78.2%) | 40 (71.4%) |
Mentioned the detailed-described symptoms
  Yes | 84 (5.5%) | 44 (7.2%) | 40 (4.4%) | 0.027 | 37 (6.6%) | 7 (12.5%) | 0.175
  No or Undisclosed | 1442 (94.5%) | 571 (92.8%) | 871 (95.6%) | | 522 (93.4%) | 49 (87.5%) |
Disclosed the existence of diabetes
  Yes | 23 (1.5%) | 11 (1.8%) | 12 (1.3%) | 0.598 | 10 (1.8%) | 1 (1.8%) | 1 *
  No or Undisclosed | 1503 (98.5%) | 604 (98.2%) | 899 (98.7%) | | 549 (98.2%) | 55 (98.2%) |
Disclosed the existence of previous traumatic history
  Yes | 86 (5.6%) | 42 (6.8%) | 44 (4.8%) | 0.122 | 36 (6.4%) | 6 (10.7%) | 0.352
  No or Undisclosed | 1440 (94.4%) | 573 (93.2%) | 867 (95.2%) | | 523 (93.6%) | 50 (89.3%) |
Disclosed the existence of previous ocular surgery
  Yes | 32 (2.1%) | 15 (2.4%) | 17 (1.9%) | 0.559 | 14 (2.5%) | 1 (1.8%) | 1 *
  No or Undisclosed | 1494 (97.9%) | 600 (97.6%) | 894 (98.1%) | | 545 (97.5%) | 55 (98.2%) |
Note: All statistical analyses were conducted using the chi-square test. For cells with counts fewer than 5, Fisher’s exact test was applied (indicated by *).
Table 2. Summary of performance metrics for DL-Image Eligibility and DL-Diagnosis.
Model | True Positives (TP) | False Positives (FP) | True Negatives (TN) | False Negatives (FN) | Total
DL-Image Eligibility | 192 | 20 | 238 | 70 | 520
DL-Diagnosis | 6 | 8 | 273 | 15 | 302
Table 3. Summary of ophthalmologist abnormal classifications in the training and hold-out datasets.
Abnormal Cases in the Development Set | N (Total = 35) | %
Excessive conjunctival hyperemia | 11 | 31.4%
Sub-conjunctival hemorrhage | 7 | 20.0%
Dermatitis-like eyelid lesions | 3 | 8.6%
Corneal opacity | 3 | 8.6%
Conjunctival hyperpigmentation | 2 | 5.7%
Pterygium-like lesion | 2 | 5.7%
Conjunctival cystic lesion | 1 | 2.9%
Cataract | 1 | 2.9%
Presence of ocular foreign body | 5 | 14.3%
Total | 35 | 100.0%

Abnormal Cases in the Validation Set | N (Total = 21) | %
Excessive conjunctival hyperemia | 12 | 57.1%
Sub-conjunctival hemorrhage | 1 | 4.8%
Dermatitis-like eyelid lesions | 3 | 14.3%
Corneal opacity | 2 | 9.5%
Chalazion | 1 | 4.8%
Pterygium-like lesion | 2 | 9.5%
Total | 21 | 100.0%
