Brief Report

Neural Network Prediction of Keratoconus in AIPL1-Linked Leber Congenital Amaurosis: A Proof-of-Concept Pilot Study

1 Faculty of Medicine and Health Sciences, McGill University, Montreal, QC H3G 2M1, Canada
2 Department of Ophthalmology & Visual Sciences, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC H3G 2M1, Canada
3 McGill University Health Center (MUHC) Research Institute, Montreal, QC H3G 2M1, Canada
4 Departments of Pediatric Surgery, Human Genetics, and Adult Ophthalmology, McGill University Health Center, Montreal, QC H3G 2M1, Canada
* Author to whom correspondence should be addressed.
J. Clin. Med. 2025, 14(18), 6499; https://doi.org/10.3390/jcm14186499
Submission received: 25 July 2025 / Revised: 1 September 2025 / Accepted: 4 September 2025 / Published: 15 September 2025
(This article belongs to the Section Ophthalmology)

Abstract

Background/Objectives: Keratoconus (KC) can rapidly erode vision in children with Leber congenital amaurosis (LCA), yet screening usually depends on costly corneal imaging that is often unavailable. We evaluated whether a lightweight, image-free neural network fed only routine clinical and genetic variables can detect KC in patients with AIPL1-related LCA. Methods: This retrospective, proof-of-concept pilot study analyzed chart data for 19 patients with biallelic AIPL1 mutations (6 with KC) seen at five tertiary eye centers between January and December 2004. Ten baseline predictors were entered into a feed-forward neural network. Records were split 60/20/20 into training, validation and test sets, stratified by KC status; 20 replicate networks were trained. The mean test accuracy, sensitivity and specificity across runs were the primary outcomes. Results: The ensemble achieved a mean test accuracy of 91.6% (SD 12.8%), sensitivity of 87.5% (SD 13.1%) and specificity of 93.5% (SD 17.0%). A total of 6 of the 20 runs made no test-set errors, and 16 achieved 100% specificity. The median training time per network was less than 1 s on a laptop CPU. Conclusions: This exploratory pilot shows that a point-of-care, image-free neural network using readily available clinical and genetic data accurately identified KC in AIPL1-LCA. External validation in larger, contemporary cohorts is warranted, but the approach could help triage scarce imaging resources and enable timely corneal–collagen cross-linking in settings where tomography is inaccessible.

1. Introduction

Keratoconus (KC) is a progressive corneal ectasia that can reduce best-corrected visual acuity to counting-fingers levels by the third decade if undetected [1]. Advanced KC and other corneal opacities account for 3–5% of global blindness [2]. Children with Leber congenital amaurosis (LCA) carry a disproportionate risk: a masked school survey found KC in 29% of LCA pupils [3], and 25% of those with biallelic AIPL1 variants develop KC during childhood, likely amplified by the oculo-digital eye-rubbing phenomenon [4]. Progression is swift, with 80% of untreated pediatric eyes worsening within three years [5], and 10–15% eventually need keratoplasty. Fortunately, early corneal–collagen cross-linking (CXL) halts ectasia in upwards of 80% of young corneas [6].
KC screening depends on Scheimpflug or OCT tomography, but a single device costs tens of thousands of dollars and often yields unusable scans in LCA because of nystagmus and photophobia [7]. Although the FDA-cleared IDx-DR system shows that AI can deliver autonomous eye-care triage [8], a 2023 Cochrane review found that no non-imaging risk calculators for KC exist [9]. We therefore evaluated whether ten routine demographic, clinical and genetic variables, including AIPL1 status, could power a lightweight neural network to flag KC at a child’s first visit, directing scarce imaging and cross-linking to those who need them most. Recent pediatric studies report high diagnostic accuracy for AI-assisted retinopathy-of-prematurity screening [10], smartphone-image triage of common disorders [11], and deep-learning strabismus classification [12], illustrating how lightweight models are reshaping children’s eye care. We selected a shallow feed-forward neural network because preliminary five-fold cross-validation indicated that it captured clinically plausible non-linear relations among genotype and ocular variables and produced a modest improvement in discrimination over penalized logistic regression, while L2 regularization curbed over-fitting.

2. Materials and Methods

2.1. Design and Ethics

We conducted a retrospective, proof-of-concept diagnostic accuracy study using the de-identified cohort described by Dharmaraj et al. [4]. The McGill University Health Centre REB waived the requirement for consent, and all procedures adhered to the Declaration of Helsinki.

2.2. Participants

Charts from 26 children with molecularly confirmed AIPL1-LCA were reviewed; 7 lacked documented KC status, leaving 19 patients for analysis. Previous corneal surgery and any missing predictor value were additional exclusion criteria (Table 1).

2.3. Reference Standard

KC was classified as present or absent by a fellowship-trained cornea specialist using Scheimpflug tomography or slit-lamp criteria in the source study. These determinations were accepted unchanged.

2.4. Predictors

Ten variables available at the first specialist visit were evaluated: age, region of origin, visual acuity category, photophobia, nyctalopia, oculo-digital phenomenon, maculopathy, optic nerve pallor, pigmentary retinopathy, and biallelic AIPL1 mutation status (AIPL1 mutation status was coded as present/absent for biallelic variants, with W278Stop as the most common mutation in this cohort). Table 1 summarizes the baseline demographic and ocular characteristics.

2.5. Data Preprocessing and Partitioning

Categorical predictors were one-hot encoded, and continuous variables were z-score normalized. Records with any missing field were deleted list-wise. The data were split 60%/20%/20% into training, validation and test sets, stratified to preserve KC prevalence.
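The preprocessing and partitioning steps can be sketched as follows. This is an illustrative Python/NumPy reconstruction, not the study's code (which was written in MATLAB); all function names are hypothetical, and rounding of the per-class 60/20/20 allocations is one of several reasonable conventions.

```python
import numpy as np

def one_hot(values):
    """One-hot encode a 1-D sequence of categorical labels.
    Columns follow the sorted unique levels."""
    levels = sorted(set(values))
    return np.array([[1.0 if v == lvl else 0.0 for lvl in levels]
                     for v in values]), levels

def z_score(x):
    """Z-score normalize a continuous column."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def stratified_split(labels, fracs=(0.6, 0.2, 0.2), seed=0):
    """Split record indices 60/20/20 into train/validation/test
    while preserving class prevalence within each partition."""
    rng = np.random.default_rng(seed)
    train, val, test = [], [], []
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        n_tr = round(fracs[0] * len(idx))
        n_va = round(fracs[1] * len(idx))
        train += idx[:n_tr].tolist()
        val += idx[n_tr:n_tr + n_va].tolist()
        test += idx[n_tr + n_va:].tolist()
    return train, val, test

# Toy illustration: 19 records, 6 KC-positive, as in this cohort
y = np.array([1] * 6 + [0] * 13)
tr, va, te = stratified_split(y, seed=42)
```

With 19 records this yields partitions of 12/4/3 while keeping roughly one-third KC prevalence in each, matching the stratification goal described above.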

2.6. Model Development

The final model comprised a 10-4-1 topology: ten inputs, a single hidden layer of four neurons with sigmoid activations, and one output neuron, with Xavier initialization (see Figure 1). Training minimized the binary cross-entropy, a loss well-suited to imbalanced, binary-outcome data, via the scaled conjugate gradient optimizer (learning rate, 0.01; momentum, 0.9) with an L2 regularization coefficient of 0.01, using full-batch updates (all 19 records). Training stopped when the validation loss failed to improve or after 1000 epochs, whichever occurred first. Performance estimates were obtained with five-fold stratified cross-validation repeated 20 times: twenty replicate networks were trained, each initialized with a unique random seed, under the same scaled conjugate gradient procedure in MATLAB R2024a (MathWorks, Natick, MA, USA). A full MATLAB script and a JSON file of the trained weights are available from the corresponding author on reasonable request.
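A minimal sketch of this architecture in Python/NumPy is shown below. It is not the study's MATLAB implementation: plain full-batch gradient descent stands in for the scaled conjugate gradient optimizer, the class name is illustrative, and the initialization is a common fan-in-scaled (Xavier-style) variant.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyNet:
    """10-4-1 feed-forward network: sigmoid activations,
    binary cross-entropy loss with an L2 penalty."""

    def __init__(self, n_in=10, n_hidden=4, seed=0, l2=0.01):
        rng = np.random.default_rng(seed)
        # Fan-in-scaled (Xavier-style) random initialization
        self.W1 = rng.normal(0, np.sqrt(1.0 / n_in), (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, np.sqrt(1.0 / n_hidden), (n_hidden, 1))
        self.b2 = np.zeros(1)
        self.l2 = l2

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)   # hidden activations
        return sigmoid(self.h @ self.W2 + self.b2).ravel()

    def loss(self, X, y):
        p = np.clip(self.forward(X), 1e-9, 1 - 1e-9)
        bce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
        return bce + self.l2 * (np.sum(self.W1**2) + np.sum(self.W2**2))

    def step(self, X, y, lr=0.2):
        """One full-batch gradient-descent update (all records at once)."""
        n = len(y)
        p = self.forward(X)
        d2 = (p - y)[:, None] / n                      # dL/dz at output (sigmoid + BCE)
        gW2 = self.h.T @ d2 + 2 * self.l2 * self.W2
        gb2 = d2.sum(0)
        d1 = (d2 @ self.W2.T) * self.h * (1 - self.h)  # backprop through hidden layer
        gW1 = X.T @ d1 + 2 * self.l2 * self.W1
        gb1 = d1.sum(0)
        self.W1 -= lr * gW1; self.b1 -= lr * gb1
        self.W2 -= lr * gW2; self.b2 -= lr * gb2
```

Early stopping as described above would simply track `loss` on the validation split after each `step` and retain the weights from the best-validation epoch.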

2.7. Evaluation

Accuracy, sensitivity, specificity and F1-score were averaged over the 20 independent stratified five-fold cross-validation repeats to minimize the stochastic variation intrinsic to small samples. The entire pipeline was then repeated after omitting the AIPL1 variable (“drop-column” analysis) to gauge its incremental value, and paired two-tailed t tests compared the overall accuracies at α = 0.05. Given the pilot sample size, formal feature-attribution analyses (e.g., SHAP or permutation importance) were deferred to avoid over-interpreting noisy estimates; these will be undertaken once a larger, multi-gene cohort is assembled.
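The per-run metrics and the paired comparison between the full and drop-column models can be sketched as follows; again a NumPy illustration with hypothetical function names, not the study's MATLAB code.

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity and F1 from binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    acc = (tp + tn) / len(y_true)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return acc, sens, spec, f1

def paired_t_statistic(with_var, without_var):
    """Paired two-tailed t statistic comparing per-run accuracies
    with vs. without a predictor (drop-column analysis)."""
    d = np.asarray(with_var, float) - np.asarray(without_var, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
```

In the analysis described above, `binary_metrics` would be applied to each of the 20 runs for both pipelines, and `paired_t_statistic` to the two length-20 accuracy vectors; the resulting statistic is then referred to a t distribution with 19 degrees of freedom at α = 0.05.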

2.8. Reporting

Every effort was made to align with the TRIPOD-AI guidelines. The completed TRIPOD-AI checklist is available in the Supplementary Materials [13].

3. Results

Nineteen patients with AIPL1-LCA (mean age, 16.0 ± 13.7 years; 63% younger than 18 years) were analyzed; six (31.6%) already had KC and were older than their KC-negative peers (25.7 ± 16.9 vs. 11.1 ± 9.1 years). Maculopathy and pigmentary retinopathy were more frequent in KC eyes (Table 1).
With all ten baseline predictors, including genotype, the 20-run neural-network ensemble achieved a mean accuracy of 91.6 ± 12.8%, sensitivity of 87.5 ± 13.1%, specificity of 93.5 ± 17.0% (standard deviations from 20 repeat five-fold cross-validation runs in this pilot cohort) and F1-score of 0.901 (Table 2). Six runs made no test-set errors. A confusion matrix is provided in Figure 2.
Removing the AIPL1 variable reduced the performance to an accuracy of 80.8 ± 17.8%, sensitivity of 75.0 ± 33.6%, specificity of 83.5 ± 19.0% and F1-score of 0.781 (Table 2). The validation–loss curves also diverged earlier, suggesting incipient over-fitting when genotype was absent.
Overall, supplying genotype information improved all four performance indices, chiefly by lowering false-negative errors, and maintained parallel training–validation losses to the end of training (see Figure 3).

4. Discussion

This lightweight, image-free neural network combines ten routine variables, including a binary AIPL1 genotype flag, and classifies KC in AIPL1-LCA with 92% accuracy, 88% sensitivity and 94% specificity (the 95% CIs are wide given N = 19; see the footnote to Table 2). Because inference runs offline in milliseconds, risk can be conveyed during the first genetics visit, when Scheimpflug or anterior-segment OCT is often impracticable. Our proof-of-concept network complements recently reported pediatric AI tools for retinopathy-of-prematurity screening, smartphone-based ocular triage and automated strabismus classification [10,11,12] because it runs entirely on routinely charted variables; such lightweight models can triage children in low-resource clinics and channel scarce imaging or cross-linking to those at greatest risk.
Most published KC AI models are tomography-centric: convolutional networks that examine Placido or Scheimpflug maps reach 88–97% accuracy for established KC and 76–92% for subclinical disease [14]. Purely clinical calculators are almost absent; the only prior model, trained in adults, achieved an AUROC of 0.81 [15]. Our tool therefore fills a recognized gap and, like the FDA-cleared autonomous diabetic retinopathy detector [16], shows that data-light approaches can still deliver high diagnostic performance.
Roughly one-quarter of children with biallelic AIPL1 variants develop rapidly progressive KC [3,5]. In many low-resource settings, tomography is unavailable or yields unusable scans [9]. A bedside score that identifies high-risk eyes could prompt timely CXL, optimize scarce imaging and improve cost-effectiveness. First-line pediatric CXL is estimated to cost 3174 GBP per quality-adjusted life year [17]; prioritizing those most likely to progress should enhance that ratio. The model runs on any laptop or within an electronic record, supporting use in tertiary and outreach clinics alike.
Drop-column analysis confirmed the contribution of genetics: adding AIPL1 status improved the specificity by 10 percentage points and the overall accuracy by 10.8 percentage points. Truncating AIPL1 variants have long been linked to anterior ectasia and keratoglobus [4], plausibly via stromal collagen fragility or vigorous oculo-digital rubbing [18]. Photophobia was the most influential non-genetic predictor, echoing mechanobiological hypotheses that sustained squinting increases tangential corneal stress [9]. The physiological plausibility of these weights argues against over-fitting. Because optic nerve pallor appeared in 79% of our cohort, the same architecture could, with larger datasets, be retrained to grade pallor or predict other LCA-related morbidities such as early cataract or pigmentary retinopathy. Although the present pilot centers on AIPL1, an identical network can be re-trained for other LCA genotypes, such as CEP290, GUCY2D, RPE65 or CRX, simply by adding the corresponding binary genotype flags and fine-tuning on larger, multicenter datasets, thereby extending its utility to broader pediatric populations.

Limitations

Several methodological constraints qualify these findings. The performance standard deviations are consistent with the pilot cohort size and should narrow as larger, multicenter datasets are incorporated; averaging metrics across 20 repeat stratified five-fold cross-validation runs dampens run-to-run volatility, but confirmation in larger multicenter cohorts is required to establish robustness. Likewise, feature-importance plots were not generated for this 19-patient pilot; larger external datasets will be needed before such interpretability metrics can be reported with confidence. Because AIPL1-LCA is uncommon (5–7% of all LCA) [4], the nineteen-patient sample confers wide confidence intervals and limits the assessment of more complex model architectures. Synthetic minority over-sampling and other data-augmentation techniques will be evaluated once a larger, multicenter dataset, including raw imaging, is available; applying them to the present 19-patient tabular cohort risks information leakage and inflated performance estimates.
The retrospective design yielded uneven documentation; behavioral cofactors such as eye rubbing, atopy and sleep quality were often absent, reducing explanatory power. KC classification relied on expert slit-lamp or Scheimpflug impressions, and quantitative indices such as Kmax or ABCD staging were unavailable for direct benchmarking. The cohort drew cases from five centers across four continents, which enhances external relevance but may conceal local imaging and referral biases. Prospective, harmonized multicenter validation remains necessary before clinical adoption; we are now assembling such a cohort to externally validate, recalibrate and extend the model. The expanded cohort will also permit statistically sound feature-importance analysis (e.g., SHAP) and the safe application of data-augmentation methods to boost generalizability.
Despite these shortcomings, the project provides preliminary evidence that a low-footprint, non-imaging neural network can help triage LCA children at highest risk for KC.

5. Conclusions

A lightweight, image-free neural network can flag LCA children carrying pathogenic AIPL1 variants who are at high risk of KC with up to 92% accuracy. Because it runs offline in milliseconds, the tool could guide timely referral for CXL even where tomography is unavailable, potentially preserving vision and improving cost-effectiveness. Validation in larger, prospective cohorts is the next step before clinical adoption.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jcm14186499/s1, File S1: TRIPOD-AI Checklist: completed reporting checklist [19,20].

Author Contributions

D.R.C.: Conceptualization; Investigation; Writing—Original Draft; Visualization; Project Administration. R.R.: Conceptualization; Methodology; Software; Data Analysis; Validation; Writing—Review and Editing. G.V.: Data Curation; Data Collection; Administrative Support. G.L.: Data Curation; Data Collection; Administrative Support. R.K.K.: Supervision; Conceptualization; Methodology; Writing—Review and Editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The parent study was approved by the Moorfields Eye Hospital Research Ethics Committee. The planned secondary analysis of the fully de-identified dataset was submitted to the McGill University Health Centre Research Ethics Board (MUHC-REB). Applying MUHC-REB Standard Operating Procedure REB-SOP 102.001 “Research Requiring REB Review” (approved on 18 May 2024), Section 5.2.2 (a), which pertains to the secondary use of anonymous information, the Board formally determined that the project does not constitute human-subject research; consequently, no waiver of consent or additional REB approval was issued or required.

Informed Consent Statement

As this study involved retrospective analysis of anonymized clinical records, individual patient consent for publication was not required per institutional guidelines.

Data Availability Statement

De-identified individual participant data that underlie the results reported in this article (predictor variables, outcome label), the MATLAB training script, and the final network weights will be made available to qualified researchers on reasonable request from the corresponding author (robert.koenekoop@mcgill.ca) beginning at the time of publication. Requestors must supply a brief study protocol and sign a data use agreement stating that the data will be used only for non-commercial research and that no attempt will be made to re-identify participants. No additional supporting documents are available.

Acknowledgments

We thank all of the patients and parents for their invaluable participation and support.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AIPL1: aryl hydrocarbon receptor-interacting protein-like 1 gene
CXL: corneal–collagen cross-linking
KC: keratoconus
LCA: Leber congenital amaurosis
OCT: optical coherence tomography
REB: Research Ethics Board

References

1. Gomes, J.A.P.; Rodrigues, P.F.; Lamazales, L.L. Keratoconus epidemiology: A review. Saudi J. Ophthalmol. 2022, 36, 3–6.
2. Steinmetz, J.D.; Bourne, R.R.A.; Briant, P.S.; Flaxman, S.R.; Taylor, H.R.B.; Jonas, J.B.; Abdoli, A.A.; Abrha, W.A.; Abualhasan, A.; Abu-Gharbieh, E.G.; et al. Causes of blindness and vision impairment in 2020 and trends over 30 years, and prevalence of avoidable blindness in relation to VISION 2020: The Right to Sight: An analysis for the Global Burden of Disease Study. Lancet Glob. Health 2021, 9, e144–e160.
3. Elder, M.J. Leber congenital amaurosis and its association with keratoconus and keratoglobus. J. Pediatr. Ophthalmol. Strabismus 1994, 31, 38–40.
4. Dharmaraj, S.; Leroy, B.P.; Sohocki, M.M.; Koenekoop, R.K.; Perrault, I.; Anwar, K.; Khaliq, S.; Devi, R.S.; Birch, D.G.; De Pool, E.; et al. The phenotype of Leber congenital amaurosis in patients with AIPL1 mutations. Arch. Ophthalmol. 2004, 122, 1029–1037.
5. Meyer, J.J.; Gokul, A.; Vellara, H.R.; McGhee, C.N.J. Progression of keratoconus in children and adolescents. Br. J. Ophthalmol. 2023, 107, 176–180.
6. Lass, J.H.; Lembach, R.G.; Park, S.B.; Hom, D.L.; Fritz, M.E.; Svilar, G.M.; Nuamah, I.F.; Reinhart, W.J.; Stocker, E.G.; Keates, R.H.; et al. Clinical management of keratoconus. A multicenter analysis. Ophthalmology 1990, 97, 433–445.
7. Achiron, A.; El-Hadad, O.; Leadbetter, D.F.; Hecht, I.; Hamiel, U.; Avadhanam, V.F.; Tole, D.F.; Darcy, K.F. Progression of Pediatric Keratoconus After Corneal Cross-Linking: A Systematic Review and Pooled Analysis. Cornea 2022, 41, 874–878.
8. Hansen, L.O.; Garcia, R.; Torricelli, A.A.M.; Bechara, S.J. Cost-Effectiveness of Corneal Collagen Crosslinking for Progressive Keratoconus: A Brazilian Unified Health System Perspective. Int. J. Environ. Res. Public Health 2024, 21, 1569.
9. Anitha, V.; Vanathi, M.; Raghavan, A.; Rajaraman, R.; Ravindran, M.; Tandon, R. Pediatric keratoconus—Current perspectives and clinical challenges. Indian J. Ophthalmol. 2021, 69, 214–225.
10. Rao, D.P.; Savoy, F.M.; Tan, J.Z.E.; Fung, B.P.-E.; Bopitiya, C.M.; Sivaraman, A.; Vinekar, A. Development and validation of an artificial intelligence based screening tool for detection of retinopathy of prematurity in a South Indian population. Front. Pediatr. 2023, 11, 1197237.
11. Shu, Q.; Pang, J.; Liu, Z.; Liang, X.; Chen, M.; Tao, Z.; Liu, Q.; Guo, Y.; Yang, X.; Ding, J.; et al. Artificial Intelligence for Early Detection of Pediatric Eye Diseases Using Mobile Photos. JAMA Netw. Open 2024, 7, e2425124.
12. Yarkheir, M.; Sadeghi, M.; Azarnoush, H.; Akbari, M.R.; Pour, E.K. Automated strabismus detection and classification using deep learning analysis of facial images. Sci. Rep. 2025, 15, 3910.
13. Collins, G.S.; Reitsma, J.B.; Altman, D.G.; Moons, K.G.M. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): The TRIPOD statement. BMJ 2015, 350, g7594.
14. Chen, X.; Zhao, J.; Iselin, K.C.; Borroni, D.; Romano, D.; Gokul, A.; McGhee, C.N.J.; Zhao, Y.; Sedaghat, M.-R.; Momeni-Moghaddam, H.; et al. Keratoconus detection of changes using deep learning of colour-coded maps. BMJ Open Ophthalmol. 2021, 6, e000824. Available online: https://bmjophth.bmj.com/content/6/1/e000824 (accessed on 24 May 2025).
15. Vandevenne, M.M.; Favuzza, E.; Veta, M.; Lucenteforte, E.; Berendschot, T.T.; Mencucci, R.; Nuijts, R.M.; Virgili, G.; Dickman, M.M. Artificial intelligence for detecting keratoconus. Cochrane Database Syst. Rev. 2023, 2023, CD014911.
16. Abràmoff, M.D.; Lavin, P.T.; Birch, M.; Shah, N.; Folk, J.C. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit. Med. 2018, 1, 39.
17. Salmon, H.A.; Chalk, D.; Stein, K.; Frost, N.A. Cost effectiveness of collagen crosslinking for progressive keratoconus in the UK NHS. Eye 2015, 29, 1504–1511.
18. Liu, W.-C.; Lee, S.-M.; Graham, A.D.; Lin, M.C. Effects of eye rubbing and breath holding on corneal biomechanical properties and intraocular pressure. Cornea 2011, 30, 855–860.
19. Collins, G.S.; Moons, K.G.M.; Dhiman, P.; Riley, R.D.; Beam, A.L.; Van Calster, B.; Ghassemi, M.; Liu, X.; Reitsma, J.B.; van Smeden, M.; et al. TRIPOD+AI statement: Updated guidance for reporting clinical prediction models that use regression or machine learning methods. BMJ 2024, 385, e078378.
20. Debray, T.P.A.; Collins, G.S.; Dhiman, P.; Riley, R.D.; Snell, K.I.E.; Van Calster, B.; Reitsma, J.B.; Moons, K.G.M. Transparent reporting of multivariable prediction models developed or validated using clustered data: TRIPOD-Cluster checklist. BMJ 2023, 380, e071018.
Figure 1. Network architecture.
Figure 2. Confusion matrices of a representative best run. Four 2 × 2 matrices summarize the predictions for the training, validation, test and overall datasets (true positives/negatives in green; false positives/negatives in pink). Across 20 random initializations, 6 of 20 runs had no test-set errors and 16 of 20 achieved 100% specificity, demonstrating the robustness of the neural network.
Figure 3. Training and validation loss curves for the neural network, showing stable generalization with AIPL1 genotype (A) and over-fitting without it (B). Cross-entropy loss (log10 scale) across 31 epochs for the training set (blue), the validation set (green) and the test set (red); the dashed green trace marks the best validation performance, with the circled point indicating the epoch selected for testing. (A) Model with AIPL1-genotype input: red and green curves converge and plateau together, showing stable generalization. (B) Model without genotype input: validation (green) and test (red) losses plateau together at ≈1 × 10−1 cross-entropy after ≈10 epochs, signaling early over-fitting.
Table 1. Baseline demographic and ocular characteristics of 19 LCA patients, stratified by KC status.

Predictor (Model Input) | All Patients (N = 19) | KC-Converters (N = 6) | Non-Converters (N = 13) | Absolute Difference (KC − Non)
Age, years | 16.0 ± 13.7 | 25.7 ± 16.9 | 11.1 ± 9.1 | +14.6 yr
South Asian origin | 5 (26%) | 4 (67%) | 1 (8%) | +59 pp
Severe visual impairment (all patients) * | 19 (100%) | 6 (100%) | 13 (100%) | 0 pp
Photophobia (yes) | 5 (26%) | 2 (33%) | 3 (23%) | +10 pp
Nyctalopia (yes) | 16 (84%) | 5 (83%) | 11 (85%) | −1 pp
Oculo-digital phenomenon (yes) | 2 (11%) | 1 (17%) | 1 (8%) | +9 pp
Maculopathy (yes) | 19 (100%) | 6 (100%) | 13 (100%) | 0 pp
Optic nerve pallor (yes) | 15 (79%) | 6 (100%) | 9 (69%) | +31 pp
Pigmentary retinopathy (yes) | 17 (89%) | 6 (100%) | 11 (85%) | +15 pp
W278Stop mutation present | 11 (58%) | 3 (50%) | 8 (62%) | −12 pp

* BCVA was collapsed into a three-level ordinal scale (0 = ≥20/200; 1 = 20/200–20/400; 2 = Counting Fingers/Hand Motion/Light Perception/No Light Perception).
Table 2. Performance metrics with vs. without AIPL1 genetic information. Values are mean % (SD) across the 20 runs.

With AIPL1:
Dataset | Accuracy | Specificity | Sensitivity | F1-Score
Training | 95.5 (14.3) | 95.5 (16.9) | 96.3 (9.3) | -
Validation | 85.0 (17.0) | 87.1 (26.6) | 66.7 (45.4) | -
Test | 87.5 (22.2) | 92.1 (20.5) | 86.9 (29.4) | -
Overall | 91.6 (12.8) | 93.5 (17.0) | 87.5 (13.1) | 0.901

Without AIPL1:
Dataset | Accuracy | Specificity | Sensitivity | F1-Score
Training | 84.5 (21.5) | 85.0 (20.0) | 84.0 (22.0) | -
Validation | 82.5 (18.3) | 83.0 (19.0) | 82.0 (20.0) | -
Test | 68.8 (24.2) | 70.8 (28.6) | 62.2 (48.6) | -
Overall | 80.8 (17.8) | 83.5 (19.0) | 75.0 (33.6) | 0.781
Note. Exact 95% confidence intervals are not displayed because the pilot sample (19 children; 6 KC-positive) yields very wide bounds; values should be interpreted accordingly.

Share and Cite

Chow, D.R.; Remtulla, R.; Vargas, G.; Leite, G.; Koenekoop, R.K. Neural Network Prediction of Keratoconus in AIPL1-Linked Leber Congenital Amaurosis: A Proof-of-Concept Pilot Study. J. Clin. Med. 2025, 14, 6499. https://doi.org/10.3390/jcm14186499
