Defining a Multi-Omic, AI-Enabled Stool Screening Paradigm for Colorectal Cancer: A Consensus Framework for Clinical Translation
Simple Summary
Abstract
1. The Evolving Landscape of Noninvasive Colorectal Cancer Screening
1.1. Clinical Imperatives: Adherence Gaps and the Precancerous Lesion Detection Deficit
1.2. The Performance Benchmark: Multitarget Stool DNA (Mt-sDNA) and the Next-Generation Cologuard Plus
1.3. The Unmet Need: Quantifying the Opportunity for Improvement
| Modality | Sample/Interval | Sens. (CRC) | Sens. (APL/HGD) | Specificity | Key Advantages and Limitations |
|---|---|---|---|---|---|
| Colonoscopy | NA/10 years | >95% | High | High | Reference standard; diagnostic and therapeutic. Invasive; prep/sedation; resource limits [4]. |
| FIT | Stool/1 year | 67.3% | 23.3% | 94.8–95.7% | Low cost and scalable. Low APL sensitivity; annual adherence required [5]. |
| Cologuard (original mt-sDNA) | Stool/3 years | 92.3% | 42.4% (HGD 69.2%) | 86.6% | Higher CRC/APL sensitivity than FIT. Lower specificity; more follow-up colonoscopies [8]. |
| Cologuard Plus (next-gen mt-sDNA) | Stool/3 years | 93.9% | 43.4% (HGD ~74%) | 90.6–92.7% | Higher specificity than legacy mt-sDNA; strong HGD detection. More than half of APLs remain missed [5,6,7]. |
| Shield (Guardant; cfDNA blood) | Blood/not established | 83.1% | 13.2% | 89.6% | Clinic-based blood draw; limited APL detection [9]. |
| Viome (stool RNA/microbiome) | Stool/not established | Not public | Not public | Not public | Development-stage stool RNA/microbiome platform; average-risk prospective data remain limited [11,12,13]. |
| BiotaX Labs (stool microbiome) | Stool/not established | Not public | Not public | Not public | Development-stage stool microbiome test; prospective colonoscopy-verified screening validation is still needed [14]. |
| Freenome (blood-based multiomic) | Blood/not established | 79.2% | 12.5% | 91.5% | Blood-based multiomic approach; limited APL detection in published validation [15,16]. |

| Test (Operating Point) | CRC Sens. (%) | APL Sens. (%) | Spec. (%) Used | Detected CRCs/1000 | Detected APLs/1000 | Follow-Up Colonoscopies/1000 (Positives) |
|---|---|---|---|---|---|---|
| Published comparator scenario—FIT (BLUE-C input) [5] | 67.3 | 23.3 | 94.0 (modeled) | 3.3 | 24.8 | 81.4 |
| Published comparator scenario—legacy mt-sDNA (DeeP-C input) [8] | 92.3 | 42.4 | 86.6 (published) | 4.5 | 45.1 | 168.6 |
| Published comparator scenario—Cologuard Plus (FDA-labeled operating point input) [6,7] | 95.0 | 43.4 | 94.0 | 4.6 | 46.1 | 104.1 |
| Hypothetical multi-omic target—APL 55% (assumed CRC sens 95%, spec 94%) | 95.0 | 55.0 | 94.0 | 4.6 | 58.4 | 116.4 |
| Hypothetical multi-omic target—APL 60% (assumed CRC sens 95%, spec 94%) | 95.0 | 60.0 | 94.0 | 4.6 | 63.8 | 121.7 |
| Hypothetical multi-omic target—APL 65% (assumed CRC sens 95%, spec 94%) | 95.0 | 65.0 | 94.0 | 4.6 | 69.1 | 127.0 |
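The per-1000 yields in the scenario table above follow from straightforward arithmetic on sensitivity, specificity, and prevalence. The sketch below reproduces the FIT row; the prevalence inputs (≈0.49% CRC, ≈10.64% APL) are illustrative assumptions back-calculated from the published comparator rows, not independently sourced figures.

```python
# Sketch of the per-1000-screened arithmetic behind the scenario table.
# Prevalence values are illustrative assumptions back-calculated from the
# published FIT and mt-sDNA rows; they are not sourced estimates.

def yield_per_1000(sens_crc, sens_apl, spec,
                   prev_crc=0.0049, prev_apl=0.1064, n=1000.0):
    """Return detected CRCs, detected APLs, and follow-up colonoscopies
    (total test positives) per `n` screened individuals."""
    crc = n * prev_crc          # expected CRC cases in the cohort
    apl = n * prev_apl          # expected APL cases in the cohort
    neg = n - crc - apl         # individuals without CRC or APL

    tp_crc = crc * sens_crc     # CRCs flagged by the test
    tp_apl = apl * sens_apl     # APLs flagged by the test
    fp = neg * (1.0 - spec)     # false positives among the disease-free

    return tp_crc, tp_apl, tp_crc + tp_apl + fp

# FIT operating point from the table (sens 67.3%/23.3%, modeled spec 94%)
crcs, apls, scopes = yield_per_1000(0.673, 0.233, 0.94)
print(round(crcs, 1), round(apls, 1), round(scopes, 1))  # 3.3 24.8 81.4
```

The hypothetical multi-omic rows follow analogously by raising `sens_apl`; small rounding differences against the table reflect the limited precision of the assumed prevalences.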
2. The Gut Microbiome: An Independent and Synergistic Axis for CRC Detection
2.1. Foundational Evidence: Cross-Cohort and Meta-Analytic Validation of Microbial Signatures
2.2. Key Microbial Biomarkers: From Fusobacterium nucleatum to Multi-Taxa Risk Scores
2.3. Biological Plausibility and Clinical Synergy: Evidence for Combined Assays
3. Artificial Intelligence as the Integration Engine for Multi-Omic Diagnostics
3.1. Machine Learning Architectures: From Ensemble Methods to Deep Learning
3.2. The Generalizability Challenge: Addressing Batch Effects with Advanced Harmonization Frameworks
3.3. Ensuring Clinical Trust: The Role of Explainable AI (XAI)
4. A Development and Validation Playbook for Regulatory and Clinical Success
4.1. Pre-Analytical and Laboratory Rigor: The Foundation of a Robust Signal
4.2. A Principled AI Development Lifecycle: Adhering to TRIPOD+AI Reporting Standards
4.3. Integrating Diagnostic-Accuracy and Bias Appraisal: STARD 2015 and PROBAST
4.4. Designing Robust Clinical Trials: Incorporating SPIRIT-AI, CONSORT-AI, and DECIDE-AI Guidelines
4.5. Post-Market Surveillance: Model Governance, Drift Monitoring, and Real-World Evidence
5. Implementation and Adoption Considerations
5.1. Integration into Existing Screening Infrastructures
5.2. The Evolving Standard of Care
5.3. Intellectual Property and Freedom-to-Operate
6. Future Directions and Concluding Remarks
6.1. The Next Frontier: Strain-Level Resolution, Metabolomics, and Longitudinal Monitoring
6.2. The Health-Economic Equation: Balancing COGS and Prevention
6.3. Beyond Colorectal Cancer: Pan-Cancer Applications of Microbial and Multi-Omic Signatures
6.4. Ethical and Regulatory Considerations
6.5. Limitations
6.6. Conclusion: A Consensus Framework for a Clinically Actionable, AI-Powered Diagnostic
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Siegel, R.L.; Wagle, N.S.; Cercek, A.; Smith, R.A.; Jemal, A. Colorectal cancer statistics, 2023. CA Cancer J. Clin. 2023, 73, 233–254. [Google Scholar] [CrossRef]
- American Cancer Society. Colorectal Cancer Facts & Figures 2023–2025; American Cancer Society: Atlanta, GA, USA, 2025. [Google Scholar]
- Exact Sciences Corporation. Exact Sciences launches the Cologuard Plus™ test, transforming colorectal cancer screening. Exact Sciences Newsroom. 2025. Available online: https://www.exactsciences.com/newsroom/press-releases (accessed on 12 September 2025).
- Zhang, Y.; Song, K.; Zhou, Y.; Chen, Y.; Cheng, X.; Dai, M.; Wu, D.; Chen, H. Accuracy and long-term effectiveness of established screening modalities and strategies in colorectal cancer screening: An umbrella review. Int. J. Cancer 2025, 157, 126–138. [Google Scholar] [CrossRef]
- Imperiale, T.F.; Porter, K.; Zella, J.; Gagrat, Z.D.; Olson, M.C.; Statz, S.; Garces, J.; Lavin, P.T.; Aguilar, H.; Brinberg, D.; et al. Next-generation multitarget stool DNA test for colorectal cancer screening. N. Engl. J. Med. 2024, 390, 984–993. [Google Scholar] [CrossRef] [PubMed]
- US Food and Drug Administration. Summary of Safety and Effectiveness Data (SSED) for PMA P230043. 2024. Available online: https://www.accessdata.fda.gov/cdrh_docs/pdf23/P230043B.pdf (accessed on 12 September 2025).
- Exact Sciences Corporation. FDA Approves Exact Sciences’ Cologuard Plus™ Test. 2024. Available online: https://www.exactsciences.com/newsroom/press-releases/fda-approves-exact-sciences-cologuard-plus-test (accessed on 12 September 2025).
- Imperiale, T.F.; Ransohoff, D.F.; Itzkowitz, S.H.; Levin, T.R.; Lavin, P.; Lidgard, G.P.; Ahlquist, D.A.; Berger, B.M. Multitarget stool DNA testing for colorectal-cancer screening. N. Engl. J. Med. 2014, 370, 1287–1297. [Google Scholar] [CrossRef] [PubMed]
- Chung, D.C.; Gray, D.M., II; Singh, H.; Issaka, R.B.; Raymond, V.M.; Eagle, C.; Hu, S.; Chudova, D.I.; Talasaz, A.; Greenson, J.K.; et al. A cell-free DNA blood-based test for colorectal cancer screening. N. Engl. J. Med. 2024, 390, 973–983. [Google Scholar] [CrossRef] [PubMed]
- Forbes, S.P.; Donderici, E.Y.; Zhang, N.; Sharif, B.; Tremblay, G.; Schafer, G.; Raymond, V.M.; Talasaz, A.; Eagle, C.; Das, A.K.; et al. Population health outcomes of blood-based screening for colorectal cancer in comparison to current screening modalities: Insights from a discrete-event simulation model incorporating longitudinal adherence. J. Med. Econ. 2024, 27, 991–1002. [Google Scholar] [CrossRef]
- Park, A. Viome scores FDA breakthrough label for cancer-screening, microbiome-sequencing AI platform. Fierce Biotech. 2021. Available online: https://www.fiercebiotech.com/medtech/viome-scores-fda-breakthrough-label-for-microbiome-sequencing-ai-platform-for-cancer (accessed on 12 September 2025).
- Scripps Research. Viome and Scripps Research Partner to Develop First at-Home RNA Screening Test to prevent Colon Cancer Before it Strikes. 2025. Available online: https://www.scripps.edu/news-and-events/press-room/2025/20250714-viome-rna-screening.html (accessed on 12 September 2025).
- Medical Device Network. Viome, Scripps Research to develop colon polyps screening test. 2025. Available online: https://www.medicaldevice-network.com/news/viome-scripps-colon-polyps (accessed on 12 September 2025).
- BiotaX Labs Ltd. Microbiome Test for the Detection of Colorectal Polyps. ClinicalTrials.gov Identifier: NCT05060757. 2021. Available online: https://clinicaltrials.gov/study/NCT05060757 (accessed on 12 September 2025).
- Shaukat, A.; Burke, C.A.; Chan, A.T.; Grady, W.M.; Gupta, S.; Katona, B.W.; Ladabaum, U.; Liang, P.S.; Liu, J.J.; Putcha, G.; et al. Clinical validation of a circulating tumor DNA-based blood test to screen for colorectal cancer. JAMA 2025, 334, 56–63. [Google Scholar] [CrossRef]
- Freenome. Freenome Announces Exclusive License Agreement with Exact Sciences to Commercialize Freenome’s Blood-Based Screening Test for Colorectal Cancer. 2025. Available online: https://www.freenome.com/press (accessed on 12 September 2025).
- Collins, G.S.; Reitsma, J.B.; Altman, D.G.; Moons, K.G.M. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): The TRIPOD statement. Ann. Intern. Med. 2015, 162, 55–63. [Google Scholar] [CrossRef]
- Wolff, R.F.; Moons, K.G.M.; Riley, R.D.; Whiting, P.F.; Westwood, M.; Collins, G.S.; Reitsma, J.B.; Kleijnen, J.; Mallett, S. PROBAST: A tool to assess risk of bias and applicability of prediction model studies. Ann. Intern. Med. 2019, 170, 51–58. [Google Scholar] [CrossRef]
- Davis, S.E.; Matheny, M.E.; Balu, S.; Sendak, M.P. A framework for understanding label leakage in machine learning for health care. J. Am. Med. Inform. Assoc. 2023, 31, 274–280. [Google Scholar] [CrossRef]
- Varma, S.; Simon, R. Bias in error estimation when using cross-validation for model selection. BMC Bioinform. 2006, 7, 91. [Google Scholar] [CrossRef] [PubMed]
- Cawley, G.C.; Talbot, N.L.C. On over-fitting in model selection and subsequent selection bias in performance evaluation. J. Mach. Learn. Res. 2010, 11, 2079–2107. Available online: https://www.jmlr.org/papers/v11/cawley10a.html (accessed on 12 September 2025).
- Austin, P.C.; van Klaveren, D.; Vergouwe, Y.; Nieboer, D.; Lee, D.S.; Steyerberg, E.W. Validation of prediction models: Examining temporal and geographic stability of baseline risk and estimated covariate effects. Diagn. Progn. Res. 2017, 1, 12. [Google Scholar] [CrossRef] [PubMed]
- Bernett, J.; Blumenthal, D.B.; Grimm, D.G.; Haselbeck, F.; Joeres, R.; Kalinina, O.V.; List, M. Guiding questions to avoid data leakage in biological machine-learning applications. Nat. Methods 2024, 21, 1444–1453. [Google Scholar] [CrossRef]
- Novielli, P.; Romano, D.; Magarelli, M.; Di Bitonto, P.; Diacono, D.; Chiatante, A.; Lopalco, G.; Sabella, D.; Venerito, V.; Filannino, P.; et al. Explainable artificial intelligence for microbiome data analysis in colorectal cancer biomarker identification. Front. Microbiol. 2024, 15, 1348974. [Google Scholar] [CrossRef]
- Lundberg, S.M.; Lee, S.-I. A unified approach to interpreting model predictions. arXiv 2017, arXiv:1705.07874. [Google Scholar]
- Castellarin, M.; Warren, R.L.; Freeman, J.D.; Dreolini, L.; Krzywinski, M.; Strauss, J.; Barnes, R.; Watson, P.; Allen-Vercoe, E.; Moore, R.A.; et al. Fusobacterium nucleatum infection is prevalent in human colorectal carcinoma. Genome Res. 2012, 22, 299–306. [Google Scholar] [CrossRef]
- Wirbel, J.; Pyl, P.T.; Kartal, E.; Zych, K.; Kashani, A.; Milanese, A.; Fleck, J.S.; Voigt, A.Y.; Palleja, A.; Ponnudurai, R.; et al. Meta-analysis of fecal metagenomes reveals global microbial signatures that are specific for colorectal cancer. Nat. Med. 2019, 25, 679–689. [Google Scholar] [CrossRef]
- Dai, Z.; Coker, O.O.; Nakatsu, G.; Wu, W.K.K.; Zhao, L.; Chen, Z.; Chan, F.K.L.; Kristiansen, K.; Sung, J.J.Y.; Wong, S.H.; et al. Multi-cohort analysis of colorectal cancer metagenome identified altered bacteria across populations and universal bacterial markers. Microbiome 2018, 6, 70. [Google Scholar] [CrossRef]
- Zhu, Y.; Wang, Y.; Li, Y. A composite quantile regression model for correcting batch effects in microbiome data. Front. Microbiol. 2025, 16, 1484183. [Google Scholar] [CrossRef]
- Sinha, R.; Abu-Ali, G.; Vogtmann, E.; Fodor, A.A.; Ren, B.; Amir, A.; Schwager, E.; Crabtree, J.; Ma, S.; The Microbiome Quality Control Project Consortium; et al. Assessment of variation in microbial community amplicon sequencing by the Microbiome Quality Control (MBQC) project consortium. Nat. Biotechnol. 2017, 35, 1077–1086. [Google Scholar] [CrossRef]
- Austin, G.I.; Brown Kav, A.; ElNaggar, S.; Park, H.; Biermann, J.; Uhlemann, A.C.; Pe’er, I.; Korem, T. Processing-bias correction with DEBIAS-M improves cross-study generalization of microbiome-based prediction models. Nat. Microbiol. 2025, 10, 897–911. [Google Scholar] [CrossRef] [PubMed]
- Zepeda-Rivera, M.; Minot, S.S.; Bouzek, H.; Wu, H.; Blanco-Míguez, A.; Manghi, P.; Jones, D.S.; LaCourse, K.D.; Wu, Y.; McMahon, E.F.; et al. A distinct Fusobacterium nucleatum clade dominates the colorectal cancer niche. Nature 2024, 628, 424–432. [Google Scholar] [CrossRef] [PubMed]
- Zhou, Z.; Chen, J.; Yao, H.; Hu, H. Fusobacterium and Colorectal Cancer. Front. Oncol. 2018, 8, 371. [Google Scholar] [CrossRef] [PubMed]
- Zwezerijnen-Jiwa, F.H.; Sivov, H.; Paizs, P.; Zafeiropoulou, K.; Kinross, J. A systematic review of microbiome-derived biomarkers for early colorectal cancer detection. Neoplasia 2023, 36, 100868. [Google Scholar] [CrossRef]
- Chen, S.; Zhang, L.; Li, M.; Zhang, Y.; Sun, M.; Wang, L.; Lin, J.; Cui, Y.; Chen, Q.; Jin, C.; et al. Fusobacterium nucleatum reduces METTL3-mediated m6A modification and contributes to colorectal cancer metastasis. Nat. Commun. 2022, 13, 1248. [Google Scholar] [CrossRef]
- Fan, J.-Q.; Zhao, W.-F.; Lu, Q.-W.; Zha, F.-R.; Lv, L.-B.; Ye, G.-L.; Gao, H.-L. Fecal microbial biomarkers combined with multi-target stool DNA test improve diagnostic accuracy for colorectal cancer. World J. Gastrointest. Oncol. 2023, 15, 1424–1435. [Google Scholar] [CrossRef]
- Freitas, P.; Silva, F.; Sousa, J.V.; Ferreira, R.M.; Figueiredo, C.; Pereira, T.; Oliveira, H.P. Machine learning-based approaches for cancer prediction using microbiome data. Sci. Rep. 2023, 13, 11821. [Google Scholar] [CrossRef]
- Tian, T.; Zhu, M.; Zhang, Y.; Zhang, F.; Huang, X.; Li, J.; Wang, Y.; Liu, Y.; Wang, J.H. Harnessing AI for advancing pathogenic microbiology: Opportunities and challenges. Front. Microbiol. 2024, 15, 1510139. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
- Xu, C.; Zhao, L.-Y.; Ye, C.-S.; Xu, K.-C.; Xu, K.-Y. The application of machine learning in clinical microbiology and infectious diseases. Front. Cell. Infect. Microbiol. 2025, 15, 1545646. [Google Scholar] [CrossRef]
- Pateriya, D.; Malwe, A.S.; Sharma, V.K. CRCpred: An AI-ML tool for colorectal cancer prediction using gut microbiome. Comput. Biol. Med. 2025, 195, 110592. [Google Scholar] [CrossRef]
- Collins, G.S.; Moons, K.G.M.; Dhiman, P.; Riley, R.D.; Beam, A.L.; Van Calster, B.; Ghassemi, M.; Liu, X.; Reitsma, J.B.; van Smeden, M.; et al. TRIPOD+AI: Reporting guideline for machine learning/AI clinical prediction models. BMJ 2024, 385, e078378. [Google Scholar] [CrossRef] [PubMed]
- Cohen, J.; Bossuyt, P. TRIPOD+AI: An updated reporting guideline for clinical prediction models. BMJ 2024, 385, q824. [Google Scholar] [CrossRef] [PubMed]
- Moons, K.G.M.; de Groot, J.A.H.; Bouwmeester, W.; Vergouwe, Y.; Mallett, S.; Altman, D.G.; Reitsma, J.B.; Collins, G.S. Critical appraisal and data extraction for systematic reviews of prediction modelling studies: The CHARMS checklist. PLoS Med. 2014, 11, e1001744. [Google Scholar] [CrossRef] [PubMed]
- Bossuyt, P.M.; Reitsma, J.B.; Bruns, D.E.; Gatsonis, C.A.; Glasziou, P.P.; Irwig, L.; Lijmer, J.G.; Moher, D.; Rennie, D.; de Vet, H.C.W.; et al. STARD 2015: An updated list of essential items for reporting diagnostic accuracy studies. BMJ 2015, 351, h5527. [Google Scholar] [CrossRef]
- Moons, K.G.M.; Damen, J.A.A.; Kaul, T.; Hooft, L.; Navarro, C.A.; Dhiman, P.; Beam, A.L.; Van Calster, B.; Celi, L.A.; Denaxas, S.; et al. PROBAST+AI: An updated quality, risk of bias, and applicability assessment tool for prediction models using regression or artificial intelligence methods. BMJ 2025, 388, e082505. [Google Scholar] [CrossRef]
- Rivera, S.C.; Liu, X.; Chan, A.-W.; Denniston, A.K.; Calvert, M.J.; SPIRIT-AI and CONSORT-AI Working Group. Guidelines for clinical trial protocols for interventions involving artificial intelligence: The SPIRIT-AI extension. Nat. Med. 2020, 26, 1351–1363. [Google Scholar] [CrossRef]
- Liu, X.; Rivera, S.C.; Moher, D.; Calvert, M.J.; Denniston, A.K. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: The CONSORT-AI extension. Nat. Med. 2020, 26, 1364–1374. [Google Scholar] [CrossRef]
- Vasey, B.; Nagendran, M.; Campbell, B.; Clifton, D.A.; Collins, G.S.; Denaxas, S.; Denniston, A.K.; Faes, L.; Geerts, B.; Ibrahim, M.; et al.; DECIDE-AI Expert Group. Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI. BMJ 2022, 377, e070904. [Google Scholar] [CrossRef]
- Fendrick, A.M.; Ebner, D.W.; Dore, M.; Estes, C.; Aranda, G.; Dehghani, M. Value of stool-based colorectal cancer screening: Integrating real-world adherence, detection, and prevention in a cohort-based modeling analysis. J. Clin. Med. 2026, 15, 41. [Google Scholar] [CrossRef]
- Banavar, G.; Ogundijo, O.; Julian, C.; Toma, R.; Camacho, F.; Torres, P.J.; Hu, L.; Chandra, T.; Piscitello, A.; Kenny, L.; et al. Detecting salivary host and microbiome RNA signature for aiding diagnosis of oral and throat cancer. Oral Oncol. 2023, 145, 106480. [Google Scholar] [CrossRef]
- Loaiza-Bonilla, A.; Thaker, N.; Chung, C.; Parikh, R.B.; Stapleton, S.; Borkowski, P. Driving knowledge to action: Building a better future with artificial intelligence-enabled multidisciplinary oncology. Am. Soc. Clin. Oncol. Educ. Book 2025, 45, e100048. [Google Scholar] [CrossRef]
- Lupusoru, R.; Moleriu, L.C.; Mare, R.; Sporea, I.; Popescu, A.; Sirli, R.; Goldis, A.; Nica, C.; Moga, T.V.; Miutescu, B.; et al. AI-guided multi-omic microbiome modulation improves clinical and inflammatory outcomes in refractory IBD: A real-world study. Int. J. Mol. Sci. 2026, 27, 201. [Google Scholar] [CrossRef]
- US Food and Drug Administration. Good Machine Learning Practice for Medical Device Development: Guiding Principles. 2025. Available online: https://www.fda.gov/medical-devices/software-medical-device-samd/good-machine-learning-practice-medical-device-development-guiding-principles (accessed on 1 March 2026).
- European Commission Directorate-General for Health and Food Safety. MDCG 2025-6 FAQ on interplay between the Medical Devices Regulation and In Vitro Diagnostic Medical Devices Regulation and the Artificial Intelligence Act. 2025. Available online: https://health.ec.europa.eu/latest-updates/mdcg-2025-6-faq-interplay-between-medical-devices-regulation-vitro-diagnostic-medical-devices-2025-06-19_en (accessed on 1 March 2026).


| Domain | Critical Requirement | Rationale for Rigor |
|---|---|---|
| Pre-Analytical | Standardized Exclusions: Explicit a priori exclusion of samples with recent antibiotic use (>30 days), overt GI bleeding (>14 days), or recent colonoscopy. | Prevents confounding of microbial and methylation signals by transient physiological states. |
| | Unified Extraction: Use of validated protocols for dual extraction of host and microbial DNA from the same aliquot. | Ensures direct comparability of multi-omic signals; minimizes batch variation. |
| AI Validation | Strict Leakage Prevention: Exclusion of all post-referral variables (e.g., hemoglobin immunoassay, pathology logs) from feature sets. | Prevents “hindsight bias” where the model learns proxies for the outcome rather than biological signals. |
| | Temporal Holdouts: Validation using chronological (time-forward) splits rather than random shuffling. | Simulates prospective clinical deployment and reveals performance drift over time. |
| Reporting | Guideline Adherence: Full compliance with TRIPOD+AI (development) and STARD 2015 (accuracy) checklists. | Enables transparent critical appraisal by regulators and clinicians. |
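The "Temporal Holdouts" requirement above can be made concrete with a minimal sketch: samples are partitioned by collection date at a pre-specified cutoff, so the validation set strictly post-dates everything the model was trained on. The record layout and field names (`collected`, `label`) below are illustrative assumptions, not a real dataset schema.

```python
# Minimal time-forward (chronological) split, as opposed to random shuffling.
from datetime import date

def temporal_split(samples, cutoff):
    """Train on samples collected strictly before `cutoff`;
    hold out everything collected on or after it for validation."""
    train = [s for s in samples if s["collected"] < cutoff]
    holdout = [s for s in samples if s["collected"] >= cutoff]
    return train, holdout

# Illustrative records; in practice these would come from the study database.
samples = [
    {"id": "s1", "collected": date(2023, 2, 1), "label": 0},
    {"id": "s2", "collected": date(2023, 9, 15), "label": 1},
    {"id": "s3", "collected": date(2024, 3, 10), "label": 0},
    {"id": "s4", "collected": date(2024, 11, 2), "label": 1},
]
train, holdout = temporal_split(samples, cutoff=date(2024, 1, 1))
print([s["id"] for s in train], [s["id"] for s in holdout])
# → ['s1', 's2'] ['s3', 's4']
```

Comparing holdout metrics across successive cutoffs is one simple way to surface the performance drift this row warns about.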
| Example Patient | True Label | Predicted Prob (Model) | Top Microbiome Contributors (Feature, Direction)—SHAP (Δ Log-Odds) | Top Epigenetic Contributors (Feature, Direction)—SHAP (Δ Log-Odds) | Net Explanation (Short) | Biological Plausibility Note |
|---|---|---|---|---|---|---|
| Patient A (case) | CRC | 0.92 | Fusobacterium nucleatum—↑ (+1.20); Peptostreptococcus—↑ (+0.45); Parvimonas—↑ (+0.30) | mSEPT9 (high methylation)—↑ (+1.05); SDC2 methylation—↑ (+0.40) | Microbiome and methylation signals are concordant; microbiome shifts + methylated SEPT9 together produce large positive contribution to the call. | Enrichment of Fusobacterium and oral pathobionts is repeatedly associated with CRC. |
| Patient B (adenoma; APL) | APL | 0.68 | Fusobacterium—↑ (+0.35); Akkermansia—↓ (protective) (−0.20); Bacteroides fragilis—↑ (+0.25) | SDC2 methylation (moderate)—↑ (+0.50); mSEPT9—low (0.00) | Microbiome provides moderate positive signal; epigenetic SDC2 contributes additional weight yielding an above-threshold call despite low SEPT9. | SDC2 methylation often signals earlier lesions; microbiome shifts can appear in advanced adenomas. |
| Patient C (false positive/inflammation) | No neoplasia (control) | 0.58 | High blood-associated taxa signal or recent bleed proxy: Streptococcus—↑ (+0.30); oral microbiome spillover—↑ (+0.25) | mSEPT9—low (0.00); SDC2—low (0.00) | Microbiome-only positive call, no corroborating methylation signal; SHAP shows weaker total contribution vs cases and suggests possible false positive due to inflammation or bleed. | When microbiome signal is driven by bleeding/inflammation, epigenetic markers (host methylation) may help discriminate true neoplasia from confounders. |
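The worked examples above rest on the additive log-odds decomposition that SHAP provides. For a purely linear log-odds model this decomposition has a closed form, φ_j = w_j (x_j − E[x_j]), so per-feature contributions can be computed directly without a SHAP library. All weights, background means, and feature values below are hypothetical numbers chosen to echo the table, not fitted parameters.

```python
import math

def linear_shap(weights, x, background_mean):
    """Exact SHAP values for a linear log-odds model:
    phi_j = w_j * (x_j - E[x_j])."""
    return {f: weights[f] * (x[f] - background_mean[f]) for f in weights}

# Hypothetical features echoing the worked examples above:
# F. nucleatum relative abundance and SDC2 methylation level.
weights = {"fuso": 1.5, "sdc2": 2.0}   # illustrative model coefficients
mean = {"fuso": 0.2, "sdc2": 0.1}      # illustrative background means
bias = -1.0                            # base log-odds at the background

x = {"fuso": 1.0, "sdc2": 0.35}        # one illustrative patient
phi = linear_shap(weights, x, mean)
logit = bias + sum(phi.values())       # contributions sum to the log-odds
prob = 1.0 / (1.0 + math.exp(-logit))
print({k: round(v, 2) for k, v in phi.items()}, round(prob, 2))
# → {'fuso': 1.2, 'sdc2': 0.5} 0.67
```

The same additivity (base value plus per-feature φ equals the model output) is what lets the table report Δ log-odds per microbiome and epigenetic feature; for tree ensembles the φ values come from TreeSHAP rather than this closed form.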
| Guideline | Primary Focus | Stage of Research | Key AI-Specific Recommendations (Concise) |
|---|---|---|---|
| TRIPOD+AI | Transparent reporting of model development and validation (regression or ML) | Pre-clinical → internal validation | Fully specify the final model; clearly state data sources and participant flow; describe predictor handling, missingness, model building, hyperparameter tuning, and internal validation (e.g., nested CV); provide code/weights where possible [41]. |
| STARD 2015 | Diagnostic accuracy study reporting (index vs. reference tests) | Comparative diagnostic evaluation/clinical validation | Provide index/reference test definitions, timing, recruitment and participant flow diagrams, blinding of readers, handling of indeterminate results, and cross-tabulations (2 × 2); report test execution and thresholds used; when reporting ML-based index tests, state how thresholds were chosen without leaking test data [44]. |
| PROBAST/PROBAST-AI | Systematic risk-of-bias & applicability appraisal for prediction models | All stages (use for appraisal & reporting) | Perform structured bias assessment across participants, predictors, outcomes, and analysis; for AI models evaluate overfitting, leakage, calibration, temporal/geographic transportability, and transparency of analytical choices. Present PROBAST assessment in the supplement [18,45]. |
| SPIRIT-AI | Trial protocols that include AI interventions | Clinical trial protocol design | Specify algorithm version, intended inputs/outputs, human-AI interaction, monitoring, and plans for data handling/updates; pre-specify primary outcome and evaluation plan. |
| CONSORT-AI | Reporting of clinical trial results when AI is an intervention | Clinical trial report | Describe AI integration into workflow, versioning, training data provenance, performance error analysis, and state code/model availability and monitoring plans. |
| DECIDE-AI | Early-stage clinical evaluation & real-world safety | Early clinical evaluation/implementation studies | Assess workflow integration, human factors, clinician decision support, safety monitoring, and iterative refinement; report usability and implementation outcomes. |
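The nested cross-validation named in the TRIPOD+AI row above separates hyperparameter selection (inner loop) from performance estimation (outer loop), so the reported accuracy never touches data used for tuning. The sketch below uses a toy one-dimensional "model" (a single decision threshold) on synthetic scores; the data and threshold grid are illustrative assumptions.

```python
# Nested cross-validation sketch: the inner loop picks a hyperparameter
# (here, a decision threshold), the outer loop scores it on unseen folds.

def kfold(n, k):
    """Yield (train_idx, test_idx) pairs for k contiguous folds."""
    size = n // k
    for i in range(k):
        test = list(range(i * size, (i + 1) * size if i < k - 1 else n))
        train = [j for j in range(n) if j not in test]
        yield train, test

def accuracy(scores, labels, idx, thr):
    return sum((scores[j] >= thr) == labels[j] for j in idx) / len(idx)

def nested_cv(scores, labels, thresholds, outer_k=5, inner_k=3):
    outer_acc = []
    for tr, te in kfold(len(scores), outer_k):
        # Inner loop: choose the threshold using ONLY outer-train samples.
        best = max(
            thresholds,
            key=lambda t: sum(
                accuracy(scores, labels, [tr[j] for j in inner_te], t)
                for _, inner_te in kfold(len(tr), inner_k)
            ),
        )
        # Outer loop: score the chosen threshold on the untouched fold.
        outer_acc.append(accuracy(scores, labels, te, best))
    return sum(outer_acc) / len(outer_acc)

scores = [i / 10 for i in range(10)]   # toy, perfectly separable scores
labels = [0] * 5 + [1] * 5             # toy ground truth
print(nested_cv(scores, labels, thresholds=[0.3, 0.5, 0.7]))  # → 1.0
```

In a real pipeline the inner loop would tune model hyperparameters (and any feature selection) rather than a bare threshold, but the structural point is the same: every choice is refit inside each outer-train partition.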
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Loaiza-Bonilla, A.; Leyfman, Y.; Cortiana, V.; Crawford, R.; Modi, S. Defining a Multi-Omic, AI-Enabled Stool Screening Paradigm for Colorectal Cancer: A Consensus Framework for Clinical Translation. Cancers 2026, 18, 909. https://doi.org/10.3390/cancers18060909

