Improving Early Prostate Cancer Detection Through Artificial Intelligence: Evidence from a Systematic Review
Simple Summary
Abstract
1. Introduction
2. Materials and Methods
2.1. Study Protocol
2.2. Search Strategy and Study Selection
Inclusion criteria:
- Source: studies published in English between January 2015 and 18 April 2025;
- Study design: RCTs, prospective or retrospective cohort studies, case–control studies, and pilot clinical studies;
- Study population: adult men (≥18 years) with clinical suspicion of prostate cancer, undergoing early diagnostic evaluation, or asymptomatic but at increased risk (e.g., elevated PSA, family history, age > 50);
- Study intervention: application of AI-based technologies for early detection of prostate cancer, including machine learning, deep learning, radiomics, natural language processing (NLP), AI-assisted image analysis, and clinical decision-support systems;
- Study outcomes: diagnostic accuracy measures (Area Under the Receiver Operating Characteristic Curve—AUC-ROC, sensitivity, specificity, Positive Predictive Value—PPV, Negative Predictive Value—NPV, Diagnostic Odds Ratio—DOR), timeliness of reporting, and/or secondary outcomes such as avoided procedures, clinical decision impact, cost-effectiveness, patient-reported outcomes, accessibility, and equity.
Exclusion criteria:
- Source: studies published before 2015, after 18 April 2025, or not available in English;
- Study design: editorials, letters, narrative reviews, case reports, technical validation studies without patient-level diagnostic outcomes, and duplicate datasets;
- Study intervention: studies not involving AI-based diagnostic technologies, or those applying AI outside the early detection setting;
- Study outcomes: studies not reporting diagnostic accuracy, timeliness of reporting, or other clinically relevant outcomes.
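For clarity, the diagnostic accuracy measures named in the inclusion criteria can all be expressed in terms of a 2×2 confusion matrix. The following Python sketch (with hypothetical counts, not data from any included study) shows how each measure is derived:

```python
# Illustrative sketch: deriving the review's diagnostic accuracy measures
# from a 2x2 confusion matrix. The counts below are hypothetical.

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard diagnostic accuracy measures for a binary test."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    dor = (tp * tn) / (fp * fn)    # diagnostic odds ratio
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "PPV": ppv,
        "NPV": npv,
        "DOR": dor,
    }

# Hypothetical example: 100 diseased and 100 non-diseased patients
m = diagnostic_metrics(tp=76, fp=18, fn=24, tn=82)
print({k: round(v, 2) for k, v in m.items()})
```

AUC-ROC, by contrast, is threshold-independent and is computed from the full distribution of model scores rather than a single 2×2 table.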
2.3. Data Extraction and Collection
2.4. Risk of Bias (Quality) Assessment
3. Results
3.1. Study Selection
3.2. Participants’ Demographics
3.3. Domains of AI Application in Prostate Cancer Detection
- Image segmentation and lesion detection: Several studies [20,23,24,30,34,35,40] developed or validated deep learning architectures such as U-Net, 3D CNN, and nnU-Net for automated lesion detection and segmentation on mpMRI or bpMRI, supporting radiologists in identifying suspicious prostate regions. Across studies, Convolutional Neural Networks (CNNs) were primarily employed for lesion classification and image-based diagnosis, while U-Net and nnU-Net models were optimized for segmentation and volumetric mapping, achieving high lesion-level sensitivity and reproducibility. Ensemble architectures that integrated multiple CNN or U-Net models demonstrated enhanced generalizability, particularly in multicenter settings with heterogeneous imaging data. AI segmentation was particularly effective for large and high-grade lesions, supporting radiologists in localizing csPCa while potentially reducing inter-reader variability. Other works extended AI-based segmentation to quantitative image analysis, improving the accuracy of prostate volume estimation and PSA density calculation [33].
- Computer-aided diagnosis and decision support: Another group of studies focused on assisting radiologists during image interpretation and lesion classification with commercial or custom AI tools (e.g., Quantib Prostate, DL-CAD, ProstateID CADx, FocalNet) [21,25,26,27,29,31]. These systems provided real-time diagnostic suggestions or PI-RADS scoring support, improving sensitivity and diagnostic confidence—particularly among less-experienced readers—while maintaining comparable specificity.
- Risk classification and predictive modeling: Several works employed AI to predict clinically significant disease or Gleason grade, integrating imaging and histopathological data [12,22,28,32,41]. These approaches showed promise for biopsy triage, reduction in unnecessary procedures, and improved risk stratification, especially in multicenter and multivendor validation settings.
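As an illustration of the quantitative image-analysis step described above (prostate volume estimation and PSA density calculation), the sketch below shows how PSA density could be derived from a binary segmentation mask. The voxel spacing, mask, and PSA value are hypothetical and do not correspond to any specific tool evaluated in this review:

```python
# Minimal sketch (assumed workflow, not a specific reviewed tool):
# PSA density from an AI-produced binary prostate segmentation mask.
import numpy as np

def psa_density(mask: np.ndarray, voxel_spacing_mm: tuple, psa_ng_ml: float) -> float:
    """PSA density (ng/mL per cc) = serum PSA / segmented prostate volume."""
    voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
    volume_cc = mask.sum() * voxel_volume_mm3 / 1000.0  # 1 cc = 1000 mm^3
    return psa_ng_ml / volume_cc

# Toy example: a cubic "prostate" of 20x20x20 voxels at 1.5x1.5x3 mm spacing
mask = np.zeros((40, 40, 40), dtype=np.uint8)
mask[10:30, 10:30, 10:30] = 1        # 8000 voxels -> 54 cc
print(round(psa_density(mask, (1.5, 1.5, 3.0), psa_ng_ml=5.4), 2))  # 0.1
```

Automating this step removes the operator-dependent ellipsoid approximation used in manual volume estimation, which is the accuracy gain reported for such tools [33].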
3.4. Study Outcomes
3.5. AI Intervention/AUC-ROC
3.6. Sensitivity/Specificity Analysis
3.7. PPV/NPV/DOR
3.8. Quantitative Summary of Diagnostic Performance
3.9. Reporting Time
3.10. Avoided Procedures and Cost-Effectiveness
4. Discussion
5. Conclusions and Future Directions
6. Limitations
Supplementary Materials
Author Contributions
Funding
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Bray, F.; Laversanne, M.; Sung, H.; Ferlay, J.; Siegel, R.L.; Soerjomataram, I.; Jemal, A. Global cancer statistics 2022: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2024, 74, 229–263. [Google Scholar] [CrossRef]
- Schafer, E.J.; Laversanne, M.; Sung, H.; Soerjomataram, I.; Briganti, A.; Dahut, W.; Bray, F.; Jemal, A. Recent Patterns and Trends in Global Prostate Cancer Incidence and Mortality: An Update. Eur. Urol. 2025, 87, 302–313. [Google Scholar] [CrossRef]
- Cornford, P.; van den Bergh, R.C.N.; Briers, E.; Van den Broeck, T.; Brunckhorst, O.; Darraugh, J.; Eberli, D.; De Meerleer, G.; De Santis, M.; Farolfi, A.; et al. EAU-EANM-ESTRO-ESUR-ISUP-SIOG Guidelines on Prostate Cancer-2024 Update. Part I: Screening, Diagnosis, and Local Treatment with Curative Intent. Eur. Urol. 2024, 86, 148–163. [Google Scholar] [CrossRef]
- Mensah, J.E.; Akpakli, E.; Kyei, M.; Klufio, K.; Asiedu, I.; Asante, K.; Toboh, B.; Ashaley, M.D.; Addo, B.M.; Morton, B.; et al. Prostate-specific antigen, digital rectal examination, and prostate cancer detection: A study based on more than 7000 transrectal ultrasound-guided prostate biopsies in Ghana. Transl. Oncol. 2025, 51, 102163. [Google Scholar] [CrossRef] [PubMed]
- Jin, Y.; Jung, J.H.; Han, W.K.; Hwang, E.C.; Nho, Y.; Lee, N.; Yun, J.E.; Lee, K.S.; Lee, S.H.; Lee, H.; et al. Diagnostic accuracy of prostate-specific antigen below 4 ng/mL as a cutoff for diagnosing prostate cancer in a hospital setting: A systematic review and meta-analysis. Investig. Clin. Urol. 2022, 63, 251–261. [Google Scholar] [CrossRef] [PubMed]
- Adhyam, M.; Gupta, A.K. A Review on the Clinical Utility of PSA in Cancer Prostate. Indian J. Surg. Oncol. 2012, 3, 120–129. [Google Scholar] [CrossRef] [PubMed]
- Ahmed, H.U.; El-Shater Bosaily, A.; Brown, L.C.; Gabe, R.; Kaplan, R.; Parmar, M.K.; Collaco-Moraes, Y.; Ward, K.; Hindley, R.G.; Freeman, A.; et al. Diagnostic accuracy of multi-parametric MRI and TRUS biopsy in prostate cancer (PROMIS): A paired validating confirmatory study. Lancet 2017, 389, 815–822. [Google Scholar] [CrossRef]
- Bittencourt, L.K.; Hausmann, D.; Sabaneeff, N.; Gasparetto, E.L.; Barentsz, J.O. Multiparametric magnetic resonance imaging of the prostate: Current concepts. Radiol. Bras. 2014, 47, 292–300. [Google Scholar] [CrossRef]
- Van Booven, D.J.; Kuchakulla, M.; Pai, R.; Frech, F.S.; Ramasahayam, R.; Reddy, P.; Parmar, M.; Ramasamy, R.; Arora, H. A Systematic Review of Artificial Intelligence in Prostate Cancer. Res. Rep. Urol. 2021, 13, 31–39. [Google Scholar] [CrossRef]
- Jóźwiak, R.; Sobecki, P.; Lorenc, T. Intraobserver and Interobserver Agreement between Six Radiologists Describing mpMRI Features of Prostate Cancer Using a PI-RADS 2.1 Structured Reporting Scheme. Life 2023, 13, 580. [Google Scholar] [CrossRef]
- Twilt, J.J.; van Leeuwen, K.G.; Huisman, H.J.; Fütterer, J.J.; de Rooij, M. Artificial Intelligence Based Algorithms for Prostate Cancer Classification and Detection on Magnetic Resonance Imaging: A Narrative Review. Diagnostics 2021, 11, 959. [Google Scholar] [CrossRef]
- Saha, A.; Bosma, J.S.; Twilt, J.J.; van Ginneken, B.; Bjartell, A.; Padhani, A.R.; Bonekamp, D.; Villeirs, G.; Salomon, G.; Giannarini, G.; et al. Artificial intelligence and radiologists in prostate cancer detection on MRI (PI-CAI): An international, paired, non-inferiority, confirmatory study. Lancet Oncol. 2024, 25, 879–887. [Google Scholar] [CrossRef] [PubMed]
- Chow, J.C.L. Quantum Computing and Machine Learning in Medical Decision-Making: A Comprehensive Review. Algorithms 2025, 18, 156. [Google Scholar] [CrossRef]
- James, N.D.; Tannock, I.; N’Dow, J.; Feng, F.; Gillessen, S.; Ali, S.A.; Trujillo, B.; Al-Lazikani, B.; Attard, G.; Bray, F.; et al. The Lancet Commission on prostate cancer: Planning for the surge in cases. Lancet 2024, 403, 1683–1722. [Google Scholar] [CrossRef] [PubMed]
- Rahman, M.A.; Victoros, E.; Ernest, J.; Davis, R.; Shanjana, Y.; Islam, M.R. Impact of Artificial Intelligence (AI) Technology in Healthcare Sector: A Critical Evaluation of Both Sides of the Coin. Clin. Pathol. 2024, 17, 2632010x241226887. [Google Scholar] [CrossRef]
- Santamato, V.; Tricase, C.; Faccilongo, N.; Iacoviello, M.; Marengo, A. Exploring the Impact of Artificial Intelligence on Healthcare Management: A Combined Systematic Review and Machine-Learning Approach. Appl. Sci. 2024, 14, 10144. [Google Scholar] [CrossRef]
- Alowais, S.A.; Alghamdi, S.S.; Alsuhebany, N.; Alqahtani, T.; Alshaya, A.I.; Almohareb, S.N.; Aldairem, A.; Alrashed, M.; Bin Saleh, K.; Badreldin, H.A.; et al. Revolutionizing healthcare: The role of artificial intelligence in clinical practice. BMC Med. Educ. 2023, 23, 689. [Google Scholar] [CrossRef]
- Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
- Schardt, C.; Adams, M.B.; Owens, T.; Keitz, S.; Fontelo, P. Utilization of the PICO framework to improve searching PubMed for clinical questions. BMC Med. Inform. Decis. Mak. 2007, 7, 16. [Google Scholar] [CrossRef]
- Aldoj, N.; Lukas, S.; Dewey, M.; Penzkofer, T. Semi-automatic classification of prostate cancer on multi-parametric MR imaging using a multi-channel 3D convolutional neural network. Eur. Radiol. 2020, 30, 1243–1253. [Google Scholar] [CrossRef]
- Arslan, A.; Alis, D.; Erdemli, S.; Seker, M.E.; Zeybel, G.; Sirolu, S.; Kurtcan, S.; Karaarslan, E. Does deep learning software improve the consistency and performance of radiologists with various levels of experience in assessing bi-parametric prostate MRI? Insights Imaging 2023, 14, 48. [Google Scholar] [CrossRef]
- Cao, R.; Zhong, X.; Afshari, S.; Felker, E.; Suvannarerg, V.; Tubtawee, T.; Vangala, S.; Scalzo, F.; Raman, S.; Sung, K. Performance of Deep Learning and Genitourinary Radiologists in Detection of Prostate Cancer Using 3-T Multiparametric Magnetic Resonance Imaging. J. Magn. Reson. Imaging 2021, 54, 474–483. [Google Scholar] [CrossRef]
- Debs, N.; Routier, A.; Bône, A.; Rohé, M.M. Evaluation of a deep learning prostate cancer detection system on biparametric MRI against radiological reading. Eur. Radiol. 2025, 35, 3134–3143. [Google Scholar] [CrossRef]
- Deniffel, D.; Abraham, N.; Namdar, K.; Dong, X.; Salinas, E.; Milot, L.; Khalvati, F.; Haider, M.A. Using decision curve analysis to benchmark performance of a magnetic resonance imaging-based deep learning model for prostate cancer risk assessment. Eur. Radiol. 2020, 30, 6867–6876. [Google Scholar] [CrossRef]
- Faiella, E.; Vertulli, D.; Esperto, F.; Cordelli, E.; Soda, P.; Muraca, R.M.; Moramarco, L.P.; Grasso, R.F.; Beomonte Zobel, B.; Santucci, D. Quantib Prostate Compared to an Expert Radiologist for the Diagnosis of Prostate Cancer on mpMRI: A Single-Center Preliminary Study. Tomography 2022, 8, 2010–2019. [Google Scholar] [CrossRef]
- Giannini, V.; Mazzetti, S.; Cappello, G.; Doronzio, V.M.; Vassallo, L.; Russo, F.; Giacobbe, A.; Muto, G.; Regge, D. Computer-Aided Diagnosis Improves the Detection of Clinically Significant Prostate Cancer on Multiparametric-MRI: A Multi-Observer Performance Study Involving Inexperienced Readers. Diagnostics 2021, 11, 973. [Google Scholar] [CrossRef]
- Giganti, F.; Moreira da Silva, N.; Yeung, M.; Davies, L.; Frary, A.; Ferrer Rodriguez, M.; Sushentsev, N.; Ashley, N.; Andreou, A.; Bradley, A.; et al. AI-powered prostate cancer detection: A multi-centre, multi-scanner validation study. Eur. Radiol. 2025, 35, 4915–4924. [Google Scholar] [CrossRef] [PubMed]
- Hosseinzadeh, M.; Saha, A.; Brand, P.; Slootweg, I.R.; de Rooij, M.; Huisman, H. Deep learning-assisted prostate cancer detection on bi-parametric MRI: Minimum training data size requirements and effect of prior knowledge. Eur. Radiol. 2022, 32, 2224–2234. [Google Scholar] [CrossRef] [PubMed]
- Labus, S.; Altmann, M.M.; Huisman, H.; Tong, A.; Penzkofer, T.; Choi, M.H.; Shabunin, I.; Winkel, D.J.; Xing, P.; Szolar, D.H.; et al. A concurrent, deep learning-based computer-aided detection system for prostate multiparametric MRI: A performance study involving experienced and less-experienced radiologists. Eur. Radiol. 2023, 33, 64–76. [Google Scholar] [CrossRef] [PubMed]
- Lin, Y.; Yilmaz, E.C.; Belue, M.J.; Harmon, S.A.; Tetreault, J.; Phelps, T.E.; Merriman, K.M.; Hazen, L.; Garcia, C.; Yang, D.; et al. Evaluation of a Cascaded Deep Learning-based Algorithm for Prostate Lesion Detection at Biparametric MRI. Radiology 2024, 311, e230750. [Google Scholar] [CrossRef]
- Maki, J.H.; Patel, N.U.; Ulrich, E.J.; Dhaouadi, J.; Jones, R.W. Part II: Effect of different evaluation methods to the application of a computer-aided prostate MRI detection/diagnosis (CADe/CADx) device on reader performance. Curr. Probl. Diagn. Radiol. 2024, 53, 614–623. [Google Scholar] [CrossRef] [PubMed]
- Mehralivand, S.; Harmon, S.A.; Shih, J.H.; Smith, C.P.; Lay, N.; Argun, B.; Bednarova, S.; Baroni, R.H.; Canda, A.E.; Ercan, K.; et al. Multicenter Multireader Evaluation of an Artificial Intelligence-Based Attention Mapping System for the Detection of Prostate Cancer With Multiparametric MRI. Am. J. Roentgenol. 2020, 215, 903–912. [Google Scholar] [CrossRef] [PubMed]
- Mehta, P.; Antonelli, M.; Singh, S.; Grondecka, N.; Johnston, E.W.; Ahmed, H.U.; Emberton, M.; Punwani, S.; Ourselin, S. AutoProstate: Towards Automated Reporting of Prostate MRI for Prostate Cancer Assessment Using Deep Learning. Cancers 2021, 13, 6138. [Google Scholar] [CrossRef] [PubMed]
- Schelb, P.; Kohl, S.; Radtke, J.P.; Wiesenfarth, M.; Kickingereder, P.; Bickelhaupt, S.; Kuder, T.A.; Stenzinger, A.; Hohenfellner, M.; Schlemmer, H.P.; et al. Classification of Cancer at Prostate MRI: Deep Learning versus Clinical PI-RADS Assessment. Radiology 2019, 293, 607–617. [Google Scholar] [CrossRef]
- Schelb, P.; Wang, X.; Radtke, J.P.; Wiesenfarth, M.; Kickingereder, P.; Stenzinger, A.; Hohenfellner, M.; Schlemmer, H.P.; Maier-Hein, K.H.; Bonekamp, D. Simulated clinical deployment of fully automatic deep learning for clinical prostate MRI assessment. Eur. Radiol. 2021, 31, 302–313. [Google Scholar] [CrossRef]
- Sun, Z.; Wang, K.; Kong, Z.; Xing, Z.; Chen, Y.; Luo, N.; Yu, Y.; Song, B.; Wu, P.; Wang, X.; et al. A multicenter study of artificial intelligence-aided software for detecting visible clinically significant prostate cancer on mpMRI. Insights Imaging 2023, 14, 72. [Google Scholar] [CrossRef]
- Sun, Z.; Wang, K.; Gao, G.; Wang, H.; Wu, P.; Li, J.; Zhang, X.; Wang, X. Assessing the Performance of Artificial Intelligence Assistance for Prostate MRI: A Two-Center Study Involving Radiologists With Different Experience Levels. J. Magn. Reson. Imaging 2025, 61, 2234–2245. [Google Scholar] [CrossRef]
- Wang, K.; Xing, Z.; Kong, Z.; Yu, Y.; Chen, Y.; Zhao, X.; Song, B.; Wang, X.; Wu, P.; Xue, Y. Artificial intelligence as diagnostic aiding tool in cases of Prostate Imaging Reporting and Data System category 3: The results of retrospective multi-center cohort study. Abdom. Radiol. 2023, 48, 3757–3765. [Google Scholar] [CrossRef]
- Youn, S.Y.; Choi, M.H.; Kim, D.H.; Lee, Y.J.; Huisman, H.; Johnson, E.; Penzkofer, T.; Shabunin, I.; Winkel, D.J.; Xing, P.; et al. Detection and PI-RADS classification of focal lesions in prostate MRI: Performance comparison between a deep learning-based algorithm (DLA) and radiologists with various levels of experience. Eur. J. Radiol. 2021, 142, 109894. [Google Scholar] [CrossRef]
- Zhang, K.S.; Schelb, P.; Netzer, N.; Tavakoli, A.A.; Keymling, M.; Wehrse, E.; Hog, R.; Rotkopf, L.T.; Wennmann, M.; Glemser, P.A.; et al. Pseudoprospective Paraclinical Interaction of Radiology Residents With a Deep Learning System for Prostate Cancer Detection: Experience, Performance, and Identification of the Need for Intermittent Recalibration. Investig. Radiol. 2022, 57, 601–612. [Google Scholar] [CrossRef]
- Zhong, X.; Cao, R.; Shakeri, S.; Scalzo, F.; Lee, Y.; Enzmann, D.R.; Wu, H.H.; Raman, S.S.; Sung, K. Deep transfer learning-based prostate cancer classification using 3 Tesla multi-parametric MRI. Abdom. Radiol. 2019, 44, 2030–2039. [Google Scholar] [CrossRef] [PubMed]
- Russo, T.; Quarta, L.; Pellegrino, F.; Cosenza, M.; Camisassa, E.; Lavalle, S.; Apostolo, G.; Zaurito, P.; Scuderi, S.; Barletta, F.; et al. The added value of artificial intelligence using Quantib Prostate for the detection of prostate cancer at multiparametric magnetic resonance imaging. Radiol. Med. 2025, 130, 1105–1114. [Google Scholar] [CrossRef] [PubMed]
- James, S.L.; Abate, D.; Abate, K.H.; Abay, S.M.; Abbafati, C.; Abbasi, N.; Abbastabar, H.; Abd-Allah, F.; Abdela, J.; Abdelalim, A.; et al. Global, regional, and national incidence, prevalence, and years lived with disability for 354 diseases and injuries for 195 countries and territories, 1990–2017: A systematic analysis for the Global Burden of Disease Study 2017. Lancet 2018, 392, 1789–1858. [Google Scholar] [CrossRef] [PubMed]
- Pagniez, M.A.; Kasivisvanathan, V.; Puech, P.; Drumez, E.; Villers, A.; Olivier, J. Predictive Factors of Missed Clinically Significant Prostate Cancers in Men with Negative Magnetic Resonance Imaging: A Systematic Review and Meta-Analysis. J. Urol. 2020, 204, 24–32. [Google Scholar] [CrossRef]
- Hectors, S.J.; Chen, C.; Chen, J.; Wang, J.; Gordon, S.; Yu, M.; Al Hussein Al Awamlh, B.; Sabuncu, M.R.; Margolis, D.J.A.; Hu, J.C. Magnetic Resonance Imaging Radiomics-Based Machine Learning Prediction of Clinically Significant Prostate Cancer in Equivocal PI-RADS 3 Lesions. J. Magn. Reson. Imaging 2021, 54, 1466–1473. [Google Scholar] [CrossRef]
- Hamm, C.A.; Baumgärtner, G.L.; Biessmann, F.; Beetz, N.L.; Hartenstein, A.; Savic, L.J.; Froböse, K.; Dräger, F.; Schallenberg, S.; Rudolph, M.; et al. Interactive Explainable Deep Learning Model Informs Prostate Cancer Diagnosis at MRI. Radiology 2023, 307, e222276. [Google Scholar] [CrossRef]
- Capraro, V.; Lentsch, A.; Acemoglu, D.; Akgun, S.; Akhmedova, A.; Bilancini, E.; Bonnefon, J.-F.; Brañas-Garza, P.; Butera, L.; Douglas, K.M.; et al. The impact of generative artificial intelligence on socioeconomic inequalities and policy making. PNAS Nexus 2024, 3, pgae191. [Google Scholar] [CrossRef]
- Twilt, J.J.; Saha, A.; Bosma, J.S.; Padhani, A.R.; Bonekamp, D.; Giannarini, G.; van den Bergh, R.; Kasivisvanathan, V.; Obuchowski, N.; Yakar, D.; et al. AI-Assisted vs Unassisted Identification of Prostate Cancer in Magnetic Resonance Images. JAMA Netw. Open 2025, 8, e2515672. [Google Scholar] [CrossRef]
- Ramli Hamid, M.T.; Ab Mumin, N.; Abdul Hamid, S.; Mohd Ariffin, N.; Mat Nor, K.; Saib, E.; Mohamed, N.A. Comparative analysis of diagnostic performance in mammography: A reader study on the impact of AI assistance. PLoS ONE 2025, 20, e0322925. [Google Scholar] [CrossRef]
- Forookhi, A.; Laschena, L.; Pecoraro, M.; Borrelli, A.; Massaro, M.; Dehghanpour, A.; Cipollari, S.; Catalano, C.; Panebianco, V. Bridging the experience gap in prostate multiparametric magnetic resonance imaging using artificial intelligence: A prospective multi-reader comparison study on inter-reader agreement in PI-RADS v2.1, image quality and reporting time between novice and expert readers. Eur. J. Radiol. 2023, 161, 110749. [Google Scholar] [CrossRef]
- Cipollari, S.; Pecoraro, M.; Forookhi, A.; Laschena, L.; Bicchetti, M.; Messina, E.; Lucciola, S.; Catalano, C.; Panebianco, V. Biparametric prostate MRI: Impact of a deep learning-based software and of quantitative ADC values on the inter-reader agreement of experienced and inexperienced readers. Radiol. Med. 2022, 127, 1245–1253. [Google Scholar] [CrossRef]
- Muneer, A.; Waqas, M.; Saad, M.B.; Showkatian, E.; Bandyopadhyay, R.; Xu, H.; Li, W.; Chang, J.Y.; Liao, Z.; Haymaker, C. From Classical Machine Learning to Emerging Foundation Models: Review on Multimodal Data Integration for Cancer Research. arXiv 2025, arXiv:2507.09028. [Google Scholar] [CrossRef]






| PICO Element | Description |
|---|---|
| Participants/Population | Adult men (≥18 years) with clinical suspicion of prostate cancer or undergoing early diagnostic evaluation. Also includes asymptomatic but at-risk individuals (e.g., age > 50, family history, elevated PSA). |
| Intervention(s)/Exposure(s) | AI-based technologies for the early detection of prostate cancer, including machine learning and deep learning algorithms, radiomics, Natural Language Processing (NLP), AI-supported image analysis tools, and clinical decision-support systems. |
| Comparator(s)/Control | Conventional diagnostic pathways (DRE, PSA, TRUS, standard mpMRI without AI, biopsy) and/or human readers with predefined expertise levels. |
| Outcome(s) | Primary: diagnostic accuracy (AUC-ROC, sensitivity, specificity, PPV, NPV, DOR) and timeliness of reporting. Secondary: avoided procedures, clinical decision impact, cost-effectiveness, patient-reported outcomes, accessibility/equity. |
| Study (Author, Year) | Study Design | Population (Sample Size, Mean Age) | PSA, Risk Category | AI Intervention (Algorithm, Imaging/Data, Tool) | Comparators | Key Conclusion |
|---|---|---|---|---|---|---|
| Aldoj, 2019, Germany [20] | Retrospective, single institution | N = 200; 318 prostate lesions (75 csPCa, 243 non-significant); mean age NR | PSA NR; csPCa = Gleason ≥ 7 | 3D CNN, mpMRI (T2w, ADC, DWI, K-trans) | CNN vs. radiologists | CNN matched expert radiologists; DWI/ADC and K-trans are most valuable; feasible for clinical integration |
| Arslan, 2023, Turkey [21] | Retrospective, single-center | N = 153; mean age 63.6 ± 7.6 years (range 53–80) | Mean PSA 6.4 ± 3.9; csPCa = Gleason ≥ 3 + 4 (29.8%) | Siemens Syngo Prostate AI | 4 radiologists ± AI | Standalone DL achieved AUROC 0.756, comparable to radiologists with ≤3 years of experience |
| Cao, 2021, USA [22] | Retrospective | N = 553 (train 427, test 126); mean age 61–62 years | Median PSA 6.0–6.2; csPCa = GG ≥ 2 or ≥10 mm | FocalNet | AI vs. 4 expert radiologists | AI near-expert accuracy; detected 9% csPCa missed by readers; strongest in large/high-grade lesions |
| Debs, 2025, France [23] | Multicenter diagnostic/prognostic | 4381 bpMRI training cases (3800 positive, 581 negative); test set 328 patients; mean age 60.3 years (35–78) | PSA mean 9.6; test median 14; csPCa = GGG ≥ 2 | 3D nnU-Net (bpMRI) | AI vs. non-experts | AUC 0.83–0.88; >0.9 for large lesions; superior to non-experts; drop in small lesions |
| Deniffel, 2020, Canada [24] | Retrospective | Training: 449 men (mean 63.8 ± 8.1); test: 50 men (mean 64.4 ± 8.4) | Median PSA 7.2–7.6; csPCa = ISUP ≥ 2 | 3D CNN | vs PI-RADS (≥4; ≥3 + PSAd) | Well-calibrated CNN outperformed PI-RADS; reduced unnecessary biopsies without missing csPCa |
| Faiella, 2022, Italy [25] | Retrospective, preliminary | N = 108; mean age ~66–68 years across three clinical subgroups | PSA 6.3–8.2; ISUP 1–5 (34.3%) | Quantib Prostate | Expert vs. novice + AI | AI-assisted novice: sensitivity 92% vs. 72%; PPV 90% vs. 84%; improved detection in ISUP ≥ 3 |
| Giannini, 2021, Italy [26] | Retrospective | N = 130; mean age 68.4 ± 7.2 years (49–83) | Median PSA 7.4; csPCa = ISUP ≥ 2 | CAD system | 3 radiologists ± AI | AI ↑ sensitivity for less-experienced readers; improved consistency and diagnostic confidence |
| Giganti, 2025, UK [27] | Multicenter retrospective | N = 1045 (793 development, 252 validation); mean age 67.3 ± 8.5 years | Median PSA 6.8; csPCa = GG ≥ 2 (31%) | DL-CAD | Radiologists vs. AI | AI met non-inferiority vs. experts; sensitivity 95%; generalized well across centers |
| Hosseinzadeh, 2021, Netherlands [28] | Retrospective, 2 cohorts | N = 2734 biopsy-naïve men from 2 centers; median age 65–66 years | PSA 8/6.6; csPCa = ISUP ≥ 2 | Two-stage CNN | AI vs. radiologists | AI comparable to experts; improved less-experienced performance; reduced unnecessary biopsies |
| Labus, 2023, Germany [29] | Multi-reader, multi-case | N = 172; mean age 66 years (range 47–81) | Median PSA 7; 95 PCa (75 ISUP ≥ 2) | DL-CAD | 2 experts, 2 novices ± AI | AI improved novice detection accuracy; no significant benefit for experts |
| Lin, 2024, Singapore [30] | Prospective | N = 658; median age 67 years (IQR 61–71) | PSA 6.7; csPCa = ISUP ≥ 2 (45%) | DL-AI model | AI vs. radiologists | Sensitivity 96% ≈ experts; moderate segmentation (Dice 0.29); promising for biopsy planning |
| Maki, 2024, USA [31] | Multicase, multicenter | N = 150 bpMRI cases (6 sites, 12 scanners); median age 67 years (45–86) | PSA 7.2 (0.4–367); csPCa = Gleason ≥ 7 (~40%) | Prostate ID CADe/CADx | 9 readers ± AI | AUC improved with AI; standalone AUC 0.93; ↑ malignant biopsy rate, ↓ benign procedures |
| Mehralivand, 2020, USA [32] | Prospective | N = 236 (152 PCa, 84 controls); mean age NR | PSA NR; csPCa = GG ≥ 2 | AI attention mapping | Radiologists ± AI | AI improved conspicuity and sensitivity; reduced variability; accuracy ≈ experts |
| Mehta, 2021, UK [33] | Retrospective | N = 90 men >50 years (45 PCa, 45 controls) | PSA NR | AutoProstate | AI vs. radiologist | AI improved prostate volume and PSAd accuracy; potential for risk stratification |
| Saha, 2024, Netherlands [12] | International, non-inferiority | N = 9129 men from 4 centers; median age 66 years (IQR 61–70) | PSA 8 (IQR 5–11); csPCa = GG ≥ 2 | Ensemble of top 5 DL models | AI vs. radiologists | AI ≥ experts; robust across centers; reduced inter-reader variability |
| Schelb, 2019, Germany [34] | Retrospective | N = 312 (train 250, test 62); median age 64 years (IQR 58–71) | PSA 7.0–6.9; csPCa = ISUP ≥ 2 | U-Net | AI vs. PI-RADS | U-Net ≈ PI-RADS; adding AI ↑ PPV from 48% to 67%; feasible segmentation |
| Schelb, 2020, Germany [35] | Retrospective | N = 259; median age 64 years (IQR 61–72) | PSA 7.2; csPCa = ISUP GG ≥ 2 | U-Net | Radiologists vs. AI | AI ≈ PI-RADS; combined use ↑ PPV (59→63%); stable over time |
| Sun, 2023, China [36] | Multicenter retrospective | N = 480; mean age 66.8 ± 10.2 years | PSA 7.7 (0.15–100); csPCa = Gleason ≥ 7 | AI system | Radiologists ± AI | AI ↑ lesion sensitivity (40→59%), specificity (58→72%), ↓ reading time (−56%) |
| Sun, 2024, China [37] | Multi-reader | N = 900; median age 67 years (IQR 59–74) | PSA 8.4 (4.6–17.8); csPCa = ISUP GG2–5 (40%) | AI system | 16 readers ± AI | Sensitivity ↑ (0.78→0.86); AUC ↑ (0.84→0.92); ↓ time (−48%), ↑ agreement |
| Wang, 2023, China [38] | Multicenter, multi-reader | N = 87 with PI-RADS 3 lesions; median age 67 years (32–88) | PSA 9.4 (0–100); csPCa = ISUP ≥ 2 (32%) | AI system | Radiologists ± AI | AI ↑ specificity (0.70 vs. 0.00), accuracy (0.74 vs. 0.32); ↓ reading time; ↑ confidence |
| Youn, 2021, Korea [39] | Retrospective | N = 121; mean age 68.2 ± 8.5 years (47–85) | PSA 6.5 (4.5–10.4); csPCa = Gleason ≥ 7 (36%) | DL algorithm | Radiologists, reports | Comparable to junior radiologists; ↑ specificity; ↓ equivocal (PI-RADS 3) cases |
| Zhang, 2022, Germany [40] | Pseudoprospective, multi-reader | N = 201; median age 66 years (IQR 59–72) | PSA 7.0 (5.2–10.1); csPCa = ISUP ≥ 2 | CNN (U-Net) | Radiologists ± AI | CNN ≈ PI-RADS; aided novices (↑ specificity); recalibration needed over time |
| Zhong, 2019, USA [41] | Retrospective | N = 140 biopsy-proven PCa; age range 43–80 years | Mean PSA 7.9 ± 12.5; csPCa = Gleason ≥ 7 | CNN (DTL) | Radiologists ± PI-RADS | DTL model outperformed basic DL; specificity > PI-RADS; reduced overdiagnosis |
Each QUADAS-2 domain was rated as Low, Unclear (insufficient information or potential bias/concern), or High.
[QUADAS-2 summary table: for each of the 23 included studies [12,20–41], ratings for Risk of Bias (Patient Selection, Index Test, Reference Standard, Flow and Timing) and Applicability (Patient Selection, Index Test, Reference Standard) were presented as color-coded icons; the icons are not reproducible in this text version.]
| Study | AI Intervention | Comparator | AUC-ROC |
|---|---|---|---|
| Aldoj, 2019, Germany [20] | 3D CNN | CNN vs. experienced radiologist | AI: 0.91 |
| Arslan, 2023, Turkey [21] | Prostate-AI (Siemens) | 4 radiologists with different experience levels, with and without AI support | Experienced Radiologist: 0.917 Unexperienced Radiologist: 0.813 Unexperienced Radiologist + AI: 0.821 AI: 0.756 |
| Debs, 2025, France [23] | 3D nnU-Net | Unexperienced Radiologist vs. AI | AI: 0.83 |
| Deniffel, 2020, Canada [24] | 3D CNN | AI vs. radiologist-assigned PI-RADS strategies (≥4; ≥3 + PSAd) | AI: 0.85 |
| Giannini, 2021, Italy [26] | CAD | 3 radiologists with and without AI support | Radiologists: 0.826, Radiologists + AI: 0.830 |
| Giganti, 2025, UK [27] | DL-CAD | Radiologists vs. AI | AI: 0.91 Radiologists: 0.95 |
| Hosseinzadeh, 2021, Netherlands [28] | Two-stage CNN framework | AI vs. radiologists | AI: 0.875 |
| Labus, 2023, Germany [29] | DL-CAD | 2 experienced radiologists and 2 less experienced radiologists both with and without the DL-CAD support | Radiologist: 0.73 Radiologist + AI: 0.83 AI: 0.83 |
| Maki, 2024, USA [31] | ProstateID CADe/CADx | AI vs. 9 radiologists vs. radiologists + AI | AI: 0.929 Radiologist: 0.672, Radiologist + AI: 0.718 |
| Mehralivand, 2020, USA [32] | AI system | Radiologists of different experience with and without AI support | Radiologists: 0.816 AI: 0.78 |
| Mehta, 2021, UK [33] | AutoProstate | AI vs. experienced radiologist | AI: 0.70 Radiologist: 0.64 |
| Saha, 2024, Netherlands [12] | Ensembled AI of top 5 DL models | AI vs. radiologists of different levels of experience | AI: 0.93 Radiologists: 0.86 |
| Sun, 2024, China [37] | AI system | 10 less-experienced and 6 experienced radiologists, both with and without the AI support | AI: 0.87, Radiologist: 0.86 Radiologist + AI: 0.91 |
| Youn, 2021, Korea [39] | AI system | Radiologists with different levels of experience vs. AI | AI: 0.808 Radiologists: 0.790 |
| Zhang, 2022, Germany [40] | CNN (U-Net) | Radiologists with and without AI support | AI: 0.77 Radiologist: 0.74 Radiologist + AI: 0.74 |
| Zhong, 2019, USA [41] | CNN | Expert radiologists vs. AI | AI: 0.726 Radiologist: 0.711 |
**(A)**

| Study | Group | Sensitivity (%) | Specificity (%) | DOR |
|---|---|---|---|---|
| Debs, 2025, France [23] | AI Radiologists (PI-RADS ≥ 3) Radiologists (PI-RADS ≥ 4) | 76 (95% CI 70–82) 87 68 | 82 (FPR 18%) 44 71 | 14.43 5.26 5.20 |
| Deniffel, 2020, Canada [24] | Calibrated AI CNN | 100 (threshold 0.05–0.20) | 13–52 | NR |
| Faiella, 2022, Italy [25] | Experienced Radiologist Intermediate Radiologist + AI | 71.7 92.3 | NR NR | NR NR |
| Giannini, 2021, Italy [26] | Radiologists Radiologists + AI AI | 67.4 70.4 95.6 | 94.8 89.6 NR | 37.7 20.5 NR |
| Giganti, 2025, UK [27] | AI Radiologists | 95 99 | 67 73 | 38.58 267.67 |
| Hosseinzadeh, 2021, Netherlands [28] | Radiologist AI | 91 NR | 77 77 | 33.9 NR |
| Labus, 2023, Germany [29] | Radiologist Radiologist + AI AI | 79 84 79 | 45 57 70 | 3.16 6.96 8.78 |
| Lin, 2024, Singapore [30] | Radiologist (PI-RADS ≥ 2) Radiologist (PI-RADS ≥ 3) Radiologist (PI-RADS ≥ 4) AI | 93 87 75 92–93 | NR NR NR 23 (15–32) | NR NR NR 3.97 |
| Maki, 2024, USA [31] | Radiologist Radiologist + AI | 77.7 80.5 | 56.2 55.5 | 4.47 5.15 |
| Mehralivand, 2020, USA [32] | Radiologist Radiologist + AI | 89.6 87.9 | 51.5 30 | 9.15 3.11 |
| Saha, 2024, Netherlands [12] | Radiologist AI | 89.4 96.1 | 57.7 68.9 | 54.8 54.6 |
| Schelb, 2019, Germany [34] | Radiologist (PI-RADS ≥ 3) Radiologist (PI-RADS ≥ 4) AI ≥ 0.22 AI ≥ 0.33 | 96 88 96 92 | 22 50 31 47 | 6.77 7.33 10.78 10.20 |
| Schelb, 2020, Germany [35] | Radiologist (PI-RADS ≥ 3) Radiologist (PI-RADS ≥ 4) AI U-Net (d3 ≥ 3) AI U-Net (d4 ≥ 4) | 98 84 99 83 | 17 58 24 55 | 10.04 7.25 31.26 5.97 |
| Sun, 2023, China [36] | Radiologist Radiologist + AI | 88.3 93.9 | 57.7 71.7 | 10.3 39.0 |
| Sun, 2024, China [37] | Radiologist Radiologist + AI AI | 88 93 93 | 84 89 NR | 42.63 208.14 NR |
| Wang, 2023, China [38] | Radiologist Radiologist + AI | 100 82.1 | 0 69.5 | NR 10.45 |
| Youn, 2021, Korea [39] | Radiologist (PI-RADS 3) AI (PI-RADS 3) Radiologist (PI-RADS 4) AI (PI-RADS 4) | 82.3 73.1 76.2 69.2 | 47.6 87.0 69.0 88.4 | 4.22 18.19 7.1 17.1 |
| Zhang, 2022, Germany [40] | Radiologist (PI-RADS 3) Radiologist (PI-RADS 4) Radiologist (PI-RADS 5) AI (CDisplay) AI (CNN C3) AI (CNN C4) | 98.5 90.9 36.4 95.5 81.8 60.6 | 8.9 54.8 93.3 9.6 54.8 85.3 | 6.42 12.11 7.97 2.25 5.45 8.92 |
| Zhong, 2019, USA [41] | AI Radiologist | 63.6 86.4 | 80.0 48.0 | 6.99 NR |
**(B)**

| Study | Group | Sensitivity (%) | Specificity (%) | DOR |
|---|---|---|---|---|
| Aldoj, 2019, Germany [20] | Group 1 (T2 + ADC + DWI + Ktrans) | 75.4 | 92.6 | 38.35 |
| Cao, 2021, USA [22] | Radiologists (suspicion score 1–5) AI (suspicion score 1–5) | 38.3–88.7 38.7–89.1 | NR | NR |
| Debs, 2025, France [23] | AI Radiologists (≥3) Radiologists (≥4) | 85–88 74 57 | NR NR NR | NR NR NR |
| Mehta, 2021, UK [33] | Experienced Radiologist AI | 78 76 | 48 57 | 3.27 4.20 |
| Sun, 2023, China [36] | Radiologist Radiologist + AI | 40.1 59.0 | NR NR | NR NR |
| Sun, 2024, China [37] | Radiologist Radiologist + AI | 78 86 | NR NR | NR NR |
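As a reading aid, the DOR values in the tables above follow directly from the paired sensitivity and specificity: the DOR is the ratio of the positive to the negative likelihood ratio. A minimal sketch (percentages converted to proportions; the function name is ours, not from any included study):

```python
def diagnostic_odds_ratio(sensitivity: float, specificity: float) -> float:
    """DOR = LR+ / LR- = (sens / (1 - spec)) / ((1 - sens) / spec)."""
    lr_pos = sensitivity / (1.0 - specificity)   # positive likelihood ratio
    lr_neg = (1.0 - sensitivity) / specificity   # negative likelihood ratio
    return lr_pos / lr_neg

# Standalone AI in Labus 2023 [29]: sensitivity 79%, specificity 70%
print(round(diagnostic_odds_ratio(0.79, 0.70), 2))  # → 8.78, matching the table
```

The same formula reproduces, for example, the 10.78 reported for the AI ≥ 0.22 threshold in Schelb 2019 [34] (sensitivity 96%, specificity 31%).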
| Group | Sensitivity Range | Specificity Range |
|---|---|---|
| Radiologist (per patient) | 0.364–1.00 | 0.00–0.948 |
| AI (per patient) | 0.58–1.00 | 0.096–0.90 |
| Radiologist + AI (per patient) | 0.704–0.939 | 0.30–0.896 |
| Radiologist (per lesion) | 0.383–0.887 | 0.48 |
| AI (per lesion) | 0.387–0.891 | 0.57–0.926 |
| Radiologist + AI (per lesion) | 0.574–0.86 | NR |
**(A)**

| Study | Group | PPV (%) | NPV (%) |
|---|---|---|---|
| Debs, 2025, France [23] | AI | 55 | NR |
| Deniffel, 2020, Canada [24] | Calibrated CNN Original CNN Radiologist (PI-RADS ≥ 4) | 41–56 64–79 57 | 100 78–82 86 |
| Faiella, 2022, Italy [25] | Experienced Radiologist Inexperienced Radiologist + AI | 84.4 90.1 | NR NR |
| Giannini, 2021, Italy [26] | Radiologist Radiologist + AI | 92.9 87.2 | 74.4 75.2 |
| Giganti, 2025, UK [27] | AI Radiologist | 56 63 | NR NR |
| Labus, 2023, Germany [29] | AI Radiologist Radiologist + AI | 77 64 71 | 73 64 75 |
| Lin, 2024, Singapore [30] | AI Radiologist (PI-RADS 2) | 65 69 | NR NR |
| Maki, 2024, USA [31] | Radiologist AI + Radiologist | 58.9 59.4 | 75.7 77.9 |
| Saha, 2024, Netherlands [12] | Radiologist AI | 53.2 68 | 90.2 93.8 |
| Schelb, 2019, Germany [34] | Radiologist (PI-RADS 3) Radiologist (PI-RADS ≥ 4) AI (threshold ≥ 0.22) AI (threshold ≥ 0.33) | 47 56 50 56 | 89 86 92 89 |
| Schelb, 2020, Germany [35] | Radiologist (PI-RADS ≥ 3) Radiologist (PI-RADS ≥ 4) U-Net (d3 ≥ 3) U-Net (d4 ≥ 4) | 46 59 48 57 | 93 84 97 82 |
| Sun, 2023, China [36] | Radiologist Radiologist + AI | 55.6 66.6 | 89.2 95.1 |
| Sun, 2024, China [37] | Radiologist Radiologist + AI | 79 85 | 91 95 |
| Wang, 2023, China [38] | Radiologist Radiologist + AI | 32.2 56.1 | NR 89.1 |
| Youn, 2021, Korea [39] | AI (PI-RADS 3) Radiologist (PI-RADS 3) AI (PI-RADS 4) Radiologist (PI-RADS 4) | 80.9 54.5 81.8 65.4 | 81.1 77.4 79.2 80.1 |
| Zhang, 2022, Germany [40] | Radiologist (PI-RADS 3) Radiologist (PI-RADS 4) Radiologist (PI-RADS 5) AI (CNN Cdisplay) AI (CNN C3) AI (CNN C4) | 34.6 49.6 72.6 34.1 46.9 66.8 | 92.4 92.5 75.0 81.4 86.0 81.6 |
**(B)**

| Study | Group | PPV (%) | NPV (%) |
|---|---|---|---|
| Aldoj, 2019, Germany [20] | Group 1 (T2 + ADC + DWI + Ktrans) | 75.9 | 92.4 |
| Debs, 2025, France [23] | Radiologist (PI-RADS ≥ 3) Radiologist (PI-RADS ≥ 4) AI (PI-RADS ≥ 3) AI (PI-RADS ≥ 4) | 34 41 34 41 | NR NR NR NR |
| Mehralivand, 2020, USA [32] | Radiologist Radiologist + AI | 60.7 46.6 | NR NR |
| Mehta, 2021, UK [33] | Experienced Radiologist (Likert ≥ 4) AI | 78 80 | NR NR |
| Sun, 2024, China [37] | Radiologist Radiologist + AI | 62 75 | NR NR |
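Unlike sensitivity and specificity, PPV and NPV also depend on the prevalence of clinically significant cancer in each cohort, which is one reason the values above vary so widely across studies. A minimal sketch of the standard Bayes relationship (the 40% prevalence below is an illustrative assumption, not taken from any included study):

```python
def ppv_npv(sens: float, spec: float, prevalence: float):
    """Predictive values from sensitivity, specificity, and disease prevalence."""
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

# Illustrative inputs only (not from any included study)
ppv, npv = ppv_npv(sens=0.90, spec=0.70, prevalence=0.40)
print(round(ppv, 3), round(npv, 3))  # → 0.667 0.913
```

The same sensitivity and specificity yield very different predictive values at, say, 10% versus 40% prevalence, so PPV/NPV comparisons between studies should account for the underlying cohorts.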
**(A)**

| Metric (Patient) | Radiologists | AI | AI + Radiologists |
|---|---|---|---|
| AUC-ROC | Range: 0.64–0.95; Median (IQR): 0.80 (0.73–0.86) | Range: 0.70–0.93; Median (IQR): 0.84 (0.775–0.893) | Range: 0.72–0.91 (median NR) |
| Sensitivity | Range: 0.364–1.00 | Range: 0.58–1.00 | Range: 0.704–0.939 |
| Specificity | Range: 0.00–0.948 | Range: 0.096–0.90 | Range: 0.30–0.896 |
| PPV | Range: 0.322–0.929 | Range: 0.341–0.818 | Range: 0.561–0.901 |
| NPV | Range: 0.61–0.93 | Range: 0.73–1.00 | Range: 0.75–0.951 |
| DOR | Range: 3.16–267.67; Median (IQR): 7.65 (6.42–12.11) | Range: 2.25–54.6; Median (IQR): 10.20 (6.48–17.65) | Range: 3.11–208.14 |
**(B)**

| Metric (Lesion) | Radiologists | AI | AI + Radiologists |
|---|---|---|---|
| Sensitivity | Range: 0.383–0.887 | Range: 0.387–0.891 | Range: 0.574–0.86 |
| Specificity | Single value reported: 0.48 | Range: 0.57–0.926 | NR |
| PPV | Range: 0.34–0.78 | Range: 0.34–0.80 | Range: 0.466–0.75 |
| NPV | NR | Single value reported: 0.924 | NR |
| DOR | Single value reported: 3.27 | Range: 4.2–38.35 | NR |
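The medians and IQRs in the summary table can be reproduced with standard quartile routines. The sketch below uses a hypothetical five-value AUC list (illustrative only, not the actual per-study data) with Python's inclusive quartile method:

```python
import statistics

# Hypothetical per-patient AUC-ROC values (illustrative only)
aucs = [0.64, 0.73, 0.80, 0.86, 0.95]

q1, median, q3 = statistics.quantiles(aucs, n=4, method="inclusive")
print(f"Median (IQR): {median:.2f} ({q1:.2f}-{q3:.2f})")
```

Note that the choice of quartile method (inclusive vs. exclusive interpolation) can shift IQR bounds slightly for small samples such as these.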
| Study | Reporting Time |
|---|---|
| Giannini, 2021, Italy [26] | Radiologist: 170 s Radiologist + AI: 66 s |
| Labus, 2023, Germany [29] | Radiologist: 157 s Radiologist + AI: 150 s |
| Mehralivand, 2020, USA [32] | Radiologist + AI: 4.66 min Radiologist: 4.03 min |
| Sun, 2023, China [36] | Radiologist: 423 s Radiologist + AI: 185 s |
| Sun, 2024, China [37] | Radiologist: 250 s Radiologist + AI: 130 s |
| Wang, 2023, China [38] | Radiologists: mean reading time (not specified, baseline) Radiologists + AI: mean reading time 351 s (p < 0.001 vs. without AI) |
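Where both arms are reported, the time savings in the table can be expressed as a relative reduction. A minimal sketch using the Giannini 2021 [26] figures (the helper name is ours):

```python
def relative_reduction(before_s: float, after_s: float) -> float:
    """Fraction of reporting time saved when moving from before_s to after_s."""
    return (before_s - after_s) / before_s

# Giannini 2021 [26]: 170 s unaided vs. 66 s with AI support
print(round(100 * relative_reduction(170, 66), 1))  # → 61.2 (% reduction)
```

For Mehralivand 2020 [32] the value is negative (4.03 min unaided vs. 4.66 min with AI), i.e., reading was slower with AI support in that study.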
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Ciccone, V.; Garofano, M.; Del Sorbo, R.; Mongelli, G.; Izzo, M.; Negri, F.; Buonocore, R.; Salerno, F.; Gnazzo, R.; Ungaro, G.; et al. Improving Early Prostate Cancer Detection Through Artificial Intelligence: Evidence from a Systematic Review. Cancers 2025, 17, 3503. https://doi.org/10.3390/cancers17213503