Current Applications and Future Directions of Artificial Intelligence in Prostate Cancer Diagnosis: A Narrative Review
Simple Summary
Abstract
1. Introduction
2. Methods
2.1. Study Design
2.2. Data Sources and Search Strategy
2.3. Eligibility Criteria (Inclusion and Exclusion)
2.4. Inclusion Criteria
2.5. Selection Priorities
2.6. Exclusions
2.7. Data Extraction and Synthesis Framework
2.8. Consideration of Risk of Bias
3. Artificial Intelligence in Medical Imaging: Capabilities and Constraints
3.1. AI Applications in Magnetic Resonance Imaging (MRI): Performance and Persistent Challenges
| Study (Ref.) | Objective & AI Model | MRI Protocol & Analysis Level | Dataset & Reference Standard | Validation Strategy | Key Performance & Clinical Utility |
|---|---|---|---|---|---|
| Liu et al. (2024) [19] | Grading: Hybrid model (Radiomics + CNN with CPCB and SE-Net modules) for binary classification of prostate cancer Gleason score (GS ≤ 3 + 4 vs. GS ≥ 4 + 3). | mpMRI (T2-weighted imaging (T2WI), ADC), 3.0 T, 3 mm slice thickness Analysis: Lesion-level (3D tumor ROI with 5 mm perilesional margin) | N = 650 (two hospitals: GPPH n = 408, FAHGMU n = 242) Ref: Histopathology (biopsy/surgical GS grading) | Multi-center retrospective study; training set (n = 520), testing set (n = 130) | Accuracy: 0.907; F1-score: 0.918; Recall: 0.944; Precision: 0.893; AUC: 0.946 |
| Saha et al. (2024) [27] | Detection: Deep learning ensemble model (five top-performing algorithms) for detecting clinically significant prostate cancer (Gleason grade group ≥ 2). | bpMRI (biparametric magnetic resonance imaging) Analysis: Case-level and lesion-level (0–100 likelihood score) | N = 10,207 MRI examinations (9129 patients); Training: 9207 cases; Testing: 1000 cases (400 for reader study) Ref: Histopathology (biopsy/prostatectomy) with ≥3 years follow-up (median 5 years) | External validation (multi-center, international: Netherlands and Norway); Reader study with 62 radiologists from 45 centers in 20 countries | AUC: 0.91 (95% CI 0.87–0.94) vs. radiologists’ AUC 0.86 (0.83–0.89)—superior performance (p < 0.0001) At matched specificity (57.7%): 6.8% more csPCa detected, 50.4% fewer false positives, 20.0% fewer indolent cancers detected vs. mean radiologist performance |
| Sowmya et al. (2024) [21] | Classification: Taylor_AMFSOpt-DAttNNet (Deep Attention Neural Network with Adaptive Swarm Intelligence) for prostate cancer classification and severity estimation (high-grade vs. low-grade). | MRI (T2WI) Analysis: Image-level | N = 620 (Kaggle dataset: 440 training, 180 testing; external validation on SPIE-AAPM-NCI dataset with 346 patients) Ref: Histopathology (implied from dataset labels) | Internal validation with train–test split; External validation on SPIE-AAPM-NCI dataset | Accuracy: 97.8%; Precision: 97.9%; Recall: 96%; F1-score: 95%; Specificity: 94% |
| Zaridis et al. (2024) [23] | Segmentation: ProLesA-Net (multi-channel 3D architecture with MSSE and MSAG attention mechanisms) for prostate lesion segmentation. | bpMRI (T2W, ADC, diffusion-weighted imaging (DWI)) Analysis: Lesion-level (3D volumetric segmentation) | Training: PICAI dataset (N = 219 patients, 15,768 images); External testing: Prostate-158 dataset (N = 82 patients, 5904 images) Ref: Expert-annotated ground truth masks | External validation (multi-center: 3T Siemens and Philips scanners); compared against six DL models (3D U-Net, VNet, TransU-Net, USE-Net, nnU-Net, Attention U-Net) | Dice score: 30.56%; Recall: 33.91%; Precision: 33.72%; Hausdorff distance: 19.61 mm; Average surface distance: 4.43 mm For small lesions (<15 mm): Dice score: 22.33%; Recall: 27.54% For intermediate lesions (15–30 mm): Dice score: 39.63%; Hausdorff distance: 14.43 mm |
| Sanjid et al. (2024) [24] | Segmentation: HUNet (dual-pathway multi-scale hierarchical upsampling network) for prostate zonal segmentation (anatomical zones: transition zone, peripheral zone; tumor segmentation). | mpMRI (T2WI, DWI, ADC, Dynamic contrast-enhanced (DCE)) Analysis: Lesion-level and zonal-level (whole gland, anatomy, tumor) | ProstateX: N = 193 patients (163 training, 15 validation, 15 testing); Prostate158: N = 158 patients (128 training, 15 validation, 15 testing) Ref: Expert-annotated ground truth masks | Internal validation with train–test split; compared against U-Net and DenseNet baselines | ProstateX anatomy: Intersection over Union (IoU): 0.8449; Dice Similarity Coefficient (DSC): 0.9872; RAVD: 0.0944; ASSD: 0.0106 Prostate158 anatomy: IoU: 0.8065; DSC: 0.9831; RAVD: 0.0020; MHD: 3.5736 mm; ASD: 0.0070 mm Prostate158 tumor: DSC: 0.9974; MHD: 0.5205 mm |
| Cai et al. (2024) [31] | Detection: 3D CNN-based deep learning model to predict csPCa using patient-level labels without tumor location information; combined with clinical data (PSA, PSA density) and radiologist PI-RADS scores. | mpMRI (T2WI, DWI, ADC, DCE) Analysis: Patient-level (case-level) with Grad-CAM localization | Internal: N = 5735 examinations (5215 patients) from Mayo Clinic; External: ProstateX N = 204 Ref: Histopathology (biopsy/prostatectomy) for positive cases; PI-RADS 1-2 screening cases as negative | External validation (ProstateX); compared with radiologist performance (internal reports + external PI-RADS ratings from 4 radiologists) | Internal test set: AUC: 0.89 (DL) vs. 0.89 (radiologists); p = 0.88 External test set: AUC: 0.86 (DL) vs. 0.84 (radiologists); p = 0.68 Image + radiologist model: AUC: 0.89; p < 0.001 vs. radiologists alone Grad-CAM localization: 92% (35/38) internal true-positives; 97% (56/58) external true-positives |
| Horasan and Güneş (2024) [17] | Detection: Ensemble deep learning model (3D-CNN + ResNet + Inception-v3 with soft voting) for prostate cancer detection in MRI. | bpMRI (T2-weighted, DWI, ADC) Analysis: Lesion-level | N = 3458 training/1692 testing (SPIE-AAPM-NCI PROSTATEx dataset) Ref: Transperineal biopsy mapping templates (START criteria) | Hold-out validation on independent test set | Accuracy: 91.3%; Sensitivity: 90.2%; Specificity: 92.1%; Precision: 89.8%; F1-score: 90.0%; AUC: 0.95 |
| Li et al. (2024) [36] | Detection: Unsupervised domain adaptation (UDA) with unified generative model for prostate lesion detection across multisite bpMRI with varying b-values. | bpMRI (T2-weighted, DWI with various b-values, ADC) Analysis: Lesion-level; Case-level | N = 5150 patients (14,191 samples) from 9 imaging centers; Test set: 1692 cases (2393 samples) Ref: Expert radiologist-reviewed lesion annotations based on clinical reports | External validation on independent multisite test set; 34 different b-value combinations | Overall (PI-RADS ≥3): Baseline AUC: 0.73 → UDA AUC: 0.79 (p < 0.001) Overall (PI-RADS ≥4): Baseline AUC: 0.77 → UDA AUC: 0.80 (p < 0.001) Most unfavorable setting (Group 9, PI-RADS ≥3): Baseline AUC: 0.49 → UDA AUC: 0.76 (p < 0.001) Most unfavorable setting (Group 9, PI-RADS ≥4): Baseline AUC: 0.50 → UDA AUC: 0.77 (p < 0.001) |
| Li et al. (2024) [41] | Grading: 3D Efficient CapsNet with divide-and-conquer strategy for PCa risk stratification (low/medium/high grade based on Gleason score). | T2-weighted MRI (single modality) Analysis: Lesion-level (3D volumetric) | N = 976 (Cancer Imaging Archive, public dataset) Ref: Biopsy-proven Gleason scores (histopathology) | Five-fold cross-validation | Low vs. High: AUC 0.83, F1-score 0.64, Accuracy 0.84 Low + Medium vs. High: AUC 0.79, F1-score 0.75, Accuracy 0.81 Medium vs. High: AUC 0.75, F1-score 0.69, Accuracy 0.71 Low vs. Medium: AUC 0.59, F1-score 0.57, Accuracy 0.60 Final multi-class (divide-and-conquer): Accuracy 0.55, Weighted Cohen’s Kappa 0.41 |
| Sun et al. (2024) [39] | Detection: Cascaded 3D U-Net for csPCa detection and localization. | mpMRI (DWI + ADC maps; T2WI used for preprocessing) Analysis: Lesion-level, Sextant-level, Patient-level | Development: N = 2105 (4 hospitals, 2014–2019) External Validation: N = 557 (3 hospitals, 2020–2021) Ref: Image-guided biopsy + radical prostatectomy pathology (ISUP grading) | External validation on independent temporal and multi-center cohort | Lesion level: Sensitivity 0.654, Positive Predictive Value (PPV) 0.747 Sextant level: Sensitivity 0.846, Specificity 0.884, Accuracy 0.874 Patient level: Sensitivity 0.943, Specificity 0.776, Accuracy 0.849 csPCa patients accuracy: 0.943 vs. non-csPCa: 0.776 (p < 0.001) |
| Li et al. (2024) [42] | Detection of csPCa aggressiveness; Deep Transfer Learning (DTL) with ResNet50 using 2.5D segmentation (3 consecutive slices as input) vs. 2D model. | bpMRI (T2WI, ADC) Analysis: Lesion-level (manual ROI segmentation on largest lesion layer ± adjacent layers) | N = 231 (single center) Ref: Histopathology (Gleason score ≥ 7 = csPCa, <7 = non-csPCa) Split: Training n = 185, Test n = 46 (stratified random sampling) | Internal validation (single-center, random split) | 2.5D Combined Model (Test Set): AUC 0.949; Accuracy 0.884; Sensitivity 0.974; Specificity 0.849 2D Combined Model (Test Set): AUC 0.886 2.5D outperformed 2D in all sequence combinations |
| Schrader et al. (2024) [32] | Risk assessment for significant prostate cancer (sPC) to avoid unnecessary biopsies; Fully automatic nnUNet-based DL ensemble predicting lesion probability (UNet-probability) and PI-RADS-analogous 5-point scale (UNet-Likert), integrated into risk calculators (RCs). | mpMRI (T2WI, DWI/ADC; radiologists also had DCE) Analysis: Patient-level (whole prostate segmentation, voxel-wise probability maps) | N = 1627 consecutive exams (single center, 2014–2021) Ref: Extended systematic + MRI/US-fusion targeted biopsy (ISUP grade ≥ 2 = sPC) Split: Training n = 1021 (DL training + RC calibration), Test n = 517 (RC evaluation); 834 training/517 test cases without prior PCa for RC analysis | Temporal split validation; internal test set withheld from DL training | UNet-probability alone: AUC 0.89 (95% CI: 0.86–0.92) Newly fitted PI-RADS + UNet-probability RC: AUC 0.93 (95% CI: 0.90–0.95); Brier score 0.10 At 15% risk threshold: 49% biopsies spared (252/517) with Negative Predictive Value (NPV) 94% (vs. 37% spared by PI-RADS ≥4) DL + PI-RADS combination outperformed either alone; UNet-Likert substituted for PI-RADS without performance loss |
| Talaat et al. (2024) [43] | Detection: Modified ResNet50-based architecture integrating Faster R-CNN with dual optimizers (Adam + Stochastic Gradient Descent (SGD)) for prostate cancer detection; R-mask modification for segmentation. | Histopathology images (Kaggle prostate cancer dataset; not MRI) Analysis: Image-level with Mask R-CNN for region segmentation | N ≈ 11,000 images (training: ~11,000; test: ~400) Ref: Histopathology (biopsy-confirmed) Source: Kaggle Prostate Cancer Grade Assessment dataset | Internal validation (80% train, 20% test) | Accuracy: 97.40%; Sensitivity: 97.09%; Specificity: 97.56%; Precision: 95.24% |
| Jin et al. (2024) [29] | Detection & Grading: 3D ResNet18 for prostate cancer detection and Gleason grade prediction using single-modality T2WI. | T2WI (single-modality) Analysis: Prostate-level segmentation (not lesion-level) | Internal: N = 497 (195 healthy + 302 PCa); External: N = 48; Public challenge: N = 91 Ref: Histopathology (biopsy or surgery) Gleason grade groups: 1–5 | Multi-center validation (internal + external + public challenge datasets) | PCa Detection: AUC: 0.918 (validation), AUC: 1.000 (training) Gleason Grade Prediction: AUC: 0.854 (validation), 0.776 (external), 0.838 (public challenge); Accuracy: 85.7% (validation), 76.2% (external), 80.6% (public challenge) |
| Zheng et al. (2024) [44] | Detection: Anatomical-aware PCa detection network (AtPCa-Net) with symmetric-aware architecture and Zonal Loss (ZL) for csPCa detection; 3D UNet-like backbone with nnU-Net structure. | bpMRI (T2WI, ADC, high-B DWI)—DCE excluded Analysis: Lesion-level with patient-level classification | N = 652 (Single institution: UCLA) Ref: Whole-mount histopathology (WMHP) after radical prostatectomy; csPCa defined as GS ≥ 7 Includes: 220 patients with PCa (246 lesions) + 432 patients without PCa (negative biopsies) | 5-fold cross-validation (internal validation) | Patient-level classification: AUC: 0.880 (95% CI: 0.846–0.914) csPCa detection sensitivity: 67.5% at 0.5 False Positive (FP)/patient; 72.8% at 1 FP/patient; 80.9% at 2.5 FP/patient Compared to nnUNet baseline: AUC improvement from 0.843 to 0.880 |
| Johnson et al. (2025) [40] | Risk stratification & scan tailoring: Multi-task 3D ResNet-50 for simultaneous PI-RADS ≥ 3 and Gleason ≥ 7 classification; real-time workflow integration using Mercure platform. | bpMRI (T2WI, DWI with ADC and b1500) Analysis: Patient-level with real-time inference (14–16 s latency) | Training: N = 26,129 studies (20,089 patients, 7 centers, 2015–2023) Prospective test: N = 142 (treatment-naive, 2024) Ground truth verified: N = 151 (biopsy/prostatectomy/follow-up confirmed) Ref: Prospective: Consensus of 3 radiologists (PI-RADS ≥ 3) Retrospective: Histopathology (Gleason ≥ 7) or long-term follow-up | Prospective validation + Ground truth verified retrospective validation (single institution) | PI-RADS ≥ 3 (prospective): AUC: 0.83 (95% CI: 0.77–0.89); Sensitivity: 93%; Specificity: 54% Gleason ≥ 7 (ground truth): AUC: 0.86 (95% CI: 0.80–0.91); Sensitivity: 93%; Specificity: 62%; PPV: 63% Workflow impact: ~32% of patients (46/142) could avoid mpMRI; ~9 min saved per abbreviated exam |
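The studies tabulated above summarize performance with a small set of recurring statistics: overlap scores for segmentation (Dice, IoU) and threshold-free discrimination for detection (AUC). As a minimal illustration for readers reproducing comparable evaluations — our own sketch, not code from any of the cited studies — these metrics reduce to short NumPy array computations:

```python
import numpy as np

def dice_iou(pred, truth):
    """Overlap metrics for binary segmentation masks of any shape."""
    pred, truth = np.asarray(pred, dtype=bool), np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    iou = inter / np.logical_or(pred, truth).sum()
    return float(dice), float(iou)

def auc_mann_whitney(scores, labels):
    """Threshold-free AUC: probability that a random positive case
    scores higher than a random negative case (ties count half)."""
    scores, labels = np.asarray(scores, dtype=float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return float((greater + 0.5 * ties) / (pos.size * neg.size))
```

Note that the AUC written this way is the normalized Mann–Whitney U statistic, which is why it needs only case-level likelihood scores (as output, for example, by the 0–100 scale in Saha et al.) and no operating threshold; sensitivity/specificity pairs in the table correspond to one chosen threshold on such scores.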
3.2. Beyond MRI: AI Applications in Prostate-Specific Membrane Antigen (PSMA) PET/CT and Ultrasound
4. Artificial Intelligence in Digital Pathology: Automation Meets Histological Complexity
| Study (Ref.) | Clinical Task | AI Approach & Input | Dataset & Ground Truth | Generalizability (Ext. Val.) | Performance & Implications |
|---|---|---|---|---|---|
| Harder et al. (2024) [74] | Virtual Biopsy: Optimize MRI-targeted biopsy approach (number of cores, intercore distance, grading strategy) & Gleason grading. | Method: Pixel-wise segmentation using U-Net++ (decoder) + EfficientNetB1 backbone (encoder) for tumor detection and Gleason pattern (GP3/GP4/GP5) segmentation; fast-track annotation transfer pipeline using DeepLabV3 + ResNet50 for gland segmentation Input: Whole-slide histopathology images (H&E-stained) from radical prostatectomy specimens simulating MRI-targeted biopsies | Train: TCGA cohort N = 362 patients (389 WSIs) + University Hospital Cologne N = 23 + Case Western University N = 30 Virtual biopsy validation: 480 virtual biopsy cores from 114 patients (120 tumors) Independent test: 121 clinically significant MRI-visible tumors from 115 RP patients Ref: Expert GU pathologist manual annotations (pixel-wise); WMHP for final GS | Independent cohort from Wiener Neustadt State Hospital (Austria); validation on MICCAI 2019 Gleason Challenge (222 TMAs) | Tumor detection: Sensitivity 0.99/Specificity 0.90 (aware version); 0.97/0.97 (balanced version) Grading: Quadratic kappa 0.77–0.78 vs. pathologists (non-inferior to inter-pathologist agreement) 65,340 virtual biopsies performed |
| Kong et al. (2024) [72] | Prostate cancer diagnosis (benign/malignant classification) and Gleason grading (ISUP categories 0–5) | Method: Federated Attention Consistent Learning (FACL) framework based on AttMIL (Attention-based Multiple Instance Learning) with CTransPath feature extractor Input: H&E-stained WSIs cropped into 224 × 224 pixel patches | Train: 19,461 WSIs from 7 centers (Hebei-1/2, Nanchang, DiagSet-B-1/2, PANDA-1/2) Val: 20% internal data from each center Ref: Pathologist annotations (ISUP grading standards) | Diagnosis: Two independent external datasets—DiagSet-A (430 slides) and QHD (765 slides) Grading: Independent dataset from Nanchang hospital | Diagnosis AUC: 0.9718 (vs. single-center average 0.9499) Grading Kappa: 0.8463 (vs. single-center average 0.7379) |
| Alici-Karaca et al. (2024) [76] | Prostate cancer detection and Gleason grading (8-class/4-class/2-class classification) | Method: Eff4-Attn (EfficientNet-B4 with an Efficient Channel Attention (ECA) module, replacing the original SE blocks with ECA) Input: H&E-stained histopathology image patches (380 × 380 pixels), multiple magnifications (5×/10×/20×/40×) | Train/Val/Test: DiagSet-A.1 dataset (238 WSIs, randomly split 8:1:1) 8-class: A (artifact), N (normal), T (tissue), R1–R5 (Gleason grades 1–5) 4-class: A-N-T-R1-R2/R3/R4/R5 2-class: healthy (A-N-T) vs. cancer (R1–R5) Ref: Assessment by two independent pathologists | Random splitting strategy employed to reduce class imbalance effects; no completely independent external validation set reported | Best results at 40× magnification: 2-class accuracy: 96.18% (cancer detection) 4-class accuracy: 94.86% (cancer severity grading) 8-class accuracy: 93.32% (complete Gleason grading) |
| Kondejkar et al. (2024) [77] | Prostate cancer detection and Gleason grading (9-class patch-level classification: BG, T, N, A, R1–R5) | Method: ResNet-18/34/50 with transfer learning; DeepLabv3 as feature extractor for CNN classifiers Input: 256 × 256 pixel patches from H&E-stained WSIs at multiple magnifications (5×, 10×, 20×, 40×) | Train: DiagSet dataset (~2.6 M tissue patches from 430 fully annotated scans; 4000 images/class curated subset) Val: 5-fold cross-validation Test: 7200 images (stratified split) Ref: Pathologist annotations (9 classes: scan background, tissue background, normal, artifact, Gleason grades 1–5) | Cross-validation across multiple magnification levels; no independent external test set mentioned | ResNet34 Test Accuracy: 0.9999 (40×), 0.9999 (20×), 1.0000 (10×), 0.9993 (5×) ResNet18 Test Accuracy: 0.9977 (40×), 0.9992 (20×), 0.9964 (10×), 0.9921 (5×) ResNet50 Test Accuracy: 0.9957 (40×), 0.9915 (20×), 0.9952 (10×), 0.9981 (5×) |
| Huang et al. (2024) [78] | Automated Gleason grading of prostate cancer using fast multiphoton microscopy (MPM) with real-time diagnosis capability | Method: SwinIR (image super-resolution) + Swin Transformer (classification); transfer learning from ImageNet-1K Input: Label-free MPM images (unstained tissue); 128 × 128 Low Resolution (LowRes) or 512 × 512 High Resolution (HR); 300 × 300 for classification | Train Super Resolution (SR): 20,272 LowRes-HR image pairs (from 24 TMA spots, augmented to 24,576 pairs) Test SR: 4304 pairs Train Classification: 12,000 images (3500/class: benign, Gleason 3, 4, 5 after augmentation) Test Classification: 2500 images Ref: Pathologist annotations on corresponding H&E-stained sections | Internal validation only; 19 TMA spots (5 tissue microarrays) from single institution | SR Quality: PSNR 24.36 ± 4.38 dB, SSIM 0.9027 ± 0.0130 HR Classification: Accuracy 90.95%, Macro-F1 90.94% SR Classification: Accuracy 89.85%, Macro-F1 89.85% (vs. LowRes: 83.20% accuracy) Speed Improvement: Acquisition time reduced from 7.55 s to 0.73 s per frame (0.24 s LowRes + 0.49 s SR) |
| Mannas et al. (2024) [79] | Real-time detection of prostate cancer in unprocessed, fresh prostate biopsies using stimulated Raman histology (SRH) with AI interpretation | Method: Inception-ResNet-v2 CNN for patch-level and biopsy-level classification Input: Stimulated Raman histology (SRH) images of fresh, unstained, unlabeled prostate biopsies (label-free, no tissue processing) | Train: 303 biopsies from 100 participants (radical prostatectomy specimens) → 1.75 million patches Val: 4% of total patches (validation set during training) Test: 113 independent biopsies (59 ex vivo, 54 in vivo) from 44 participants Ref: Consensus of 2 genitourinary pathologists on H&E-stained sections (ISUP 2019 grading) | Independent test set with mixed ex vivo and in vivo biopsies; single-center study (NYU Langone Health) | Patch-level accuracy: 98.6% (validation), 99.6% (training) Biopsy-level accuracy: 96.5% (combined ex vivo + in vivo) Sensitivity: 96.3% Specificity: 96.6% AUC: 0.99 Speed: 2–2.75 min per biopsy (full scan); 0.24–0.73 s with 4× accelerated scan Limitation: Cannot assign tumor grades (Gleason grading not performed) |
| Balaha et al. (2024) [66] | Prostate cancer classification (cancer vs. normal) and Gleason grading segmentation (Grade 1–5) | Method: Transfer learning with 8 CNN architectures (ResNet152/152V2, MobileNet/V2/V3Small/V3Large, NASNetMobile/Large) + Aquila optimizer for hyperparameter tuning; U-Net for segmentation Input: H&E-stained histopathology images (PANDA), MRI images (Transverse Plane), ISUP grade-wise images | Train/Val/Test: “PANDA: Resized Train Data (512 × 512)”: segmentation dataset (~11,000 WSIs from Karolinska Institute & Radboud University) “ISUP Grade-wise Prostate Cancer”: 10,616 images (grades 0–5) “Transverse Plane Prostate Dataset”: 1528 MRI images from 64 patients Split: 85% train/val, 15% test for classification; 80% train, 20% test for the segmentation task Ref: Pathologist annotations (ISUP grading for PANDA; significance labels for MRI) | Multi-dataset validation across 3 different data sources (histopathology + MRI); no independent external clinical cohort | Classification Accuracy: ISUP Grade-wise dataset: 88.91% (MobileNet best) Transverse Plane MRI dataset: 100% (MobileNet & ResNet152) Segmentation (U-Net on PANDA): Average accuracy: 98.46% Average AUC: 0.9778 Average Dice: 0.9873 Grade-specific Dice: 0.9761 (G1) to 0.9990 (G5) Optimization: Aquila optimizer improved hyperparameter selection vs. default settings |
| Gao and Vali (2024) [80] | Prostate cancer classification and grading (Gleason grading) from histopathology images. | Method: CNN with hybrid feature extraction Input: Preprocessed pathology images using DWT (Discrete Wavelet Transform) + GLCM (Gray-Level Co-occurrence Matrix) features | Train/Val: PROSTATEx dataset (204 mpMRI scans, 330 lesions for training; 208 lesions for testing) Ref: Pathologist annotation based on Gleason Grade Group (GGG) | Cross-validation with 7:3 train–test split; evaluated on multiple classification tasks (Benign vs. Malignant, Benign vs. Grade 3/4/5, Grade 3 vs. Grades 4&5) | Accuracy: 97.3% (average) Precision: 98% AUC: 0.95 F1-score: 91.05% (macro average) |
| Huo et al. (2024) [65] | AI-assisted Gleason grading for prostate cancer detection and grading from WSIs. | Method: Deep learning classification model (ResNet50, VGG16, NasNet Mobile) with weighted classification layer; Image appearance migration for generalization Input: H&E-stained whole slide images from prostatectomy and biopsy specimens | Train: 131 WSIs (prostatectomy) with 22,148 mm2 annotation area (12,630 instances) Val/Test: 56 WSIs (prostatectomy) + 156 biopsy specimens with 2223 mm2 annotation area (2852 instances); additional 140 test mpMRIs from PROSTATEx Ref: Multi-pathologist annotation (3 pathologists from NUH, 9 pathologists from 5 Chinese hospitals for validation) | Tested across 6 different scanners (Akoya, Olympus, KFBio, Zeiss, Leica, Philips); validated with 5 pathologists from Singapore and China in three-phase clinical study | Annotation-level F1: 0.80 (Akoya), improved to 0.88 with generalization techniques (other scanners) WSI-level Quadratic Weighted Kappa: 0.71 Gleason Pattern Detection F1: 0.73 → 0.88 (with image appearance migration) Time efficiency: 43% reduction in Gleason scoring time Semi-auto annotation efficiency: 2.5× faster than manual annotation |
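Grading agreement in the pathology studies above is repeatedly summarized with quadratically weighted Cohen’s kappa (e.g., 0.77–0.78 in Harder et al., 0.8463 in Kong et al., 0.71 at WSI level in Huo et al.). For readers implementing comparable evaluations, a minimal NumPy sketch of that statistic follows; the function name and interface are our own illustration, not code from any cited study:

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, n_classes):
    """Cohen's kappa with quadratic disagreement weights, commonly used
    for ordinal ISUP/Gleason grade agreement (1 = perfect, 0 = chance)."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    # observed confusion matrix between the two raters
    obs = np.zeros((n_classes, n_classes))
    for i, j in zip(a, b):
        obs[i, j] += 1
    # expected matrix under independence, from the marginal frequencies
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / len(a)
    # quadratic weights: a two-grade error costs 4x a one-grade error
    idx = np.arange(n_classes)
    w = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    return float(1 - (w * obs).sum() / (w * exp).sum())
```

The quadratic weighting is what makes this statistic appropriate for ordinal grades: unlike plain accuracy, it penalizes calling an ISUP 5 tumor grade 3 far more heavily than calling it grade 4, which is why it is the standard summary for grading concordance in these studies.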
5. Artificial Intelligence in Liquid Biopsy: Non-Invasive Detection with Persistent Limitations
6. Artificial Intelligence in Multimodal Data Integration: Synergy and Complexity
7. Discussion
7.1. Core Challenges and Practical Bottlenecks
7.2. “Performance” for Clinical Translation
7.3. Future Prospects
8. Conclusions
Supplementary Materials
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| ADC | Apparent Diffusion Coefficient |
| AI | Artificial intelligence |
| AIUSP | AI-guided Ultrasound Strategy for Prostate biopsy |
| AUC | Area Under the Curve |
| bpMRI | Biparametric magnetic resonance imaging |
| CAD | Computer-aided diagnosis |
| CI | Confidence Interval |
| CNN/CNNs | Convolutional Neural Networks |
| csPCa | Clinically significant prostate cancer |
| ctDNA | Circulating tumor DNA |
| DCE | Dynamic contrast-enhanced |
| DL | Deep learning |
| DL-CAD | Deep learning-based computer-aided diagnosis |
| DRE | Digital rectal examination |
| DSC | Dice Similarity Coefficient |
| DTL | Deep Transfer Learning |
| DWI | Diffusion-weighted imaging |
| DWT | Discrete Wavelet Transform |
| ECA | Efficient Channel Attention |
| EpCAM | Epithelial cell adhesion molecule |
| EV | Extracellular vesicle |
| FACL | Federated attention consistent learning |
| FDA | Food and Drug Administration |
| FP | False Positive |
| GGG | Gleason Grade Group |
| GLCM | Gray Level Co-occurrence Matrix |
| HIS | Hospital Information Systems |
| HR | High Resolution |
| IoU | Intersection over Union |
| ISUP | International Society of Urological Pathology |
| LIS | Laboratory Information Systems |
| LowRes | Low Resolution |
| LogReg | Logistic Regression |
| miRNAs | microRNAs |
| ML | Machine learning |
| MPM | Multiphoton microscopy |
| mpMRI | Multiparametric magnetic resonance imaging |
| MRI | Magnetic resonance imaging |
| NPV | Negative Predictive Value |
| PACS | Picture Archiving and Communication Systems |
| PCa | Prostate cancer |
| PET/CT | Positron emission tomography/computed tomography |
| PI-RADS | Prostate Imaging–Reporting and Data System |
| PPV | Positive Predictive Value |
| PSA | Prostate-specific antigen |
| PSMA | Prostate-specific membrane antigen |
| RC/RCs | Risk Calculators |
| RF | Random Forest |
| SGD | Stochastic Gradient Descent |
| SR | Super Resolution |
| SRH | Stimulated Raman Histology |
| SRS | Stimulated Raman Scattering |
| SVMs | Support Vector Machines |
| T2WI | T2-weighted imaging |
| UDA | Unsupervised domain adaptation |
| WHO | World Health Organization |
| WMHP | Whole-mount histopathology |
| WSI | Whole-slide imaging |
| XAI | Explainable Artificial Intelligence |
References
- Bray, F.; Laversanne, M.; Sung, H.; Ferlay, J.; Siegel, R.L.; Soerjomataram, I.; Jemal, A. Global cancer statistics 2022: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2024, 74, 229–263.
- Cuzick, J.; Thorat, M.A.; Andriole, G.; Brawley, O.W.; Brown, P.H.; Culig, Z.; Eeles, R.A.; Ford, L.G.; Hamdy, F.C.; Holmberg, L.; et al. Prevention and early detection of prostate cancer. Lancet Oncol. 2014, 15, e484–e492.
- Sundaresan, V.M.; Smani, S.; Rajwa, P.; Renzulli, J.; Sprenkle, P.C.; Kim, I.Y.; Leapman, M.S. Prostate-specific antigen screening for prostate cancer: Diagnostic performance, clinical thresholds, and strategies for refinement. Urol. Oncol. Semin. Orig. Investig. 2025, 43, 41–48.
- Martelin, N.; De Witt, B.; Chen, B.; Eschwege, P. Development and validation of an imageless machine-learning algorithm for the initial screening of prostate cancer. Prostate 2024, 84, 842–849.
- Maffei, D.; Moore, C.M. Personalized risk-adapted models in prostate cancer during active surveillance using MRI—A narrative review. Eur. Radiol. 2025, 35, 6444–6453.
- Wong, C.H.-M.; Ko, I.C.-H.; Ng, C.F. Liquid biomarkers in prostate cancer: Recent advancements and future directions. Curr. Opin. Urol. 2025, 35, 3–12.
- Ravì, D.; Wong, C.; Deligianni, F.; Berthelot, M.; Andreu-Perez, J.; Lo, B.; Yang, G.-Z. Deep Learning for Health Informatics. IEEE J. Biomed. Health Inform. 2017, 21, 4–21.
- Lubbad, M.; Karaboga, D.; Basturk, A.; Akay, B.; Nalbantoglu, U.; Pacal, I. Machine learning applications in detection and diagnosis of urology cancers: A systematic literature review. Neural Comput. Appl. 2024, 36, 6355–6379.
- Chen, Z.L.; Huang, Z.C.; Lin, S.S.; Li, Z.H.; Dou, R.L.; Xu, Y.; Jiang, S.Q.; Li, M.Q. Clinical value of a radiomics model based on machine learning for the prediction of prostate cancer. J. Int. Med. Res. 2024, 52, 3000605241275338.
- Liu, J.C.; Ruan, X.H.; Chun, T.T.; Yao, C.; Huang, D.; Wong, H.L.; Lai, C.T.; Tsang, C.F.; Ho, S.H.; Ng, T.L.; et al. MRI T2w Radiomics-Based Machine Learning Models in Imaging Simulated Biopsy Add Diagnostic Value to PI-RADS in Predicting Prostate Cancer: A Retrospective Diagnostic Study. Cancers 2024, 16, 2944.
- Huang, J.; He, C.; Xu, P.; Song, B.; Zhao, H.; Yin, B.; He, M.; Lu, X.; Wu, J.; Wang, H. Development and validation of a clinical-radiomics model for prediction of prostate cancer: A multicenter study. World J. Urol. 2024, 42, 275.
- Winkel, D.J.; Breit, H.C.; Shi, B.B.; Boll, D.T.; Seifert, H.H.; Wetterauer, C. Predicting clinically significant prostate cancer from quantitative image features including compressed sensing radial MRI of prostate perfusion using machine learning: Comparison with PI-RADS v2 assessment scores. Quant. Imaging Med. Surg. 2020, 10, 808–823.
- Bratan, F.; Niaf, E.; Melodelima, C.; Chesnais, A.L.; Souchon, R.; Mège-Lechevallier, F.; Colombel, M.; Rouvière, O. Influence of imaging and histological factors on prostate cancer detection and localisation on multiparametric MRI: A prospective study. Eur. Radiol. 2013, 23, 2019–2029.
- Aldoj, N.; Lukas, S.; Dewey, M.; Penzkofer, T. Semi-automatic classification of prostate cancer on multi-parametric MR imaging using a multi-channel 3D convolutional neural network. Eur. Radiol. 2020, 30, 1243–1253.
- Arif, M.; Schoots, I.G.; Castillo Tovar, J.; Bangma, C.H.; Krestin, G.P.; Roobol, M.J.; Niessen, W.; Veenland, J.F. Clinically significant prostate cancer detection and segmentation in low-risk patients using a convolutional neural network on multi-parametric MRI. Eur. Radiol. 2020, 30, 6582–6592.
- Bleker, J.; Kwee, T.C.; Rouw, D.; Roest, C.; Borstlap, J.; de Jong, I.J.; Dierckx, R.; Huisman, H.; Yakar, D. A deep learning masked segmentation alternative to manual segmentation in biparametric MRI prostate cancer radiomics. Eur. Radiol. 2022, 32, 6526–6535.
- Horasan, A.; Gunes, A. Advancing Prostate Cancer Diagnosis: A Deep Learning Approach for Enhanced Detection in MRI Images. Diagnostics 2024, 14, 1871.
- Bashkanov, O.; Rak, M.; Meyer, A.; Engelage, L.; Lumiani, A.; Muschter, R.; Hansen, C. Automatic detection of prostate cancer grades and chronic prostatitis in biparametric MRI. Comput. Methods Programs Biomed. 2023, 239, 107624.
- Liu, F.; Zhao, Y.; Song, J.; Tu, G.; Liu, Y.; Peng, Y.; Mao, J.; Yan, C.; Wang, R. A hybrid classification model with radiomics and CNN for high and low grading of prostate cancer Gleason score on mp-MRI. Displays 2024, 83, 102703.
- Gavade, A.B.; Nerli, R.; Kanwal, N.; Gavade, P.A.; Pol, S.S.; Rizvi, S.T.H. Automated Diagnosis of Prostate Cancer Using mpMRI Images: A Deep Learning Approach for Clinical Decision Support. Computers 2023, 12, 152.
- Sowmya, D.; Bhavani, S.A.; Sasank, V.V.S.; Rao, T.S. Prostate cancer classification using adaptive swarm Intelligence based deep attention neural network. Biomed. Signal Process. Control 2024, 96, 106654. [Google Scholar] [CrossRef]
- Li, S.T.; Zhang, L.; Guo, P.; Pan, H.Y.; Chen, P.Z.; Xie, H.F.; Xie, B.K.; Chen, J.Y.; Lai, Q.Q.; Li, Y.Z.; et al. Prostate cancer of magnetic resonance imaging automatic segmentation and detection of based on 3D-Mask RCNN. J. Radiat. Res. Appl. Sci. 2023, 16, 100636. [Google Scholar] [CrossRef]
- Zaridis, D.I.; Mylona, E.; Tsiknakis, N.; Tachos, N.S.; Matsopoulos, G.K.; Marias, K.; Tsiknakis, M.; Fotiadis, D.I. ProLesA-Net: A multi-channel 3D architecture for prostate MRI lesion segmentation with multi-scale channel and spatial attentions. Patterns 2024, 5, 100992. [Google Scholar] [CrossRef] [PubMed]
- Sanjid, K.S.; Junayed, M.S.S.; Hossain, M.T.; Wang, Y.L.; Uddin, M.M.; Haque, S.A. From pixels to pathology: A novel dual-pathway multi-scale hierarchical upsampling network for MRI-based prostate zonal segmentation. Intell. Syst. Appl. 2024, 22, 200382. [Google Scholar] [CrossRef]
- Bhattacharya, I.; Seetharaman, A.; Kunder, C.; Shao, W.; Chen, L.C.; Soerensen, S.J.C.; Wang, J.B.; Teslovich, N.C.; Fan, R.E.; Ghanouni, P.; et al. Selective identification and localization of indolent and aggressive prostate cancers via CorrSigNIA: An MRI-pathology correlation and deep learning framework. Med. Image Anal. 2022, 75, 102288. [Google Scholar] [CrossRef] [PubMed]
- Winkel, D.J.; Tong, A.; Lou, B.; Kamen, A.; Comaniciu, D.; Disselhorst, J.A.; Rodriguez-Ruiz, A.; Huisman, H.; Szolar, D.; Shabunin, I.; et al. A Novel Deep Learning Based Computer-Aided Diagnosis System Improves the Accuracy and Efficiency of Radiologists in Reading Biparametric Magnetic Resonance Images of the Prostate: Results of a Multireader, Multicase Study. Investig. Radiol. 2021, 56, 605–613. [Google Scholar] [CrossRef]
- Saha, A.; Bosma, J.S.; Twilt, J.J.; van Ginneken, B.; Bjartell, A.; Padhani, A.R.; Bonekamp, D.; Villeirs, G.; Salomon, G.; Giannarini, G.; et al. Artificial intelligence and radiologists in prostate cancer detection on MRI (PI-CAI): An international, paired, noninferiority, confirmatory study. Lancet Oncol. 2024, 25, 879–887. [Google Scholar] [CrossRef] [PubMed]
- Yu, R.; Jiang, K.-W.; Bao, J.; Hou, Y.; Yi, Y.; Wu, D.; Song, Y.; Hu, C.-H.; Yang, G.; Zhang, Y.-D. PI-RADSAI: Introducing a new human-in-the-loop AI model for prostate cancer diagnosis based on MRI. Br. J. Cancer 2023, 128, 1019–1029. [Google Scholar] [CrossRef]
- Jin, L.; Yu, Z.; Gao, F.; Li, M. T2-weighted imaging-based deep-learning method for noninvasive prostate cancer detection and Gleason grade prediction: A multicenter study. Insights Imaging 2024, 15, 111. [Google Scholar] [CrossRef]
- Chung, E.M.; Zhang, S.C.; Nguyen, A.T.; Atkins, K.M.; Kamrava, M. Feasibility and Acceptability of ChatGPT Generated Radiology Report Summaries for Cancer Patients. Digit. Health 2023, 9, 20552076231221620. [Google Scholar] [CrossRef]
- Cai, J.C.; Nakai, H.; Kuanar, S.; Froemming, A.T.; Bolan, C.W.; Kawashima, A.; Takahashi, H.; Mynderse, L.A.; Dora, C.D.; Humphreys, M.R.; et al. Fully Automated Deep Learning Model to Detect Clinically Significant Prostate Cancer at MRI. Radiology 2024, 312, e232635. [Google Scholar] [CrossRef]
- Schrader, A.; Netzer, N.; Hielscher, T.; Goertz, M.; Zhang, K.S.; Schuetz, V.; Stenzinger, A.; Hohenfellner, M.; Schlemmer, H.-P.; Bonekamp, D. Prostate cancer risk assessment and avoidance of prostate biopsies using fully automatic deep learning in prostate MRI: Comparison to PI-RADS and integration with clinical data in nomograms. Eur. Radiol. 2024, 34, 7909–7920. [Google Scholar] [CrossRef]
- Sun, Z.; Wang, K.; Kong, Z.; Xing, Z.; Chen, Y.; Luo, N.; Yu, Y.; Song, B.; Wu, P.; Wang, X.; et al. A multicenter study of artificial intelligence-aided software for detecting visible clinically significant prostate cancer on mpMRI. Insights Imaging 2023, 14, 72. [Google Scholar] [CrossRef]
- Padhani, A.R.; Papanikolaou, N. AI and human interactions in prostate cancer diagnosis using MRI. Eur. Radiol. 2025, 35, 5695–5700. [Google Scholar] [CrossRef]
- Zhao, L.; Bao, J.; Qiao, X.; Jin, P.; Ji, Y.; Li, Z.; Zhang, J.; Su, Y.; Ji, L.; Shen, J.; et al. Predicting clinically significant prostate cancer with a deep learning approach: A multicentre retrospective study. Eur. J. Nucl. Med. Mol. Imaging 2023, 50, 727–741. [Google Scholar] [CrossRef]
- Li, H.; Liu, H.; von Busch, H.; Grimm, R.; Huisman, H.; Tong, A.; Winkel, D.; Penzkofer, T.; Shabunin, I.; Choi, M.H.; et al. Deep Learning-based Unsupervised Domain Adaptation via a Unified Model for Prostate Lesion Detection Using Multisite Biparametric MRI Datasets. Radiol. Artif. Intell. 2024, 6, e230521. [Google Scholar] [CrossRef]
- Mehmood, M.; Abbasi, S.H.; Aurangzeb, K.; Majeed, M.F.; Anwar, M.S.; Alhussein, M. A classifier model for prostate cancer diagnosis using CNNs and transfer learning with multi-parametric MRI. Front. Oncol. 2023, 13, 1225490. [Google Scholar] [CrossRef] [PubMed]
- Twilt, J.J.; Saha, A.; Bosma, J.S.; Padhani, A.R.; Bonekamp, D.; Giannarini, G.; van den Bergh, R.; Kasivisvanathan, V.; Obuchowski, N.; Yakar, D.; et al. AI-Assisted vs Unassisted Identification of Prostate Cancer in Magnetic Resonance Images. JAMA Netw. Open 2025, 8, e2515672. [Google Scholar] [CrossRef] [PubMed]
- Sun, Z.; Wang, K.; Wu, C.; Chen, Y.; Kong, Z.; She, L.; Song, B.; Luo, N.; Wu, P.; Wang, X.; et al. Using an artificial intelligence model to detect and localize visible clinically significant prostate cancer in prostate magnetic resonance imaging: A multicenter external validation study. Quant. Imaging Med. Surg. 2024, 14, 43–60. [Google Scholar] [CrossRef]
- Johnson, P.M.; Dutt, T.; Ginocchio, L.A.; Saimbhi, A.S.; Umapathy, L.; Block, K.T.; Sodickson, D.K.; Chopra, S.; Tong, A.; Chandarana, H. Prostate Cancer Risk Stratification and Scan Tailoring Using Deep Learning on Abbreviated Prostate MRI. J. Magn. Reson. Imaging 2025, 62, 858–866. [Google Scholar] [CrossRef] [PubMed]
- Li, Y.; Wynne, J.; Wang, J.; Roper, J.; Chang, C.W.; Patel, A.B.; Shelton, J.; Liu, T.; Mao, H.; Yang, X. MRI-based prostate cancer classification using 3D efficient capsule network. Med. Phys. 2024, 51, 4748–4758. [Google Scholar] [CrossRef]
- Li, M.; Ding, N.; Yin, S.; Lu, Y.; Ji, Y.; Jin, L. Enhancing automatic prediction of clinically significant prostate cancer with deep transfer learning 2.5-dimensional segmentation on bi-parametric magnetic resonance imaging (bp-MRI). Quant. Imaging Med. Surg. 2024, 14, 4893–4902. [Google Scholar] [CrossRef]
- Talaat, F.M.; El-Sappagh, S.; Alnowaiser, K.; Hassan, E. Improved prostate cancer diagnosis using a modified ResNet50-based deep learning architecture. BMC Med. Inform. Decis. Mak. 2024, 24, 23. [Google Scholar] [CrossRef]
- Zheng, H.; Hung, A.L.Y.; Miao, Q.; Song, W.; Scalzo, F.; Raman, S.S.; Zhao, K.; Sung, K. AtPCa-Net: Anatomical-aware prostate cancer detection network on multi-parametric MRI. Sci. Rep. 2024, 14, 5740. [Google Scholar] [CrossRef]
- Lawhn-Heath, C.; Salavati, A.; Behr, S.C.; Rowe, S.P.; Calais, J.; Fendler, W.P.; Eiber, M.; Emmett, L.; Hofman, M.S.; Hope, T.A. Prostate-specific Membrane Antigen PET in Prostate Cancer. Radiology 2021, 299, 248–260. [Google Scholar] [CrossRef] [PubMed]
- Yi, Z.; Hu, S.; Lin, X.; Zou, Q.; Zou, M.; Zhang, Z.; Xu, L.; Jiang, N.; Zhang, Y. Machine learning-based prediction of invisible intraprostatic prostate cancer lesions on (68)Ga-PSMA-11 PET/CT in patients with primary prostate cancer. Eur. J. Nucl. Med. Mol. Imaging 2022, 49, 1523–1534. [Google Scholar] [CrossRef] [PubMed]
- Jafari, E.; Zarei, A.; Dadgar, H.; Keshavarz, A.; Manafi-Farid, R.; Rostami, H.; Assadi, M. A convolutional neural network-based system for fully automatic segmentation of whole-body [(68)Ga]Ga-PSMA PET images in prostate cancer. Eur. J. Nucl. Med. Mol. Imaging 2024, 51, 1476–1487. [Google Scholar] [CrossRef] [PubMed]
- Wang, X.; Xie, Y.; Zheng, X.; Liu, B.; Chen, H.; Li, J.; Ma, X.; Xiang, J.; Weng, G.; Zhu, W.; et al. A prospective multi-center randomized comparative trial evaluating outcomes of transrectal ultrasound (TRUS)-guided 12-core systematic biopsy, mpMRI-targeted 12-core biopsy, and artificial intelligence ultrasound of prostate (AIUSP) 6-core targeted biopsy for prostate cancer diagnosis. World J. Urol. 2023, 41, 653–662. [Google Scholar]
- Ao, J.; Shao, X.; Liu, Z.; Liu, Q.; Xia, J.; Shi, Y.; Qi, L.; Pan, J.; Ji, M. Stimulated Raman Scattering Microscopy Enables Gleason Scoring of Prostate Core Needle Biopsy by a Convolutional Neural Network. Cancer Res. 2023, 83, 641–651. [Google Scholar] [CrossRef]
- Yang, F.; Wang, C.; Shen, J.; Ren, Y.; Yu, F.; Luo, W.; Su, X. End-to-end 18F PSMA-1007 PET/CT radiomics-based pipeline for predicting ISUP grade group in prostate cancer. Abdom. Radiol. 2025, 50, 1641–1652. [Google Scholar] [CrossRef]
- van den Kroonenberg, D.L.; Jager, A.; Garrido-Utrilla, A.; Reitsma, J.B.; Postema, A.W.; Beerlage, H.P.; Oddens, J.R. Clinical Validation of Multiparametric Ultrasound for Detecting Clinically Significant Prostate Cancer Using Computer-Aided Diagnosis: A Direct Comparison with the Magnetic Resonance Imaging Pathway. Eur. Urol. Open Sci. 2024, 66, 60–66. [Google Scholar] [CrossRef]
- Mew, A.; Chau, E.; Bera, K.; Ramaiya, N.; Tirumani, S.H. Recommendations from Imaging, Oncology, and Radiology Organizations to Guide Management in Prostate Cancer: Summary of Current Recommendations. Radiol. Imaging Cancer 2025, 7, e240091. [Google Scholar] [CrossRef]
- Grosset, A.A.; Dallaire, F.; Nguyen, T.; Birlea, M.; Wong, J.; Daoust, F.; Roy, N.; Kougioumoutzakis, A.; Azzi, F.; Aubertin, K.; et al. Identification of intraductal carcinoma of the prostate on tissue specimens using Raman micro-spectroscopy: A diagnostic accuracy case-control study with multicohort validation. PLoS Med. 2020, 17, e1003281. [Google Scholar] [CrossRef] [PubMed]
- Pantanowitz, L.; Quiroga-Garza, G.M.; Bien, L.; Heled, R.; Laifenfeld, D.; Linhart, C.; Sandbank, J.; Albrecht Shach, A.; Shalev, V.; Vecsler, M.; et al. An artificial intelligence algorithm for prostate cancer diagnosis in whole slide images of core needle biopsies: A blinded clinical validation and deployment study. Lancet Digit. Health 2020, 2, e407–e416. [Google Scholar] [CrossRef] [PubMed]
- Duran-Lopez, L.; Dominguez-Morales, J.P.; Conde-Martin, A.F.; Vicente-Diaz, S.; Linares-Barranco, A. PROMETEO: A CNN-Based Computer-Aided Diagnosis System for WSI Prostate Cancer Detection. IEEE Access 2020, 8, 128613–128628. [Google Scholar] [CrossRef]
- Jung, M.; Jin, M.-S.; Kim, C.; Lee, C.; Nikas, I.P.; Park, J.H.; Ryu, H.S. Artificial intelligence system shows performance at the level of uropathologists for the detection and grading of prostate cancer in core needle biopsy: An independent external validation study. Mod. Pathol. 2022, 35, 1449–1457. [Google Scholar] [CrossRef]
- Bulten, W.; Balkenhol, M.; Belinga, J.A.; Brilhante, A.; Çakır, A.; Egevad, L.; Eklund, M.; Farré, X.; Geronatsiou, K.; Molinié, V.; et al. Artificial intelligence assistance significantly improves Gleason grading of prostate biopsies by pathologists. Mod. Pathol. 2021, 34, 660–671. [Google Scholar] [CrossRef]
- Huang, W.; Randhawa, R.; Jain, P.; Iczkowski, K.A.; Hu, R.; Hubbard, S.; Eickhoff, J.; Basu, H.; Roy, R. Development and Validation of an Artificial Intelligence-Powered Platform for Prostate Cancer Grading and Quantification. JAMA Netw. Open 2021, 4, e2132554. [Google Scholar] [CrossRef]
- da Silva, L.M.; Pereira, E.M.; Salles, P.G.O.; Godrich, R.; Ceballos, R.; Kunz, J.D.; Casson, A.; Viret, J.; Chandarlapaty, S.; Ferreira, C.G.; et al. Independent real-world application of a clinical-grade automated prostate cancer detection system. J. Pathol. 2021, 254, 147–158. [Google Scholar] [CrossRef] [PubMed]
- Singhal, N.; Soni, S.; Bonthu, S.; Chattopadhyay, N.; Samanta, P.; Joshi, U.; Jojera, A.; Chharchhodawala, T.; Agarwal, A.; Desai, M.; et al. A deep learning system for prostate cancer diagnosis and grading in whole slide images of core needle biopsies. Sci. Rep. 2022, 12, 3383. [Google Scholar] [CrossRef]
- Qiu, Y.; Hu, Y.; Kong, P.; Xie, H.; Zhang, X.; Cao, J.; Wang, T.; Lei, B. Automatic Prostate Gleason Grading Using Pyramid Semantic Parsing Network in Digital Histopathology. Front. Oncol. 2022, 12, 772403. [Google Scholar] [CrossRef] [PubMed]
- Nagpal, K.; Foote, D.; Tan, F.S.; Liu, Y.; Chen, P.H.C.; Steiner, D.F.; Manoj, N.; Olson, N.; Smith, J.L.; Mohtashamian, A.; et al. Development and Validation of a Deep Learning Algorithm for Gleason Grading of Prostate Cancer From Biopsy Specimens. JAMA Oncol. 2020, 6, 1372–1380. [Google Scholar] [CrossRef] [PubMed]
- Ström, P.; Kartasalo, K.; Olsson, H.; Solorzano, L.; Delahunt, B.; Berney, D.M.; Bostwick, D.G.; Evans, A.J.; Grignon, D.J.; Humphrey, P.A.; et al. Artificial intelligence for diagnosis and grading of prostate cancer in biopsies: A population-based, diagnostic study. Lancet Oncol. 2020, 21, 222–232. [Google Scholar] [CrossRef] [PubMed]
- Salman, M.E.; Çakar, G.; Azimjonov, J.; Kösem, M.; Cedimoglu, I.H. Automated prostate cancer grading and diagnosis system using deep learning-based Yolo object detection algorithm. Expert Syst. Appl. 2022, 201, 117148. [Google Scholar] [CrossRef]
- Huo, X.; Ong, K.H.; Lau, K.W.; Gole, L.; Young, D.M.; Tan, C.L.; Zhu, X.; Zhang, C.; Zhang, Y.; Li, L.; et al. A comprehensive AI model development framework for consistent Gleason grading. Commun. Med. 2024, 4, 84. [Google Scholar] [CrossRef]
- Balaha, H.M.; Shaban, A.O.; El-Gendy, E.M.; Saafan, M.M. Prostate cancer grading framework based on deep transfer learning and Aquila optimizer. Neural Comput. Appl. 2024, 36, 7877–7902. [Google Scholar] [CrossRef]
- Marron-Esquivel, J.M.; Duran-Lopez, L.; Linares-Barranco, A.; Dominguez-Morales, J.P. A comparative study of the inter-observer variability on Gleason grading against Deep Learning-based approaches for prostate cancer. Comput. Biol. Med. 2023, 159, 106856. [Google Scholar] [CrossRef]
- Kartasalo, K.; Bulten, W.; Delahunt, B.; Chen, P.-H.C.; Pinckaers, H.; Olsson, H.; Ji, X.; Mulliqi, N.; Samaratunga, H.; Tsuzuki, T.; et al. Artificial Intelligence for Diagnosis and Gleason Grading of Prostate Cancer in Biopsies—Current Status and Next Steps. Eur. Urol. Focus 2021, 7, 687–691. [Google Scholar] [CrossRef]
- Perincheri, S.; Levi, A.W.; Celli, R.; Gershkovich, P.; Rimm, D.; Morrow, J.S.; Rothrock, B.; Raciti, P.; Klimstra, D.; Sinard, J. An independent assessment of an artificial intelligence system for prostate cancer detection shows strong diagnostic accuracy. Mod. Pathol. 2021, 34, 1588–1595. [Google Scholar] [CrossRef]
- Steiner, D.F.; Nagpal, K.; Sayres, R.; Foote, D.J.; Wedin, B.D.; Pearce, A.; Cai, C.J.; Winter, S.R.; Symonds, M.; Yatziv, L.; et al. Evaluation of the Use of Combined Artificial Intelligence and Pathologist Assessment to Review and Grade Prostate Biopsies. JAMA Netw. Open 2020, 3, e2023267. [Google Scholar] [CrossRef]
- Xiang, J.; Wang, X.; Wang, X.; Zhang, J.; Yang, S.; Yang, W.; Han, X.; Liu, Y. Automatic diagnosis and grading of Prostate Cancer with weakly supervised learning on whole slide images. Comput. Biol. Med. 2023, 152, 106340. [Google Scholar] [CrossRef]
- Kong, F.; Wang, X.; Xiang, J.; Yang, S.; Wang, X.; Yue, M.; Zhang, J.; Zhao, J.; Han, X.; Dong, Y.; et al. Federated attention consistent learning models for prostate cancer diagnosis and Gleason grading. Comput. Struct. Biotechnol. J. 2024, 23, 1439–1449. [Google Scholar] [CrossRef]
- Yang, Z.; Wang, X.; Xiang, J.; Zhang, J.; Yang, S.; Wang, X.; Yang, W.; Li, Z.; Han, X.; Liu, Y. The devil is in the details: A small-lesion sensitive weakly supervised learning framework for prostate cancer detection and grading. Virchows Arch. 2023, 482, 525–538. [Google Scholar] [CrossRef] [PubMed]
- Harder, C.; Pryalukhin, A.; Quaas, A.; Eich, M.-L.; Tretiakova, M.; Klein, S.; Seper, A.; Heidenreich, A.; Netto, G.J.; Hulla, W.; et al. Enhancing Prostate Cancer Diagnosis: Artificial intelligence-Driven Virtual Biopsy for Optimal Magnetic Resonance Imaging-Targeted Biopsy Approach and Gleason Grading Strategy. Mod. Pathol. 2024, 37, 100564. [Google Scholar] [CrossRef]
- Du, X.; Hao, S.; Olsson, H.; Kartasalo, K.; Mulliqi, N.; Rai, B.; Menges, D.; Heintz, E.; Egevad, L.; Eklund, M.; et al. Effectiveness and Cost-effectiveness of Artificial Intelligence-assisted Pathology for Prostate Cancer Diagnosis in Sweden: A Microsimulation Study. Eur. Urol. Oncol. 2025, 8, 80–86. [Google Scholar] [CrossRef]
- Alici-Karaca, D.; Akay, B. An Efficient Deep Learning Model for Prostate Cancer Diagnosis. IEEE Access 2024, 12, 150776–150792. [Google Scholar] [CrossRef]
- Kondejkar, T.; Al-Heejawi, S.M.A.; Breggia, A.; Ahmad, B.; Christman, R.; Ryan, S.T.; Amal, S. Multi-Scale Digital Pathology Patch-Level Prostate Cancer Grading Using Deep Learning: Use Case Evaluation of DiagSet Dataset. Bioengineering 2024, 11, 624. [Google Scholar] [CrossRef] [PubMed]
- Huang, X.; Wang, Q.; He, J.; Ban, C.; Zheng, H.; Chen, H.; Zhu, X. Fast Multiphoton Microscopic Imaging Joint Image Super-Resolution for Automated Gleason Grading of Prostate Cancers. J. Biophotonics 2024, 17, e202400233. [Google Scholar] [CrossRef] [PubMed]
- Mannas, M.P.; Deng, F.M.; Ion-Margineanu, A.; Jones, D.; Hoskoppal, D.; Melamed, J.; Pastore, S.; Freudiger, C.; Orringer, D.A.; Taneja, S.S. Stimulated Raman Histology Interpretation by Artificial Intelligence Provides Near-Real-Time Pathologic Feedback for Unprocessed Prostate Biopsies. J. Urol. 2024, 211, 384–391. [Google Scholar] [CrossRef]
- Gao, Y.; Vali, M. Combination of Deep and Statistical Features of the Tissue of Pathology Images to Classify and Diagnose the Degree of Malignancy of Prostate Cancer. J. Imaging Inform. Med. 2025, 38, 2241–2259. [Google Scholar] [CrossRef]
- Ji, X.; Salmon, R.; Mulliqi, N.; Khan, U.; Wang, Y.; Blilie, A.; Olsson, H.; Pedersen, B.G.; Sorensen, K.D.; Ulhoi, B.P.; et al. Physical Color Calibration of Digital Pathology Scanners for Robust Artificial Intelligence-Assisted Cancer Diagnosis. Mod. Pathol. 2025, 38, 100715. [Google Scholar] [CrossRef]
- Johnson, H.; Guo, J.; Zhang, X.; Zhang, H.; Simoulis, A.; Wu, A.H.B.; Xia, T.; Li, F.; Tan, W.; Johnson, A.; et al. Development and validation of a 25-Gene Panel urine test for prostate cancer diagnosis and potential treatment follow-up. BMC Med. 2020, 18, 376. [Google Scholar] [CrossRef]
- Ramirez-Mena, A.; Andres-Leon, E.; Alvarez-Cubero, M.J.; Anguita-Ruiz, A.; Martinez-Gonzalez, L.J.; Alcala-Fdez, J. Explainable artificial intelligence to predict and identify prostate cancer tissue by gene expression. Comput. Methods Programs Biomed. 2023, 240, 107719. [Google Scholar] [CrossRef]
- Kim, H.; Park, S.; Jeong, I.G.; Song, S.H.; Jeong, Y.; Kim, C.S.; Lee, K.H. Noninvasive Precision Screening of Prostate Cancer by Urinary Multimarker Sensor and Artificial Intelligence Analysis. ACS Nano 2021, 15, 4054–4065. [Google Scholar] [CrossRef]
- Cani, A.K.; Hu, K.; Liu, C.J.; Siddiqui, J.; Zheng, Y.; Han, S.; Nallandhighal, S.; Hovelson, D.H.; Xiao, L.; Pham, T.; et al. Development of a Whole-urine, Multiplexed, Next-generation RNA-sequencing Assay for Early Detection of Aggressive Prostate Cancer. Eur. Urol. Oncol. 2022, 5, 430–439. [Google Scholar] [CrossRef]
- Shi, F.; Qi, Y.; Jiang, S.; Sun, N.; Deng, C. Hollow Core-Shell Metal Oxide Heterojunctions for the Urinary Metabolic Fingerprint-Based Noninvasive Diagnostic Strategy. Anal. Chem. 2023, 95, 7312–7319. [Google Scholar] [CrossRef] [PubMed]
- Smelik, M.; Diaz-Roncero Gonzalez, D.; An, X.; Heer, R.; Henningsohn, L.; Li, X.; Wang, H.; Zhao, Y.; Benson, M. Combining Spatial Transcriptomics, Pseudotime, and Machine Learning Enables Discovery of Biomarkers for Prostate Cancer. Cancer Res. 2025, 85, 2514–2526. [Google Scholar] [CrossRef]
- Zhong, X.; Yang, Y.; He, H.; Xiong, Y.; Zhong, M.; Wang, S.; Xia, Q. Integrating multi-cohort machine learning and clinical sample validation to explore peripheral blood mRNA diagnostic biomarkers for prostate cancer. Cancer Cell Int. 2025, 25, 158. [Google Scholar] [CrossRef]
- Dai, Y.; Wang, Y.; Cao, Y.; Yu, P.; Zhang, L.; Liu, Z.; Ping, Y.; Wang, D.; Zhang, G.; Sang, Y.; et al. A Multivariate Diagnostic Model Based on Urinary EpCAM-CD9-Positive Extracellular Vesicles for Prostate Cancer Diagnosis. Front. Oncol. 2021, 11, 777684. [Google Scholar] [CrossRef] [PubMed]
- Paproski, R.J.; Pink, D.; Sosnowski, D.L.; Vasquez, C.; Lewis, J.D. Building predictive disease models using extracellular vesicle microscale flow cytometry and machine learning. Mol. Oncol. 2023, 17, 407–421. [Google Scholar] [CrossRef] [PubMed]
- Fei, X.; Du, X.; Wang, J.; Liu, J.; Gong, Y.; Zhao, Z.; Cao, Z.; Fu, Q.; Zhu, Y.; Dong, L.; et al. Precise diagnosis and risk stratification of prostate cancer by comprehensive serum metabolic fingerprints: A prediction model study. Int. J. Surg. 2024, 110, 1450–1462. [Google Scholar] [CrossRef] [PubMed]
- Chen, S.; Zhang, H.; Yang, X.; Shao, X.; Li, T.; Chen, N.; Chen, Z.; Xue, W.; Pan, J.; Liu, S. Raman Spectroscopy Reveals Abnormal Changes in the Urine Composition of Prostate Cancer: An Application of an Intelligent Diagnostic Model with a Deep Learning Algorithm. Adv. Intell. Syst. 2021, 3, e2000090. [Google Scholar] [CrossRef]
- Marvaso, G.; Isaksson, L.J.; Zaffaroni, M.; Vincini, M.G.; Summers, P.E.; Pepa, M.; Corrao, G.; Mazzola, G.C.; Rotondi, M.; Mastroleo, F.; et al. Can we predict pathology without surgery? Weighing the added value of multiparametric MRI and whole prostate radiomics in integrative machine learning models. Eur. Radiol. 2024, 34, 6241–6253. [Google Scholar] [CrossRef]
- Wang, Y.; Liu, W.; Chen, Z.; Zang, Y.; Xu, L.; Dai, Z.; Zhou, Y.; Zhu, J. A noninvasive method for predicting clinically significant prostate cancer using magnetic resonance imaging combined with PRKY promoter methylation level: A machine learning study. BMC Med. Imaging 2024, 24, 60. [Google Scholar] [CrossRef]
- Zhao, W.; Hou, M.; Wang, J.; Song, D.; Niu, Y. Interpretable machine learning model for predicting clinically significant prostate cancer: Integrating intratumoral and peritumoral radiomics with clinical and metabolic features. BMC Med. Imaging 2024, 24, 353. [Google Scholar] [CrossRef]
- Zhang, H.; Ji, J.; Liu, Z.; Lu, H.; Qian, C.; Wei, C.; Chen, S.; Lu, W.; Wang, C.; Xu, H.; et al. Artificial intelligence for the diagnosis of clinically significant prostate cancer based on multimodal data: A multicenter study. BMC Med. 2023, 21, 270. [Google Scholar] [CrossRef] [PubMed]
- Peng, Z.; Wang, Y.; Wu, X.; Yang, S.; Du, X.; Xu, X.; Hu, C.; Liu, W.; Zhu, Y.; Dong, B.; et al. Identifying High Gleason Score Prostate Cancer by Prostate Fluid Metabolic Fingerprint-Based Multi-Modal Recognition. Small Methods 2024, 8, e2301684. [Google Scholar] [CrossRef] [PubMed]
- Hiremath, A.; Shiradkar, R.; Fu, P.; Mahran, A.; Rastinehad, A.R.; Tewari, A.; Tirumani, S.H.; Purysko, A.; Ponsky, L.; Madabhushi, A. An integrated nomogram combining deep learning, Prostate Imaging-Reporting and Data System (PI-RADS) scoring, and clinical variables for identification of clinically significant prostate cancer on biparametric MRI: A retrospective multicentre study. Lancet Digit. Health 2021, 3, e445–e454. [Google Scholar] [CrossRef]
- Zhang, J.; Kang, F.; Gao, J.; Jiao, J.; Quan, Z.; Ma, S.; Li, Y.; Guo, S.; Li, Z.; Jing, Y.; et al. A Prostate-Specific Membrane Antigen PET-Based Approach for Improved Diagnosis of Prostate Cancer in Gleason Grade Group 1: A Multicenter Retrospective Study. J. Nucl. Med. 2023, 64, 1750–1757. [Google Scholar] [CrossRef]
- Bao, J.; Hou, Y.; Qin, L.; Zhi, R.; Wang, X.M.; Shi, H.B.; Sun, H.Z.; Hu, C.H.; Zhang, Y.D. High-throughput precision MRI assessment with integrated stack-ensemble deep learning can enhance the preoperative prediction of prostate cancer Gleason grade. Br. J. Cancer 2023, 128, 1267–1277. [Google Scholar] [CrossRef]
- Talyshinskii, A.; Hameed, B.M.Z.; Ravinder, P.P.; Naik, N.; Randhawa, P.; Shah, M.; Rai, B.P.; Tokas, T.; Somani, B.K. Catalyzing Precision Medicine: Artificial Intelligence Advancements in Prostate Cancer Diagnosis and Management. Cancers 2024, 16, 1809. [Google Scholar] [CrossRef]
- Chiarelli, G.; Stephens, A.; Finati, M.; Cirulli, G.O.; Beatrici, E.; Filipas, D.K.; Arora, S.; Tinsley, S.; Bhandari, M.; Carrieri, G.; et al. Adequacy of prostate cancer prevention and screening recommendations provided by an artificial intelligence-powered large language model. Int. Urol. Nephrol. 2024, 56, 2589–2595. [Google Scholar] [CrossRef] [PubMed]
- Shiri, I.; Salimi, Y.; Maghsudi, M.; Jenabi, E.; Harsini, S.; Razeghi, B.; Mostafaei, S.; Hajianfar, G.; Sanaat, A.; Jafari, E.; et al. Differential privacy preserved federated transfer learning for multi-institutional (68)Ga-PET image artefact detection and disentanglement. Eur. J. Nucl. Med. Mol. Imaging 2023, 51, 40–53. [Google Scholar] [CrossRef] [PubMed]
- Choi, J.Y.; Park, S.; Shim, J.S.; Park, H.J.; Kuh, S.U.; Jeong, Y.; Park, M.G.; Il Noh, T.; Yoon, S.G.; Park, Y.M.; et al. Explainable artificial intelligence-driven prostate cancer screening using exosomal multi-marker based dual-gate FET biosensor. Biosens. Bioelectron. 2025, 267, 116773. [Google Scholar] [CrossRef]
- World Health Organization. Ethics and Governance of Artificial Intelligence for Health: Guidance on Large Multi-Modal Models; World Health Organization: Geneva, Switzerland, 2024.
- Ren, C.; Chen, X.; Hao, X.; Wu, C.; Xie, L.; Liu, X. Integrated machine learning algorithms reveal a bone metastasis-related signature of circulating tumor cells in prostate cancer. Sci. Data 2024, 11, 701. [Google Scholar] [CrossRef]
- Huang, R.H.; Ge, Z.L.; Xu, G.; Zeng, Q.M.; Jiang, B.; Xiao, G.C.; Xia, W.; Wu, Y.T.; Liao, Y.F. Prognosis and diagnosis of prostate cancer based on hypergraph regularization sparse least partial squares regression algorithm. Aging 2024, 16, 9599–9624. [Google Scholar] [CrossRef] [PubMed]
- Sekhoacha, M.; Riet, K.; Motloung, P.; Gumenku, L.; Adegoke, A.; Mashele, S. Prostate Cancer Review: Genetics, Diagnosis, Treatment Options, and Alternative Approaches. Molecules 2022, 27, 5730. [Google Scholar] [CrossRef]
- Koller, D.; Beam, A.; Manrai, A.; Ashley, E.; Liu, X.; Gichoya, J.; Holmes, C.; Zou, J.; Dagan, N.; Wong, T.Y.; et al. Why We Support and Encourage the Use of Large Language Models in NEJM AI Submissions. NEJM AI 2024, 1, AIe2300128M. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Zhu, C.-Y.; Qu, R.; Dai, Y.; Yang, L. Current Applications and Future Directions of Artificial Intelligence in Prostate Cancer Diagnosis: A Narrative Review. Curr. Oncol. 2026, 33, 166. https://doi.org/10.3390/curroncol33030166

