Systematic Review

The Integration of Artificial Intelligence into Robotic Cancer Surgery: A Systematic Review

by Agnieszka Leszczyńska 1,2, Rafał Obuchowicz 3, Michał Strzelecki 4,* and Michał Seweryn 1,2

1 EconMed Europe, Młyńska 9/4, 31-469 Krakow, Poland
2 Faculty of Medicine, Andrzej Frycz Modrzewski Krakow University, Gustawa Herlinga-Grudzińskiego 1, 30-705 Krakow, Poland
3 Lux Med Ltd., 02-678 Warsaw, Poland
4 Institute of Electronics, Lodz University of Technology, 93-590 Lodz, Poland
* Author to whom correspondence should be addressed.
J. Clin. Med. 2025, 14(17), 6181; https://doi.org/10.3390/jcm14176181
Submission received: 18 July 2025 / Revised: 26 August 2025 / Accepted: 27 August 2025 / Published: 1 September 2025

Abstract

Background/Objectives: This systematic review aims to synthesize recent studies on the integration of artificial intelligence (AI) into robotic surgery for oncological patients. It focuses on studies using real patient data and AI tools in robotic oncologic surgery. Methods: This systematic review followed PRISMA guidelines to ensure a robust methodology. A comprehensive search was conducted in June 2025 across Embase, Medline, Web of Science, medRxiv, Google Scholar, and IEEE databases, using MeSH terms, relevant keywords, and Boolean logic. Eligible studies were original research articles published in English between 2024 and 2025, focusing on AI applications in robotic cancer surgery using real patient data. Studies were excluded if they were non-peer-reviewed, used synthetic/preclinical data, addressed non-oncologic indications, or explored non-robotic AI applications. This approach ensured the selection of studies with practical clinical relevance. Results: The search identified 989 articles, of which 17 duplicates were removed. After screening, 921 were excluded, and 37 others were eliminated for reasons such as misalignment with the inclusion criteria or lack of full text. Ultimately, 14 articles were included: 8 used a retrospective design and 6 were based on prospective data. The included articles varied significantly in the number of participants, ranging from 17 to 26,546. These studies explored the application of AI across various stages of robotic oncologic surgery, including preoperative planning, intraoperative support, and postoperative predictions. Eleven of the included studies were rated as good or very good in quality. Conclusions: AI significantly supports robotic oncologic surgery at various stages. In preoperative planning, it helps estimate the risk of conversion from minimally invasive to open colectomy in colon cancer. During surgery, AI enables precise localization of tumors and vascular structures, enhancing resection accuracy, preserving healthy tissue, and reducing warm ischemia time. Postoperatively, AI’s flexibility in predicting functional and oncological outcomes through context-specific models demonstrates its value in improving patient care. Given the relatively small number of cases analyzed, further research on the issues presented in this review is needed.

1. Introduction

Since the first iterations of the da Vinci surgical system marked the beginning of computer-assisted surgery, the field has rapidly progressed [1]. Over time, the integration of artificial intelligence (AI) into surgical workflows has intensified, from basic assistive functions to advanced models capable of supporting real-time decision making and personalized surgical strategies.
The application of smart technologies in surgery was already comprehensively addressed in 2018 in a landmark narrative review by Hashimoto et al. [2]. This early work provided a structured overview of how AI could be integrated into various stages of surgical care—ranging from preoperative planning and intraoperative guidance to postoperative monitoring.
In the following years, AI technologies have been continuously refined and expanded. A notable milestone in the research on this topic was the systematic review published in 2021 by Moglia et al. [3], which was among the first attempts to summarize the role of AI in robot-assisted surgery. However, most of the included studies were preclinical or based on simulations, focusing primarily on technical validation rather than clinical outcomes. Moglia et al.’s review did not include studies with real oncologic patients, and clinical outcomes were therefore largely missing.
The present systematic review offers an updated synthesis of recent studies on the integration of artificial intelligence in robotic surgery, with a specific focus on applications evaluated in real oncological patients. By narrowing the scope to clinically implemented AI tools, this review aims to highlight how these technologies are currently being translated into real-world surgical practice. It examines the use of AI across the perioperative continuum—including preoperative planning and risk stratification, intraoperative decision support and image guidance, as well as postoperative outcome prediction and recovery monitoring—emphasizing its role in enabling personalized oncologic surgery based on individual patient data. Another distinguishing feature of this review is its exclusive focus on studies conducted in populations of oncologic patients. Cancer surgery constitutes one of the most technically and cognitively complex areas of surgical care: beyond tumor removal, it requires achieving negative margins, performing adequate lymphadenectomy, and preserving vital anatomical and functional structures. These procedures are often carried out in anatomically complex regions and directly affect patient survival and quality of life. Integrating AI into robotic cancer surgery represents a shift from mechanical assistance to intelligent surgical support. AI applications span from preoperative imaging analysis and intraoperative structure recognition to real-time navigation, margin assessment, and outcome prediction. In oncology—where surgical precision is critical—these tools hold particular promise.
To ensure this review’s relevance and timeliness, we included only studies published in the last 18 months. Given the rapid advancements in this field—particularly the emergence of transformer-based models and multi-modal AI architectures—restricting the timeframe to recent years allowed us to capture the most clinically meaningful innovations.
Eligible publications were limited to studies applying AI tools to robotic oncologic surgery based on real patient data with demonstrable clinical potential. We excluded studies using only synthetic or preclinical datasets, non-oncologic indications, or AI applications unrelated to robotic surgery (e.g., conventional laparoscopy or training simulations). This focused approach ensured practical applicability and alignment with the review’s aim of evaluating AI’s real-world contribution to oncologic robotic surgery.
To structure the analysis and facilitate clinical interpretation, the included publications were grouped into three thematic categories: preoperative planning, intraoperative support, and postoperative predictions. This categorization reflects the natural progression of the surgical care continuum and highlights the distinct roles AI can play at each phase—from optimizing risk assessment and surgical strategy before the procedure, through real-time guidance and anatomical recognition during surgery, to forecasting outcomes and informing personalized follow-up after the operation.
While some reviews have examined AI in surgery or robotic systems, the specific intersection of AI, robotics, and oncologic surgery remains underexplored. A dedicated systematic review is therefore both timely and necessary to map current evidence, identify gaps, and inform future clinical and research directions.

2. Materials and Methods

2.1. Study Design and Search Strategy

This systematic review adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to guarantee a thorough and well-organized methodology [4], although a protocol for the review was not prepared. The search was conducted in June 2025. A literature search was performed across Embase, Medline, Web of Science, medRxiv, Google Scholar, and the Institute of Electrical and Electronics Engineers (IEEE) database to ensure comprehensive coverage of both the medical and technological aspects of the investigated topic. The search approach combined Medical Subject Headings (MeSH) terms, relevant keywords, and Boolean logic to achieve a wide yet focused selection of studies. The search strings used in each database are presented in Table A1 (Appendix A).

2.2. Eligibility Criteria and Study Selection

Studies were eligible for inclusion if they were original research articles published in English between 2024 and 2025, focused on the application of artificial intelligence to any phase of robotic cancer surgery. Only publications applying AI tools to robotic oncologic surgery based on real patient data and demonstrating clinical potential were included. Studies were excluded if they were non-peer-reviewed (e.g., editorials, letters, abstracts, or case reports), used only synthetic or preclinical datasets, addressed non-oncologic indications, or explored AI applications unrelated to robotic surgery (e.g., conventional laparoscopy or surgical training simulations). This focused approach ensured practical applicability and alignment with the review’s aim of evaluating AI’s real-world contribution to oncologic robotic surgery.
The selection of relevant studies was carried out independently and in a blinded manner by two authors (A.L. and M.Se.). Titles and abstracts were screened for relevance, followed by full-text assessments based on the predefined eligibility criteria. Discrepancies between the reviewers were resolved through discussion; where disagreement persisted, a third author was consulted to reach consensus. The study selection process is summarized in the PRISMA flow diagram, Figure 1.

2.3. Data Extraction and Assessment

The following variables were extracted and reported: study characteristics, clinical context, AI application, model evaluation, and clinical relevance. Outcomes were extracted as reported in the studies, without any predefined assumptions regarding which specific results should be collected. This review comprises studies with heterogeneous designs and methodologies, and the endpoints assessed varied substantially. Some studies focused on populations with colorectal cancer, while others examined patients with kidney, prostate, or pancreatic cancer. Additionally, some studies employed a retrospective design, while others were prospective in nature. The outcome measures also differed across studies, with some reporting metrics such as the area under the curve (AUC), while others used the Dice coefficient or Tetrafecta, among other measures. Taken together, these factors made it infeasible to perform a formal meta-analysis. Data were summarized in structured tables by one reviewer and verified by another. A formal quality assessment was conducted by both reviewers using the National Institutes of Health (NIH) criteria and the Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis–Artificial Intelligence (TRIPOD-AI) guidelines (see Table A2, Appendix A). The collected results were presented in both narrative and tabular form.
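Because these endpoints recur throughout the Results, a brief illustration may help. The snippet below is illustrative only, using synthetic data and hypothetical labels: it shows how two of the most common metrics among the included studies, the Dice coefficient for segmentation quality and the AUC for risk prediction, are computed.

```python
# Illustrative only: two of the heterogeneous endpoints reported by the
# included studies, computed on synthetic data with hypothetical labels.
import numpy as np
from sklearn.metrics import roc_auc_score

def dice_coefficient(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Overlap between a predicted and a reference segmentation mask."""
    intersection = np.logical_and(pred_mask, true_mask).sum()
    return 2.0 * intersection / (pred_mask.sum() + true_mask.sum())

# Segmentation endpoint (Dice): binary masks of an anatomical structure.
pred = np.array([[1, 1, 0], [0, 1, 0]])
true = np.array([[1, 0, 0], [0, 1, 1]])
print(f"Dice: {dice_coefficient(pred, true):.2f}")  # 0.67

# Prediction endpoint (AUC): per-patient outcome labels and model risk scores.
labels = [0, 0, 1, 1, 1]             # e.g., conversion to open surgery yes/no
scores = [0.1, 0.4, 0.35, 0.8, 0.9]  # hypothetical model-estimated risk
print(f"AUC: {roc_auc_score(labels, scores):.2f}")
```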

3. Results

The initial search of the databases identified 989 articles, of which 17 duplicates were removed. Following a review of titles and abstracts, 921 articles were excluded as they did not meet the inclusion criteria. Of the remaining articles, 51 were deemed potentially relevant and were assessed further; however, 37 of them, as well as 2 additional records identified during the evaluation, were ultimately excluded for reasons such as a study objective not aligned with our inclusion criteria [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23], an intervention different from the one analyzed [24,25,26,27,28,29,30,31,32,33,34], the study not involving human subjects [35,36,37,38], or unavailability of the full text to our reviewers [39,40,41,42,43]. Ultimately, 14 articles were selected for inclusion in the review [44,45,46,47,48,49,50,51,52,53,54,55,56,57]. Eight studies employed a retrospective design, while six reported data from prospective enrolment. The included studies varied significantly in the number of participants, ranging from 17 in Chen et al. [53] to 26,546 in Emile et al. [44] (Table 1).

3.1. Preoperative Planning

The five analyzed studies demonstrate the growing integration of various types of artificial intelligence into preoperative planning in robotic surgery, each using distinct AI approaches to enhance decision making and outcome prediction. Emile et al. [44] developed a logistic regression-based predictive model using the National Cancer Database to estimate the risk of conversion from minimally invasive to open colectomy in colon cancer. Their analysis identified key clinical and tumor-related predictors and demonstrated that robotic surgery is independently associated with a significantly lower risk of conversion compared with laparoscopy. Huang et al. [45] utilized AI-based analysis of three-dimensional computed tomography (CT) reconstructions of renal tumors to enhance anatomical characterization and improve prediction of Tetrafecta outcomes in robotic-assisted partial nephrectomy, achieving a higher AUC of 0.854 compared with the 0.755 obtained with the traditional two-dimensional SPARE scoring system (based on preoperative aspects and dimensions used for anatomical classification). Lu et al. [46] examined how pelvic and prostate anatomical dimensions affect the surgical difficulty of robot-assisted radical prostatectomy by applying an eXtreme Gradient Boosting (XGBoost) model to magnetic resonance imaging (MRI)-derived metrics. The model outperformed logistic regression in predicting prolonged operative time and increased blood loss. Mei et al. [47] developed a convolutional neural network (CNN) trained directly on pelvic MRI images to predict intraoperative difficulty in radical prostatectomy, achieving high predictive performance (AUC ~0.85) and outperforming traditional morphometric models based on manually extracted anatomical features. Finally, Saikali et al. [48] developed artificial neural network models using preoperative clinical and functional variables from over 8500 patients treated at a high-volume prostate cancer referral center, aiming to predict recovery of urinary continence (AUC 0.68) and erectile function (AUC 0.74) one year after robotic-assisted radical prostatectomy.
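As a concrete illustration of the structured-data approach taken by Lu et al. [46], the sketch below trains a gradient-boosted classifier on synthetic stand-ins for MRI-derived pelvimetry features. All feature names, data, and hyperparameters are hypothetical; this is not a reproduction of the published model.

```python
# Minimal sketch of an XGBoost classifier for a binary surgical-difficulty
# outcome (e.g., prolonged operative time). Data and features are synthetic.
import numpy as np
from xgboost import XGBClassifier  # assumes the xgboost package is installed
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 219                      # cohort size reported by Lu et al.; data here are synthetic
X = rng.normal(size=(n, 4))  # hypothetical pelvimetry features (e.g., inlet width,
                             # outlet width, prostate volume, pelvic depth)
y = (X[:, 0] + X[:, 3] + rng.normal(scale=0.5, size=n) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.05,
                      eval_metric="logloss")
model.fit(X_tr, y_tr)
print(f"Held-out AUC: {roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]):.2f}")
```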

3.2. Intraoperative Support

A synthesis of seven recent studies demonstrates how various AI-based solutions contribute to increasing the precision and safety of surgical procedures. It is important to note that these studies vary in size. For instance, some, such as those by Amparore et al. [49] and Chen et al. [53], are early feasibility studies with relatively small sample sizes (20 and 17 patients, respectively). While these pilot studies provide valuable insights into the potential of AI in intraoperative support, their results should be interpreted with caution, and further validation in larger populations is needed. In contrast, studies like Bannone et al.’s [51] EX-MACHYNA trial, which includes 169 patients, offer more robust evidence, demonstrating the broader applicability and effectiveness of AI-driven surgical tools across diverse patient groups.
Amparore et al. [49] and Shi et al. [50] investigated the integration of AI with augmented reality (AR) in robot-assisted partial nephrectomy. Both teams developed AI-enhanced systems combining computer vision and machine learning to automatically align preoperative three-dimensional (3D) anatomical models with intraoperative imaging. This enabled accurate localization of tumors and vascular structures, thereby assisting surgeons in performing more precise resections, improving healthy tissue preservation, and reducing warm ischemia time. These findings highlight the value of AI in providing spatial navigation support and reducing cognitive load during complex minimally invasive procedures.
Another promising application of AI in robotic cancer surgery is intraoperative tissue recognition. In the EX-MACHYNA trial, Bannone et al. [51] combined hyperspectral imaging with deep learning to develop an AI-driven system capable of distinguishing malignant from benign tissues during surgery. This approach, termed “surgical optomics,” enabled real-time automatic tissue classification with high diagnostic accuracy and holds potential for supporting intraoperative margin control in oncologic procedures. Similarly, Mannas et al. [52] integrated stimulated Raman histology (SRH) with AI to assess surgical margins during radical prostatectomy. Their system delivered near real-time feedback on residual cancer presence, demonstrating diagnostic performance comparable to standard histopathology and significantly reducing the time between tissue excision and clinical decision making.
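At its core, such tissue recognition reduces to per-pixel classification of spectral signatures. The sketch below is a deliberately simplified stand-in: the published systems use deep CNNs over hyperspectral cubes, whereas here a small multilayer perceptron classifies synthetic spectra merely to show the per-pixel formulation; every array and label is hypothetical.

```python
# Per-pixel tissue classification over a synthetic hyperspectral cube.
# Real systems use deep CNNs; an MLP is used here only to show the idea.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
cube = rng.normal(size=(64, 64, 100))       # synthetic cube: height x width x spectral bands
labels = rng.integers(0, 3, size=(64, 64))  # synthetic per-pixel tissue classes (3 classes)

X = cube.reshape(-1, 100)                   # one spectrum per pixel
y = labels.reshape(-1)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=200).fit(X, y)
pred_map = clf.predict(X).reshape(64, 64)   # per-pixel tissue class map for overlay
```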
Artificial intelligence is also increasingly applied to real-time anatomical structure segmentation during robotic surgery. Chen et al. [53] developed a CNN to identify and delineate the ureters in robot-assisted radical cystectomy. Their fluorescence-like navigation system provided continuous visual feedback, helping reduce the risk of iatrogenic injury. Similarly, Nakamura et al. [54] applied a semantic segmentation model to accurately highlight the pancreas during robot-assisted gastrectomy—supporting surgeons in avoiding unintended damage to this anatomically challenging and poorly visualized organ. These applications demonstrate how AI-based segmentation can enhance intraoperative safety and improve the precision of complex surgical procedures.
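The “fluorescence-like” feedback described by Chen et al. [53] ultimately amounts to blending a predicted binary mask onto each video frame. A minimal sketch of that overlay step is shown below; the mask is synthetic and stands in for the per-frame output of a trained segmentation network.

```python
# Overlaying a (synthetic) segmentation mask on a video frame, mimicking
# fluorescence-like highlighting. A real pipeline would run a segmentation
# network on each endoscopic frame to produce the mask.
import numpy as np
import cv2  # assumes opencv-python is installed

frame = np.full((480, 640, 3), 120, dtype=np.uint8)  # stand-in endoscopic frame
mask = np.zeros((480, 640), dtype=np.uint8)
cv2.ellipse(mask, (320, 240), (40, 160), 15, 0, 360, 255, -1)  # pretend ureter

highlight = np.zeros_like(frame)
highlight[:, :, 1] = mask                                 # green channel where mask is set
overlay = cv2.addWeighted(frame, 1.0, highlight, 0.6, 0)  # translucent green glow
cv2.imwrite("overlay.png", overlay)
```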
Furube et al. [55] addressed the intraoperative challenge of identifying recurrent laryngeal nerves during robot-assisted minimally invasive esophagectomy. They developed and validated an AI model that analyzed endoscopic video in real time to generate visual cues for nerve localization. This system assisted surgeons in preserving nerve integrity and helped prevent complications such as vocal cord paralysis, demonstrating the value of AI in enhancing nerve safety during complex thoracic procedures.

3.3. Postoperative Predictions

The studies by Geitenbeek et al. [56] and Ghaffar et al. [57] both illustrate the clinical utility of AI in predicting postoperative outcomes after robotic cancer surgery, despite differing in surgical context, data sources, and clinical endpoints. Ghaffar et al. [57] applied computer vision techniques to intraoperative video recordings from nerve-sparing radical prostatectomy to quantify neurovascular bundle (NVB) retraction. The extracted image-based features, when integrated into machine learning models, significantly improved the prediction of erectile function recovery, with the AUC increasing from 0.78 to 0.83. In contrast, Geitenbeek et al. [56] analyzed structured clinical and pathological data from a large international multicenter cohort of patients undergoing robotic total mesorectal excision (R-TME). Using an XGBoost model, they predicted the risk of local recurrence with an AUC of 0.76, further enhanced by explainable AI techniques: SHapley Additive exPlanations (SHAP) and local interpretable model-agnostic explanations (LIME), which identified key predictors such as metastasis stage, margin status, and postoperative complications. Although both studies were retrospective and lacked external validation, they demonstrate the flexibility of AI in addressing distinct postoperative outcomes—functional and oncological—through context-specific modeling approaches. Taken together, these studies highlight how AI can complementarily leverage different types of data—structured clinical/pathological information and intraoperative video—to provide both clinical and procedural insights in robotic oncologic surgery.
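To make the explainability step reported by Geitenbeek et al. [56] concrete, the sketch below computes SHAP values for a gradient-boosted classifier trained on synthetic data. The feature names are hypothetical stand-ins for predictors of the kind the study identified, and the numbers carry no clinical meaning.

```python
# SHAP feature attribution over a synthetic recurrence model. Feature names
# are hypothetical; the data are random and carry no clinical meaning.
import numpy as np
import shap  # assumes the shap package is installed
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
feature_names = ["metastasis_stage", "margin_status", "postop_complication", "age"]
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.7, size=500) > 1.0).astype(int)

model = XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one attribution per sample and feature

# Mean absolute SHAP value per feature approximates global importance.
for name, imp in zip(feature_names, np.abs(shap_values).mean(axis=0)):
    print(f"{name}: {imp:.3f}")
```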

4. Discussion

The present review builds upon the thematic foundations established by the systematic review of Moglia et al. [3], titled “A systematic review on artificial intelligence in robot-assisted surgery”, which was among the first efforts to comprehensively summarize the role of AI in this rapidly evolving surgical field. Both reviews highlight the increasing integration of AI technologies—particularly computer vision, machine learning, and deep learning—into robot-assisted surgical workflows and assess how these tools may enhance surgical performance, improve clinical outcomes, and support decision making. However, revisiting this topic was both necessary and timely for several reasons. First, the field has advanced considerably since the publication of the earlier review, with a marked increase in studies reporting clinical applications and translational outcomes. Second, while Moglia et al. laid a valuable foundation, their review primarily focused on preclinical, technical, and simulation-based studies. Many of the included investigations addressed tool tracking, surgical gesture recognition, or system validation in artificial or non-human environments. In contrast, the current review focuses exclusively on the integration of AI in robotic cancer surgery and includes only studies conducted with real patients.
It should be noted that the evidence-based advantages of many other robotic procedures in surgical oncology remain unclear [58]. However, the field is advancing rapidly, with ongoing research continuing to push the boundaries of clinical applicability. For instance, Quero et al. [59] provided a comprehensive 2022 overview of computer-vision and AI applications in colorectal cancer surgery, such as automatic recognition of surgical phases and guidance during complex resections, offering an early survey of potential intraoperative AI tools. More recently, Chen et al. [53] presented a more advanced stage of development—demonstrating an early yet clinically grounded implementation of this approach using data from real patients undergoing radical cystectomy.
The primary aim of this review was to synthesize available evidence on how AI is applied in clinical oncological settings and to present findings that reflect actual patient outcomes. By narrowing the scope to real-world oncological procedures, this review seeks to bridge the gap between experimental validation and clinical relevance—thereby providing insights directly applicable to surgical practice and future research directions.
The studies analyzed in the preoperative planning section highlight the growing importance of AI in robotic surgery, with applications including structured data modeling, AI-enhanced 3D imaging, and deep learning. Traditional machine learning algorithms have shown strong performance in analyzing structured clinical data, whereas deep learning—especially convolutional neural networks (CNNs)—has demonstrated superiority in processing medical images and predicting procedural complexity. AI-driven 3D imaging tools also facilitate enhanced anatomical visualization and support the development of personalized surgical strategies. Across the reviewed studies, predictive performance ranged from moderate to high (AUCs 0.68–0.85), underlining AI’s potential to optimize surgical planning, improve risk stratification, and enable patient-centered decision making.
Intraoperative support is one of the most promising areas for AI integration in robotic surgery. The included studies describe diverse AI applications—from segmentation of anatomical structures and fluorescence-like navigation to tissue classification and nerve identification—all aimed at enhancing real-time intraoperative decision making. These tools primarily rely on supervised deep learning models trained on large, annotated datasets and have shown promising results in terms of accuracy, sensitivity, and specificity. While most are still in the early phases of clinical validation, their integration into real-time surgical workflows suggests growing feasibility and utility in improving intraoperative awareness, supporting key maneuvers, reducing complication risk, and ultimately improving patient outcomes.
Finally, the studies by Ghaffar et al. [57] and Geitenbeek et al. [56] underscore AI’s growing role in postoperative outcome prediction following robotic oncologic surgery. By using diverse data sources—intraoperative video in one case and structured clinical data in the other—these studies illustrate how tailored AI models may support personalized risk stratification for both functional recovery and oncologic recurrence.
Despite the promising results obtained in this systematic review, several limitations should be considered. The heterogeneity of the study designs and interventions presented in the analyzed studies made it difficult to draw definitive conclusions regarding the optimal use of AI in specific surgical procedures. The included studies were based on diverse oncology patient populations, most of which were small, and there are still relatively few studies employing real patient data and AI tools in robotic oncologic surgery. Many studies, such as those by Amparore et al. [49], Shi et al. [50], and Chen et al. [53], lacked external validation, which limits the generalizability of their findings. Additionally, a number of studies did not make their code or models publicly available (Table A2), raising concerns about reproducibility and transparency—critical aspects in AI research. Another key challenge is the “black-box” nature of many CNN-based models, where the decision-making process is not easily interpretable. This lack of interpretability can pose a barrier to clinical acceptance, as clinicians may be hesitant to rely on AI predictions without understanding the underlying rationale. Overall, while the findings gathered in this review provide useful insights and serve as a valuable reference point, they do not constitute definitive evidence of the benefits of specific AI applications in surgical practice. Further validation in larger multicenter prospective cohorts, open-access models, and the integration of explainable AI techniques remains necessary to ensure that AI-driven surgical tools can be trusted and safely implemented in clinical practice.

5. Conclusions

New medical technologies represent a cornerstone of the future of medicine. Robotic surgery has established itself as an integral part of oncological care and is increasingly recognized as a clinical standard for the treatment of various cancers. The integration of AI into robotic surgery holds the promise of further enhancing the efficiency, precision, and overall quality of surgical care.
AI supports surgeons at multiple stages of the clinical pathway: from preoperative planning, through real-time intraoperative assistance, to the prediction of postoperative outcomes. Evidence suggests that its use can improve surgical precision, reduce the risk of complications, and ultimately contribute to better patient outcomes in robotic cancer surgery.
Looking ahead, AI’s potential in robotic oncologic surgery is vast. Emerging trends such as multimodal AI, which integrates imaging, clinical, and genomic data, hold promise for more personalized surgical planning and risk prediction. Future research should focus on optimizing AI algorithms for greater predictive accuracy and treatment personalization, as well as evaluating their effectiveness and efficiency in clinical practice. Integration with emerging technologies, such as augmented reality and advanced robotics, could further revolutionize minimally invasive procedures. International collaborative datasets may help overcome current limitations of small single-center studies, enhancing model generalizability. Finally, ethical and legal considerations remain crucial: the extent to which intraoperative decision making can be safely delegated to AI, and how responsibility is shared between clinicians and AI systems, will require careful evaluation. Addressing these aspects will be key to ensuring safe, effective, and broadly applicable AI-driven surgical interventions in the future.

Author Contributions

A.L.: conceptualization, methodology, resources, investigation, data curation, screening, writing—original draft preparation, visualization; R.O.: conceptualization, writing—review and editing; M.S. (Michał Strzelecki): conceptualization, writing—review and editing; M.S. (Michał Seweryn): conceptualization, methodology, screening, writing—review and editing, supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

Rafał Obuchowicz, who is affiliated with Lux Med Ltd., declares no conflicts of interest.

Appendix A

Table A1. Search strings used across databases for the systematic literature review.

EMBASE: ((((((((surgical AND ‘procedure’/exp OR surgical) AND oncology OR oncologic) AND surgery OR cancer) AND surgery OR tumor) AND resection OR oncological) AND resection OR tumor) AND removal OR cancer) AND operation*) AND (‘neoplasm’/exp OR cancer OR tumor OR tumour OR malignan* OR oncology OR neoplasm*) AND ((((((‘robotics’/exp OR robot) AND assisted AND ‘surgery’/exp OR robotic) AND surgery OR ‘robot assisted’) AND surgery OR surgical) AND robot* OR robotic) AND system*) AND (((((((artificial AND ‘intelligence’/exp OR machine) AND ‘learning’/exp OR deep) AND learning OR neural) AND network* OR predictive) AND algorithm* OR artificial) AND intelligence OR machine) AND learning OR ai))

MEDLINE: ((exp Artificial Intelligence/) OR (exp Machine Learning/) OR (exp Neural Networks, Computer/) OR (artificial intelligence or AI or machine learning or deep learning or neural network* or predictive algorithm*).mp.)) AND ((exp Robotic Surgical Procedures/) OR (robotic surgery or robot-assisted surgery or surgical robot* or robotic system*).mp. OR (exp Robotics/)) AND ((exp Neoplasms/) OR (cancer or oncology or tumour or tumor or neoplasm* or malignan*).mp.) AND ((exp Surgical Procedures, Operative/) OR (oncologic surgery or surgical oncology or cancer surgery or tumor resection or oncological resection or tumor removal).mp.)

Web of Science: Refine results for TS=(“artificial intelligence” OR “machine learning” OR “deep learning” OR “neural network*” OR “predictive algorithm*” OR “AI”) AND TS=(“robotic surgery” OR “robot-assisted surgery” OR “surgical robotics” OR “robotic system*” OR “surgical robot*”) AND TS=(“cancer” OR “oncology” OR “tumor” OR “tumour” OR “neoplasm*” OR “malignan*”) AND TS=(“surgery” OR “surgical procedure” OR “oncologic surgery” OR “cancer surgery” OR “complex surgery” OR “tumor resection” OR “tumour removal”) and 2024 or 2025 (Publication Years) and Early Access or Review Article or Article (Document Types) and English (Languages)

Google Scholar: (“artificial intelligence” OR “machine learning” OR “deep learning” OR “neural networks”) AND (“robotic surgery” OR “robot-assisted surgery” OR “surgical robotics”) AND (“oncology” OR “cancer” OR “tumor” OR “tumour” OR “neoplasm”) AND (“complex surgery” OR “oncologic surgery”)

medRxiv: (“artificial intelligence” OR “machine learning” OR “deep learning”) AND (“robotic surgery” OR “robot-assisted surgery” OR “surgical robotics”) AND (“oncology” OR “cancer” OR “neoplasm”) AND (“complex surgery” OR “tumor resection” OR “oncologic surgery”)

IEEE: (“Full Text Only”:robotic surgery OR “Full Text Only”:surgical robotics) AND (“Full Text Only”:oncology OR “Full Text Only”:cancer OR “Full Text Only”:tumor) AND (“Full Text Only”:artificial intelligence OR “Full Text Only”:deep learning)
Table A2. Quality assessment of included publications.

Author (Year) | TRIPOD-AI Score * and Comment | NIH Assessment ** and Comment | Overall ***
Mannas et al. (2025) [52] | Good: very well-prepared publication, strong validation and results; no model release. | Good: well-executed with reliable data sources and outcome evaluation. | 3
Mei et al. (2025) [47] | Good: full documentation, interpretability, multicenter validation, code available. | Good: innovative approach with appropriate use of AI; needs external validation and clearer reporting. | 3
Amparore et al. (2024) [49] | Fair+: very well described methodology, detailed description of pipeline and metrics; no code or external validation. | Good: comprehensive design with clearly defined exposure and outcome measures. | 2
Bannone et al. (2024) [51] | Fair+: full architecture and metrics; no model release and limited interpretability. | Good: clear methodology with a focus on practical implementation. | 2
Chen et al. (2025) [53] | Fair+: good methodology and clinical application; no interpretability or code. | Good: real-world clinical data supports robustness of early clinical use. | 2
Emile et al. (2024) [44] | Fair+: strong clinical analysis, multicenter; however, full interpretability and model availability are lacking. | Good: well-designed with clearly defined objectives and consistent methodology; limited external validation. | 2
Furube et al. (2024) [55] | Fair+: model works intraoperatively, good methodology; no external validation or code. | Good: innovative AI use with clear patient segmentation. | 2
Geitenbeek et al. (2025) [56] | Fair+: extensive clinical analysis, good metrics; no interpretability or model release. | Good: comprehensive design with appropriate outcome tracking. | 2
Lu et al. (2024) [46] | Fair+: real-time system, but no interpretability or code, internal validation only. | Good: robust statistical methods and adequate sample size; some missing details in handling confounders. | 2
Saikali et al. (2025) [48] | Fair+: good implementation and analysis; no code or external validation. | Good: thorough methodology and strong outcome focus; reporting of model performance could be expanded. | 2
Shi et al. (2025) [50] | Fair+: good presentation of results, but no code or external validation. | Good: strong performance metrics with appropriate AI integration. | 2
Ghaffar et al. (2025) [57] | Fair: innovative topic but missing many key elements of AI reporting. | Fair: lacks confounder control and statistical depth, but relevant AI usage. | 1
Huang et al. (2024) [45] | Good: highest level of detail, multicenter validation, partially available model. | Fair: good clinical relevance but lacks transparency in reporting and has potential selection bias. | 1
Nakamura et al. (2024) [54] | Fair: no interpretability or code; limited description of the model architecture. | Good: strong methodology with robust outcome definitions. | 1

* TRIPOD-AI scale: good 15–17; fair+ 12–14.5; fair 10–11.5. ** NIH scale: good—the study fulfills most key criteria; there are no major methodological flaws, and the risk of bias is low; the results are considered reliable and valid. Fair—the study meets some criteria but has some methodological weaknesses, such as incomplete blinding, limited data reporting, or unclear control of confounding; these do not critically undermine the results. Poor—the study has major limitations, such as unclear population, lack of temporality between exposure and outcome, uncontrolled confounding, or substantial loss to follow-up; there is a high risk of bias, and the results are likely unreliable. *** 3—good and good; 2—fair+ and good; 1—good/fair and fair; 0—fair/poor and poor. NIH: National Institutes of Health; TRIPOD-AI: Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis–Artificial Intelligence.

References

1. Taylor, R.; Menciassi, A.; Fichtinger, G.; Dario, P. Medical Robotics and Computer-Integrated Surgery. In Springer Handbook of Robotics, 1st ed.; Siciliano, B., Khatib, O., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1199–1222.
2. Hashimoto, D.A.; Rosman, G.; Rus, D.; Meireles, O.R. Artificial Intelligence in Surgery: Promises and Perils. Ann. Surg. 2018, 268, 70–76.
3. Moglia, A.; Georgiou, K.; Georgiou, E.; Satava, R.M.; Cuschieri, A. A systematic review on artificial intelligence in robot-assisted surgery. Int. J. Surg. 2021, 95, 106151.
4. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71.
5. Zuluaga, L.; Bamby, J.; Okhawere, K.E.; Ucpinar, B.; Razdan, S.; Badani, K.K. Assessing operative variability in robot-assisted radical prostatectomy (RARP) through AI. J. Robot. Surg. 2025, 19, 99.
6. Truckenmueller, P.; Früh, A.; Kissner, J.F.; Moser, N.K.; Misch, M.; Faust, K.; Onken, J.; Vajkoczy, P.; Xu, R. Integration of a lightweight and table-mounted robotic alignment tool with automated patient-to-image registration using robotic cone-beam CT for intracranial biopsies and stereotactic electroencephalography. Neurosurg. Focus 2024, 57, E2.
7. Sato, K.; Takenaka, S.; Kitaguchi, D.; Zhao, X.; Yamada, A.; Ishikawa, Y.; Takeshita, N.; Takeshita, N.; Sakamoto, S.; Ichikawa, T.; et al. Objective surgical skill assessment based on automatic recognition of dissection and exposure times in robot-assisted radical prostatectomy. Langenbecks Arch. Surg. 2025, 410, 39.
8. Sharma, V.; Fadel, A.; Tollefson, M.K.; Psutka, S.P.; Blezek, D.J.; Frank, I.; Thapa, P.; Tarrell, R.; Viers, L.D.; Potretzke, A.M.; et al. Artificial intelligence-based assessment of preoperative body composition is associated with early complications after radical cystectomy. J. Urol. 2025, 213, 228–237.
9. Zhang, X.; Zhang, Y.; Yang, J.; Du, H. A prostate seed implantation robot system based on human-computer interactions: Augmented reality and voice control. Math. Biosci. Eng. 2024, 21, 5947–5971.
10. Wang, H.; Shen, B.; Jia, P.; Li, H.; Bai, X.; Li, Y.; Xu, K.; Hu, P.; Xia, X.; Fang, Y.; et al. Guiding post-pancreaticoduodenectomy interventions for pancreatic cancer patients utilizing decision tree models. Front. Oncol. 2024, 14, 139929714.
11. Shimodaira, K.; Inoue, R.; Hashimoto, T.; Satake, N.; Shishido, T.; Namiki, K.; Harada, K.; Nagao, T.; Ohno, Y. Significance of the cribriform morphology area ratio for biochemical recurrence in Gleason score 4 + 4 prostate cancer patients following robot-assisted radical prostatectomy. Cancer Med. 2024, 13, e7086.
12. Yamada, Y.; Fujii, Y.; Kakutani, S.; Kimura, N.; Sugimoto, K.; Hakozaki, Y.; Sugihara, T.; Takeshima, Y.; Kawai, T.; Nakamura, M.; et al. Development of risk-score model in patients with negative surgical margin after robot-assisted radical prostatectomy. Sci. Rep. 2024, 14, 7607.
13. Lee, J.; Ham, S.; Kim, N.; Park, H.S. Development of a deep learning-based model for guiding a dissection during robotic breast surgery. Breast Cancer Res. 2025, 27, 34.
14. Lin, Y.; Wang, J.; Liu, Q.; Zhang, K.; Liu, M.; Wang, Y. CFANet: Context fusing attentional network for preoperative CT image segmentation in robotic surgery. Comput. Biol. Med. 2024, 171, 108115.
15. Pak, S.; Park, S.G.; Park, J.; Choi, H.R.; Lee, J.H.; Lee, W.; Cho, S.T.; Lee, Y.G.; Ahn, H. Application of deep learning for semantic segmentation in robotic prostatectomy: Comparison of convolutional neural networks and visual transformers. Investig. Clin. Urol. 2024, 65, 551–558.
16. Sinha, R.; Rallabandi, H.; Bana, R.; Bag, M.; Raina, R.; Sridhar, D.; Deepika, H.K.; Reddy, P. Ovarian loss in laparoscopic and robotic cystectomy compared using artificial intelligence pathology. JSLS 2024, 28, e2024.00001.
17. Younis, R.; Yamlahi, A.; Bodenstedt, S.; Scheikl, P.M.; Kisilenko, A.; Daum, M.; Schulze, A.; Wise, P.A.; Nickel, F.; Mathis-Ullrich, F.; et al. A surgical activity model of laparoscopic cholecystectomy for co-operation with collaborative robots. Surg. Endosc. 2024, 38, 4316–4328.
18. Albo, G.; Gallioli, A.; Ripa, F.; De Lorenzis, E.; Boeri, L.; Bebi, C.; Rocchini, L.; Longo, F.; Zanetti, S.P.; Turetti, M.; et al. Extended pelvic lymph node dissection during robotic prostatectomy: Antegrade versus retrograde technique. BMC Urol. 2024, 24, 64; Erratum in BMC Urol. 2024, 24, 86. https://doi.org/10.1186/s12894-024-01477-w.
19. Angerer, M.; Wülfing, C.; Dieckmann, K.P. Robotic retroperitoneal lymph node dissection for testicular cancer—First experience and learning curve of a single surgeon. Cancers 2025, 17, 1476.
20. Zhang, W.; Yu, J.; Yu, X.; Zhang, Y.; Men, Z. Study on Bionic Design and Tissue Manipulation of Breast Interventional Robot. Sensors 2024, 24, 6408.
21. Hölgyesi, Á.; Zrubka, Z.; Gulácsi, L.; Baji, P.; Haidegger, T.; Kozlovszky, M.; Weszl, M.; Kovács, L.; Péntek, M. Robot-assisted surgery and artificial intelligence-based tumour diagnostics: Social preferences with a representative cross-sectional survey. BMC Med. Inform. Decis. Mak. 2024, 24, 87.
22. Klontzas, M.E.; Ri, M.; Koltsakis, E.; Stenqvist, E.; Kalarakis, G.; Boström, E.; Kechagias, A.; Schizas, D.; Rouvelas, I.; Tzortzakakis, A. Prediction of anastomotic leakage in esophageal cancer surgery: A multimodal machine learning model integrating imaging and clinical data. Acad. Radiol. 2024, 31, 4878–4885.
23. Anania, G.; Chiozza, M.; Pedarzani, E.; Resta, G.; Campagnaro, A.; Pedon, S.; Valpiani, G.; Silecchia, G.; Mascagni, P.; Cuccurullo, D.; et al. Predicting postoperative length of stay in patients undergoing laparoscopic right hemicolectomy for colon cancer: A machine learning approach using SICE (Società Italiana di Chirurgia Endoscopica) CoDIG data. Cancers 2024, 16, 2857.
24. Antonella, C.; Discenza, A.; Rauseo, M.; Matella, M.; Caggianelli, G.; Ciaramelletti, R.; Mirabella, L.; Cinnella, G. Intraoperative hypotension during robotic-assisted radical prostatectomy: A randomised controlled trial comparing standard goal-directed fluid therapy with hypotension prediction index-guided goal-directed fluid therapy. Eur. J. Anaesthesiol. 2025, Epub ahead of print.
25. Flammia, R.S.; Anceschi, U.; Tuderti, G.; Di Maida, F.; Grosso, A.A.; Lambertini, L.; Mari, A.; Mastroianni, R.; Bove, A.; Capitanio, U.; et al. Development and internal validation of a nomogram predicting 3-year chronic kidney disease upstaging following robot-assisted partial nephrectomy. Int. Urol. Nephrol. 2024, 56, 913–921.
26. Hagedorn, C.; Dornhöfer, N.; Aktas, B.; Weydandt, L.; Lia, M. Risk factors for surgical wound infection and fascial dehiscence after open gynecologic oncologic surgery: A retrospective cohort study. Cancers 2024, 16, 4157.
27. Chung, J.H.; Song, W.; Kang, M.; Sung, H.H.; Jeon, H.G.; Jeong, B.C.; Jeon, S.S.; Lee, H.M.; Seo, S.I. Risk factors of recurrence after robot-assisted laparoscopic partial nephrectomy for solitary localized renal cell carcinoma. Sci. Rep. 2024, 14, 4481.
28. Pires, R.D.S.; Pereira, C.W.A.; Favorito, L.A. Is the learning curve of the urology resident for conventional radical prostatectomy similar to that of staff initiating robot-assisted radical prostatectomy? Int. Braz. J. Urol. 2024, 50, 335–345.
29. Pavone, M.; Baby, B.; Carles, E.; Innocenzi, C.; Baroni, A.; Arboit, L.; Murali, A.; Rosati, A.; Iacobelli, V.; Fagotti, A.; et al. Critical view of safety assessment in sentinel node dissection for endometrial and cervical cancer: Artificial intelligence to enhance surgical safety and lymph node detection (LYSE study). Int. J. Gynecol. Cancer 2025, 35, 101789.
30. El Mohady, B.; Larmure, O.; Zeroual, A.; Elgorban, A.M.; El Idrissi, M.; Alfagham, A.T.; Syed, A.; Lemelle, J.-L.; Lienard, J. The Advancing Frontier: Robotic-Assisted Laparoscopy in Pediatric Tumor Management. Indian J. Surg. Oncol. 2025.
31. Faulkner, J.; Arora, A.; McCulloch, P.; Robertson, S.; Rovira, A.; Ourselin, S.; Jeannon, J.P. Prospective development study of the Versius Surgical System for use in transoral robotic surgery: An IDEAL stage 1/2a first in human and initial case series experience. Eur. Arch. Otorhinolaryngol. 2024, 281, 2667–2678.
32. Goldstone, R.N.; Francone, T.; Milky, G.; Shih, I.F.; Bossie, H.; Li, Y.; Ricciardi, R. Outcomes comparison of robotic-assisted versus laparoscopic and open surgery for patients undergoing rectal cancer resection with concurrent stoma creation. Surg. Endosc. 2024, 38, 4550–4558.
33. Kim, J.K.; Lee, C.R.; Kang, S.W.; Jeong, J.J.; Nam, K.H.; Chung, W.Y. Expansion of thyroid surgical territory through 10,000 cases under the da Vinci robotic knife. Sci. Rep. 2024, 14, 7555.
34. Kohjimoto, Y.; Yamashita, S.; Iwagami, S.; Muraoka, S.; Wakamiya, T.; Hara, I. hinotori™ vs. da Vinci®: Propensity score-matched analysis of surgical outcomes of robot-assisted radical prostatectomy. J. Robot. Surg. 2024, 18, 130.
35. Aguilera Saiz, L.; Groen, H.C.; Heerink, W.J.; Ruers, T.J.M. The influence of the da Vinci surgical robot on electromagnetic tracking in a clinical environment. J. Robot. Surg. 2024, 18, 54.
36. Kim, S.H.; Kwon, T.; Choi, H.S.; Kim, C.; Won, S.; Jeon, H.J.; Kim, E.S.; Keum, B.; Jeen, Y.T.; Hwang, J.H.; et al. Robot-assisted gastric endoscopic submucosal dissection significantly improves procedure time at challenging dissection locations. Surg. Endosc. 2024, 38, 2280–2287.
37. Zhao, Z.; Zhang, Y.; Lin, L.; Huang, W.; Xiao, C.; Liu, J.; Chai, G. Intelligent electromagnetic navigation system for robot-assisted intraoral osteotomy in mandibular tumor resection: A model experiment. Front. Immunol. 2024, 15, 1436276.
38. Furnari, G.; Secchi, C.; Ferraguti, F. Sequence-based imitation learning for surgical robot operations. Artif. Intell. Surg. 2025, 5, 103–115.
39. Furnari, G.; Minelli, M.; Puliatti, S.; Micali, S.; Secchi, C.; Ferraguti, F. Selective clamping for robot-assisted surgical procedures. In Proceedings of the 2024 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 15–19 July 2024; pp. 1–7.
40. Zhang, S.; Zhang, G.; Wang, M.; Guo, S.B.; Wang, F.; Li, Y.; Kadier, K.; Zhou, Z.; Zhang, P.; Chi, H.; et al. Artificial intelligence hybrid survival assessment system for robot-assisted proctectomy: A retrospective cohort study. JCO Precis. Oncol. 2024, 8, e2400089.
41. Bakker, A.F.H.A.; de Nijs, J.V.; Jaspers, T.J.M.; de With, P.H.N.; Beulens, A.J.W.; van der Poel, H.G.; van der Sommen, F.; Brinkman, W.M. Estimating surgical urethral length on intraoperative robot-assisted prostatectomy images using artificial intelligence anatomy recognition. J. Endourol. 2024, 38, 690–696.
42. Too, C.W.; Fong, K.Y.; Hang, G.; Sato, T.; Nyam, C.Q.; Leong, S.H.; Ng, K.W.; Ng, W.L.; Kawai, T. Artificial intelligence-guided segmentation and path planning software for transthoracic lung biopsy. J. Vasc. Interv. Radiol. 2024, 35, 780–789.e1.
43. Sengun, B.; Iscan, Y.; Yazici, Z.A.; Sormaz, I.C.; Aksakal, N.; Tunca, F.; Senyurek, Y.G. Utilization of artificial intelligence in minimally invasive right adrenalectomy: Recognition of anatomical landmarks with deep learning. Acta Chir. Belg. 2024, 124, 492–498.
44. Emile, S.H.; Horesh, N.; Garoufalia, Z.; Gefen, R.; Rogers, P.; Wexner, S.D. An artificial intelligence-designed predictive calculator of conversion from minimally invasive to open colectomy in colon cancer. Updates Surg. 2024, 76, 1321–1330.
45. Huang, H.; Chen, B.; Feng, C.; Chen, W.; Wu, D. Using three-dimensional virtual imaging of renal masses to improve prediction of robotic-assisted partial nephrectomy Tetrafecta with SPARE score. World J. Urol. 2024, 43, 37.
46. Lu, H.; Yu, C.; Yu, X.; Yang, D.; Yu, S.; Xia, L.; Lin, Y.; Yang, B.; Wu, Y.; Li, G. Effects of bony pelvic and prostate dimensions on surgical difficulty of robot-assisted radical prostatectomy: An original study and meta-analysis. Ann. Surg. Oncol. 2024, 31, 8405–8420.
47. Mei, H.; Wang, Z.; Zheng, Q.; Jiao, P.; Wu, J.; Liu, X.; Yang, R. Deep learning for predicting difficulty in radical prostatectomy: A novel evaluation scheme. Urology 2025, 198, 1–7.
48. Saikali, S.; Reddy, S.; Gokaraju, M.; Goldsztein, N.; Dyer, A.; Gamal, A.; Jaber, A.; Moschovas, M.; Rogers, T.; Vangala, A.; et al. Development and assessment of an AI-based machine learning model for predicting urinary continence and erectile function recovery after robotic-assisted radical prostatectomy: Insights from a prostate cancer referral center. Comput. Methods Programs Biomed. 2025, 259, 108522.
49. Amparore, D.; Sica, M.; Verri, P.; Piramide, F.; Checcucci, E.; De Cillis, S.; Porpiglia, F. Computer vision and machine-learning techniques for automatic 3D virtual images overlapping during augmented reality guided robotic partial nephrectomy. Technol. Cancer Res. Treat. 2024, 23, 9368.
50. Shi, X.; Yang, B.; Guo, F.; Zhi, C.; Xiao, G.; Zhao, L.; Wang, Y.; Zhang, W.; Xiao, C.; Wu, Z.; et al. Artificial intelligence based augmented reality navigation in minimally invasive partial nephrectomy. Urology 2025, 199, 20–26.
51. Bannone, E.; Collins, T.; Esposito, A.; Cinelli, L.; De Pastena, M.; Pessaux, P.; Felli, E.; Andreotti, E.; Okamoto, N.; Barberio, M.; et al. Surgical optomics: Hyperspectral imaging and deep learning towards precision intraoperative automatic tissue recognition—Results from the EX-MACHYNA trial. Surg. Endosc. 2024, 38, 3758–3772.
52. Mannas, M.P.; Deng, F.M.; Ion-Margineanu, A.; Freudiger, C.; Lough, L.; Huang, W.; Wysock, J.; Huang, R.; Pastore, S.; Jones, D.; et al. Stimulated Raman histology and artificial intelligence provide near real-time interpretation of radical prostatectomy surgical margins. J. Urol. 2025, 213, 609–616.
53. Chen, W.; Fukuda, S.; Yoshida, S.; Kobayashi, N.; Fukada, K.; Fukunishi, M.; Otani, Y.; Matsumoto, S.; Kobayashi, M.; Nakamura, Y.; et al. Pioneering AI-guided fluorescence-like navigation in urological surgery: Real-time ureter segmentation during robot-assisted radical cystectomy using convolutional neural network. J. Robot. Surg. 2025, 19, 188.
54. Nakamura, T.; Kobayashi, N.; Kumazu, Y.; Fukata, K.; Murakami, M.; Kohno, S.; Hojo, Y.; Nakao, E.; Kurahashi, Y.; Ishida, Y.; et al. Precise highlighting of the pancreas by semantic segmentation during robot-assisted gastrectomy: Visual assistance with artificial intelligence for surgeons. Gastric Cancer 2024, 27, 869–875.
55. Furube, T.; Takeuchi, M.; Kawakubo, H.; Noma, K.; Maeda, N.; Daiko, H.; Ishiyama, K.; Otsuka, K.; Sato, Y.; Koyanagi, K.; et al. Usefulness of an artificial intelligence model in recognizing recurrent laryngeal nerves during robot-assisted minimally invasive esophagectomy. Ann. Surg. Oncol. 2024, 31, 9344–9351.
56. Geitenbeek, R.T.J.; Duhoky, R.; Burghgraef, T.A.; Piozzi, G.N.; Masum, S.; Hopgood, A.A.; Denost, Q.; van Eetvelde, E.; Bianchi, P.; Rouanet, P.; et al. Analysis of local recurrence after robotic-assisted total mesorectal excision (ALRITE): An international, multicentre, retrospective cohort. Cancers 2025, 17, 992.
57. Ghaffar, U.; Olsen, R.; Deo, A.; Yang, C.; Varghese, J.; Tsai, R.G.; Heard, J.; Dadashian, E.; Prentice, C.; Wager, P.; et al. Computer vision for evaluating retraction of the neurovascular bundle during nerve-sparing prostatectomy. J. Robot. Surg. 2025, 19, 257.
58. Chatterjee, S.; Das, S.; Ganguly, K.; Mukherjee, S. Advancements in Robotic Surgery: Innovations, Challenges and Future Prospects. J. Robot. Surg. 2024, 18, 28. Available online: https://link.springer.com/article/10.1007/s11701-023-01801-w (accessed on 17 July 2025).
59. Quero, G.; Mascagni, P.; Kolbinger, F.R.; Fiorillo, C.; De Sio, D.; Longo, F.; Alberto Schena, C.; Laterza, V.; Rosa, F.; Menghi, R.; et al. Artificial Intelligence in Colorectal Cancer Surgery: Present and Future Perspectives. Cancers 2022, 14, 3803.
Figure 1. PRISMA flow chart.
Table 1. Characteristics of research and data.

Study | Study Characteristics | Clinical Context | AI Application | Model Evaluation | Clinical Relevance

Preoperative planning
Emile et al. (2024) [44] | Retrospective case–control; demographic, clinical, surgical data; internal validation (NCDB) | Conversion from MIS to open colectomy; 30-/90-day mortality, LOS, readmission, OS; 26,546 stage I–III colon cancer patients | ChatGPT-generated R code; multivariate logistic regression; OR-based model; VIF; R code available | OR up to 17.8 (high-risk); reduced to 8.9 with robotics; AUC not reported | May assist in surgical planning and platform selection
Huang et al. (2024) [45] | Retrospective cohort; demographic, perioperative, CT imaging; internal only | RAPN; Tetrafecta (WIT < 25 min, negative margins, no major complications, preserved renal function); 141 patients | AI-based segmentation + 3D reconstruction (Yorktal IPS); automated SPARE score + Tetrafecta prediction | AUC: 0.854 (3D) vs. 0.755 (2D); categorical 0.658 vs. 0.643 | Improved risk stratification and surgical planning using 3D imaging
Lu et al. * (2024) [46] | Prospective cohort; anatomical (MRI) and surgical data; internal validation | RARP; operative time, EBL, surgical margin; 219 patients | XGBoost; prediction of prolonged operative time; SHAP explainability | XGBoost outperformed logistic regression (no details reported) | Identifying challenging anatomy may aid surgical planning
Mei et al. (2025) [47] | Retrospective DL with segmentation; MRI + spatial features; internal and external validation | Surgical difficulty in RARP; EBL and OT; 290 patients with MRI | nnUNet_v2 + modified PointNet; regression of spatial metrics | Dice = 0.8641 (segmentation); mm-level landmark accuracy | New evaluation scheme for preoperative planning
Saikali et al. (2025) [48] | Retrospective observational (single center); preoperative clinical data; internal comparison only | Prediction of urinary continence and erectile function at 12 months post-RARP; 8524 patients | ANN; prediction of continence and potency; feature importance analysis | AUC: 0.68 (continence), 0.74 (potency) | Patient counseling and care optimization

Intraoperative support
Amparore et al. (2024) [49] | Prospective single-center; intraoperative video + clinical data; internal validation only | Robotic nephrectomy; overlay time and procedure safety; 20 patients with renal masses | Computer vision + CNN; automatic 3D model registration; expert visual assessment | Overlay time: CV ~7 s, CNN ~11 s | Faster accurate AR-assisted surgery
Shi et al. (2025) [50] | Prospective–retrospective development; preop CT + laparoscopy video; clinical use only | MIPN patients; navigation and dissection standardization; 46 patients | Augmented reality with AI overlay; real-time anatomic guidance; 3D visual overlay | Performance not quantified | Improved surgical precision and consistency
Bannone et al. (2024) [51] | Prospective multicenter; hyperspectral + RGB images; internal + external (inter-center) | Tissue recognition during surgery; 13 tissue classes; 169 patients | CNN; real-time tissue segmentation; expert review only | TPR: skin 100%, liver 97%; Dice > 80% | Improved intraoperative tissue identification
Mannas et al. (2025) [52] | Prospective pilot; 121 intraoperative SRH images; tested on 10 patients | Surgical margin interpretation in RALP; accuracy, sensitivity, specificity vs. pathology; 22 patients | CNN; classify margin status in SRH; no internal explainability methods | Accuracy 98%, sensitivity 83%, specificity 99% (surgeons) | Supports intraoperative decision making; may reduce positive margins
Chen et al. (2025) [53] | Prospective developmental; 730 RGB images from RARC; retrospective validation on 41 images | Real-time ureter segmentation during RARC; segmentation quality (Dice, IoU, recall, precision); 17 cases | CNN; semantic segmentation of ureter; limited explainability (surgeon only) | Dice 0.71; IoU 0.55; recall 0.90; precision 0.60 | Reduces ureter misidentification; improves safety and training
Nakamura et al. (2024) [54] | Retrospective image-based; annotated surgical video frames; internal test set | Robot-assisted gastrectomy; pancreas localization; 926 train, 232 val., 80 test images; 10 surgeons | Semantic segmentation (HRNet); visual overlay (mask) | Precision 0.70, recall 0.59, Dice 0.61 | May improve anatomy recognition intraoperatively; reduce POPF
Furube et al. (2024) [55] | Retrospective multicenter; surgical videos from RAMIE; external validation (8 videos) | Intraoperative RLN identification; IoU, recognition rate improvement; 128 surgeries | Deep learning (CNN assumed); semantic segmentation and localization of RLN | IoU: 0.40 (right), 0.34 (left); accuracy increased from 46.9% to 81.3% | May improve nerve identification and reduce complications

Postoperative predictions
Geitenbeek et al. (2025) [56] | Retrospective multicenter cohort; clinical/pathological data; internal cross-validation | Local recurrence after R-TME; 3-year LR (3.8%) prediction; 1039 rectal cancer patients in 6 EU countries | ML (XGBoost, others); SHAP for feature importance; LR prediction | XGBoost: accuracy 77.1%, AUC 0.76 | Supports safe R-TME; helps identify patients at high LR risk
Ghaffar et al. (2025) [57] | Retrospective video-based cohort; surgical video + clinical data; 4 centers | Nerve-sparing technique vs. erectile recovery; AUC for 12 mo erectile function; 64 patients, 1104 NVB retractions | Computer vision + supervised ML (RF, MLP, XGBoost); gesture-derived visual features | RF: AUC 0.83; MLP: AUC 0.74; XGBoost: AUC 0.78; 5-fold nested CV | Real-time alerts; ICC 0.68–0.76; potential training tool for surgeons

* Only original data. 3D: three-dimensional; AI: artificial intelligence; ANN: artificial neural network; AR: augmented reality; AUC: area under the curve; ChatGPT: chat generative pre-trained transformer; CNN: convolutional neural network; CT: computed tomography; CV: computer vision; DL: deep learning; EBL: estimated blood loss; HRNet: high-resolution network; ICC: intra-class correlation coefficient; IoU: intersection over union; IPS: image processing system; LOS: length of stay; LR: local recurrence; MIPN: minimally invasive partial nephrectomy; MIS: minimally invasive surgery; ML: machine learning; MLP: multi-layer perceptron; MRI: magnetic resonance imaging; NCDB: National Cancer Database; NVB: neurovascular bundle; OR: odds ratio; OS: overall survival; OT: operation time; POPF: postoperative pancreatic fistulas; RALP: robotic-assisted laparoscopic radical prostatectomy; RAMIE: robot-assisted minimally invasive esophagectomy; RAPN: robot-assisted partial nephrectomy; RARC: robot-assisted radical cystectomy; RARP: robot-assisted radical prostatectomy; RF: random forest; RGB: red green blue; RLN: recurrent laryngeal nerve; R-TME: robot-assisted total mesorectal excision; SHAP: SHapley Additive exPlanation; SPARE: scoring system based on preoperative aspects and dimensions used for an anatomical classification; SRH: stimulated Raman histology; Tetrafecta: optimal perioperative outcomes in nephron-sparing surgery; TPR: true positive rate; VIF: variance inflation factor; WIT: warm ischemic time; XGBoost: eXtreme Gradient Boosting.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
