The Integration of Artificial Intelligence into Robotic Cancer Surgery: A Systematic Review
Abstract
1. Introduction
2. Materials and Methods
2.1. Study Design and Search Strategy
2.2. Eligibility Criteria and Study Selection
2.3. Data Extraction and Assessment
3. Results
3.1. Preoperative Planning
3.2. Intraoperative Support
3.3. Postoperative Predictions
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Conflicts of Interest
Appendix A
Database | Search String |
---|---|
EMBASE | ((((((((surgical AND ‘procedure’/exp OR surgical) AND oncology OR oncologic) AND surgery OR cancer) AND surgery OR tumor) AND resection OR oncological) AND resection OR tumor) AND removal OR cancer) AND operation*) AND (‘neoplasm’/exp OR cancer OR tumor OR tumour OR malignan* OR oncology OR neoplasm*) AND ((((((‘robotics’/exp OR robot) AND assisted AND ‘surgery’/exp OR robotic) AND surgery OR ‘robot assisted’) AND surgery OR surgical) AND robot* OR robotic) AND system*) AND (((((((artificial AND ‘intelligence’/exp OR machine) AND ‘learning’/exp OR deep) AND learning OR neural) AND network* OR predictive) AND algorithm* OR artificial) AND intelligence OR machine) AND learning OR ai)) |
MEDLINE | ((exp Artificial Intelligence/) OR (exp Machine Learning/) OR (exp Neural Networks, Computer/) OR (artificial intelligence or AI or machine learning or deep learning or neural network* or predictive algorithm*).mp.)) AND ((exp Robotic Surgical Procedures/) OR (robotic surgery or robot-assisted surgery or surgical robot* or robotic system*).mp. OR (exp Robotics/)) AND ((exp Neoplasms/) OR (cancer or oncology or tumour or tumor or neoplasm* or malignan*).mp.) AND ((exp Surgical Procedures, Operative/) OR (oncologic surgery or surgical oncology or cancer surgery or tumor resection or oncological resection or tumor removal).mp.) |
Web of Science | Refine results for TS=(“artificial intelligence” OR “machine learning” OR “deep learning” OR “neural network*” OR “predictive algorithm*” OR “AI”) AND TS=(“robotic surgery” OR “robot-assisted surgery” OR “surgical robotics” OR “robotic system*” OR “surgical robot*”) AND TS=(“cancer” OR “oncology” OR “tumor” OR “tumour” OR “neoplasm*” OR “malignan*”) AND TS=(“surgery” OR “surgical procedure” OR “oncologic surgery” OR “cancer surgery” OR “complex surgery” OR “tumor resection” OR “tumour removal”) and 2024 or 2025 (Publication Years) and Early Access or Review Article or Article (Document Types) and English (Languages) |
Google Scholar | (“artificial intelligence” OR “machine learning” OR “deep learning” OR “neural networks”) AND (“robotic surgery” OR “robot-assisted surgery” OR “surgical robotics”) AND (“oncology” OR “cancer” OR “tumor” OR “tumour” OR “neoplasm”) AND (“complex surgery” OR “oncologic surgery”) |
medRxiv | (“artificial intelligence” OR “machine learning” OR “deep learning”) AND (“robotic surgery” OR “robot-assisted surgery” OR “surgical robotics”) AND (“oncology” OR “cancer” OR “neoplasm”) AND (“complex surgery” OR “tumor resection” OR “oncologic surgery”) |
IEEE | (“Full Text Only”:robotic surgery OR “Full Text Only”:surgical robotics) AND (“Full Text Only”:oncology OR “Full Text Only”:cancer OR “Full Text Only”:tumor) AND (“Full Text Only”:artificial intelligence OR “Full Text Only”:deep learning) |
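After running the strings above across six databases, overlapping hits must be merged before screening. The sketch below shows one common way to de-duplicate retrieved records by normalized DOI (falling back to title when no DOI is present); the record fields and keying rule are illustrative assumptions, not the review's published workflow.

```python
# Hedged sketch: merging records retrieved from multiple databases.
# The dict fields ("doi", "title", "source") are assumptions for
# illustration, not part of the review's actual pipeline.

def deduplicate(records):
    """Keep the first occurrence of each record, keyed by normalized DOI
    (or by title when no DOI is present)."""
    seen, unique = set(), []
    for rec in records:
        key = (rec.get("doi") or rec.get("title", "")).strip().lower()
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

hits = [
    {"doi": "10.3390/cancers17060992", "source": "MEDLINE"},
    {"doi": "10.3390/CANCERS17060992", "source": "EMBASE"},  # same paper, different case
    {"doi": "", "title": "CFANet: Context fusing attentional network", "source": "IEEE"},
]
assert len(deduplicate(hits)) == 2  # the EMBASE duplicate is dropped
```

Case-insensitive keying matters because databases export DOIs with inconsistent capitalization.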
Author (Year) | TRIPOD-AI Score * | TRIPOD-AI Comment | NIH Score ** | NIH Comment | Overall *** Assessment |
---|---|---|---|---|---|
Mannas et al. (2025) [52] | Good | Very well-prepared publication, strong validation and results, no model release. | Good | Well-executed with reliable data sources and outcome evaluation. | 3 |
Mei et al. (2025) [47] | Good | Full documentation, interpretability, multicenter validation, code available. | Good | Innovative approach with appropriate use of AI. Needs external validation and clearer reporting. | 3 |
Amparore et al. (2024) [49] | Fair+ | Very well-described methodology with a detailed description of the pipeline and metrics; no code release or external validation. | Good | Comprehensive design with clearly defined exposure and outcome measures. | 2
Bannone et al. (2024) [51] | Fair+ | Full architecture and metrics reported; no model release and limited interpretability. | Good | Clear methodology with a focus on practical implementation. | 2
Chen et al. (2025) [53] | Fair+ | Good methodology and clinical application; no interpretability or code release. | Good | Real-world clinical data support robustness of early clinical use. | 2
Emile et al. (2024) [44] | Fair+ | Strong multicenter clinical analysis; however, full interpretability and model availability are lacking. | Good | Well-designed with clearly defined objectives and consistent methodology. Limited external validation. | 2
Furube et al. (2024) [55] | Fair+ | Model works intraoperatively with good methodology; no external validation or code release. | Good | Innovative AI use with clear patient segmentation. | 2
Geitenbeek et al. (2025) [56] | Fair+ | Extensive clinical analysis with good metrics; no interpretability or model release. | Good | Comprehensive design with appropriate outcome tracking. | 2
Lu et al. (2024) [46] | Fair+ | Real-time system, but no interpretability or code release; internal validation only. | Good | Robust statistical methods and adequate sample size. Some missing details in handling confounders. | 2
Saikali et al. (2025) [48] | Fair+ | Good implementation and analysis; no code release or external validation. | Good | Thorough methodology and strong outcome focus. Reporting of model performance could be expanded. | 2
Shi et al. (2025) [50] | Fair+ | Good presentation of results, but no code release or external validation. | Good | Strong performance metrics with appropriate AI integration. | 2
Ghaffar et al. (2025) [57] | Fair | Innovative topic but missing many key elements of AI reporting. | Fair | Lacks confounder control and statistical depth, but relevant AI usage. | 1
Huang et al. (2025) [45] | Good | Highest level of detail, multicenter validation, partially available model. | Fair | Good clinical relevance but lacks transparency in reporting and has potential selection bias. | 1
Nakamura et al. (2024) [54] | Fair | No interpretability or code release; limited description of the model architecture. | Good | Strong methodology with robust outcome definitions. | 1
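Read against the table, the overall scores are consistent with taking the weaker of the two tool-level grades (Good = 3, Fair+ = 2, Fair = 1). This is an inferred reading sketched below for orientation, not a rule stated by the authors; the grade-to-point mapping and the `min` rule are assumptions checked only against the rows shown.

```python
# Illustrative sketch only: one aggregation rule consistent with every row
# of the table (overall = minimum of the two tool-level grades). The point
# mapping (Good=3, Fair+=2, Fair=1) is inferred, not stated by the authors.

GRADE_POINTS = {"Good": 3, "Fair+": 2, "Fair": 1}

def overall(tripod_ai: str, nih: str) -> int:
    """Overall assessment as the weaker of the TRIPOD-AI and NIH grades."""
    return min(GRADE_POINTS[tripod_ai], GRADE_POINTS[nih])

# Spot-checks against rows of the table:
assert overall("Good", "Good") == 3    # Mannas et al. (2025)
assert overall("Fair+", "Good") == 2   # Amparore et al. (2024)
assert overall("Good", "Fair") == 1    # Huang et al. (2025)
assert overall("Fair", "Good") == 1    # Nakamura et al. (2024)
```

Under this reading, a single weak tool-level grade caps the overall score, which matches the fact that no study rated Fair on either tool scores above 1.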
References
- Taylor, R.; Menciassi, A.; Fichtinger, G.; Dario, P. Medical Robotics and Computer-Integrated Surgery. In Springer Handbook of Robotics, 1st ed.; Siciliano, B., Khatib, O., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1199–1222. [Google Scholar] [CrossRef]
- Hashimoto, D.A.; Rosman, G.; Rus, D.; Meireles, O.R. Artificial Intelligence in Surgery: Promises and Perils. Ann. Surg. 2018, 268, 70–76. [Google Scholar] [CrossRef]
- Moglia, A.; Georgiou, K.; Georgiou, E.; Satava, R.M.; Cuschieri, A. A systematic review on artificial intelligence in robot-assisted surgery. Int. J. Surg. 2021, 95, 106151. [Google Scholar] [CrossRef] [PubMed]
- Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef] [PubMed]
- Zuluaga, L.; Bamby, J.; Okhawere, K.E.; Ucpinar, B.; Razdan, S.; Badani, K.K. Assessing operative variability in robot-assisted radical prostatectomy (RARP) through AI. J. Robot. Surg. 2025, 19, 99. [Google Scholar] [CrossRef]
- Truckenmueller, P.; Früh, A.; Kissner, J.F.; Moser, N.K.; Misch, M.; Faust, K.; Onken, J.; Vajkoczy, P.; Xu, R. Integration of a lightweight and table-mounted robotic alignment tool with automated patient-to-image registration using robotic cone-beam CT for intracranial biopsies and stereotactic electroencephalography. Neurosurg. Focus 2024, 57, E2. [Google Scholar] [CrossRef]
- Sato, K.; Takenaka, S.; Kitaguchi, D.; Zhao, X.; Yamada, A.; Ishikawa, Y.; Takeshita, N.; Sakamoto, S.; Ichikawa, T.; et al. Objective surgical skill assessment based on automatic recognition of dissection and exposure times in robot-assisted radical prostatectomy. Langenbecks Arch. Surg. 2025, 410, 39. [Google Scholar] [CrossRef]
- Sharma, V.; Fadel, A.; Tollefson, M.K.; Psutka, S.P.; Blezek, D.J.; Frank, I.; Thapa, P.; Tarrell, R.; Viers, L.D.; Potretzke, A.M.; et al. Artificial intelligence-based assessment of preoperative body composition is associated with early complications after radical cystectomy. J. Urol. 2025, 213, 228–237. [Google Scholar] [CrossRef]
- Zhang, X.; Zhang, Y.; Yang, J.; Du, H. A prostate seed implantation robot system based on human-computer interactions: Augmented reality and voice control. Math. Biosci. Eng. 2024, 21, 5947–5971. [Google Scholar] [CrossRef]
- Wang, H.; Shen, B.; Jia, P.; Li, H.; Bai, X.; Li, Y.; Xu, K.; Hu, P.; Xia, X.; Fang, Y.; et al. Guiding post-pancreaticoduodenectomy interventions for pancreatic cancer patients utilizing decision tree models. Front. Oncol. 2024, 14, 139929714. [Google Scholar] [CrossRef]
- Shimodaira, K.; Inoue, R.; Hashimoto, T.; Satake, N.; Shishido, T.; Namiki, K.; Harada, K.; Nagao, T.; Ohno, Y. Significance of the cribriform morphology area ratio for biochemical recurrence in Gleason score 4 + 4 prostate cancer patients following robot-assisted radical prostatectomy. Cancer Med. 2024, 13, e7086. [Google Scholar] [CrossRef]
- Yamada, Y.; Fujii, Y.; Kakutani, S.; Kimura, N.; Sugimoto, K.; Hakozaki, Y.; Sugihara, T.; Takeshima, Y.; Kawai, T.; Nakamura, M.; et al. Development of risk-score model in patients with negative surgical margin after robot-assisted radical prostatectomy. Sci. Rep. 2024, 14, 7607. [Google Scholar] [CrossRef]
- Lee, J.; Ham, S.; Kim, N.; Park, H.S. Development of a deep learning-based model for guiding a dissection during robotic breast surgery. Breast Cancer Res. 2025, 27, 34. [Google Scholar] [CrossRef] [PubMed]
- Lin, Y.; Wang, J.; Liu, Q.; Zhang, K.; Liu, M.; Wang, Y. CFANet: Context fusing attentional network for preoperative CT image segmentation in robotic surgery. Comput. Biol. Med. 2024, 171, 108115. [Google Scholar] [CrossRef] [PubMed]
- Pak, S.; Park, S.G.; Park, J.; Choi, H.R.; Lee, J.H.; Lee, W.; Cho, S.T.; Lee, Y.G.; Ahn, H. Application of deep learning for semantic segmentation in robotic prostatectomy: Comparison of convolutional neural networks and visual transformers. Investig. Clin. Urol. 2024, 65, 551–558. [Google Scholar] [CrossRef] [PubMed]
- Sinha, R.; Rallabandi, H.; Bana, R.; Bag, M.; Raina, R.; Sridhar, D.; Deepika, H.K.; Reddy, P. Ovarian loss in laparoscopic and robotic cystectomy compared using artificial intelligence pathology. JSLS 2024, 28, e2024.00001. [Google Scholar] [CrossRef]
- Younis, R.; Yamlahi, A.; Bodenstedt, S.; Scheikl, P.M.; Kisilenko, A.; Daum, M.; Schulze, A.; Wise, P.A.; Nickel, F.; Mathis-Ullrich, F.; et al. A surgical activity model of laparoscopic cholecystectomy for co-operation with collaborative robots. Surg. Endosc. 2024, 38, 4316–4328. [Google Scholar] [CrossRef]
- Albo, G.; Gallioli, A.; Ripa, F.; De Lorenzis, E.; Boeri, L.; Bebi, C.; Rocchini, L.; Longo, F.; Zanetti, S.P.; Turetti, M.; et al. Extended pelvic lymph node dissection during robotic prostatectomy: Antegrade versus retrograde technique. BMC Urol. 2024, 24, 64; Erratum in BMC Urol. 2024, 24, 86. https://doi.org/10.1186/s12894-024-01477-w. [Google Scholar] [CrossRef]
- Angerer, M.; Wülfing, C.; Dieckmann, K.P. Robotic retroperitoneal lymph node dissection for testicular cancer—First experience and learning curve of a single surgeon. Cancers 2025, 17, 1476. [Google Scholar] [CrossRef]
- Zhang, W.; Yu, J.; Yu, X.; Zhang, Y.; Men, Z. Study on Bionic Design and Tissue Manipulation of Breast Interventional Robot. Sensors 2024, 24, 6408. [Google Scholar] [CrossRef]
- Hölgyesi, Á.; Zrubka, Z.; Gulácsi, L.; Baji, P.; Haidegger, T.; Kozlovszky, M.; Weszl, M.; Kovács, L.; Péntek, M. Robot-assisted surgery and artificial intelligence-based tumour diagnostics: Social preferences with a representative cross-sectional survey. BMC Med. Inform. Decis. Mak. 2024, 24, 87. [Google Scholar] [CrossRef]
- Klontzas, M.E.; Ri, M.; Koltsakis, E.; Stenqvist, E.; Kalarakis, G.; Boström, E.; Kechagias, A.; Schizas, D.; Rouvelas, I.; Tzortzakakis, A. Prediction of anastomotic leakage in esophageal cancer surgery: A multimodal machine learning model integrating imaging and clinical data. Acad. Radiol. 2024, 31, 4878–4885. [Google Scholar] [CrossRef]
- Anania, G.; Chiozza, M.; Pedarzani, E.; Resta, G.; Campagnaro, A.; Pedon, S.; Valpiani, G.; Silecchia, G.; Mascagni, P.; Cuccurullo, D.; et al. Predicting postoperative length of stay in patients undergoing laparoscopic right hemicolectomy for colon cancer: A machine learning approach using SICE (Società Italiana di Chirurgia Endoscopica) CoDIG data. Cancers 2024, 16, 2857. [Google Scholar] [CrossRef]
- Antonella, C.; Discenza, A.; Rauseo, M.; Matella, M.; Caggianelli, G.; Ciaramelletti, R.; Mirabella, L.; Cinnella, G. Intraoperative hypotension during robotic-assisted radical prostatectomy: A randomised controlled trial comparing standard goal-directed fluid therapy with hypotension prediction index-guided goal-directed fluid therapy. Eur. J. Anaesthesiol. 2025, Epub ahead of print. [Google Scholar] [CrossRef] [PubMed]
- Flammia, R.S.; Anceschi, U.; Tuderti, G.; Di Maida, F.; Grosso, A.A.; Lambertini, L.; Mari, A.; Mastroianni, R.; Bove, A.; Capitanio, U.; et al. Development and internal validation of a nomogram predicting 3-year chronic kidney disease upstaging following robot-assisted partial nephrectomy. Int. Urol. Nephrol. 2024, 56, 913–921. [Google Scholar] [CrossRef] [PubMed]
- Hagedorn, C.; Dornhöfer, N.; Aktas, B.; Weydandt, L.; Lia, M. Risk factors for surgical wound infection and fascial dehiscence after open gynecologic oncologic surgery: A retrospective cohort study. Cancers 2024, 16, 4157. [Google Scholar] [CrossRef] [PubMed]
- Chung, J.H.; Song, W.; Kang, M.; Sung, H.H.; Jeon, H.G.; Jeong, B.C.; Jeon, S.S.; Lee, H.M.; Seo, S.I. Risk factors of recurrence after robot-assisted laparoscopic partial nephrectomy for solitary localized renal cell carcinoma. Sci. Rep. 2024, 14, 4481. [Google Scholar] [CrossRef]
- Pires, R.D.S.; Pereira, C.W.A.; Favorito, L.A. Is the learning curve of the urology resident for conventional radical prostatectomy similar to that of staff initiating robot-assisted radical prostatectomy? Int. Braz. J. Urol. 2024, 50, 335–345. [Google Scholar] [CrossRef]
- Pavone, M.; Baby, B.; Carles, E.; Innocenzi, C.; Baroni, A.; Arboit, L.; Murali, A.; Rosati, A.; Iacobelli, V.; Fagotti, A.; et al. Critical view of safety assessment in sentinel node dissection for endometrial and cervical cancer: Artificial intelligence to enhance surgical safety and lymph node detection (LYSE study). Int. J. Gynecol. Cancer 2025, 35, 101789. [Google Scholar] [CrossRef]
- El Mohady, B.; Larmure, O.; Zeroual, A.; Elgorban, A.M.; El Idrissi, M.; Alfagham, A.T.; Syed, A.; Lemelle, J.-L.; Lienard, J. The Advancing Frontier: Robotic-Assisted Laparoscopy in Pediatric Tumor Management. Indian J. Surg. Oncol. 2025. [Google Scholar] [CrossRef]
- Faulkner, J.; Arora, A.; McCulloch, P.; Robertson, S.; Rovira, A.; Ourselin, S.; Jeannon, J.P. Prospective development study of the Versius Surgical System for use in transoral robotic surgery: An IDEAL stage 1/2a first in human and initial case series experience. Eur. Arch. Otorhinolaryngol. 2024, 281, 2667–2678. [Google Scholar] [CrossRef]
- Goldstone, R.N.; Francone, T.; Milky, G.; Shih, I.F.; Bossie, H.; Li, Y.; Ricciardi, R. Outcomes comparison of robotic-assisted versus laparoscopic and open surgery for patients undergoing rectal cancer resection with concurrent stoma creation. Surg. Endosc. 2024, 38, 4550–4558. [Google Scholar] [CrossRef] [PubMed]
- Kim, J.K.; Lee, C.R.; Kang, S.W.; Jeong, J.J.; Nam, K.H.; Chung, W.Y. Expansion of thyroid surgical territory through 10,000 cases under the da Vinci robotic knife. Sci. Rep. 2024, 14, 7555. [Google Scholar] [CrossRef] [PubMed]
- Kohjimoto, Y.; Yamashita, S.; Iwagami, S.; Muraoka, S.; Wakamiya, T.; Hara, I. hinotori™ vs. da Vinci®: Propensity score-matched analysis of surgical outcomes of robot-assisted radical prostatectomy. J. Robot. Surg. 2024, 18, 130. [Google Scholar] [CrossRef] [PubMed]
- Aguilera Saiz, L.; Groen, H.C.; Heerink, W.J.; Ruers, T.J.M. The influence of the da Vinci surgical robot on electromagnetic tracking in a clinical environment. J. Robot. Surg. 2024, 18, 54. [Google Scholar] [CrossRef]
- Kim, S.H.; Kwon, T.; Choi, H.S.; Kim, C.; Won, S.; Jeon, H.J.; Kim, E.S.; Keum, B.; Jeen, Y.T.; Hwang, J.H.; et al. Robot-assisted gastric endoscopic submucosal dissection significantly improves procedure time at challenging dissection locations. Surg. Endosc. 2024, 38, 2280–2287. [Google Scholar] [CrossRef]
- Zhao, Z.; Zhang, Y.; Lin, L.; Huang, W.; Xiao, C.; Liu, J.; Chai, G. Intelligent electromagnetic navigation system for robot-assisted intraoral osteotomy in mandibular tumor resection: A model experiment. Front. Immunol. 2024, 15, 1436276. [Google Scholar] [CrossRef]
- Furnari, G.; Secchi, C.; Ferraguti, F. Sequence-based imitation learning for surgical robot operations. Artif. Intell. Surg. 2025, 5, 103–115. [Google Scholar] [CrossRef]
- Furnari, G.; Minelli, M.; Puliatti, S.; Micali, S.; Secchi, C.; Ferraguti, F. Selective clamping for robot-assisted surgical procedures. In Proceedings of the 2024 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 15–19 July 2024; pp. 1–7. [Google Scholar] [CrossRef]
- Zhang, S.; Zhang, G.; Wang, M.; Guo, S.B.; Wang, F.; Li, Y.; Kadier, K.; Zhou, Z.; Zhang, P.; Chi, H.; et al. Artificial intelligence hybrid survival assessment system for robot-assisted proctectomy: A retrospective cohort study. JCO Precis. Oncol. 2024, 8, e2400089. [Google Scholar] [CrossRef]
- Bakker, A.F.H.A.; de Nijs, J.V.; Jaspers, T.J.M.; de With, P.H.N.; Beulens, A.J.W.; van der Poel, H.G.; van der Sommen, F.; Brinkman, W.M. Estimating surgical urethral length on intraoperative robot-assisted prostatectomy images using artificial intelligence anatomy recognition. J. Endourol. 2024, 38, 690–696. [Google Scholar] [CrossRef]
- Too, C.W.; Fong, K.Y.; Hang, G.; Sato, T.; Nyam, C.Q.; Leong, S.H.; Ng, K.W.; Ng, W.L.; Kawai, T. Artificial intelligence-guided segmentation and path planning software for transthoracic lung biopsy. J. Vasc. Interv. Radiol. 2024, 35, 780–789.e1. [Google Scholar] [CrossRef]
- Sengun, B.; Iscan, Y.; Yazici, Z.A.; Sormaz, I.C.; Aksakal, N.; Tunca, F.; Senyurek, Y.G. Utilization of artificial intelligence in minimally invasive right adrenalectomy: Recognition of anatomical landmarks with deep learning. Acta Chir. Belg. 2024, 124, 492–498. [Google Scholar] [CrossRef]
- Emile, S.H.; Horesh, N.; Garoufalia, Z.; Gefen, R.; Rogers, P.; Wexner, S.D. An artificial intelligence-designed predictive calculator of conversion from minimally invasive to open colectomy in colon cancer. Updates Surg. 2024, 76, 1321–1330. [Google Scholar] [CrossRef]
- Huang, H.; Chen, B.; Feng, C.; Chen, W.; Wu, D. Using three-dimensional virtual imaging of renal masses to improve prediction of robotic-assisted partial nephrectomy Tetrafecta with SPARE score. World J. Urol. 2024, 43, 37. [Google Scholar] [CrossRef]
- Lu, H.; Yu, C.; Yu, X.; Yang, D.; Yu, S.; Xia, L.; Lin, Y.; Yang, B.; Wu, Y.; Li, G. Effects of bony pelvic and prostate dimensions on surgical difficulty of robot-assisted radical prostatectomy: An original study and meta-analysis. Ann. Surg. Oncol. 2024, 31, 8405–8420. [Google Scholar] [CrossRef] [PubMed]
- Mei, H.; Wang, Z.; Zheng, Q.; Jiao, P.; Wu, J.; Liu, X.; Yang, R. Deep learning for predicting difficulty in radical prostatectomy: A novel evaluation scheme. Urology 2025, 198, 1–7. [Google Scholar] [CrossRef] [PubMed]
- Saikali, S.; Reddy, S.; Gokaraju, M.; Goldsztein, N.; Dyer, A.; Gamal, A.; Jaber, A.; Moschovas, M.; Rogers, T.; Vangala, A.; et al. Development and assessment of an AI-based machine learning model for predicting urinary continence and erectile function recovery after robotic-assisted radical prostatectomy: Insights from a prostate cancer referral center. Comput. Methods Programs Biomed. 2025, 259, 108522. [Google Scholar] [CrossRef] [PubMed]
- Amparore, D.; Sica, M.; Verri, P.; Piramide, F.; Checcucci, E.; De Cillis, S.; Porpiglia, F. Computer vision and machine-learning techniques for automatic 3D virtual images overlapping during augmented reality guided robotic partial nephrectomy. Technol. Cancer Res. Treat. 2024, 23, 9368. [Google Scholar] [CrossRef]
- Shi, X.; Yang, B.; Guo, F.; Zhi, C.; Xiao, G.; Zhao, L.; Wang, Y.; Zhang, W.; Xiao, C.; Wu, Z.; et al. Artificial intelligence based augmented reality navigation in minimally invasive partial nephrectomy. Urology 2025, 199, 20–26. [Google Scholar] [CrossRef]
- Bannone, E.; Collins, T.; Esposito, A.; Cinelli, L.; De Pastena, M.; Pessaux, P.; Felli, E.; Andreotti, E.; Okamoto, N.; Barberio, M.; et al. Surgical optomics: Hyperspectral imaging and deep learning towards precision intraoperative automatic tissue recognition—Results from the EX-MACHYNA trial. Surg. Endosc. 2024, 38, 3758–3772. [Google Scholar] [CrossRef]
- Mannas, M.P.; Deng, F.M.; Ion-Margineanu, A.; Freudiger, C.; Lough, L.; Huang, W.; Wysock, J.; Huang, R.; Pastore, S.; Jones, D.; et al. Stimulated Raman histology and artificial intelligence provide near real-time interpretation of radical prostatectomy surgical margins. J. Urol. 2025, 213, 609–616. [Google Scholar] [CrossRef]
- Chen, W.; Fukuda, S.; Yoshida, S.; Kobayashi, N.; Fukada, K.; Fukunishi, M.; Otani, Y.; Matsumoto, S.; Kobayashi, M.; Nakamura, Y.; et al. Pioneering AI-guided fluorescence-like navigation in urological surgery: Real-time ureter segmentation during robot-assisted radical cystectomy using convolutional neural network. J. Robot. Surg. 2025, 19, 188. [Google Scholar] [CrossRef]
- Nakamura, T.; Kobayashi, N.; Kumazu, Y.; Fukata, K.; Murakami, M.; Kohno, S.; Hojo, Y.; Nakao, E.; Kurahashi, Y.; Ishida, Y.; et al. Precise highlighting of the pancreas by semantic segmentation during robot-assisted gastrectomy: Visual assistance with artificial intelligence for surgeons. Gastric Cancer 2024, 27, 869–875. [Google Scholar] [CrossRef] [PubMed]
- Furube, T.; Takeuchi, M.; Kawakubo, H.; Noma, K.; Maeda, N.; Daiko, H.; Ishiyama, K.; Otsuka, K.; Sato, Y.; Koyanagi, K.; et al. Usefulness of an artificial intelligence model in recognizing recurrent laryngeal nerves during robot-assisted minimally invasive esophagectomy. Ann. Surg. Oncol. 2024, 31, 9344–9351. [Google Scholar] [CrossRef] [PubMed]
- Geitenbeek, R.T.J.; Duhoky, R.; Burghgraef, T.A.; Piozzi, G.N.; Masum, S.; Hopgood, A.A.; Denost, Q.; van Eetvelde, E.; Bianchi, P.; Rouanet, P.; et al. Analysis of local recurrence after robotic-assisted total mesorectal excision (ALRITE): An international, multicentre, retrospective cohort. Cancers 2025, 17, 992. [Google Scholar] [CrossRef] [PubMed]
- Ghaffar, U.; Olsen, R.; Deo, A.; Yang, C.; Varghese, J.; Tsai, R.G.; Heard, J.; Dadashian, E.; Prentice, C.; Wager, P.; et al. Computer vision for evaluating retraction of the neurovascular bundle during nerve-sparing prostatectomy. J. Robot. Surg. 2025, 19, 257. [Google Scholar] [CrossRef] [PubMed]
- Chatterjee, S.; Das, S.; Ganguly, K.; Mukherjee, S. Advancements in Robotic Surgery: Innovations, Challenges and Future Prospects. J. Robot. Surg. 2024, 18, 28. Available online: https://link.springer.com/article/10.1007/s11701-023-01801-w (accessed on 17 July 2025). [CrossRef] [PubMed]
- Quero, G.; Mascagni, P.; Kolbinger, F.R.; Fiorillo, C.; De Sio, D.; Longo, F.; Alberto Schena, C.; Laterza, V.; Rosa, F.; Menghi, R.; et al. Artificial Intelligence in Colorectal Cancer Surgery: Present and Future Perspectives. Cancers 2022, 14, 3803. [Google Scholar] [CrossRef]
Study | Study Characteristics | Clinical Context | AI Application | Model Evaluation | Clinical Relevance |
---|---|---|---|---|---|
Preoperative planning | |||||
Emile et al. (2024) [44] | Retrospective case–control; demographic, clinical, surgical data; internal validation (NCDB) | Conversion from MIS to open colectomy; 30-/90-day mortality, LOS, readmission, OS; 26,546 stage I–III colon cancer patients | ChatGPT-generated R code; multivariate logistic regression; OR-based model; VIF; R code available | OR up to 17.8 (high-risk); reduced to 8.9 with robotics; AUC not reported | May assist in surgical planning and platform selection |
Huang et al. (2024) [45] | Retrospective cohort; demographic, perioperative, CT imaging; internal only | RAPN; Tetrafecta (WIT < 25 min, negative margins, no major complications, preserved renal function); 141 patients | AI-based segmentation + 3D reconstruction (Yorktal IPS); automated SPARE score + Tetrafecta prediction | AUC: 0.854 (3D) vs. 0.755 (2D); categorical 0.658 vs. 0.643 | Improved risk stratification and surgical planning using 3D imaging |
Lu et al. * (2024) [46] | Prospective cohort; anatomical (MRI) and surgical data; internal validation | RARP; operative time, EBL, surgical margin; 219 patients | XGBoost; prediction of prolonged operative time; SHAP explainability | XGBoost outperformed logistic regression (no details reported) | Identifying challenging anatomy may aid surgical planning |
Mei et al. (2025) [47] | Retrospective DL with segmentation; MRI + spatial features; internal and external validation | Surgical difficulty in RARP; EBL and OT; 290 patients with MRI | nnUNet_v2 + modified PointNet; regression of spatial metrics | Dice = 0.8641 (segmentation); mm-level landmark accuracy | New evaluation scheme for preoperative planning |
Saikali et al. (2025) [48] | Retrospective observational (single center); preoperative clinical data; internal comparison only | Prediction of urinary continence and erectile function at 12 months post-RARP; 8524 patients | ANN; prediction of continence and potency; feature importance analysis | AUC: 0.68 (continence), 0.74 (potency) | Patient counseling and care optimization |
Intraoperative support | |||||
Amparore et al. (2024) [49] | Prospective single-center; intraoperative video + clinical data; internal validation only | Robotic nephrectomy; overlay time and procedure safety; 20 patients with renal masses | Computer vision + CNN; automatic 3D model registration; expert visual assessment | Overlay time: CV~7 s, CNN~11 s | Faster accurate AR-assisted surgery |
Shi et al. (2025) [50] | Prospective–retrospective development; preop CT + laparoscopy video; clinical use only | MIPN patients; navigation and dissection standardization; 46 patients | Augmented reality with AI overlay; real-time anatomic guidance; 3D visual overlay | Performance not quantified | Improved surgical precision and consistency |
Bannone et al. (2024) [51] | Prospective multicenter; hyperspectral + RGB images; internal + external (inter-center) | Tissue recognition during surgery; 13 tissue classes; 169 patients | CNN; real-time tissue segmentation; expert review only | TPR: skin 100%, liver 97%; Dice > 80% | Improved intraoperative tissue identification |
Mannas et al. (2025) [52] | Prospective pilot; 121 intraoperative SRH images; tested on 10 patients | Surgical margin interpretation in RALP; accuracy, sensitivity, specificity vs. pathology; 22 patients | CNN; classify margin status in SRH; no internal explainability methods | Accuracy 98%, sensitivity 83%, specificity 99% (surgeons) | Supports intraoperative decision making; may reduce positive margins |
Chen et al. (2025) [53] | Prospective developmental; 730 RGB images from RARC; retrospective validation on 41 images | Real-time ureter segmentation during RARC; segmentation quality (Dice, IoU, recall, precision); 17 cases | CNN; semantic segmentation of ureter; limited explainability (surgeon only) | Dice 0.71; IoU 0.55; recall 0.90; precision 0.60 | Reduces ureter misidentification; improves safety and training |
Nakamura et al. (2024) [54] | Retrospective image-based; annotated surgical video frames; internal test set | Robot-assisted gastrectomy; pancreas localization; 926 train, 232 val., 80 test images; 10 surgeons | Semantic segmentation (HRNet); visual overlay (mask) | Precision 0.70, recall 0.59, Dice 0.61 | May improve anatomy recognition intraoperatively; reduce POPF |
Furube et al. (2024) [55] | Retrospective multicenter; surgical videos from RAMIE; external validation (8 videos) | Intraoperative RLN identification; IoU, recognition rate improvement; 128 surgeries | Deep learning (CNN assumed); semantic segmentation and localization of RLN | IoU: 0.40 (right), 0.34 (left); accuracy increased from 46.9% to 81.3% | May improve nerve identification and reduce complications |
Postoperative predictions | |||||
Geitenbeek et al. (2025) [56] | Retrospective multicenter cohort; clinical/pathological data; internal cross-validation | Local recurrence after R-TME; 3-year LR (3.8%) prediction; 1039 rectal cancer patients in 6 EU countries | ML (XGBoost, others); SHAP for feature importance; LR prediction | XGBoost: accuracy 77.1%, AUC 0.76 | Supports safe R-TME; helps identify patients at high LR risk |
Ghaffar et al. (2025) [57] | Retrospective video-based cohort; surgical video + clinical data; 4 centers | Nerve-sparing technique vs. erectile recovery; AUC for 12 mo erectile function; 64 patients, 1104 NVB retractions | Computer vision + supervised ML (RF, MLP, XGBoost); gesture-derived visual features | RF: AUC 0.83; MLP: AUC 0.74; XGBoost: AUC 0.78; 5-fold nested CV | Real-time alerts; ICC 0.68–0.76; potential training tool for surgeons |
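Several of the segmentation studies above (Mei, Chen, Nakamura, Furube, Bannone) report Dice and IoU as their primary metrics. These are the standard overlap definitions for binary masks, Dice = 2|A∩B| / (|A| + |B|) and IoU = |A∩B| / |A∪B|, sketched below; this is generic reference code, not code from any of the reviewed papers.

```python
# Standard Dice and IoU definitions for binary segmentation masks,
# as reported by the segmentation studies in the table above.
# Generic reference code, not taken from any reviewed paper.

def dice_iou(pred, truth):
    """Dice coefficient and IoU for two same-length binary masks (0/1)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
d, i = dice_iou(pred, truth)
# inter = 2, |pred| = 3, |truth| = 3, union = 4
assert abs(d - 4 / 6) < 1e-9   # Dice ≈ 0.667
assert abs(i - 2 / 4) < 1e-9   # IoU = 0.5
```

Because Dice = 2·IoU / (1 + IoU), Dice is always at least as large as IoU, which explains why papers reporting both (e.g., Dice 0.71 vs. IoU 0.55 in Chen et al.) show the higher number for Dice.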
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Leszczyńska, A.; Obuchowicz, R.; Strzelecki, M.; Seweryn, M. The Integration of Artificial Intelligence into Robotic Cancer Surgery: A Systematic Review. J. Clin. Med. 2025, 14, 6181. https://doi.org/10.3390/jcm14176181