Systematic Review

Artificial Intelligence in Gastrointestinal Surgery: A Systematic Review of Its Role in Laparoscopic and Robotic Surgery

by Ludovica Gorini 1,†, Roberto de la Plaza Llamas 1,2,*,†, Daniel Alejandro Díaz Candelas 1, Rodrigo Arellano González 1, Wenzhong Sun 1, Jaime García Friginal 1, María Fra López 1 and Ignacio Antonio Gemio del Rey 1,2
1 Department of General and Digestive Surgery, Hospital Universitario de Guadalajara, Calle del Donante de Sangre s/n, 19002 Guadalajara, Spain
2 Department of Surgery, Medical and Social Sciences, Faculty of Medicine and Health Sciences, University of Alcalá, Alcalá de Henares, 28871 Madrid, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
J. Pers. Med. 2025, 15(11), 562; https://doi.org/10.3390/jpm15110562
Submission received: 25 August 2025 / Revised: 29 October 2025 / Accepted: 5 November 2025 / Published: 19 November 2025
(This article belongs to the Special Issue Update on Robotic Gastrointestinal Surgery, 2nd Edition)

Abstract

Background: Artificial intelligence (AI) is transforming surgical practice by enhancing training, intraoperative guidance, decision-making, and postoperative assessment. However, its specific role in laparoscopic and robotic general surgery remains to be clearly defined. The objective was to systematically review the current applications of AI in laparoscopic and robotic general surgery and to categorize them by function and surgical context. Methods: A systematic search of PubMed and Web of Science was conducted up to 22 June 2025, using predefined search terms. Eligible studies focused on AI applications in laparoscopic or robotic general surgery, excluding urological, gynecological, and obstetric fields. Original articles in English or Spanish were included. Data extraction was performed independently by two reviewers and synthesized descriptively by thematic categories. Results: A total of 152 original studies were included. Most were conducted in laparoscopic settings (n = 125), while 19 focused on robotic surgery and 8 involved both. The majority were technical evaluations or retrospective observational studies. Seven thematic categories were identified: surgical decision support and outcome prediction; skill assessment and training; workflow recognition and intraoperative guidance; object or structure detection; augmented reality and navigation; image enhancement; and surgeon perception and preparedness. Most studies applied deep learning for classification, prediction, recognition, and real-time guidance in laparoscopic cholecystectomy and in colorectal and gastric surgery. Conclusions: AI has been widely adopted in various domains of laparoscopic and robotic general surgery. While most studies remain in early developmental stages, the evidence suggests increasing maturity and integration into clinical workflows. Standardization of evaluation and reporting frameworks will be essential to translate these innovations into widespread practice.


1. Introduction

Artificial intelligence (AI) is at the center of modern technological innovation. Rapidly expanding across many fields, it opens opportunities that were unthinkable a decade ago, creating space for creativity and engineering innovation as it grows. Since the term was coined in 1956, this technological revolution has not stopped growing, making its way into medicine [1].
General Surgery is fertile ground for AI to grow into a powerful tool for surgeons, and its introduction has played a crucial role, especially in laparoscopic and robotic procedures, where the synergy between surgeons and machines has already advanced considerably, embedding technological innovation within surgical tradition.
The role of AI in surgery has many facets, including pattern recognition in standard procedures to improve safety, tissue or tool identification [2], anatomical structure mapping, integration of radiologic imaging during surgery, the use of predictive models to guide instrumentation, and surgical training and skill assessment [3,4]. Additionally, it is utilized for surgical decision support, optimization, and outcome prediction [5,6].
AI in surgery is evolving beyond isolated, image-based task applications, integrating data from other fields such as radiology or pathology and aiming at personalized, integrated intraoperative decision-making [7].
The future seems to point toward the integration of AI throughout the patient journey, from preoperative screening and intraoperative assistance to postoperative monitoring and follow-up [8].
The integration of AI in surgical robotics is paving the way for autonomous systems, with increased precision and more consistent surgical outcomes, reducing reliance on human factors such as fatigue and variability [9].
As has happened in the past whenever a new element entered clinical practice, ethical considerations have been raised, and AI is no exception. These concerns include privacy, potential bias in systems and algorithms, autonomy and explainability of decision-making processes, as well as legal responsibility and institutional readiness [9,10,11].
Although previous reviews have examined AI in robotic surgery or focused narrowly on technical skills assessment, few have captured the full clinical spectrum of AI integration in both laparoscopic and robotic general surgery [2,6,12,13].
In this context, the present review provides an integrated overview of how AI is being applied across these two minimally invasive modalities, summarizing diverse technical and clinical perspectives. Unlike previous reviews that have addressed specific subspecialties or technical domains, it brings together evidence from diverse areas to outline current trends, practical applications, and methodological gaps. By doing so, this study contributes to a broader understanding of the current state and potential directions for the clinical translation of AI within minimally invasive surgery.
The primary aim of this review was to address this gap by synthesizing current applications of AI in laparoscopic and robotic general surgery. Secondary objectives were to assess methodological trends and gaps in validation and to update the available knowledge in a rapidly growing field.

2. Materials and Methods

A systematic review was conducted following the PRISMA 2020 Guidelines [14].
The search was performed on PubMed and Web of Science databases using the following Boolean string: (“artificial intelligence” OR AI) AND (surgery) AND (“laparoscopic surgery” OR laparoscopy OR “robotic surgery”) NOT (urology OR gynecology OR gynaecology OR urological OR obstetrics). No date restrictions were applied. The last search was conducted on 22 June 2025. Only articles published in English or Spanish and related to the field of surgery were considered.
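As a purely illustrative sketch (not the retrieval code used in this review), the PubMed arm of this search can be reproduced programmatically through the NCBI E-utilities API; the Boolean string below is copied verbatim from the Methods, while the use of the `requests` package and the JSON output format are implementation assumptions.

```python
# Hedged sketch: querying PubMed via NCBI E-utilities with the review's Boolean string.
# Only the hit count is retrieved; no date filter is applied, mirroring the Methods.
import requests

QUERY = (
    '("artificial intelligence" OR AI) AND (surgery) AND '
    '("laparoscopic surgery" OR laparoscopy OR "robotic surgery") '
    'NOT (urology OR gynecology OR gynaecology OR urological OR obstetrics)'
)

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": QUERY, "retmax": 0, "retmode": "json"},
    timeout=30,
)
print("PubMed records matching the search string:", resp.json()["esearchresult"]["count"])
```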
The research question followed a modified PICO framework, defining the Population as studies in laparoscopic and robotic general surgery, the Intervention as AI-based applications, and the Outcomes as thematic objectives and validation approaches. A formal Comparator was not applicable due to the descriptive scope of the review.
Eligibility criteria included:
-
Articles focusing on laparoscopic or robotic procedures performed in General Surgery settings (Under the term “General Surgery”, the following fields were included: abdominal surgery, including colorectal, upper gastrointestinal, bariatric, endocrine, abdominal wall, oncologic, hepatopancreaticobiliary, minimally invasive surgeries and transplant; trauma and breast surgeries).
-
AI as the focus of the research, applied in a clinical context.
-
Articles presenting original data.
The exclusion criteria applied were:
-
Studies focused on specialties outside General Surgery (e.g., Urology or Gynecology).
-
AI not applied in a clinical setting (e.g., theoretical models or models without a specific surgical application).
-
Reviews, meta-analyses, abstracts.
-
Non-human studies (studies were included if an animal model was employed, for instance, in a training setting, but the final objective was human surgery).
Two reviewers independently completed the screening process, first by title and abstract and then by full-text review, followed by data extraction. If consensus was not reached, a third author was consulted.
We extracted key data from each article including year of publication, country of origin, type of surgery (laparoscopic, robotic, or both), study design, AI methodology, type of AI application (e.g., classification, prediction, segmentation), surgical procedure, and thematic category.
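A minimal sketch of the per-study extraction record implied by these fields is shown below; the field names, type annotations, and example values are illustrative assumptions rather than the authors' actual data-collection form.

```python
# Illustrative extraction record; values below are placeholders, not extracted data.
from dataclasses import dataclass

@dataclass
class ExtractedStudy:
    year: int                 # year of publication
    country: str              # country of origin
    surgery_type: str         # "laparoscopic", "robotic", or "both"
    study_design: str         # e.g., "technical evaluation", "retrospective observational"
    ai_methodology: str       # e.g., "CNN", "gradient boosting"
    ai_application: str       # e.g., "classification", "prediction", "segmentation"
    procedure: str            # e.g., "laparoscopic cholecystectomy"
    thematic_category: str    # one of the seven categories defined below

example = ExtractedStudy(
    year=2024, country="placeholder", surgery_type="laparoscopic",
    study_design="technical evaluation", ai_methodology="CNN",
    ai_application="recognition", procedure="laparoscopic cholecystectomy",
    thematic_category="object or structure detection",
)
print(example)
```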
Given the methodological heterogeneity and descriptive scope of this review, risk of bias was not used to exclude studies or to weight the synthesis; instead, methodological quality was appraised with design-specific tools, as described below. The aim was to provide a broad overview of current applications rather than to evaluate comparative effectiveness.
A meta-analysis was not possible due to the variability in study designs, outcomes, and AI applications. Results were synthesized narratively by thematic category, and heterogeneity was described qualitatively rather than statistically.
Institutional Review Board approval was not required, as this study analysed only previously published data and did not involve human subjects or confidential patient information.

Studies Classification

According to their type, studies were classified under the following categories:
-
Technical evaluations (evaluation of new AI-based tools or algorithms, often with proof-of-concept validation).
-
Retrospective observational studies.
-
Prospective observational studies.
-
Feasibility studies (preliminary testing of AI applications in real or simulated surgical settings).
-
Clinical trials.
-
Dataset descriptions (description of publicly available or novel surgical datasets annotated for AI development and benchmarking).
-
Simulation-based training assessments.
-
Surveys or expert opinion studies.
Articles were also classified into seven thematic categories according to their primary focus:
-
Surgical decision support and outcome prediction: studies that used AI to assist in preoperative or intraoperative clinical decision-making, risk stratification, or prediction of postoperative outcomes.
-
Skill assessment and training: research focused on evaluating or improving surgical performance through AI-based performance metrics, simulation platforms, or feedback systems.
-
Workflow recognition and intraoperative guidance: articles addressing the use of AI to identify surgical phases, provide context-aware support, or optimize procedural flow during surgery.
-
Object or structure detection: studies aimed at recognizing anatomical structures, surgical instruments, or landmarks within the operative field.
-
Augmented reality (AR) and navigation: research involving the integration of preoperative imaging and real-time intraoperative views.
-
Image enhancement: studies focused on improving the visual quality of surgical imaging.
-
Surgeon perception, preparedness, and attitudes: studies exploring surgeons’ knowledge, acceptance, and perceived challenges regarding the integration of AI and digital technologies into surgical practice.
The methodological quality of the included studies was evaluated using validated, design-specific tools: PROBAST for predictive and prognostic models, ROBINS-I for non-randomized comparative studies, QUADAS-AI for intraoperative computer-vision and guidance systems, MINORS for experimental or feasibility studies, and AXIS for survey-based research.
Each study was independently appraised across the main bias domains and then harmonized into a three-level classification (low, moderate, or high risk of bias) to enable cross-domain synthesis.
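The appraisal logic described above can be summarized with the short sketch below; the mapping from study design to tool follows the text, whereas the harmonization rule (any high-risk domain yields an overall high rating, any intermediate judgment yields moderate) is an illustrative convention, not necessarily the exact rule the reviewers applied.

```python
# Hedged sketch of design-to-tool mapping and harmonization into low/moderate/high risk.
DESIGN_TO_TOOL = {
    "predictive or prognostic model": "PROBAST",
    "non-randomized comparative study": "ROBINS-I",
    "intraoperative computer vision or guidance": "QUADAS-AI",
    "experimental or feasibility study": "MINORS",
    "survey": "AXIS",
}

def overall_risk(domain_judgments):
    """Collapse per-domain judgments into a single low/moderate/high rating."""
    judgments = {j.lower() for j in domain_judgments}
    if "high" in judgments:
        return "high"
    if judgments & {"moderate", "some concerns", "unclear"}:
        return "moderate"
    return "low"

print(DESIGN_TO_TOOL["predictive or prognostic model"])       # PROBAST
print(overall_risk(["low", "low", "some concerns", "low"]))   # moderate
```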

3. Results

A total of 152 original studies were included in this review. The selection process is detailed in the PRISMA diagram shown in Figure 1.
The number of publications has increased over time, with 49 articles published in 2024, 34 in 2023, and 28 in 2022. Most studies were conducted in laparoscopic surgery settings (n = 125), while 19 focused on robotic surgery and 8 involved both approaches.
Most studies were classified as technical evaluations (n = 65) or retrospective observational studies (n = 35). Feasibility studies (n = 14) and prospective observational studies (n = 13) were reported in similar numbers. Clinical trials, dataset descriptions, simulation-based training assessments, and survey-based studies were less common, each represented by fewer than ten publications.
Most studies employed deep learning-based approaches, particularly convolutional neural networks (CNNs). AI methods were applied for classification, prediction, real-time guidance, and anatomical recognition. Specifically, 31 studies focused on recognition, 22 involved real-time AI applications, 20 addressed prediction, and 18 focused on classification. Detection tasks and segmentation were less frequently studied, appearing in 16 and 4 studies, respectively.
Regarding the clinical focus, the most frequently studied procedures or surgical fields were laparoscopic cholecystectomy (n = 27), gastrectomy (n = 12), and rectal surgery (n = 12), followed by other procedures in colorectal surgery (n = 14) and hernia repair (n = 5).
Descriptive data are shown in Table 1 and Figure 2.
Articles were categorized into seven thematic groups based on their primary focus. The number of studies and a brief description of each category are detailed below.

3.1. AI for Object or Structure Detection

A total of 51 studies focused on the use of artificial intelligence for the identification and localization of anatomical structures or surgical tools within the operative field. This category included 37 technical evaluations, 7 feasibility studies, 6 prospective observational studies, and 1 dataset description. Most studies were conducted in laparoscopic surgery (n = 45, 88%), while only a small minority focused exclusively on robotic approaches (n = 2). Four studies included both laparoscopic and robotic data, mainly in colorectal and liver resections.
The most frequent applications were related to laparoscopic cholecystectomy (n = 17), particularly for the automatic identification of the critical view of safety, and recognition of biliary or vascular anatomy. Studies in colorectal surgery (n = 8) and gastrectomy (n = 2) focused on vascular or autonomic nerve identification to reduce intraoperative injury. More recent studies (2023–2025) have increasingly explored bleeding detection, nerve recognition, and real-time safety checks in laparoscopic colectomy and rectal cancer surgery, reflecting a shift toward intraoperative decision support beyond static anatomical segmentation.
The applications primarily leveraged computer vision techniques (CNNs) to detect and segment organs, vessels, anatomical landmarks, and instruments in laparoscopic or robotic video frames. Several studies developed real-time models capable of highlighting key structures such as anatomical elements or nerves during surgery to prevent complications, with reported accuracies between 80% and 90%. Other investigations aimed to improve instrument tracking, recognize energy device activation, or monitor tool presence outside the camera view, achieving detection accuracies of 85–95% in most reports. These AI-based detection tools aimed to enhance intraoperative safety, support training, and enable automated annotation of surgical videos for post-operative analysis.
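As a generic illustration of the frame-in, per-pixel-labels-out pattern these detection and segmentation models share (none of the included studies necessarily used this architecture), a minimal PyTorch/torchvision sketch with hypothetical class labels might look as follows:

```python
# Illustrative only: a generic segmentation CNN applied to one laparoscopic video frame.
# The architecture, class labels, and input size are assumptions, not a reviewed model.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 4  # hypothetical labels: background, gallbladder, cystic duct, instrument
model = deeplabv3_resnet50(weights=None, weights_backbone=None,
                           num_classes=NUM_CLASSES).eval()

frame = torch.rand(1, 3, 480, 854)           # stand-in for one RGB laparoscopic frame
with torch.no_grad():
    logits = model(frame)["out"]              # shape: (1, NUM_CLASSES, 480, 854)
mask = logits.argmax(dim=1)                   # per-pixel class labels
print(mask.shape, mask.unique())
```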
Importantly, only 6 studies evaluated algorithms in intraoperative conditions, while the majority relied on retrospective datasets or simulation environments. External validation was reported in 3 studies, underscoring the limited generalizability of current models.
Several studies have demonstrated AI’s utility in real-time anatomical recognition and intraoperative guidance, such as automatic identification of the critical view of safety during laparoscopic cholecystectomy [15], detection of peritoneal surface metastases in staging laparoscopy [16], and recognition of perigastric vessels [17], intrahepatic vascular structures [18] or autonomic nerves during colorectal surgery [19] to reduce intraoperative injury.

3.2. AI for Surgical Skill Assessment and Training

A total of 37 studies were included in this category, focusing on the development and application of AI tools to evaluate technical performance and facilitate surgical training. These studies predominantly utilized video recordings of simulated or real procedures to assess skills such as suturing, dissection, and instrument handling, aiming to automate performance scoring based on standardized metrics and provide personalized feedback for trainees. CNNs and other deep learning frameworks were frequently applied to extract features from motion trajectories, tool usage, or procedural steps.
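The kind of motion-derived features such systems compute from tool trajectories can be illustrated with the short sketch below; the feature set (path length, mean speed, jerk) is a common choice in the skill-assessment literature, but the specific definitions here are assumptions rather than metrics taken from any included study.

```python
# Hedged sketch: simple kinematic features from an instrument-tip trajectory,
# of the kind fed to a downstream skill classifier. Synthetic data only.
import numpy as np

def motion_features(xyz: np.ndarray, fps: float = 30.0) -> dict:
    """xyz: (T, 3) array of instrument tip positions over T video frames."""
    dt = 1.0 / fps
    vel = np.diff(xyz, axis=0) / dt
    acc = np.diff(vel, axis=0) / dt
    jerk = np.diff(acc, axis=0) / dt
    return {
        "path_length": float(np.linalg.norm(np.diff(xyz, axis=0), axis=1).sum()),
        "mean_speed": float(np.linalg.norm(vel, axis=1).mean()),
        "jerk_rms": float(np.sqrt((np.linalg.norm(jerk, axis=1) ** 2).mean())),  # lower = smoother
    }

trajectory = np.cumsum(np.random.randn(300, 3) * 0.002, axis=0)  # 10 s of synthetic motion
print(motion_features(trajectory))
```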
Twenty-one studies were technical evaluations, 6 were observational studies, 7 were simulation-based training assessments, and the rest included feasibility studies (n = 1), clinical trials (n = 1), and dataset descriptions (n = 1). Most studies were performed in laparoscopic settings (n = 26, 70%), while robotic applications accounted for a smaller but substantial proportion (n = 11, 30%), primarily focusing on suturing skills, task classification, and stress prediction.
Most of these relied on simulators or retrospective video data, while only about five studies (15%) assessed intraoperative material in real surgical procedures, such as laparoscopic gastrectomy, sigmoidectomy, or hernia repair. External or multicenter validation was rare, reported in only 3 studies.
In this context, Wu et al. [20] proposed a video-based AI system to objectively assess surgical performance during laparoscopic cholecystectomy. Other studies, such as Halperin et al. [21] and Chen et al. [22], proposed real-time feedback on intracorporeal suturing for laparoscopic and robotic training.
Overall, reported accuracies for automated skill classification ranged from 75% to 90%, with some systems distinguishing between novice, intermediate, and expert performance levels. More recent studies increasingly focused on real-time feedback tools and multicenter datasets, suggesting a gradual move toward clinically applicable systems.

3.3. AI for Workflow Recognition and Intraoperative Guidance

This category included 28 studies that explored the use of AI to recognize surgical workflows, identify procedural phases, and provide intraoperative assistance.
Twenty-two studies were technical evaluations, three were dataset descriptions and one was a feasibility study.
These studies relied on video-based inputs from laparoscopic or robotic procedures, annotated for phases, actions, or tool usage. Deep learning models (CNNs) were trained to recognize distinct steps of procedures such as cholecystectomy, gastrectomy, and colorectal resections as well as to estimate procedure duration or detect deviations from standard workflows.
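A hedged sketch of this frame-wise phase-recognition pattern is shown below: an untrained stand-in CNN scores each frame and a simple temporal majority vote smooths the predicted phase sequence; the phase labels and the smoothing rule are illustrative assumptions, not taken from any included study.

```python
# Illustrative per-frame phase classification with temporal majority-vote smoothing.
import torch
import torch.nn as nn

PHASES = ["preparation", "dissection", "clipping", "extraction"]  # hypothetical labels

backbone = nn.Sequential(                       # stand-in per-frame classifier
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, len(PHASES)),
)

frames = torch.rand(120, 3, 224, 224)           # 120 consecutive video frames
with torch.no_grad():
    per_frame = backbone(frames).argmax(dim=1)  # raw per-frame phase indices

def smooth(preds: torch.Tensor, window: int = 15) -> torch.Tensor:
    """Temporal majority vote over a sliding window to remove phase flicker."""
    out = preds.clone()
    for t in range(len(preds)):
        lo, hi = max(0, t - window // 2), min(len(preds), t + window // 2 + 1)
        out[t] = torch.mode(preds[lo:hi]).values
    return out

print([PHASES[int(i)] for i in smooth(per_frame)[:10]])
```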
Most studies (n = 23, 82%) were conducted in laparoscopic settings, while only three (11%) focused exclusively on robotic surgery and one included both approaches. Six studies (21%) tested their models intraoperatively, while the majority remained retrospective or simulation based. External or multicenter validation was reported in four studies, often using large cholecystectomy datasets.
The procedures most frequently studied were laparoscopic cholecystectomy (n = 9), followed by other hepatopancreatobiliary procedures (n = 7), gastric surgery (n = 6), colorectal surgery (n = 2), and hernia repair (n = 1). The remaining studies focused on general laparoscopic tasks without procedure-specific targeting.
A subset of studies focused on real-time phase recognition to support context-aware surgical systems. All studies aimed to enable advanced decision support and integration into autonomous or semi-autonomous surgical platforms.
Examples of AI models applied to surgical workflow recognition include automatic phase identification, operative step classification, and complexity prediction in procedures such as laparoscopic sleeve gastrectomy [23] or robotic distal gastrectomy [24], supporting intraoperative guidance and process optimization.

3.4. AI for Surgical Decision Support and Outcome Prediction

This category encompassed 14 studies focused on leveraging AI to support preoperative or intraoperative decision-making and to predict clinical outcomes. The most targeted outcomes included postoperative complications, surgical complexity, and length of hospital stay. Most of these studies (n = 11) were retrospective and centered on model development using clinical data, imaging, or a combination of both. The remaining studies included an algorithm development study, a feasibility study, and a comparative model analysis. Only two studies incorporated intraoperative evaluations, while three reported multicenter or multi-institutional validation, mainly in liver and gastric surgery. Deep learning methods, particularly CNNs, were employed to develop prediction algorithms. Several models integrated multimodal data sources, such as operative videos and radiological imaging, to enhance predictive accuracy. Reported predictive performance varied, with sensitivity and specificity ranging from 70% to 90% when predicting complications, conversion, or anastomotic risk.
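The tabular side of such prediction models, and the way sensitivity, specificity, and AUC are typically reported, can be sketched on synthetic data as follows; scikit-learn and a logistic regression are assumptions chosen for brevity, whereas the reviewed models were mostly deep-learning based.

```python
# Illustrative complication-risk model on synthetic perioperative data, reported
# with AUC, sensitivity, and specificity. Not any included study's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))   # e.g., age, BMI, ASA, operative time, blood loss, albumin
y = (X[:, 0] + 0.8 * X[:, 3] + rng.normal(size=500) > 1).astype(int)  # synthetic outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

prob = clf.predict_proba(X_te)[:, 1]
pred = (prob >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print(f"AUC={roc_auc_score(y_te, prob):.2f}  "
      f"sensitivity={tp / (tp + fn):.2f}  specificity={tn / (tn + fp):.2f}")
```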
These tools were primarily designed to assist in stratifying risks and optimizing surgical planning or intraoperative decisions. Most studies (n = 11, 79%) were laparoscopic, while one was robotic, and two included both approaches. Colorectal surgery was the most represented subspecialty (n = 7), followed by gastric (n = 2) and liver (n = 2) surgery, with isolated studies in cholecystectomy and appendectomy.
Notable examples include models that predict surgical complexity and postoperative outcomes in laparoscopic liver surgery [25] and tools to assess the risk of postoperative complications in patients undergoing laparoscopic radical gastrectomy [26].

3.5. AI for Augmented Reality and Navigation

A total of 11 studies explored the integration of artificial intelligence with AR systems or intraoperative navigation tools. Six studies were technical evaluations, four were feasibility studies, and one was a dataset description. All studies in this category were performed in laparoscopic settings.
These studies primarily addressed the fusion of preoperative imaging, such as computed tomography (CT) or magnetic resonance imaging (MRI), with real-time laparoscopic video to enhance anatomical orientation during surgery. AI methods were used to automate landmark detection, register 3D anatomical models to the intraoperative view, and improve surgeon perception of depth and spatial relationships. Four studies included intraoperative feasibility testing, but none performed external or multicenter validation. Several investigations also proposed AR-based systems to support surgical planning or intraoperative decision-making, particularly in liver, pancreatic, and colorectal surgery. Liver resection and portal mapping (n = 2) and colorectal applications (n = 2) were the most frequent, followed by single studies in gastrectomy and other procedures. These tools aimed to assist with tumor localization, vascular mapping, or dissection plane guidance, improving precision and potentially reducing intraoperative risks.
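The geometric core of such registration, aligning preoperative landmarks with their intraoperative counterparts and reporting a target registration error (TRE), can be sketched with a standard SVD-based rigid fit on synthetic points; real AR systems add deformable modelling, tracking, and rendering that this sketch omits.

```python
# Minimal Kabsch/SVD rigid registration and TRE on synthetic landmarks (millimetres).
import numpy as np

def rigid_register(src: np.ndarray, dst: np.ndarray):
    """Return rotation R and translation t minimizing ||R @ src + t - dst||."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

preop_landmarks = np.random.rand(6, 3) * 100      # landmarks on the CT-derived model
true_R = np.eye(3)
true_t = np.array([5.0, -3.0, 12.0])
intraop_landmarks = preop_landmarks @ true_R.T + true_t + np.random.normal(0, 0.5, (6, 3))

R, t = rigid_register(preop_landmarks, intraop_landmarks)
target = np.array([[40.0, 40.0, 40.0]])           # a point not used for registration
tre = np.linalg.norm((target @ R.T + t) - (target @ true_R.T + true_t))
print(f"TRE = {tre:.2f} mm")
```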
Recent developments include the integration of AI-enhanced AR systems for intraoperative navigation, such as real-time anatomical overlay during laparoscopic liver resection [27] and the identification of anatomical landmarks associated with postoperative pancreatic fistula during laparoscopic gastrectomy [28].

3.6. AI for Image Enhancement

A total of 8 studies focused on improving the visual quality of laparoscopic or robotic video through artificial intelligence-based image enhancement techniques. Four technical evaluation studies were included, along with 3 feasibility studies and 1 prospective observational study. These approaches aimed to address common intraoperative visibility challenges such as surgical smoke, lens fogging, image blur, and poor lighting. AI models were developed to process and clarify real-time video feeds, remove visual artifacts, and enhance image quality. Most studies were conducted in laparoscopic settings, while only one targeted robotic surgery. Three studies evaluated their systems in intraoperative conditions, but none included multicenter validation.
Some studies proposed context-aware systems capable of selectively displaying relevant structures or reducing visual overload by adapting output to the surgeon’s needs. Procedure-specific applications included vascular detection during laparoscopic cholecystectomy, perfusion assessment in colorectal surgery, and image enhancement during robotic gastric surgery. These enhancements aimed to improve intraoperative safety, efficiency, and diagnostic accuracy during minimally invasive procedures.
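As a simple, non-learning baseline that illustrates what a frame-enhancement step consumes and produces, the sketch below applies CLAHE contrast enhancement to the lightness channel of a stand-in frame with OpenCV; the AI systems cited above learn considerably richer mappings (e.g., desmoking or deblurring), which this example does not attempt.

```python
# Classical contrast-enhancement baseline on a single (synthetic) BGR frame.
import cv2
import numpy as np

frame = (np.random.rand(480, 854, 3) * 255).astype(np.uint8)   # stand-in laparoscopic frame

lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = cv2.merge((clahe.apply(l), a, b))
enhanced_bgr = cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR)
print(frame.shape, enhanced_bgr.shape)
```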
Some examples include real-time artery detection using image-guided techniques during laparoscopic cholecystectomy [29] and a self-learning cognitive robotic camera system that improved camera guidance efficiency with continued use [30].

3.7. Surgeon Perception, Preparedness, and Attitudes

Three studies explored the perspectives, knowledge, and preparedness of surgeons regarding the integration of artificial intelligence in surgical practice. These works used survey-based methodologies to assess attitudes toward AI, perceived benefits and limitations, and the level of familiarity with digital surgery concepts. The themes investigated included the knowledge gap between surgeons actively engaged in robotic surgery and those with less exposure to technology, with the former group demonstrating higher awareness and interest [31]. Other research examined cognitive and psychomotor load during training or simulated tasks, using biosignals such as electroencephalography (EEG) or eye-tracking to assess mental workload [32]. This thematic area highlights the importance of addressing human factors in the adoption of AI tools and the need for structured educational interventions.
A summary of the articles included is shown in Table 2.

3.8. Risk of Bias Assessment

Methodological quality assessment showed that across the included studies, 31.4% were rated at low risk of bias, 58% at moderate, and 10.6% at high.
External validation was reported in 20% of studies, and a multicenter design in 27.1%.
Among thematic domains, Workflow Recognition and Intraoperative Guidance and Augmented Reality and Navigation achieved the highest methodological rigor, frequently incorporating multicenter datasets and external testing.
Conversely, Skill Assessment and Training and Image Enhancement presented the greatest variability and limited validation, reflecting their predominantly pre-clinical or simulation-based nature.
Overall, the quality appraisal indicated a maturing yet still heterogeneous evidence base for artificial intelligence in minimally invasive surgery.
These results are shown in Table 3.

3.9. Quantitative Synthesis by Thematic Domain

To enhance comparability across heterogeneous studies, standardized quantitative metrics were extracted and summarized by thematic category (Table 4). For each domain, representative median values, validation characteristics, and methodological features were compiled. Overall, reported performance metrics were consistently high, with median accuracies ranging between 0.84 and 0.90 for classification tasks and median AUC values of 0.86 for predictive models.
However, external validation remained limited (20–30% of studies) and standardized reporting of evaluation metrics was inconsistent, particularly in pre-clinical or simulation-based works.
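The per-domain summary behind Table 4 amounts to grouping study-level metrics by thematic category and taking medians, as in the small pandas sketch below; all numbers shown are placeholders rather than values extracted from the included studies.

```python
# Illustrative per-domain aggregation of study-level metrics (placeholder values).
import pandas as pd

studies = pd.DataFrame({
    "category": ["detection", "detection", "skill", "workflow", "prediction"],
    "accuracy": [0.88, 0.91, 0.82, 0.86, None],
    "auc":      [None, None, None, None, 0.86],
    "external_validation": [False, True, False, True, False],
})

summary = studies.groupby("category").agg(
    median_accuracy=("accuracy", "median"),
    median_auc=("auc", "median"),
    external_validation_pct=("external_validation", "mean"),
)
print(summary)
```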
Detailed study-level information, including the risk-of-bias assessment tool and overall judgment, as well as AI model characteristics, dataset properties, validation type, and key performance metrics for each included study, are provided in the Supplementary Materials (Supplementary Tables S1 and S2).

4. Discussion

AI is reshaping minimally invasive surgery by addressing key challenges in training, performance assessment, and intraoperative decision-making. The increasing number of studies highlights growing interest, though the prevalence of technical and retrospective studies reflects the field’s exploratory stage rather than mature clinical integration.
Despite encouraging technical and performance metrics, few studies have demonstrated a measurable impact on real clinical outcomes such as postoperative complication rates, conversion to open surgery, operative time, or length of stay. Most available reports remain pre-clinical, retrospective, or simulation-based, which limits external validity and generalizability to diverse surgical environments. Moreover, intraoperative AI systems are rarely integrated into decision-making processes in real time, meaning their potential to enhance safety or efficiency has not yet been fully verified in patient populations. Translational progress will require robust prospective and multicenter studies explicitly designed to evaluate not only algorithmic accuracy but also clinical endpoints and workflow integration. Only through such validation can AI move from proof-of-concept to demonstrable improvement in surgical outcomes and patient care.
Standardized training and validation protocols are essential for safe and effective integration, as emphasized in recent literature [11,163].
Most AI applications have been developed in laparoscopic surgery, while robotic procedures remain less explored. This gap reflects the wider global use and data availability of laparoscopy, suggesting that as robotic adoption expands, future work should address its specific integration with AI.
In surgical skill assessment and training, AI has shown potential to automate evaluation and improve objectivity. Several models successfully classify surgical expertise levels or offer real-time feedback on instrument movements [20,21,99]. However, most studies rely on simulators and retrospective video analysis, which constrains external validity and limits translation into routine practice [164].
Intraoperative workflow recognition and guidance studies focus on phase detection, action recognition, or remaining surgery duration. While some models demonstrate real-time application potential [23,24,156], substantial variability in annotation standards, surgical procedures, and evaluation metrics continues to limit comparability and widespread clinical adoption.
Artificial intelligence applied to object and structure recognition has yielded encouraging results, particularly in enhancing intraoperative safety through the identification of anatomical landmarks and surgical instruments. Recent systems can recognize critical anatomy during cholecystectomy or identify nerves and vessels in complex procedures [15,16,17,19,56,57,58,59,123]. Despite these advances, generalizability across institutions, patient populations, and imaging modalities remains limited, and external validation is needed [165].
AR and navigation tools have incorporated AI to align preoperative imaging with intraoperative video and improve spatial orientation. Studies demonstrate feasibility in laparoscopic liver and colorectal surgery [28,148], though robust, real-time accuracy under diverse intraoperative conditions has yet to be achieved.
Efforts to enhance image quality show improved clarity and performance using AI-based systems in robotic and laparoscopic settings [29,30]. Still, real-world adoption remains limited by challenges in hardware integration, workflow disruption, and regulatory approval pathways.
AI applications for decision support and outcome prediction remain limited but are growing, with a few models supporting intraoperative decisions such as resection margins or anastomotic risk, while others integrate perioperative data to predict complications or long-term outcomes [25,26]. Nevertheless, these remain largely retrospective modeling efforts, underscoring the need for prospective validation in heterogeneous patient cohorts.
Finally, studies exploring surgeons’ perceptions toward AI reveal varying levels of knowledge, with robotic-trained surgeons showing greater awareness. Surgeons generally support AI’s integration but stress the importance of explainability and training [163,166,167]. These findings emphasize that end-user perspectives must inform system design and implementation strategies to facilitate acceptance and adoption.
From a clinical perspective, AI systems show the greatest immediate utility in recognition of anatomical landmarks, skill assessment in training programs, and workflow optimization, especially in the most common procedures, such as cholecystectomies and colorectal and gastric surgeries. Areas requiring further validation and refinement include outcome prediction models, AR-guided navigation, and autonomous camera control, where evidence remains preliminary.
Together, these findings suggest that while AI in surgery is advancing rapidly, its routine clinical adoption depends on standardization of reporting frameworks, prospective multicenter validation, seamless integration into surgical workflows, and responsiveness to surgeons’ practical needs.
The risk-of-bias assessment revealed a field that is technically advanced but methodologically uneven. Most studies demonstrated moderate risk, primarily due to limited dataset diversity, lack of prospective design, and incomplete external validation.
Encouragingly, the proportion of multicenter and prospectively validated works has grown in recent years, indicating a gradual shift toward clinically oriented and reproducible research. Nevertheless, progress toward clinical adoption will depend on standardization of performance metrics, transparent model reporting, and the design of multicenter prospective trials evaluating true patient-level outcomes.
These findings underscore that while AI in minimally invasive surgery is rapidly evolving, robust methodological frameworks remain essential to translate innovation into measurable improvements in surgical safety and effectiveness.
Beyond technical validation, the implementation of AI raises important ethical and legal considerations. These include issues of informed consent for AI-assisted decision-making, accountability in case of adverse outcomes, algorithmic bias, and equitable access. Establishing clear regulatory pathways and institutional policies will be essential to ensure safe and ethical adoption.
Although most studies remain in an early exploratory stage, the evidence summarized in this review already outlines what can be expected from the integration of artificial intelligence in surgical practice. The impact on intraoperative decision-making is particularly evident, as AI systems demonstrate the capacity to enhance safety through the recognition of critical anatomy, anticipation of adverse events, and real-time guidance during complex manoeuvres. Beyond the operating room, the studies analysed suggest that AI will likely become an integral part of the entire patient pathway, supporting preoperative risk assessment, individualized surgical planning, and postoperative monitoring. By linking these stages, AI has the potential to establish a continuous feedback cycle that informs surgical judgment, optimizes patient outcomes, and advances the overall quality and consistency of care.
This study has several limitations. The included studies exhibited considerable heterogeneity in design, outcome metrics, and validation strategies, which precluded quantitative synthesis. Most were single-center, retrospective, and lacked external or prospective validation. Laparoscopic procedures, particularly cholecystectomy, were also overrepresented compared with other surgical domains, which may limit generalizability. In addition, publication bias may have favoured technically successful or positive studies. Finally, the review was not pre-registered. Nevertheless, these limitations reflect the current state of the literature rather than methodological flaws of this review and underscore the need for future prospective, standardized, and multicenter investigations.

5. Conclusions

As AI continues to evolve, its successful implementation will depend not only on technological advancement but also on surgeon training and trust in these systems.
AI in robotic and laparoscopic surgery has moved beyond the experimental stage of prototypes and theoretical models; it is beginning to find practical clinical applications. Even if its use is still uneven, this review shows that current applications already coexist with numerous emerging opportunities for investigation. It also stresses the importance for surgeons to remain engaged with ongoing advances in AI, while critically addressing ethical, legal, and regulatory challenges that may arise.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jpm15110562/s1, Table S1: Study-level Risk of Bias Assessment; Table S2: Study-level AI Model Characteristics and Performance Metrics.

Author Contributions

Conceptualization, L.G. and R.d.l.P.L.; methodology, L.G., R.d.l.P.L. and R.A.G.; software, R.A.G.; validation, W.S., I.A.G.d.R. and D.A.D.C.; investigation, M.F.L., D.A.D.C., J.G.F. and L.G.; data curation, R.A.G., I.A.G.d.R. and W.S.; writing—original draft preparation, L.G. and R.d.l.P.L.; writing—review and editing, L.G., R.d.l.P.L. and D.A.D.C.; visualization, J.G.F. and M.F.L.; supervision, I.A.G.d.R., R.d.l.P.L. and L.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

During the preparation of this manuscript/study, the authors used ChatGPT (GPT-5), OpenAI, San Francisco, CA, USA for the purposes of assisting with language editing and creating the tables. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: artificial intelligence
CNN: convolutional neural network
AR: augmented reality
TME: total mesorectal excision
TaTME: transanal total mesorectal excision
CVS: critical view of safety
LLR: laparoscopic liver resection
TAPP: transabdominal preperitoneal hernia repair
LC: laparoscopic cholecystectomy
TEP: totally extraperitoneal
VR: virtual reality
RALIHR: robotic-assisted laparoscopic inguinal hernia repair
NIR: near-infrared
MIS: minimally invasive surgery
IoU: intersection over union
Dice: Dice similarity coefficient
F1: F1-score
AUC: area under the receiver operating characteristic curve
TRE: target registration error
PSNR: peak signal-to-noise ratio
SSIM: structural similarity index
FPS: frames per second

References

  1. Revolutionizing Patient Care: The Harmonious Blend of Artificial Intelligence and Surgical Tradition. Available online: https://www.webofscience.com/wos/alldb/full-record/WOS:001179111800002 (accessed on 6 June 2025).
  2. Liao, W.; Zhu, Y.; Zhang, H.; Wang, D.; Zhang, L.; Chen, T.; Zhou, R.; Ye, Z. Artificial Intelligence-Assisted Phase Recognition and Skill Assessment in Laparoscopic Surgery: A Systematic Review. Front. Surg. 2025, 12, 1551838. [Google Scholar] [CrossRef]
  3. Hatcher, A.J.; Beneville, B.T.; Awad, M.M. The Evolution of Surgical Skills Simulation Education: Robotic Skills. Surgery 2025, 181, 109173. [Google Scholar] [CrossRef]
  4. Boal, M.W.E.; Anastasiou, D.; Tesfai, F.; Ghamrawi, W.; Mazomenos, E.; Curtis, N.; Collins, J.W.; Sridhar, A.; Kelly, J.; Stoyanov, D.; et al. Evaluation of Objective Tools and Artificial Intelligence in Robotic Surgery Technical Skills Assessment: A Systematic Review. Br. J. Surg. 2024, 111, znad331. [Google Scholar] [CrossRef] [PubMed]
  5. Knudsen, J.E.; Ghaffar, U.; Ma, R.; Hung, A.J. Clinical Applications of Artificial Intelligence in Robotic Surgery. J. Robot Surg. 2024, 18, 1–10. [Google Scholar] [CrossRef] [PubMed]
  6. Zhang, C.; Hallbeck, M.S.; Salehinejad, H.; Thiels, C. The Integration of Artificial Intelligence in Robotic Surgery: A Narrative Review. Surgery 2024, 176, 552–557. [Google Scholar] [CrossRef] [PubMed]
  7. Gumbs, A.A.; Croner, R.; Abu-Hilal, M.; Bannone, E.; Ishizawa, T.; Spolverato, G.; Frigerio, I.; Siriwardena, A.; Messaoudi, N. Surgomics and the Artificial Intelligence, Radiomics, Genomics, Oncopathomics and Surgomics (AiRGOS) Project. Art. Int. Surg. 2023, 3, 180–185. [Google Scholar] [CrossRef]
  8. Guni, A.; Varma, P.; Zhang, J.; Fehervari, M.; Ashrafian, H. Artificial Intelligence in Surgery: The Future Is Now. Eur. Surg. Res. 2024, 65, 22–39. [Google Scholar] [CrossRef]
  9. Panesar, S.; Cagle, Y.; Chander, D.; Morey, J.; Fernandez-Miranda, J.; Kliot, M. Artificial Intelligence and the Future of Surgical Robotics. Ann. Surg. 2019, 270, 223–226. [Google Scholar] [CrossRef]
  10. Schijven, M.P.; Kroh, M. Editorial: Harnessing the Power of AI in Health Care: Benefits, Risks, and Preparation. Surg. Innov. 2023, 30, 417–418. [Google Scholar] [CrossRef]
  11. O’Sullivan, S.; Leonard, S.; Holzinger, A.; Allen, C.; Battaglia, F.; Nevejans, N.; van Leeuwen, F.W.B.; Sajid, M.I.; Friebe, M.; Ashrafian, H.; et al. Operational Framework and Training Standard Requirements for AI-Empowered Robotic Surgery. Int. J. Med. Robot. Comput. Assist. Surg. 2020, 16, 1–13. [Google Scholar] [CrossRef]
  12. Vasey, B.; Lippert, K.A.N.; Khan, D.Z.; Ibrahim, M.; Koh, C.H.; Layard Horsfall, H.; Lee, K.S.; Williams, S.; Marcus, H.J.; McCulloch, P. Intraoperative Applications of Artificial Intelligence in Robotic Surgery: A Scoping Review of Current Development Stages and Levels of Autonomy. Ann. Surg. 2023, 278, 896–903. [Google Scholar] [CrossRef]
  13. Moglia, A.; Georgiou, K.; Georgiou, E.; Satava, R.M.; Cuschieri, A. A Systematic Review on Artificial Intelligence in Robot-Assisted Surgery. Int. J. Surg. 2021, 95, 106151. [Google Scholar] [CrossRef] [PubMed]
  14. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef] [PubMed]
  15. Petracchi, E.J.; Olivieri, S.E.; Varela, J.; Canullan, C.M.; Zandalazini, H.; Ocampo, C.; Quesada, B.M. Use of Artificial Intelligence in the Detection of the Critical View of Safety During Laparoscopic Cholecystectomy. J. Gastrointest. Surg. 2024, 28, 877–879. [Google Scholar] [CrossRef] [PubMed]
  16. Schnelldorfer, T.; Castro, J.; Goldar-Najafi, A.; Liu, L. Development of a Deep Learning System for Intra-Operative Identification of Cancer Metastases. Ann. Surg. 2024, 280, 1006–1013. [Google Scholar] [CrossRef]
  17. Chen, G.; Xie, Y.; Yang, B.; Tan, J.N.; Zhong, G.; Zhong, L.; Zhou, S.; Han, F. Artificial Intelligence Model for Perigastric Blood Vessel Recognition During Laparoscopic Radical Gastrectomy with D2 Lymphadenectomy in Locally Advanced Gastric Cancer. BJS Open 2025, 9, zrae158. [Google Scholar] [CrossRef]
  18. Tashiro, Y.; Aoki, T.; Kobayashi, N.; Tomioka, K.; Kumazu, Y.; Akabane, M.; Shibata, H.; Hirai, T.; Matsuda, K.; Kusano, T. Color-Coded Laparoscopic Liver Resection Using Artificial Intelligence: A Preliminary Study. J. Hepatobiliary Pancreat Sci. 2024, 31, 67–68. [Google Scholar] [CrossRef]
  19. Ryu, S.; Goto, K.; Kitagawa, T.; Kobayashi, T.; Shimada, J.; Ito, R.; Nakabayashi, Y. Real-Time Artificial Intelligence Navigation-Assisted Anatomical Recognition in Laparoscopic Colorectal Surgery. J. Gastrointest. Surg. 2023, 27, 3080–3082. [Google Scholar] [CrossRef]
  20. Wu, S.; Tang, M.; Liu, J.; Qin, D.; Wang, Y.; Zhai, S.; Bi, E.; Li, Y.; Wang, C.; Xiong, Y.; et al. Impact of an AI-Based Laparoscopic Cholecystectomy Coaching Program on the Surgical Performance: A Randomized Controlled Trial. Int. J. Surg. 2024, 110, 7816–7823. [Google Scholar] [CrossRef]
  21. Halperin, L.; Sroka, G.; Zuckerman, I.; Laufer, S. Automatic Performance Evaluation of the Intracorporeal Suture Exercise. Int. J. Comput. Assist. Radiol. Surg. 2023, 19, 83–86. [Google Scholar] [CrossRef]
  22. Chen, G.; Li, L.; Hubert, J.; Luo, B.; Yang, K.; Wang, X. Effectiveness of a Vision-Based Handle Trajectory Monitoring System in Studying Robotic Suture Operation. J. Robot Surg. 2023, 17, 2791–2798. [Google Scholar] [CrossRef]
  23. Hashimoto, D.A.; Rosman, G.; Witkowski, E.R.; Stafford, C.; Navarette-Welton, A.J.; Rattner, D.W.; Lillemoe, K.D.; Rus, D.L.; Meireles, O.R. Computer Vision Analysis of Intraoperative Video: Automated Recognition of Operative Steps in Laparoscopic Sleeve Gastrectomy. Ann. Surg. 2019, 270, 414–421. [Google Scholar] [CrossRef] [PubMed]
  24. Takeuchi, M.; Kawakubo, H.; Tsuji, T.; Maeda, Y.; Matsuda, S.; Fukuda, K.; Nakamura, R.; Kitagawa, Y. Evaluation of Surgical Complexity by Automated Surgical Process Recognition in Robotic Distal Gastrectomy Using Artificial Intelligence. Surg. Endosc. 2023, 37, 4517–4524. [Google Scholar] [CrossRef] [PubMed]
  25. Lopez-Lopez, V.; Morise, Z.; Albaladejo-González, M.; Gavara, C.G.; Goh, B.K.P.; Koh, Y.X.; Paul, S.J.; Hilal, M.A.; Mishima, K.; Krüger, J.A.P.; et al. Explainable Artificial Intelligence Prediction-Based Model in Laparoscopic Liver Surgery for Segments 7 and 8: An International Multicenter Study. Surg. Endosc. 2024, 38, 2411–2422. [Google Scholar] [CrossRef] [PubMed]
  26. Wang, H.N.; An, J.H.; Zong, L. Advances in Artificial Intelligence for Predicting Complication Risks Post-Laparoscopic Radical Gastrectomy for Gastric Cancer: A Significant Leap Forward. World J. Gastroenterol. 2024, 30, 4669–4671. [Google Scholar] [CrossRef]
  27. Guan, P.; Luo, H.; Guo, J.; Zhang, Y.; Jia, F. Intraoperative Laparoscopic Liver Surface Registration with Preoperative CT Using Mixing Features and Overlapping Region Masks. Int. J. Comput. Assist. Radiol. Surg. 2023, 18, 1521–1531. [Google Scholar] [CrossRef]
  28. Aoyama, Y.; Matsunobu, Y.; Etoh, T.; Suzuki, K.; Fujita, S.; Aiba, T.; Fujishima, H.; Empuku, S.; Kono, Y.; Endo, Y.; et al. Correction: Artificial Intelligence for Surgical Safety During Laparoscopic Gastrectomy for Gastric Cancer: Indication of Anatomical Landmarks Related to Postoperative Pancreatic Fistula Using Deep Learning. Surg. Endosc. 2024, 38, 6203. [Google Scholar] [CrossRef]
  29. Akbari, H.; Kosugi, Y.; Khorgami, Z. Image-Guided Preparation of the Calot’s Triangle in Laparoscopic Cholecystectomy. In Proceedings of the 31st Annual International Conference of the IEEE Engineering in Medicine and Biology Society: Engineering the Future of Biomedicine, EMBC 2009, Minneapolis, MN, USA, 3–6 September 2009; pp. 5649–5652. [Google Scholar] [CrossRef]
  30. Wagner, M.; Bihlmaier, A.; Kenngott, H.G.; Mietkowski, P.; Scheikl, P.M.; Bodenstedt, S.; Schiepe-Tiska, A.; Vetter, J.; Nickel, F.; Speidel, S.; et al. A Learning Robot for Cognitive Camera Control in Minimally Invasive Surgery. Surg. Endosc. 2021, 35, 5365–5374. [Google Scholar] [CrossRef]
  31. Acosta-Mérida, M.A.; Sánchez-Guillén, L.; Álvarez Gallego, M.; Barber, X.; Bellido Luque, J.A.; Sánchez Ramos, A. Encuesta Nacional Sobre La Gobernanza de Datos y Cirugía Digital: Desafíos y Oportunidades de Los Cirujanos En La Era de La Inteligencia Artificial. Cir. Esp. 2025, 103, 143–152. [Google Scholar] [CrossRef]
  32. Shafiei, S.B.; Shadpour, S.; Mohler, J.L. An Integrated Electroencephalography and Eye-Tracking Analysis Using EXtreme Gradient Boosting for Mental Workload Evaluation in Surgery. Hum. Factors J. Hum. Factors Ergon. Soc. 2025, 67, 464–484. [Google Scholar] [CrossRef]
  33. Khalid, M.U.; Laplante, S.; Masino, C.; Alseidi, A.; Jayaraman, S.; Zhang, H.; Mashouri, P.; Protserov, S.; Hunter, J.; Brudno, M.; et al. Use of Artificial Intelligence for Decision-Support to Avoid High-Risk Behaviors During Laparoscopic Cholecystectomy. Surg. Endosc. 2023, 37, 9467–9475. [Google Scholar] [CrossRef] [PubMed]
  34. Ward, T.M.; Hashimoto, D.A.; Ban, Y.; Rosman, G.; Meireles, O.R. Artificial Intelligence Prediction of Cholecystectomy Operative Course from Automated Identification of Gallbladder Inflammation. Surg. Endosc. 2022, 36, 6832–6840. [Google Scholar] [CrossRef]
  35. Orimoto, H.; Hirashita, T.; Ikeda, S.; Amano, S.; Kawamura, M.; Kawano, Y.; Takayama, H.; Masuda, T.; Endo, Y.; Matsunobu, Y.; et al. Development of an Artificial Intelligence System to Indicate Intraoperative Findings of Scarring in Laparoscopic Cholecystectomy for Cholecystitis. Surg. Endosc. 2025, 39, 1379–1387. [Google Scholar] [CrossRef] [PubMed]
  36. Kolbinger, F.R.; Rinner, F.M.; Jenke, A.C.; Carstens, M.; Krell, S.; Leger, S.; Distler, M.; Weitz, J.; Speidel, S.; Bodenstedt, S. Anatomy Segmentation in Laparoscopic Surgery: Comparison of Machine Learning and Human Expertise—An Experimental Study. Int. J. Surg. 2023, 109, 2962–2974. [Google Scholar] [CrossRef] [PubMed]
  37. Sato, Y.; Sese, J.; Matsuyama, T.; Onuki, M.; Mase, S.; Okuno, K.; Saito, K.; Fujiwara, N.; Hoshino, A.; Kawada, K.; et al. Preliminary Study for Developing a Navigation System for Gastric Cancer Surgery Using Artificial Intelligence. Surg. Today 2022, 52, 1753–1758. [Google Scholar] [CrossRef]
  38. Igaki, T.; Kitaguchi, D.; Kojima, S.; Hasegawa, H.; Takeshita, N.; Mori, K.; Kinugasa, Y.; Ito, M. Artificial Intelligence-Based Total Mesorectal Excision Plane Navigation in Laparoscopic Colorectal Surgery. Dis. Colon Rectum 2022, 65, E329–E333. [Google Scholar] [CrossRef]
  39. Jearanai, S.; Wangkulangkul, P.; Sae-Lim, W.; Cheewatanakornkul, S. Development of a Deep Learning Model for Safe Direct Optical Trocar Insertion in Minimally Invasive Surgery: An Innovative Method to Prevent Trocar Injuries. Surg. Endosc. 2023, 37, 7295–7304. [Google Scholar] [CrossRef]
  40. Oh, N.; Kim, B.; Kim, T.; Rhu, J.; Kim, J.; Choi, G.S. Real-Time Segmentation of Biliary Structure in Pure Laparoscopic Donor Hepatectomy. Sci. Rep. 2024, 14, 22508. [Google Scholar] [CrossRef]
  41. Benavides, D.; Cisnal, A.; Fontúrbel, C.; de la Fuente, E.; Fraile, J.C. Real-Time Tool Localization for Laparoscopic Surgery Using Convolutional Neural Network. Sensors 2024, 24, 4191. [Google Scholar] [CrossRef]
  42. Gazis, A.; Karaiskos, P.; Loukas, C. Surgical Gesture Recognition in Laparoscopic Tasks Based on the Transformer Network and Self-Supervised Learning. Bioengineering 2022, 9, 737. [Google Scholar] [CrossRef]
  43. Tomioka, K.; Aoki, T.; Kobayashi, N.; Tashiro, Y.; Kumazu, Y.; Shibata, H.; Hirai, T.; Yamazaki, T.; Saito, K.; Yamazaki, K.; et al. Development of a Novel Artificial Intelligence System for Laparoscopic Hepatectomy. Anticancer. Res. 2023, 43, 5235–5243. [Google Scholar] [CrossRef]
  44. Cui, P.; Zhao, S.; Chen, W. Identification of the Vas Deferens in Laparoscopic Inguinal Hernia Repair Surgery Using the Convolutional Neural Network. J. Healthc. Eng. 2021, 2021, 1–10. [Google Scholar] [CrossRef] [PubMed]
  45. Memida, S.; Miura, S. Identification of Surgical Forceps Using YOLACT++ in Different Lighted Environments. In Proceedings of the 2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Sydney, Australia, 24–27 July 2023; IEEE: New York, NY, USA; pp. 1–4. [Google Scholar]
  46. Wesierski, D.; Jezierska, A. Instrument Detection and Pose Estimation with Rigid Part Mixtures Model in Video-Assisted Surgeries. Med. Image Anal. 2018, 46, 244–265. [Google Scholar] [CrossRef] [PubMed]
  47. Jurosch, F.; Wagner, L.; Jell, A.; Islertas, E.; Wilhelm, D.; Berlet, M. Extra-Abdominal Trocar and Instrument Detection for Enhanced Surgical Workflow Understanding. Int. J. Comput. Assist. Radiol. Surg. 2024, 19, 1939–1945. [Google Scholar] [CrossRef] [PubMed]
  48. Sánchez-Brizuela, G.; Santos-Criado, F.J.; Sanz-Gobernado, D.; de la Fuente-López, E.; Fraile, J.C.; Pérez-Turiel, J.; Cisnal, A. Gauze Detection and Segmentation in Minimally Invasive Surgery Video Using Convolutional Neural Networks. Sensors 2022, 22, 5180. [Google Scholar] [CrossRef]
  49. Lai, S.-L.; Chen, C.-S.; Lin, B.-R.; Chang, R.-F. Intraoperative Detection of Surgical Gauze Using Deep Convolutional Neural Network. Ann. Biomed. Eng. 2023, 51, 352–362. [Google Scholar] [CrossRef]
  50. Ehrlich, J.; Jamzad, A.; Asselin, M.; Rodgers, J.R.; Kaufmann, M.; Haidegger, T.; Rudan, J.; Mousavi, P.; Fichtinger, G.; Ungi, T. Sensor-Based Automated Detection of Electrosurgical Cautery States. Sensors 2022, 22, 5808. [Google Scholar] [CrossRef]
  51. Nwoye, C.I.; Mutter, D.; Marescaux, J.; Padoy, N. Weakly Supervised Convolutional LSTM Approach for Tool Tracking in Laparoscopic Videos. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 1059–1067. [Google Scholar] [CrossRef]
  52. Carstens, M.; Rinner, F.M.; Bodenstedt, S.; Jenke, A.C.; Weitz, J.; Distler, M.; Speidel, S.; Kolbinger, F.R. The Dresden Surgical Anatomy Dataset for Abdominal Organ Segmentation in Surgical Data Science. Sci. Data 2023, 10, 3. [Google Scholar] [CrossRef]
  53. Yin, Y.; Luo, S.; Zhou, J.; Kang, L.; Chen, C.Y.C. LDCNet: Lightweight Dynamic Convolution Network for Laparoscopic Procedures Image Segmentation. Neural. Netw. 2024, 170, 441–452. [Google Scholar] [CrossRef]
  54. Tashiro, Y.; Aoki, T.; Kobayashi, N.; Tomioka, K.; Saito, K.; Matsuda, K.; Kusano, T. Novel Navigation for Laparoscopic Cholecystectomy Fusing Artificial Intelligence and Indocyanine Green Fluorescent Imaging. J. Hepatobiliary Pancreat Sci. 2024, 31, 305–307. [Google Scholar] [CrossRef] [PubMed]
  55. Kitaguchi, D.; Harai, Y.; Kosugi, N.; Hayashi, K.; Kojima, S.; Ishikawa, Y.; Yamada, A.; Hasegawa, H.; Takeshita, N.; Ito, M. Artificial Intelligence for the Recognition of Key Anatomical Structures in Laparoscopic Colorectal Surgery. Br. J. Surg. 2023, 110, 1355–1358. [Google Scholar] [CrossRef] [PubMed]
  56. Han, F.; Zhong, G.; Zhi, S.; Han, N.; Jiang, Y.; Tan, J.; Zhong, L.; Zhou, S. Artificial Intelligence Recognition System of Pelvic Autonomic Nerve During Total Mesorectal Excision. Dis. Colon Rectum 2024, 68, 308–315. [Google Scholar] [CrossRef] [PubMed]
  57. Frey, S.; Facente, F.; Wei, W.; Ekmekci, E.S.; Séjor, E.; Baqué, P.; Durand, M.; Delingette, H.; Bremond, F.; Berthet-Rayne, P.; et al. Optimizing Intraoperative AI: Evaluation of YOLOv8 for Real-Time Recognition of Robotic and Laparoscopic Instruments. J. Robot Surg. 2025, 19, 131. [Google Scholar] [CrossRef]
  58. ElMoaqet, H.; Janini, R.; Ryalat, M.; Al-Refai, G.; Abdulbaki Alshirbaji, T.; Jalal, N.A.; Neumuth, T.; Moeller, K.; Navab, N. Using Masked Image Modelling Transformer Architecture for Laparoscopic Surgical Tool Classification and Localization. Sensors 2025, 25, 3017. [Google Scholar] [CrossRef]
  59. Korndorffer, J.R.; Hawn, M.T.; Spain, D.A.; Knowlton, L.M.; Azagury, D.E.; Nassar, A.K.; Lau, J.N.; Arnow, K.D.; Trickey, A.W.; Pugh, C.M. Situating Artificial Intelligence in Surgery: A Focus on Disease Severity. Ann. Surg. 2020, 272, 523–528. [Google Scholar] [CrossRef]
  60. Park, S.H.; Park, H.M.; Baek, K.R.; Ahn, H.M.; Lee, I.Y.; Son, G.M. Artificial Intelligence Based Real-Time Microcirculation Analysis System for Laparoscopic Colorectal Surgery. World J. Gastroenterol. 2020, 26, 6945–6962. [Google Scholar] [CrossRef]
  61. Ryu, K.; Kitaguchi, D.; Nakajima, K.; Ishikawa, Y.; Harai, Y.; Yamada, A.; Lee, Y.; Hayashi, K.; Kosugi, N.; Hasegawa, H.; et al. Deep Learning-Based Vessel Automatic Recognition for Laparoscopic Right Hemicolectomy. Surg. Endosc. 2024, 38, 171–178. [Google Scholar] [CrossRef]
  62. Zygomalas, A.; Kalles, D.; Katsiakis, N.; Anastasopoulos, A.; Skroubis, G. Artificial Intelligence Assisted Recognition of Anatomical Landmarks and Laparoscopic Instruments in Transabdominal Preperitoneal Inguinal Hernia Repair. Surg. Innov. 2024, 31, 178–184. [Google Scholar] [CrossRef]
  63. Mita, K.; Kobayashi, N.; Takahashi, K.; Sakai, T.; Shimaguchi, M.; Kouno, M.; Toyota, N.; Hatano, M.; Toyota, T.; Sasaki, J. Anatomical Recognition of Dissection Layers, Nerves, Vas Deferens, and Microvessels Using Artificial Intelligence During Transabdominal Preperitoneal Inguinal Hernia Repair. Hernia 2025, 29, 1–5. [Google Scholar] [CrossRef]
  64. Horita, K.; Hida, K.; Itatani, Y.; Fujita, H.; Hidaka, Y.; Yamamoto, G.; Ito, M.; Obama, K. Real-Time Detection of Active Bleeding in Laparoscopic Colectomy Using Artificial Intelligence. Surg. Endosc. 2024, 38, 3461–3469. [Google Scholar] [CrossRef] [PubMed]
  65. Kinoshita, K.; Maruyama, T.; Kobayashi, N.; Imanishi, S.; Maruyama, M.; Ohira, G.; Endo, S.; Tochigi, T.; Kinoshita, M.; Fukui, Y.; et al. An Artificial Intelligence-Based Nerve Recognition Model Is Useful as Surgical Support Technology and as an Educational Tool in Laparoscopic and Robot-Assisted Rectal Cancer Surgery. Surg. Endosc. 2024, 38, 5394–5404. [Google Scholar] [CrossRef] [PubMed]
  66. Takeuchi, M.; Collins, T.; Lipps, C.; Haller, M.; Uwineza, J.; Okamoto, N.; Nkusi, R.; Marescaux, J.; Kawakubo, H.; Kitagawa, Y.; et al. Towards Automatic Verification of the Critical View of the Myopectineal Orifice with Artificial Intelligence. Surg. Endosc. 2023, 37, 4525–4534. [Google Scholar] [CrossRef] [PubMed]
  67. Une, N.; Kobayashi, S.; Kitaguchi, D.; Sunakawa, T.; Sasaki, K.; Ogane, T.; Hayashi, K.; Kosugi, N.; Kudo, M.; Sugimoto, M.; et al. Intraoperative Artificial Intelligence System Identifying Liver Vessels in Laparoscopic Liver Resection: A Retrospective Experimental Study. Surg. Endosc. 2024, 38, 1088–1095. [Google Scholar] [CrossRef]
  68. Kojima, S.; Kitaguchi, D.; Igaki, T.; Nakajima, K.; Ishikawa, Y.; Harai, Y.; Yamada, A.; Lee, Y.; Hayashi, K.; Kosugi, N.; et al. Deep-Learning-Based Semantic Segmentation of Autonomic Nerves from Laparoscopic Images of Colorectal Surgery: An Experimental Pilot Study. Int. J. Surg. 2023, 109, 813–820. [Google Scholar] [CrossRef]
  69. Nakanuma, H.; Endo, Y.; Fujinaga, A.; Kawamura, M.; Kawasaki, T.; Masuda, T.; Hirashita, T.; Etoh, T.; Shinozuka, K.; Matsunobu, Y.; et al. An Intraoperative Artificial Intelligence System Identifying Anatomical Landmarks for Laparoscopic Cholecystectomy: A Prospective Clinical Feasibility Trial (J-SUMMIT-C-01). Surg. Endosc. 2023, 37, 1933–1942. [Google Scholar] [CrossRef]
  70. Loukas, C.; Gazis, A.; Schizas, D. Multiple Instance Convolutional Neural Network for Gallbladder Assessment from Laparoscopic Images. Int. J. Med. Robot. Comput. Assist. Surg. 2022, 18, e2445. [Google Scholar] [CrossRef]
  71. Endo, Y.; Tokuyasu, T.; Mori, Y.; Asai, K.; Umezawa, A.; Kawamura, M.; Fujinaga, A.; Ejima, A.; Kimura, M.; Inomata, M. Impact of AI System on Recognition for Anatomical Landmarks Related to Reducing Bile Duct Injury During Laparoscopic Cholecystectomy. Surg. Endosc. 2023, 37, 5752–5759. [Google Scholar] [CrossRef]
  72. Fried, G.M.; Ortenzi, M.; Dayan, D.; Nizri, E.; Mirkin, Y.; Maril, S.; Asselmann, D.; Wolf, T. Surgical Intelligence Can Lead to Higher Adoption of Best Practices in Minimally Invasive Surgery. Ann. Surg. 2024, 280, 525–534. [Google Scholar] [CrossRef]
  73. Mascagni, P.; Vardazaryan, A.; Alapatt, D.; Urade, T.; Emre, T.; Fiorillo, C.; Pessaux, P.; Mutter, D.; Marescaux, J.; Costamagna, G.; et al. Artificial Intelligence for Surgical Safety Automatic Assessment of the Critical View of Safety in Laparoscopic Cholecystectomy Using Deep Learning. Ann. Surg. 2022, 275, 955–961. [Google Scholar] [CrossRef]
  74. Fujinaga, A.; Endo, Y.; Etoh, T.; Kawamura, M.; Nakanuma, H.; Kawasaki, T.; Masuda, T.; Hirashita, T.; Kimura, M.; Matsunobu, Y.; et al. Development of a Cross-Artificial Intelligence System for Identifying Intraoperative Anatomical Landmarks and Surgical Phases During Laparoscopic Cholecystectomy. Surg. Endosc. 2023, 37, 6118–6128. [Google Scholar] [CrossRef]
  75. Kawamura, M.; Endo, Y.; Fujinaga, A.; Orimoto, H.; Amano, S.; Kawasaki, T.; Kawano, Y.; Masuda, T.; Hirashita, T.; Kimura, M.; et al. Development of an Artificial Intelligence System for Real-Time Intraoperative Assessment of the Critical View of Safety in Laparoscopic Cholecystectomy. Surg. Endosc. 2023, 37, 8755–8763. [Google Scholar] [CrossRef]
  76. Tokuyasu, T.; Iwashita, Y.; Matsunobu, Y.; Kamiyama, T.; Ishikake, M.; Sakaguchi, S.; Ebe, K.; Tada, K.; Endo, Y.; Etoh, T.; et al. Development of an Artificial Intelligence System Using Deep Learning to Indicate Anatomical Landmarks During Laparoscopic Cholecystectomy. Surg. Endosc. 2021, 35, 1651–1658. [Google Scholar] [CrossRef]
  77. Zhang, K.; Qiao, Z.; Yang, L.; Zhang, T.; Liu, F.; Sun, D.; Xie, T.; Guo, L.; Lu, C. Computer-Vision-Based Artificial Intelligence for Detection and Recognition of Instruments and Organs During Radical Laparoscopic Gastrectomy for Gastric Cancer: A Multicenter Study. Chin. J. Gastrointest. Surg. 2024, 27, 464–470. [Google Scholar] [CrossRef]
  78. Ortenzi, M.; Rapoport Ferman, J.; Antolin, A.; Bar, O.; Zohar, M.; Perry, O.; Asselmann, D.; Wolf, T. A Novel High Accuracy Model for Automatic Surgical Workflow Recognition Using Artificial Intelligence in Laparoscopic Totally Extraperitoneal Inguinal Hernia Repair (TEP). Surg. Endosc. 2023, 37, 8818–8828. [Google Scholar] [CrossRef] [PubMed]
  79. Belmar, F.; Gaete, M.I.; Escalona, G.; Carnier, M.; Durán, V.; Villagrán, I.; Asbun, D.; Cortés, M.; Neyem, A.; Crovari, F.; et al. Artificial Intelligence in Laparoscopic Simulation: A Promising Future for Large-Scale Automated Evaluations. Surg. Endosc. 2023, 37, 4942–4946. [Google Scholar] [CrossRef] [PubMed]
  80. Ismail Fawaz, H.; Forestier, G.; Weber, J.; Idoumghar, L.; Muller, P.A. Accurate and Interpretable Evaluation of Surgical Skills from Kinematic Data Using Fully Convolutional Neural Networks. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 1611–1617. [Google Scholar] [CrossRef]
  81. Nguyen, X.A.; Ljuhar, D.; Pacilli, M.; Nataraja, R.M.; Chauhan, S. Surgical Skill Levels: Classification and Analysis Using Deep Neural Network Model and Motion Signals. Comput. Methods Programs Biomed. 2019, 177, 1–8. [Google Scholar] [CrossRef]
82. SATR-DL: Improving Surgical Skill Assessment and Task Recognition in Robot-Assisted Surgery with Deep Neural Networks. Available online: https://www.webofscience.com/wos/alldb/full-record/WOS:000596231902067 (accessed on 25 May 2025).
  83. Funke, I.; Mees, S.T.; Weitz, J.; Speidel, S. Video-Based Surgical Skill Assessment Using 3D Convolutional Neural Networks. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 1217–1225. [Google Scholar] [CrossRef]
  84. Partridge, R.W.; Hughes, M.A.; Brennan, P.M.; Hennessey, I.A.M. Accessible Laparoscopic Instrument Tracking (“InsTrac”): Construct Validity in a Take-Home Box Simulator. J. Laparoendosc. Adv. Surg. Tech. 2014, 24, 578–583. [Google Scholar] [CrossRef]
  85. Derathé, A.; Reche, F.; Guy, S.; Charrière, K.; Trilling, B.; Jannin, P.; Moreau-Gaudry, A.; Gibaud, B.; Voros, S. LapEx: A New Multimodal Dataset for Context Recognition and Practice Assessment in Laparoscopic Surgery. Sci. Data 2025, 12, 342. [Google Scholar] [CrossRef]
  86. Bogar, P.Z.; Virag, M.; Bene, M.; Hardi, P.; Matuz, A.; Schlegl, A.T.; Toth, L.; Molnar, F.; Nagy, B.; Rendeki, S.; et al. Validation of a Novel, Low-Fidelity Virtual Reality Simulator and an Artificial Intelligence Assessment Approach for Peg Transfer Laparoscopic Training. Sci. Rep. 2024, 14, 16702. [Google Scholar] [CrossRef]
  87. Matsumoto, S.; Kawahira, H.; Fukata, K.; Doi, Y.; Kobayashi, N.; Hosoya, Y.; Sata, N. Laparoscopic Distal Gastrectomy Skill Evaluation from Video: A New Artificial Intelligence-Based Instrument Identification System. Sci. Rep. 2024, 14, 12432. [Google Scholar] [CrossRef]
  88. Gillani, M.; Rupji, M.; Paul Olson, T.J.; Sullivan, P.; Shaffer, V.O.; Balch, G.C.; Shields, M.C.; Liu, Y.; Rosen, S.A. Objective Performance Indicators During Robotic Right Colectomy Differ According to Surgeon Skill. J. Surg. Res. 2024, 302, 836–844. [Google Scholar] [CrossRef]
  89. Yang, J.H.; Goodman, E.D.; Dawes, A.J.; Gahagan, J.V.; Esquivel, M.M.; Liebert, C.A.; Kin, C.; Yeung, S.; Gurland, B.H. Using AI and Computer Vision to Analyze Technical Proficiency in Robotic Surgery. Surg. Endosc. 2023, 37, 3010–3017. [Google Scholar] [CrossRef] [PubMed]
  90. Caballero, D.; Pérez-Salazar, M.J.; Sánchez-Margallo, J.A.; Sánchez-Margallo, F.M. Applying Artificial Intelligence on EDA Sensor Data to Predict Stress on Minimally Invasive Robotic-Assisted Surgery. Int. J. Comput. Assist. Radiol. Surg. 2024, 19, 1953–1963. [Google Scholar] [CrossRef] [PubMed]
  91. Yanik, E.; Ainam, J.P.; Fu, Y.; Schwaitzberg, S.; Cavuoto, L.; De, S. Video-Based Skill Acquisition Assessment in Laparoscopic Surgery Using Deep Learning. Glob. Surg. Educ.-J. Assoc. Surg. Educ. 2024, 3, 26. [Google Scholar] [CrossRef]
  92. Nakajima, K.; Kitaguchi, D.; Takenaka, S.; Tanaka, A.; Ryu, K.; Takeshita, N.; Kinugasa, Y.; Ito, M. Automated Surgical Skill Assessment in Colorectal Surgery Using a Deep Learning-Based Surgical Phase Recognition Model. Surg. Endosc. 2024, 38, 6347–6355. [Google Scholar] [CrossRef]
  93. Yamazaki, Y.; Kanaji, S.; Kudo, T.; Takiguchi, G.; Urakawa, N.; Hasegawa, H.; Yamamoto, M.; Matsuda, Y.; Yamashita, K.; Matsuda, T.; et al. Quantitative Comparison of Surgical Device Usage in Laparoscopic Gastrectomy Between Surgeons’ Skill Levels: An Automated Analysis Using a Neural Network. J. Gastrointest. Surg. 2022, 26, 1006–1014. [Google Scholar] [CrossRef]
  94. Allen, B.; Nistor, V.; Dutson, E.; Carman, G.; Lewis, C.; Faloutsos, P. Support Vector Machines Improve the Accuracy of Evaluation for the Performance of Laparoscopic Training Tasks. Surg. Endosc. 2010, 24, 170–178. [Google Scholar] [CrossRef]
  95. Fukuta, A.; Yamashita, S.; Maniwa, J.; Tamaki, A.; Kondo, T.; Kawakubo, N.; Nagata, K.; Matsuura, T.; Tajiri, T. Artificial Intelligence Facilitates the Potential of Simulator Training: An Innovative Laparoscopic Surgical Skill Validation System Using Artificial Intelligence Technology. Int. J. Comput. Assist. Radiol. Surg. 2024, 20, 597–603. [Google Scholar] [CrossRef]
  96. Moglia, A.; Morelli, L.; D’Ischia, R.; Fatucchi, L.M.; Pucci, V.; Berchiolli, R.; Ferrari, M.; Cuschieri, A. Ensemble Deep Learning for the Prediction of Proficiency at a Virtual Simulator for Robot-Assisted Surgery. Surg. Endosc. 2022, 36, 6473–6479. [Google Scholar] [CrossRef]
  97. Ju, S.; Jiang, P.; Jin, Y.; Fu, Y.; Wang, X.; Tan, X.; Han, Y.; Yin, R.; Pu, D.; Li, K. Automatic Gesture Recognition and Evaluation in Peg Transfer Tasks of Laparoscopic Surgery Training. Surg. Endosc. 2025, 39, 3749–3759. [Google Scholar] [CrossRef]
  98. Cruz, E.; Selman, R.; Figueroa, Ú.; Belmar, F.; Jarry, C.; Sanhueza, D.; Escalona, G.; Carnier, M.; Varas, J. A Scalable Solution: Effective AI Implementation in Laparoscopic Simulation Training Assessments. Glob. Surg. Educ. J. Assoc. Surg. Educ. 2025, 4, 46. [Google Scholar] [CrossRef]
  99. Chen, Z.; Yang, D.; Li, A.; Sun, L.; Zhao, J.; Liu, J.; Liu, L.; Zhou, X.; Chen, Y.; Cai, Y.; et al. Decoding Surgical Skill: An Objective and Efficient Algorithm for Surgical Skill Classification Based on Surgical Gesture Features—Experimental Studies. Int. J. Surg. 2024, 110, 1441–1449. [Google Scholar] [CrossRef]
  100. Erlich-Feingold, O.; Anteby, R.; Klang, E.; Soffer, S.; Cordoba, M.; Nachmany, I.; Amiel, I.; Barash, Y. Artificial Intelligence Classifies Surgical Technical Skills in Simulated Laparoscopy: A Pilot Study. Surg. Endosc. 2025, 39, 3592–3599. [Google Scholar] [CrossRef] [PubMed]
  101. Power, D.; Burke, C.; Madden, M.G.; Ullah, I. Automated Assessment of Simulated Laparoscopic Surgical Skill Performance Using Deep Learning. Sci. Rep. 2025, 15, 13591. [Google Scholar] [CrossRef] [PubMed]
  102. Alonso-Silverio, G.A.; Pérez-Escamirosa, F.; Bruno-Sanchez, R.; Ortiz-Simon, J.L.; Muñoz-Guerrero, R.; Minor-Martinez, A.; Alarcón-Paredes, A. Development of a Laparoscopic Box Trainer Based on Open Source Hardware and Artificial Intelligence for Objective Assessment of Surgical Psychomotor Skills. Surg. Innov. 2018, 25, 380–388. [Google Scholar] [CrossRef]
  103. Pan, J.J.; Chang, J.; Yang, X.; Zhang, J.J.; Qureshi, T.; Howell, R.; Hickish, T. Graphic and Haptic Simulation System for Virtual Laparoscopic Rectum Surgery. Int. J. Med. Robot. Comput. Assist. Surg. 2011, 7, 304–317. [Google Scholar] [CrossRef] [PubMed]
  104. Ershad, M.; Rege, R.; Majewicz Fey, A. Automatic and near Real-Time Stylistic Behavior Assessment in Robotic Surgery. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 635–643. [Google Scholar] [CrossRef]
  105. Kowalewski, K.F.; Garrow, C.R.; Schmidt, M.W.; Benner, L.; Müller-Stich, B.P.; Nickel, F. Sensor-Based Machine Learning for Workflow Detection and as Key to Detect Expert Level in Laparoscopic Suturing and Knot-Tying. Surg. Endosc. 2019, 33, 3732–3740. [Google Scholar] [CrossRef] [PubMed]
  106. St John, A.; Khalid, M.U.; Masino, C.; Noroozi, M.; Alseidi, A.; Hashimoto, D.A.; Altieri, M.; Serrot, F.; Kersten-Oertal, M.; Madani, A. LapBot-Safe Chole: Validation of an Artificial Intelligence-Powered Mobile Game App to Teach Safe Cholecystectomy. Surg. Endosc. 2024, 38, 5274–5284. [Google Scholar] [CrossRef] [PubMed]
  107. Yen, H.H.; Hsiao, Y.H.; Yang, M.H.; Huang, J.Y.; Lin, H.T.; Huang, C.C.; Blue, J.; Ho, M.C. Automated Surgical Action Recognition and Competency Assessment in Laparoscopic Cholecystectomy: A Proof-of-Concept Study. Surg. Endosc. 2025, 39, 3006–3016. [Google Scholar] [CrossRef] [PubMed]
  108. Nakajima, K.; Takenaka, S.; Kitaguchi, D.; Tanaka, A.; Ryu, K.; Takeshita, N.; Kinugasa, Y.; Ito, M. Artificial Intelligence Assessment of Tissue-Dissection Efficiency in Laparoscopic Colorectal Surgery. Langenbecks Arch. Surg. 2025, 410, 80. [Google Scholar] [CrossRef]
  109. Igaki, T.; Kitaguchi, D.; Matsuzaki, H.; Nakajima, K.; Kojima, S.; Hasegawa, H.; Takeshita, N.; Kinugasa, Y.; Ito, M. Automatic Surgical Skill Assessment System Based on Concordance of Standardized Surgical Field Development Using Artificial Intelligence. JAMA Surg. 2023, 158, E231131. [Google Scholar] [CrossRef]
  110. Smith, R.; Julian, D.; Dubin, A. Deep Neural Networks Are Effective Tools for Assessing Performance During Surgical Training. J. Robot Surg. 2022, 16, 559–562. [Google Scholar] [CrossRef]
  111. Loukas, C.; Seimenis, I.; Prevezanou, K.; Schizas, D. Prediction of Remaining Surgery Duration in Laparoscopic Videos Based on Visual Saliency and the Transformer Network. Int. J. Med. Robot. Comput. Assist. Surg. 2024, 20, e2632. [Google Scholar] [CrossRef]
  112. Wagner, M.; Müller-Stich, B.P.; Kisilenko, A.; Tran, D.; Heger, P.; Mündermann, L.; Lubotsky, D.M.; Müller, B.; Davitashvili, T.; Capek, M.; et al. Comparative Validation of Machine Learning Algorithms for Surgical Workflow and Skill Analysis with the HeiChole Benchmark. Med. Image Anal. 2023, 86, 102770. [Google Scholar] [CrossRef]
  113. Zhang, B.; Goel, B.; Sarhan, M.H.; Goel, V.K.; Abukhalil, R.; Kalesan, B.; Stottler, N.; Petculescu, S. Surgical Workflow Recognition with Temporal Convolution and Transformer for Action Segmentation. Int. J. Comput. Assist. Radiol. Surg. 2023, 18, 785–794. [Google Scholar] [CrossRef]
  114. Park, B.; Chi, H.; Park, B.; Lee, J.; Jin, H.S.; Park, S.; Hyung, W.J.; Choi, M.K. Visual Modalities-Based Multimodal Fusion for Surgical Phase Recognition. Comput. Biol. Med. 2023, 166, 107453. [Google Scholar] [CrossRef]
  115. Twinanda, A.P.; Yengera, G.; Mutter, D.; Marescaux, J.; Padoy, N. RSDNet: Learning to Predict Remaining Surgery Duration from Laparoscopic Videos Without Manual Annotations. IEEE Trans. Med. Imaging 2019, 38, 1069–1078. [Google Scholar] [CrossRef]
  116. Zang, C.; Turkcan, M.K.; Narasimhan, S.; Cao, Y.; Yarali, K.; Xiang, Z.; Szot, S.; Ahmad, F.; Choksi, S.; Bitner, D.P.; et al. Surgical Phase Recognition in Inguinal Hernia Repair—AI-Based Confirmatory Baseline and Exploration of Competitive Models. Bioengineering 2023, 10, 654. [Google Scholar] [CrossRef]
  117. Cartucho, J.; Weld, A.; Tukra, S.; Xu, H.; Matsuzaki, H.; Ishikawa, T.; Kwon, M.; Jang, Y.E.; Kim, K.J.; Lee, G.; et al. SurgT Challenge: Benchmark of Soft-Tissue Trackers for Robotic Surgery. Med. Image Anal. 2024, 91, 102985. [Google Scholar] [CrossRef]
  118. Zheng, Y.; Leonard, G.; Zeh, H.; Fey, A.M. Frame-Wise Detection of Surgeon Stress Levels During Laparoscopic Training Using Kinematic Data. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 785–794. [Google Scholar] [CrossRef]
  119. Zhai, Y.; Chen, Z.; Zheng, Z.; Wang, X.; Yan, X.; Liu, X.; Yin, J.; Wang, J.; Zhang, J. Artificial Intelligence for Automatic Surgical Phase Recognition of Laparoscopic Gastrectomy in Gastric Cancer. Int. J. Comput. Assist. Radiol. Surg. 2024, 19, 345–353. [Google Scholar] [CrossRef]
  120. You, J.; Cai, H.; Wang, Y.; Bian, A.; Cheng, K.; Meng, L.; Wang, X.; Gao, P.; Chen, S.; Cai, Y.; et al. Artificial Intelligence Automated Surgical Phases Recognition in Intraoperative Videos of Laparoscopic Pancreatoduodenectomy. Surg. Endosc. 2024, 38, 4894–4905. [Google Scholar] [CrossRef] [PubMed]
  121. Zheng, Q.; Yang, R.; Yang, S.; Ni, X.; Li, Y.; Jiang, Z.; Wang, X.; Wang, L.; Chen, Z.; Liu, X. Development and Validation of a Deep-Learning Based Assistance System for Enhancing Laparoscopic Control Level. Int. J. Med. Robot. Comput. Assist. Surg. 2023, 19, e2449. [Google Scholar] [CrossRef] [PubMed]
  122. Dayan, D. Implementation of Artificial Intelligence–Based Computer Vision Model for Sleeve Gastrectomy: Experience in One Tertiary Center. Obes. Surg. 2024, 34, 330–336. [Google Scholar] [CrossRef] [PubMed]
  123. Kitaguchi, D.; Takeshita, N.; Matsuzaki, H.; Oda, T.; Watanabe, M.; Mori, K.; Kobayashi, E.; Ito, M. Automated Laparoscopic Colorectal Surgery Workflow Recognition Using Artificial Intelligence: Experimental Research. Int. J. Surg. 2020, 79, 88–94. [Google Scholar] [CrossRef]
  124. Yoshida, M.; Kitaguchi, D.; Takeshita, N.; Matsuzaki, H.; Ishikawa, Y.; Yura, M.; Akimoto, T.; Kinoshita, T.; Ito, M. Surgical Step Recognition in Laparoscopic Distal Gastrectomy Using Artificial Intelligence: A Proof-of-Concept Study. Langenbecks Arch. Surg. 2024, 409, 213. [Google Scholar] [CrossRef]
  125. Fer, D.; Zhang, B.; Abukhalil, R.; Goel, V.; Goel, B.; Barker, J.; Kalesan, B.; Barragan, I.; Gaddis, M.L.; Kilroy, P.G. An Artificial Intelligence Model That Automatically Labels Roux-En-Y Gastric Bypasses, a Comparison to Trained Surgeon Annotators. Surg. Endosc. 2023, 37, 5665–5672. [Google Scholar] [CrossRef]
  126. Liu, Y.; Zhao, S.; Zhang, G.; Zhang, X.; Hu, M.; Zhang, X.; Li, C.; Zhou, S.K.; Liu, R. Multilevel Effective Surgical Workflow Recognition in Robotic Left Lateral Sectionectomy with Deep Learning: Experimental Research. Int. J. Surg. 2023, 109, 2941–2952. [Google Scholar] [CrossRef] [PubMed]
  127. Khojah, B.; Enani, G.; Saleem, A.; Malibary, N.; Sabbagh, A.; Malibari, A.; Alhalabi, W. Deep Learning-Based Intraoperative Visual Guidance Model for Ureter Identification in Laparoscopic Sigmoidectomy. Surg. Endosc. 2025, 39, 3610–3623. [Google Scholar] [CrossRef] [PubMed]
  128. Lavanchy, J.L.; Ramesh, S.; Dall’Alba, D.; Gonzalez, C.; Fiorini, P.; Müller-Stich, B.P.; Nett, P.C.; Marescaux, J.; Mutter, D.; Padoy, N. Challenges in Multi-Centric Generalization: Phase and Step Recognition in Roux-En-Y Gastric Bypass Surgery. Int. J. Comput. Assist. Radiol. Surg. 2024, 19, 2249–2257. [Google Scholar] [CrossRef] [PubMed]
  129. Komatsu, M.; Kitaguchi, D.; Yura, M.; Takeshita, N.; Yoshida, M.; Yamaguchi, M.; Kondo, H.; Kinoshita, T.; Ito, M. Automatic Surgical Phase Recognition-Based Skill Assessment in Laparoscopic Distal Gastrectomy Using Multicenter Videos. Gastric Cancer 2024, 27, 187–196. [Google Scholar] [CrossRef]
  130. Sasaki, K.; Ito, M.; Kobayashi, S.; Kitaguchi, D.; Matsuzaki, H.; Kudo, M.; Hasegawa, H.; Takeshita, N.; Sugimoto, M.; Mitsunaga, S.; et al. Automated Surgical Workflow Identification by Artificial Intelligence in Laparoscopic Hepatectomy: Experimental Research. Int. J. Surg. 2022, 105, 106856. [Google Scholar] [CrossRef]
  131. Madani, A.; Namazi, B.; Altieri, M.S.; Hashimoto, D.A.; Rivera, A.M.; Pucher, P.H.; Navarrete-Welton, A.; Sankaranarayanan, G.; Brunt, L.M.; Okrainec, A.; et al. Artificial Intelligence for Intraoperative Guidance: Using Semantic Segmentation to Identify Surgical Anatomy during Laparoscopic Cholecystectomy. Ann. Surg. 2022, 276, 363–369. [Google Scholar] [CrossRef]
  132. Cheng, K.; You, J.; Wu, S.; Chen, Z.; Zhou, Z.; Guan, J.; Peng, B.; Wang, X. Artificial Intelligence-Based Automated Laparoscopic Cholecystectomy Surgical Phase Recognition and Analysis. Surg. Endosc. 2022, 36, 3160–3168. [Google Scholar] [CrossRef]
  133. Golany, T.; Aides, A.; Freedman, D.; Rabani, N.; Liu, Y.; Rivlin, E.; Corrado, G.S.; Matias, Y.; Khoury, W.; Kashtan, H.; et al. Artificial Intelligence for Phase Recognition in Complex Laparoscopic Cholecystectomy. Surg. Endosc. 2022, 36, 9215–9223. [Google Scholar] [CrossRef]
  134. Shinozuka, K.; Turuda, S.; Fujinaga, A.; Nakanuma, H.; Kawamura, M.; Matsunobu, Y.; Tanaka, Y.; Kamiyama, T.; Ebe, K.; Endo, Y.; et al. Artificial Intelligence Software Available for Medical Devices: Surgical Phase Recognition in Laparoscopic Cholecystectomy. Surg. Endosc. 2022, 36, 7444–7452. [Google Scholar] [CrossRef]
  135. Masum, S.; Hopgood, A.; Stefan, S.; Flashman, K.; Khan, J. Data Analytics and Artificial Intelligence in Predicting Length of Stay, Readmission, and Mortality: A Population-Based Study of Surgical Management of Colorectal Cancer. Discov. Oncol. 2022, 13, 11. [Google Scholar] [CrossRef]
  136. Lopez-Lopez, V.; Maupoey, J.; López-Andujar, R.; Ramos, E.; Mils, K.; Martinez, P.A.; Valdivieso, A.; Garcés-Albir, M.; Sabater, L.; Valladares, L.D.; et al. Machine Learning-Based Analysis in the Management of Iatrogenic Bile Duct Injury During Cholecystectomy: A Nationwide Multicenter Study. J. Gastrointest. Surg. 2022, 26, 1713–1723. [Google Scholar] [CrossRef]
  137. Cai, Z.H.; Zhang, Q.; Fu, Z.W.; Fingerhut, A.; Tan, J.W.; Zang, L.; Dong, F.; Li, S.C.; Wang, S.L.; Ma, J.J. Magnetic Resonance Imaging-Based Deep Learning Model to Predict Multiple Firings in Double-Stapled Colorectal Anastomosis. World J. Gastroenterol. 2023, 29, 536–548. [Google Scholar] [CrossRef]
  138. Dayan, D.; Dvir, N.; Agbariya, H.; Nizri, E. Implementation of Artificial Intelligence-Based Computer Vision Model in Laparoscopic Appendectomy: Validation, Reliability, and Clinical Correlation. Surg. Endosc. 2024, 38, 3310–3319. [Google Scholar] [CrossRef] [PubMed]
  139. Arpaia, P.; Bracale, U.; Corcione, F.; De Benedetto, E.; Di Bernardo, A.; Di Capua, V.; Duraccio, L.; Peltrini, R.; Prevete, R. Assessment of Blood Perfusion Quality in Laparoscopic Colorectal Surgery by Means of Machine Learning. Sci. Rep. 2022, 12, 14682. [Google Scholar] [CrossRef] [PubMed]
  140. Gillani, M.; Rupji, M.; Paul Olson, T.J.; Sullivan, P.; Shaffer, V.O.; Balch, G.C.; Shields, M.C.; Liu, Y.; Rosen, S.A. Objective Performance Indicators Differ in Obese and Nonobese Patients during Robotic Proctectomy. Surgery 2024, 176, 1591–1597. [Google Scholar] [CrossRef] [PubMed]
  141. Emile, S.H.; Horesh, N.; Garoufalia, Z.; Gefen, R.; Rogers, P.; Wexner, S.D. An Artificial Intelligence-Designed Predictive Calculator of Conversion from Minimally Invasive to Open Colectomy in Colon Cancer. Updates Surg. 2024, 76, 1321–1330. [Google Scholar] [CrossRef]
  142. Velmahos, C.S.; Paschalidis, A.; Paranjape, C.N. The Not-So-Distant Future or Just Hype? Utilizing Machine Learning to Predict 30-Day Post-Operative Complications in Laparoscopic Colectomy Patients. Am. Surg. 2023, 89, 5648–5654. [Google Scholar] [CrossRef]
  143. Jo, S.J.; Rhu, J.; Kim, J.; Choi, G.-S.; Joh, J.W. Indication Model for Laparoscopic Repeat Liver Resection in the Era of Artificial Intelligence: Machine Learning Prediction of Surgical Indication. HPB 2025, 27, 832–843. [Google Scholar] [CrossRef]
  144. Li, Y.; Su, Y.; Shao, S.; Wang, T.; Liu, X.; Qin, J. Machine Learning–Based Prediction of Duodenal Stump Leakage Following Laparoscopic Gastrectomy for Gastric Cancer. Surgery 2025, 180, 108999. [Google Scholar] [CrossRef]
  145. Lippenberger, F.; Ziegelmayer, S.; Berlet, M.; Feussner, H.; Makowski, M.; Neumann, P.-A.; Graf, M.; Kaissis, G.; Wilhelm, D.; Braren, R.; et al. Development of an Image-Based Random Forest Classifier for Prediction of Surgery Duration of Laparoscopic Sigmoid Resections. Int. J. Color. Dis. 2024, 39, 21. [Google Scholar] [CrossRef]
  146. Zhou, C.M.; Li, H.J.; Xue, Q.; Yang, J.J.; Zhu, Y. Artificial Intelligence Algorithms for Predicting Post-Operative Ileus after Laparoscopic Surgery. Heliyon 2024, 10, e26580. [Google Scholar] [CrossRef]
  147. Du, C.; Li, J.; Zhang, B.; Feng, W.; Zhang, T.; Li, D. Intraoperative Navigation System with a Multi-Modality Fusion of 3D Virtual Model and Laparoscopic Real-Time Images in Laparoscopic Pancreatic Surgery: A Preclinical Study. BMC Surg. 2022, 22, 139. [Google Scholar] [CrossRef] [PubMed]
  148. Kasai, M.; Uchiyama, H.; Aihara, T.; Ikuta, S.; Yamanaka, N. Laparoscopic Projection Mapping of the Liver Portal Segment, Based on Augmented Reality Combined with Artificial Intelligence, for Laparoscopic Anatomical Liver Resection. Cureus 2023, 15, e48450. [Google Scholar] [CrossRef] [PubMed]
  149. Ryu, S.; Imaizumi, Y.; Goto, K.; Iwauchi, S.; Kobayashi, T.; Ito, R.; Nakabayashi, Y. Feasibility of Simultaneous Artificial Intelligence-Assisted and NIR Fluorescence Navigation for Anatomical Recognition in Laparoscopic Colorectal Surgery. J. Fluoresc. 2024, 35, 6755–6761. [Google Scholar] [CrossRef] [PubMed]
  150. Garcia-Granero, A.; Jerí Mc-Farlane, S.; Gamundí Cuesta, M.; González-Argente, F.X. Application of 3D-Reconstruction and Artificial Intelligence for Complete Mesocolic Excision and D3 Lymphadenectomy in Colon Cancer. Cir. Esp. 2023, 101, 359–368. [Google Scholar] [CrossRef]
  151. Ali, S.; Espinel, Y.; Jin, Y.; Liu, P.; Güttner, B.; Zhang, X.; Zhang, L.; Dowrick, T.; Clarkson, M.J.; Xiao, S.; et al. An Objective Comparison of Methods for Augmented Reality in Laparoscopic Liver Resection by Preoperative-to-Intraoperative Image Fusion from the MICCAI2022 Challenge. Med. Image Anal. 2025, 99, 103371. [Google Scholar] [CrossRef]
  152. Robu, M.R.; Edwards, P.; Ramalhinho, J.; Thompson, S.; Davidson, B.; Hawkes, D.; Stoyanov, D.; Clarkson, M.J. Intelligent Viewpoint Selection for Efficient CT to Video Registration in Laparoscopic Liver Surgery. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 1079–1088. [Google Scholar] [CrossRef]
  153. Wei, R.; Li, B.; Mo, H.; Lu, B.; Long, Y.; Yang, B.; Dou, Q.; Liu, Y.; Sun, D. Stereo Dense Scene Reconstruction and Accurate Localization for Learning-Based Navigation of Laparoscope in Minimally Invasive Surgery. IEEE Trans. Biomed. Eng. 2023, 70, 488–500. [Google Scholar] [CrossRef]
  154. Nicolaou, M.; James, A.; Lo, B.P.L.; Darzi, A.; Yang, G.-Z. Invisible Shadow for Navigation and Planning in Minimal Invasive Surgery. Med. Image Comput. Comput. Assist. Interv. 2005, 8, 25–32. [Google Scholar]
  155. Calinon, S.; Bruno, D.; Malekzadeh, M.S.; Nanayakkara, T.; Caldwell, D.G. Human-Robot Skills Transfer Interfaces for a Flexible Surgical Robot. Comput. Methods Programs Biomed. 2014, 116, 81–96. [Google Scholar] [CrossRef]
  156. Zheng, Q.; Yang, R.; Ni, X.; Yang, S.; Jiang, Z.; Wang, L.; Chen, Z.; Liu, X. Development and Validation of a Deep Learning-Based Laparoscopic System for Improving Video Quality. Int. J. Comput. Assist. Radiol. Surg. 2022, 18, 257–268. [Google Scholar] [CrossRef]
  157. Cheng, Q.; Dong, Y. Da Vinci Robot-Assisted Video Image Processing under Artificial Intelligence Vision Processing Technology. Comput. Math. Methods Med. 2022, 2022, 2752444. [Google Scholar] [CrossRef] [PubMed]
  158. Katić, D.; Wekerle, A.-L.; Görtler, J.; Spengler, P.; Bodenstedt, S.; Röhl, S.; Suwelack, S.; Kenngott, H.G.; Wagner, M.; Müller-Stich, B.P.; et al. Context-Aware Augmented Reality in Laparoscopic Surgery. Comput. Med. Imaging Graph. 2013, 37, 174–182. [Google Scholar] [CrossRef] [PubMed]
  159. Beyersdorffer, P.; Kunert, W.; Jansen, K.; Miller, J.; Wilhelm, P.; Burgert, O.; Kirschniak, A.; Rolinger, J. Detection of Adverse Events Leading to Inadvertent Injury during Laparoscopic Cholecystectomy Using Convolutional Neural Networks. Biomed. Tech. 2021, 66, 413–421. [Google Scholar] [CrossRef] [PubMed]
  160. Salazar-Colores, S.; Moreno, H.A.; Moya, U.; Ortiz-Echeverri, C.J.; Tavares de la Paz, L.A.; Flores, G. Removal of Smoke Effects in Laparoscopic Surgery via Adversarial Neural Network and the Dark Channel Prior. Cir. Y Cir. (Engl. Ed.) 2022, 90, 74–83. [Google Scholar] [CrossRef]
  161. He, W.; Zhu, H.; Rao, X.; Yang, Q.; Luo, H.; Wu, X.; Gao, Y. Biophysical Modeling and Artificial Intelligence for Quantitative Assessment of Anastomotic Blood Supply in Laparoscopic Low Anterior Rectal Resection. Surg. Endosc. 2025, 39, 3412–3421. [Google Scholar] [CrossRef]
  162. Lünse, S.; Wisotzky, E.L.; Beckmann, S.; Paasch, C.; Hunger, R.; Mantke, R. Technological Advancements in Surgical Laparoscopy Considering Artificial Intelligence: A Survey among Surgeons in Germany. Langenbecks Arch. Surg. 2023, 408, 405. [Google Scholar] [CrossRef]
  163. Iftikhar, M.; Saqib, M.; Zareen, M.; Mumtaz, H. Artificial Intelligence: Revolutionizing Robotic Surgery: Review. Ann. Med. Surg. 2024, 86, 5401–5409. [Google Scholar] [CrossRef]
  164. Chatterjee, S.; Das, S.; Ganguly, K.; Mandal, D. Advancements in Robotic Surgery: Innovations, Challenges and Future Prospects. J. Robot Surg. 2024, 18, 28. [Google Scholar] [CrossRef]
  165. Reza, T.; Bokhari, S.F.H. Partnering with Technology: Advancing Laparoscopy with Artificial Intelligence and Machine Learning. Cureus 2024, 16, e56076. [Google Scholar] [CrossRef]
  166. Khanam, M.; Akther, S.; Mizan, I.; Islam, F.; Chowdhury, S.; Ahsan, N.M.; Barua, D.; Hasan, S.K. The Potential of Artificial Intelligence in Unveiling Healthcare’s Future. Cureus 2024, 16, e71625. [Google Scholar] [CrossRef]
  167. Hamilton, A. The Future of Artificial Intelligence in Surgery. Cureus 2024, 16, e63699. [Google Scholar] [CrossRef]
Figure 1. PRISMA diagram detailing the review process.
Figure 2. Time-series distribution of included studies by category from 2015 to 2025.
Table 1. Descriptive analysis of the reviewed articles.
Characteristic | Number of Studies (Total n = 152)
Year of Publication
Articles published before 2015 | 7
Articles published between 2015 and 2020 | 18
Articles published in 2021 | 5
Articles published in 2022 | 28
Articles published in 2023 | 34
Articles published in 2024 | 49
Articles published in 2025 | 11
Type of Surgery
Laparoscopic surgery | 125
Robotic surgery | 19
Laparoscopic and robotic surgery | 8
Study Type
Technical evaluations | 65
Retrospective observational studies | 35
Prospective observational studies | 14
Feasibility studies | 13
Clinical trials | 8
Dataset descriptions | 7
Simulation-based training assessments | 6
Surveys or expert opinion studies | 4
Study Categories
Object or structure detection | 51
Skill assessment and training | 37
Workflow recognition and intraoperative guidance | 28
Surgical decision support and outcome prediction | 14
Augmented reality and navigation | 11
Image enhancement | 8
Surgeon perception, preparedness, and attitudes | 3
Table 2. Included articles, summarized. TME: Total Mesorectal Excision; TaTME: Transanal Total Mesorectal Excision; CVS: Critical View of Safety; LLR: Laparoscopic Liver Resection; TAPP: Transabdominal Preperitoneal hernia repair; LC: Laparoscopic Cholecystectomy; AR: Augmented Reality; TEP: Totally Extraperitoneal; VR: Virtual Reality; RALIHR: Robotic-Assisted Laparoscopic Inguinal Hernia Repair; NIR: Near-Infrared; MIS: Minimally Invasive Surgery.
First Author and Year | DOI | Type of Surgery | Study Type | Study Objective
Object or Structure Detection
Khalid 2023 [33] | 10.1007/s00464-023-10403-4 | Laparoscopic | Retrospective validation study | Prediction of safe and unsafe dissection zones during laparoscopic cholecystectomy.
Ward 2022 [34] | 10.1007/s00464-022-09009-z | Laparoscopic | Retrospective model | To classify inflammation based on gallbladder images.
Orimoto 2025 [35] | 10.1007/s00464-024-11514-2 | Laparoscopic | Retrospective model | Identify intraoperative scarring in laparoscopic cholecystectomy for acute cholecystitis.
Kolbinger 2023 [36] | 10.1097/JS9.0000000000000595 | Laparoscopic | Algorithm development + comparison study | Segmentation of abdominal anatomy.
Sato 2022 [37] | 10.1007/s00595-022-02508-5 | Laparoscopic and Robotic | Feasibility study | Pancreas contouring for navigation in lymphadenectomy.
Igaki 2022 [38] | 10.1097/DCR.0000000000002393 | Laparoscopic | Single-center feasibility study | Segmentation-based image-guided navigation system for TME dissection.
Jearanai 2023 [39] | 10.1007/s00464-023-10309-1 | Laparoscopic | Technical development + validation | Detect abdominal wall layers during trocar insertion.
Oh 2024 [40] | 10.1038/s41598-024-73434-4 | Laparoscopic | Algorithm development + intraoperative support | Identify biliary structures.
Benavides 2024 [41] | 10.3390/s2413419 | Laparoscopic | Technical development | Localization of surgical tools in laparoscopic surgery.
Gazis 2022 [42] | 10.3390/bioengineering9120737 | Laparoscopic | Technical development | To recognize surgical gestures.
Tomioka 2023 [43] | 10.21873/anticanres.16725 | Laparoscopic | Technical development | Recognition of hepatic veins and Glissonean pedicle.
Cui 2021 [44] | 10.1155/2021/5578089 | Laparoscopic | Technical development | To detect the vas deferens in laparoscopic inguinal hernia repair.
Memida 2023 [45] | 10.1109/EMBC40787.2023.10341025 | Laparoscopic | Technical development | To identify surgical instruments in laparoscopic procedures.
Wesierski 2018 [46] | 10.1016/j.media.2018.03.012 | Robotic | Technical development | To estimate the pose of multiple non-rigid and robotic surgical tools.
Jurosch 2024 [47] | 10.1007/s11548-024-03220-0 | Laparoscopic | Technical development | To detect trocars and assess their occupancy.
Sánchez-Brizuela 2022 [48] | 10.3390/s22145180 | Laparoscopic | Technical development | To identify surgical gauze in real time.
Lai 2023 [49] | 10.1007/s10439-022-03033-9 | Laparoscopic | Technical development | To detect surgical gauze in real time.
Ehrlich 2022 [50] | 10.3390/s22155808 | Robotic | Technical development | To detect energy events from electrosurgical tools.
Nwoye 2019 [51] | 10.1007/s11548-019-01958-6 | Laparoscopic | Methodological innovation | To enable real-time tool tracking.
Carstens 2023 [52] | 10.1038/s41597-022-01719-2 | Laparoscopic | Dataset publication | Semantic segmentations of abdominal organs and vessels.
Yin 2024 [53] | 10.1016/j.neunet.2023.11.055 | Laparoscopic | Technical development | Segmentation model for TaTME procedures.
Tashiro 2024 [54] | 10.1002/jhbp.1422 | Laparoscopic | Retrospective analysis | To recognize and color-code loose connective tissue.
Petracchi 2024 [15] | 10.1016/j.gassur.2024.03.018 | Laparoscopic | Prospective observational study | To detect the critical view of safety during elective LC.
Schnelldorfer 2024 [16] | 10.1097/SLA.0000000000006294 | Laparoscopic | Prototype development | Guidance system for identifying peritoneal metastases.
Kitaguchi 2023 [55] | 10.1093/bjs/znad249 | Laparoscopic | Prospective observational study | Organ recognition models.
Chen 2025 [17] | 10.1093/bjsopen/zrae158 | Laparoscopic | Retrospective study | Perigastric vessel recognition.
Han 2024 [56] | 10.1097/DCR.0000000000003547 | Laparoscopic | Model development | Nerve recognition during total mesorectal excision.
Tashiro 2024 [18] | 10.1002/jhbp.1388 | Laparoscopic and Robotic | Proof-of-concept demonstration | Identification of intrahepatic vascular structures during liver resection.
Frey 2025 [57] | 10.1007/s11701-025-02284-7 | Laparoscopic and Robotic | Model evaluation | Detecting instruments in robot-assisted abdominal surgeries.
El Moaqet 2025 [58] | 10.3390/s25103017 | Laparoscopic | Model development and evaluation | To classify and localize surgical tools.
Korndorffer 2020 [59] | 10.1097/SLA.0000000000004207 | Laparoscopic | Observational study | Assessing critical view of safety and intraoperative events during LC.
Shunjin Ryu 2023 [19] | 10.1007/s11605-023-05819-1 | Laparoscopic | Prospective observational study | Recognition of nerves during colorectal surgery.
Park 2020 [60] | 10.3748/wjg.v26.i44.6945 | Laparoscopic | Feasibility study | Analysis of microperfusion for predicting anastomotic complications in laparoscopic colorectal cancer surgery.
Ryu 2024 [61] | 10.1007/s00464-023-10524-w | Laparoscopic | Model development | Recognition and visualization of major blood vessels during laparoscopic right hemicolectomy.
Zygomalas 2024 [62] | 10.1177/15533506241226502 | Laparoscopic | Feasibility study | Recognition of anatomical landmarks and tools in TAPP hernia repair.
Mita 2024 [63] | 10.1007/s10029-024-03223-5 | Laparoscopic | Validation study | Anatomical recognition during TAPP hernia repair.
Horita 2024 [64] | 10.1007/s00464-024-10874-z | Laparoscopic | Model development | To detect active intraoperative bleeding during laparoscopic colectomy.
Kinoshita 2024 [65] | 10.1007/s00464-024-10939-z | Laparoscopic and Robotic | Validation study | Nerve recognition in rectal cancer surgery.
Takeuchi 2023 [66] | 10.1007/s00464-023-09934-7 | Laparoscopic | Model development | Landmark recognition in TAPP hernia repair.
Une 2024 [67] | 10.1007/s00464-023-10637-2 | Laparoscopic | Feasibility study | Liver vessel recognition during parenchymal dissection in LLR.
Kojima 2023 [68] | 10.1097/JS9.0000000000000317 | Laparoscopic | Model development | Segmentation of autonomic nerves during colorectal surgery.
Nakanuma 2023 [69] | 10.1007/s00464-022-09678-w | Laparoscopic | Clinical feasibility study | Detecting landmarks during laparoscopic cholecystectomy.
Loukas 2022 [70] | 10.1002/rcs.2445 | Laparoscopic | Model development | To classify vascularity of the gallbladder wall.
Endo 2023 [71] | 10.1007/s00464-023-10224-5 | Laparoscopic | Prospective experimental study | Anatomical landmark identification during laparoscopic cholecystectomy.
Fried 2024 [72] | 10.1097/SLA.0000000000006377 | Laparoscopic | Implementation study | Monitoring the critical view of safety in laparoscopic cholecystectomy.
Mascagni 2022 [73] | 10.1097/SLA.0000000000004351 | Laparoscopic | Model development | To segment hepatocystic anatomy.
Fujinaga 2023 [74] | 10.1007/s00464-023-10097-8 | Laparoscopic | Clinical feasibility study | Landmark recognition to reduce bile duct injury.
Kawamura 2023 [75] | 10.1007/s00464-023-10328-y | Laparoscopic | Model development | Automatically score CVS criteria.
Tokuyasu 2021 [76] | 10.1007/s00464-020-07548-x | Laparoscopic | Model development and validation | Indication of anatomical landmarks during laparoscopic cholecystectomy.
Zhang 2024 [77] | 10.3760/cma.j.cn441530-20240125-00041 | Laparoscopic | Model development and validation | Detect organs and instruments in laparoscopic radical gastrectomy.
Skill Assessment and Training
Ortenzi 2023 [78] | 10.1007/s00464-023-10375-5 | Laparoscopic | Model development and validation | Surgical step recognition in totally extraperitoneal (TEP) inguinal hernia repair.
Wu 2024 [20] | 10.1097/JS9.0000000000001798 | Laparoscopic | Multicenter randomized controlled trial | AI-based surgical coaching program for laparoscopic cholecystectomy.
Belmar 2023 [79] | 10.1007/s00464-022-09576-1 | Laparoscopic | Validation study | Assessing basic laparoscopic simulation training exercises.
Halperin 2023 [21] | 10.1007/s11548-023-02963-6 | Laparoscopic | Prospective validation study (simulator-based) | Automated feedback on intracorporeal suture performance.
Chen 2023 [22] | 10.1007/s11701-023-01713-9 | Robotic | Prospective observational study (simulator-based) | To evaluate robotic suturing skills.
Fawaz 2019 [80] | 10.1007/s11548-019-02039-4 | Robotic | Retrospective model development | Classify robotic surgical skill levels and provide personalized feedback.
Nguyen 2019 [81] | 10.1016/j.cmpb.2019.05.008 | Robotic | Model development | Objective surgical skill assessment.
Wang 2018 [82] | 10.1109/EMBC.2018.8512575 | Robotic | Model development | Recognize surgical tasks and assess surgeon skill levels in robot-assisted training.
Funke 2019 [83] | 10.1007/s11548-019-01995-1 | Robotic | Model development | Surgical skill assessment system.
Partridge 2014 [84] | 10.1089/lap.2014.0015 | Laparoscopic | Tool development and validation | To track instrument movement for performance feedback in laparoscopic simulators.
Derathé 2025 [85] | 10.1038/s41597-025-04588-7 | Laparoscopic | Dataset development and evaluation | To assess the quality of field exposure in sleeve gastrectomy.
Bogar 2024 [86] | 10.1038/s41598-024-67435-6 | Laparoscopic | Simulation study | Custom VR simulator and AI-based peg transfer evaluator.
Matsumoto 2024 [87] | 10.1038/s41598-024-63388-y | Laparoscopic | Kinematic analysis | To analyze kinematic differences in laparoscopic distal gastrectomy by surgical skill level.
Gillani 2024 [88] | 10.1016/j.jss.2024.07.103 | Robotic | Skill classification study | To distinguish expert, intermediate, and novice surgeons in robotic right colectomy.
Yang 2023 [89] | 10.1007/s00464-022-09781-y | Robotic | Algorithm validation | Surgical skill grading in colorectal robotic surgery.
Caballero 2024 [90] | 10.1007/s11548-024-03218-8 | Robotic | Observational study | To predict surgeon stress levels during robotic surgery using ergonomic and physiological parameters.
Yanik 2024 [91] | 10.1007/s44186-023-00223-4 | Laparoscopic | Learning curve analysis | To predict surgical skill acquisition through self-supervised video-based learning.
Nakajima 2024 [92] | 10.1007/s00464-024-11208-9 | Laparoscopic | Retrospective analysis | To validate automated surgical skill assessment in sigmoidectomy.
Yamazaki 2022 [93] | 10.1007/s11605-021-05161-4 | Laparoscopic | Retrospective analysis | To compare surgical device usage patterns during laparoscopic gastrectomy by surgeon skill level.
Allen 2010 [94] | 10.1007/s00464-009-0556-6 | Laparoscopic | Experimental study | Automatic evaluation of laparoscopic skills.
Fukuta 2024 [95] | 10.1007/s11548-024-03253-5 | Laparoscopic | Development study | To assess laparoscopic surgical skills.
Moglia 2022 [96] | 10.1007/s00464-021-08999-6 | Robotic | Observational study | To predict proficiency acquisition rates in robotic-assisted surgery trainees.
Ju 2025 [97] | 10.1007/s00464-025-11730-4 | Laparoscopic | Development study | Automatic gesture recognition model for laparoscopic training.
Cruz 2025 [98] | 10.1007/s44186-025-00355-9 | Laparoscopic | Validation study | Skill assessment in laparoscopic simulation training.
Chen 2024 [99] | 10.1097/JS9.0000000000000975 | Laparoscopic | Development study | Evaluate surgical skills based on surgical gestures.
Erlich-Feingold 2025 [100] | 10.1007/s00464-025-11715-3 | Laparoscopic | Development study | To classify basic laparoscopic skills (precision cutting tasks).
Power 2025 [101] | 10.1038/s41598-025-96336-5 | Laparoscopic | Development study | Evaluate laparoscopic surgical skill across expertise levels.
Alonso-Silverio 2018 [102] | 10.1177/1553350618777045 | Laparoscopic | Development and evaluation study | Affordable laparoscopic trainer with AI, computer vision, and AR for online surgical skills assessment.
Pan 2011 [103] | 10.1002/rcs.399 | Laparoscopic | Development study | Laparoscopic rectal surgery training.
Ershad 2019 [104] | 10.1007/s11548-019-01920-6 | Robotic | Development study | Automatic stylistic behavior recognition using joint position data in robotic surgery.
Kowalewski 2019 [105] | 10.1007/s00464-019-06667-4 | Laparoscopic | Experimental study | Skill level assessment and phase detection.
St John 2024 [106] | 10.1007/s00464-024-11068-3 | Laparoscopic | Validation study | AI-powered mobile game for learning safe dissection in LC.
Yen 2025 [107] | 10.1007/s00464-025-11663-y | Laparoscopic | Development and validation study | To assess surgical actions and develop automated models for competency assessment in LC.
Nakajima 2025 [108] | 10.1007/s00423-025-03641-8 | Laparoscopic | Retrospective multicenter study | To assess surgical dissection skill.
Igaki 2023 [109] | 10.1001/jamasurg.2023.1131 | Laparoscopic | Development and validation study | To recognize standardized surgical fields and assess skill.
Smith 2022 [110] | 10.1007/s11701-021-01284-7 | Robotic | Validation study | Classification of surgical skill level.
Workflow Recognition and Intraoperative Guidance
Loukas 2024 [111] | 10.1002/rcs.2632 | Laparoscopic | Model development | To predict the remaining surgery duration.
Wagner 2023 [112] | 10.1016/j.media.2023.102770 | Laparoscopic | Multicenter dataset development and benchmark study | To evaluate generalizability of workflow, instrument, action, and skill recognition models in laparoscopic cholecystectomy.
Zhang 2023 [113] | 10.1007/s11548-022-02811-z | Laparoscopic and Robotic | Model development and validation | Automatic surgical workflow recognition.
Park 2023 [114] | 10.1016/j.compbiomed.2023.107453 | Laparoscopic | Multimodal model development | Surgical phase recognition by integrating tool interaction and visual modality in laparoscopic surgery.
Twinanda 2019 [115] | 10.1109/TMI.2018.2878055 | Laparoscopic | Model development | To estimate remaining surgery duration intraoperatively.
Zang 2023 [116] | 10.3390/bioengineering10060654 | Robotic | Model comparison | Surgical phase recognition in RALIHR.
Cartucho 2024 [117] | 10.1016/j.media.2023.102985 | Laparoscopic | Model development and validation | To track soft tissue movement during laparoscopic procedures.
Zheng 2022 [118] | 10.1007/s11548-022-02568-5 | Laparoscopic | Experimental study | To detect stress in surgical motion.
Zhai 2024 [119] | 10.1007/s11548-023-03027-5 | Laparoscopic | Model development and validation | Surgical phase recognition in gastric cancer surgery.
Takeuchi 2022 [23] | 10.1007/s10029-022-02621-x | Laparoscopic | Model development | Phase recognition in TAPP and assessment of links to surgical skill.
Hashimoto 2020 [23] | 10.1097/SLA.0000000000003460 | Laparoscopic | Algorithm development and validation | To identify operative steps in laparoscopic sleeve gastrectomy.
You 2024 [120] | 10.1007/s00464-024-10916-6 | Laparoscopic | Model development and validation | Automated surgical phase recognition in laparoscopic pancreaticoduodenectomy.
Takeuchi 2023 [24] | 10.1007/s00464-023-09924-9 | Robotic | Model development | Surgical phase recognition and prediction of complexity in robotic distal gastrectomy.
Zheng 2023 [121] | 10.1002/rcs.2449 | Laparoscopic | Model development | To improve intraoperative field visualization.
Dayan 2024 [122] | 10.1007/s11695-023-07043-x | Laparoscopic | External validation study | AI model for identifying sleeve gastrectomy safety milestones.
Kitaguchi 2020 [123] | 10.1016/j.ijsu.2020.05.015 | Laparoscopic | Model development | To recognize surgical phases, actions, and tools.
Yoshida 2024 [124] | 10.1007/s00423-024-03411-y | Laparoscopic | Model development and evaluation | Surgical step recognition in laparoscopic distal gastrectomy.
Fer 2023 [125] | 10.1007/s00464-023-09870-6 | Laparoscopic | Model development | Step labelling in Roux-en-Y gastric bypass.
Liu 2023 [126] | 10.1097/JS9.0000000000000559 | Robotic | Model development | Workflow recognition model for robotic left lateral sectionectomy.
Khojah 2025 [127] | 10.1007/s00464-025-11694-5 | Laparoscopic | Model development and intraoperative validation | Real-time ureter localization during laparoscopic sigmoidectomy.
Lavanchy 2024 [128] | 10.1007/s11548-024-03166-3 | Laparoscopic | Dataset creation and benchmarking | Improving AI model generalizability.
Komatsu 2024 [129] | 10.1007/s10120-023-01450-w | Laparoscopic | Model development and feasibility study | Phase recognition for laparoscopic distal gastrectomy.
Sasaki 2022 [130] | 10.1016/j.ijsu.2022.106856 | Laparoscopic | Model development | Automated surgical step identification in laparoscopic hepatectomy.
Madani 2022 [131] | 10.1097/SLA.0000000000004594 | Laparoscopic | Model development and validation | Intraoperative guidance by identifying safe/dangerous zones and anatomical landmarks in cholecystectomy.
Cheng 2022 [132] | 10.1007/s00464-021-08619-3 | Laparoscopic | Model development and multicenter validation | Phase recognition in laparoscopic cholecystectomy.
Golany 2022 [133] | 10.1007/s00464-022-09405-5 | Laparoscopic | Model development | Surgical phase recognition.
Shinozuka 2022 [134] | 10.1007/s00464-022-09160-7 | Laparoscopic | Model development and validation | Surgical phase recognition in laparoscopic cholecystectomy.
Laplante 2022 [25] | 10.1007/s00464-022-09439-9 | Laparoscopic | Model validation | Identifying safe and dangerous zones in left colectomy.
Surgical Decision Support and Outcome Prediction
López 2024 [25] | 10.1007/s00464-024-10681-6 | Laparoscopic | Retrospective multicenter study | To predict surgical complexity and postoperative outcomes in laparoscopic liver surgery.
Masum 2022 [135] | 10.1007/s12672-022-00472-7 | Laparoscopic and Robotic | Retrospective study | Prediction of length of stay, readmission, and mortality.
López 2022 [136] | 10.1007/s11605-022-05398-7 | Laparoscopic | Retrospective multi-institutional cohort study | To identify factors associated with successful initial repair of iatrogenic bile duct injury (IBDI) and predict the success of definitive repair using AI.
Cai 2023 [137] | 10.3748/wjg.v29.i3.536 | Laparoscopic | Retrospective study | Prediction of the number of stapler cartridges needed to avoid high-risk anastomosis.
Dayan 2024 [138] | 10.1007/s00464-024-10847-2 | Laparoscopic | Retrospective observational validation study | Grading intraoperative complexity and safety adherence in laparoscopic appendectomy.
Arpaia 2022 [139] | 10.1038/s41598-022-16030-8 | Laparoscopic | Development and validation study | Assessment of perfusion quality during laparoscopic colorectal surgery.
Gillani 2024 [140] | 10.1016/j.surg.2024.08.015 | Robotic | Feasibility study | Performance indicators during robotic proctectomy.
Emile 2024 [141] | 10.1007/s13304-024-01915-2 | Laparoscopic and Robotic | Retrospective case–control study | Prediction of conversion from minimally invasive (laparoscopic or robotic) to open colectomy.
Wang 2024 [26] | 10.3748/wjg.v30.i43.4669 | Laparoscopic | Multicenter retrospective cohort study | Risk of postoperative complications in laparoscopic radical gastrectomy.
Velmahos 2023 [142] | 10.1177/00031348231167397 | Laparoscopic | Comparative model analysis | Morbidity prediction after laparoscopic colectomy.
Jo 2025 [143] | 10.1016/j.hpb.2025.02.016 | Laparoscopic | Retrospective study | Predictors of conversion to open surgery in laparoscopic repeat liver resection.
Li 2025 [144] | 10.1016/j.surg.2024.108999 | Laparoscopic | Retrospective study | Estimate the risk of duodenal stump leakage in laparoscopic gastrectomy.
Lippenberger 2024 [145] | 10.1007/s00384-024-04593-z | Laparoscopic | Retrospective single-center cohort study | To predict procedure duration of laparoscopic sigmoid resections.
Zhou 2024 [146] | 10.1016/j.heliyon.2024.e26580 | Laparoscopic | Retrospective predictive model | Predicting postoperative ileus in laparoscopic colorectal cancer surgery.
Augmented Reality and Navigation
Aoyama 2024 [28] | 10.1007/s00464-024-11160-8 | Laparoscopic | Feasibility study | To identify anatomical landmarks associated with postoperative pancreatic fistula during laparoscopic gastrectomy.
Du 2022 [147] | 10.1186/s12893-022-01585-0 | Laparoscopic | System development and preclinical evaluation | Intraoperative navigation system fusing a 3D virtual model with real-time laparoscopic images in pancreatic surgery.
Kasai 2024 [148] | 10.7759/cureus.48450 | Laparoscopic | System development | Projection mapping for portal segment identification in laparoscopic liver surgery.
Ryu 2024 [149] | 10.1007/s10895-024-04030-y | Laparoscopic | Feasibility study | Combining AI and NIR fluorescence for anatomical recognition during colorectal surgery.
Garcia-Granero 2023 [150] | 10.1016/j.ciresp.2022.10.023 | Laparoscopic | Case report | 3D image reconstruction system in mesocolic excision and lymphadenectomy.
Guan 2023 [27] | 10.1007/s11548-023-02846-w | Laparoscopic | Laboratory/technical study | Mapping using stereo 3D laparoscopy and CT registration for liver resection.
Ali 2024 [151] | 10.1016/j.media.2024.103371 | Laparoscopic | Challenge dataset + algorithm development | Automate landmark detection for CT-laparoscopic image fusion.
Robu 2017 [152] | 10.1007/s11548-017-1584-7 | Laparoscopic | Proof-of-concept study | View-planning strategy to improve AR registration using CT and laparoscopic video.
Wei 2022 [153] | 10.1109/TBME.2022.3195027 | Laparoscopic | Technical/methodological study | 3D localization method for anatomical navigation during MIS.
Nicolaou 2005 [154] | 10.1007/11566489_4 | Laparoscopic | Feasibility study | Enhance depth perception.
Calinon 2014 [155] | 10.1016/j.cmpb.2013.12.015 | Laparoscopic | Technical development | Skill transfer interfaces for a flexible surgical robot using context-aware learning.
Image Enhancement
Zheng 2022 [156] | 10.1007/s11548-022-02777-y | Laparoscopic | Algorithm development | Remove visual impairments from laparoscopic video.
Cheng 2022 [157] | 10.1155/2022/2752444 | Robotic | Comparative experimental study | Image edge detection algorithm in robotic gastric surgery.
Akbari 2009 [29] | 10.1109/IEMBS.2009.5333766 | Laparoscopic | Prospective evaluation | Artery detection in laparoscopic cholecystectomy.
Katić 2013 [158] | 10.1016/j.compmedimag.2013.03.003 | Laparoscopic | Conceptual/methodological article | Reduce information overload in surgery.
Beyersdorffer 2021 [159] | 10.1515/BMT-2020-0106 | Laparoscopic | Feasibility study | Detect the presence of the dissecting tool within the camera field.
Salazar-Colores 2022 [160] | 10.24875/CIRU.20000951 | Laparoscopic | Algorithm development | Remove surgical smoke.
Wagner 2021 [30] | 10.1007/s00464-021-08509-8 | Laparoscopic | Prospective experimental study | Autonomous camera-guiding robot.
He 2025 [161] | 10.1007/s00464-025-11693-6 | Laparoscopic | Prospective feasibility study | Intraoperative perfusion assessment in colorectal surgery.
Surgeon Perception, Preparedness, and Attitudes
Acosta 2025 [31] | 10.1016/j.ciresp.2024.12.003 | Robotic | Survey study | Evaluate Spanish surgeons' knowledge, attitudes, and preparedness toward Digital Surgery and AI, comparing robotic vs. non-robotic users.
Lünse 2023 [162] | 10.1007/s00423-023-03134-6 | Laparoscopic | Survey study | Survey of German surgeons to identify limitations of current laparoscopy and desired AI features in future systems.
Shafiei 2025 [32] | 10.1177/00187208241285513 | Laparoscopic and Robotic | Experimental study using simulation data | To predict mental workload during surgical tasks.
Table 3. Risk of bias assessment.
Category | Low Risk | Moderate Risk | High Risk | External Validation | Multicenter Design
Object/Structure Detection | 28.0% | 60.0% | 12.0% | 12.0% | 18.0%
Skill Assessment & Training | 21.2% | 66.7% | 12.1% | 9.1% | 15.2%
Workflow Recognition & Guidance | 35.0% | 55.0% | 10.0% | 25.0% | 30.0%
Decision Support & Outcome Prediction | 31.8% | 59.1% | 9.1% | 27.3% | 40.9%
Augmented Reality & Navigation | 40.0% | 50.0% | 10.0% | 33.3% | 46.7%
Image Enhancement | 20.0% | 60.0% | 20.0% | – | 100.0%
Global | 31.4% | 58.0% | 10.6% | 20.0% | 27.1%
Table 4. Summary of performance metrics and validation characteristics by thematic category. Median or representative values are shown where available.
Category | Typical Metrics Reported | Median/Representative Values | External Validation (%) | Clinical or Ex Vivo Validation (%) | Multicenter Studies (%) | Main Limitations
Object/Structure Detection | Accuracy, IoU, Dice, F1-score | Accuracy 0.87 (range 0.72–0.96); Dice 0.83 (0.68–0.92) | 12% | 48% | 18% | Lack of standard annotation, limited generalizability
Surgical Skill Assessment & Training | Accuracy, F1-score, correlation with human rating | Accuracy 0.84 (0.70–0.93); r = 0.71 with expert scoring | 8% | 15% | 15% | Mostly simulation-based, subjective reference
Workflow Recognition & Intraoperative Guidance | Phase accuracy, F1-score, mAP | Accuracy 0.89 (0.74–0.95); mAP 0.82 (0.65–0.91) | 25% | 30% | 30% | Variability in labelling and phase definitions
Decision Support & Outcome Prediction | AUC, sensitivity, specificity | AUC 0.86 (0.73–0.93); sens. 0.82 (0.70–0.91) | 27% | 40% | 41% | Retrospective data, poor model interpretability
Augmented Reality & Navigation | Registration error (TRE, mm), overlay latency | TRE 3.5 mm (2.4–5.8); latency < 150 ms | 33% | 66% | 45% | Limited intraoperative usability evaluation
Image Enhancement | PSNR, SSIM, FPS | PSNR 31.5 dB (28–37); SSIM 0.91 (0.84–0.96) | 10% | 30% | 20% | Preclinical data, no clinical outcome measures
Perception, Preparedness & Attitudes | Descriptive survey statistics | Positive perception ≥ 80%; low formal AI training (≤20%) | – | – | 100% | Self-reporting bias, uneven response rate
IoU: intersection over union; Dice: Dice similarity coefficient; F1: F1-score; AUC: area under the receiver operating characteristic curve; TRE: target registration error; PSNR: peak signal-to-noise ratio; SSIM: structural similarity index; FPS: frames per second.
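As an aid to interpreting Table 4, the short example below illustrates how two of the most frequently reported segmentation metrics, the Dice similarity coefficient and the intersection over union (IoU), are computed from a pair of binary masks. It is a minimal Python sketch for illustration only, assuming NumPy arrays as masks; the function names and the toy masks are our own and are not taken from any of the included studies.

import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Dice similarity coefficient between two binary masks (1 = structure, 0 = background).
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    # Intersection over union (Jaccard index) between two binary masks.
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

if __name__ == "__main__":
    # Toy 4 x 4 example: a "ground-truth" mask and an imperfect "prediction".
    target = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]])
    pred = np.array([[0, 1, 1, 1], [0, 0, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
    print(f"Dice = {dice_coefficient(pred, target):.2f}, IoU = {iou(pred, target):.2f}")
    # Expected output: Dice = 0.75, IoU = 0.60

Because Dice counts the overlap twice relative to the combined mask sizes, it is never smaller than IoU for the same pair of masks, and both equal 1.0 only for a perfect overlap; this is one reason reported Dice values tend to look slightly higher than the corresponding IoU.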
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
