Automated Capture of Intraoperative Adverse Events Using Artificial Intelligence: A Systematic Review and Meta-Analysis
Abstract
1. Introduction
2. Materials and Methods
2.1. Search Strategy
2.2. Inclusion and Exclusion Criteria
2.3. Screening
2.4. Data Extraction
2.5. Meta-Analysis
2.6. Quality Assessment
3. Results
3.1. Outcome Statistics
3.2. Study Validation and Conventional Parameters
3.3. Meta-Analysis
3.4. Quality Assessment
4. Discussion
5. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Leape, L.L.; Brennan, T.A.; Laird, N.; Lawthers, A.G.; Localio, A.R.; Barnes, B.A.; Hebert, L.; Newhouse, J.P.; Weiler, P.C.; Hiatt, H. The nature of adverse events in hospitalized patients: Results of the Harvard Medical Practice Study II. N. Engl. J. Med. 1991, 324, 377–384.
- Mitchell, I.; Schuster, A.; Smith, K.; Pronovost, P.; Wu, A. Patient safety incident reporting: A qualitative study of thoughts and perceptions of experts 15 years after “to err is human”. BMJ Qual. Saf. 2016, 25, 92–99.
- Bohnen, J.D.; Mavros, M.N.; Ramly, E.P.; Chang, Y.; Yeh, D.D.; Lee, J.; De Moya, M.; King, D.R.; Fagenholz, P.J.; Butler, K. Intraoperative adverse events in abdominal surgery: What happens in the operating room does not stay in the operating room. Ann. Surg. 2017, 265, 1119–1125.
- Ramly, E.P.; Larentzakis, A.; Bohnen, J.D.; Mavros, M.; Chang, Y.; Lee, J.; Yeh, D.D.; Demoya, M.; King, D.R.; Fagenholz, P.J. The financial impact of intraoperative adverse events in abdominal surgery. Surgery 2015, 158, 1382–1388.
- Han, K.; Bohnen, J.D.; Peponis, T.; Martinez, M.; Nandan, A.; Yeh, D.D.; Lee, J.; Demoya, M.; Velmahos, G.; Kaafarani, H.M. The surgeon as the second victim? Results of the Boston Intraoperative Adverse Events Surgeons’ Attitude (BISA) study. J. Am. Coll. Surg. 2017, 224, 1048–1056.
- Kaafarani, H.M.; Velmahos, G.C. Intraoperative adverse events: The neglected quality indicator of surgical care? Surgery 2015, 157, 6–7.
- Jung, J.J.; Elfassy, J.; Jüni, P.; Grantcharov, T. Adverse events in the operating room: Definitions, prevalence, and characteristics. A systematic review. World J. Surg. 2019, 43, 2379–2392.
- Madani, A.; Namazi, B.; Altieri, M.S.; Hashimoto, D.A.; Rivera, A.M.; Pucher, P.H.; Navarrete-Welton, A.; Sankaranarayanan, G.; Brunt, L.M.; Okrainec, A. Artificial intelligence for intraoperative guidance: Using semantic segmentation to identify surgical anatomy during laparoscopic cholecystectomy. Ann. Surg. 2022, 276, 363–369.
- Hashimoto, D.A.; Rosman, G.; Witkowski, E.R.; Stafford, C.; Navarrete-Welton, A.J.; Rattner, D.W.; Lillemoe, K.D.; Rus, D.L.; Meireles, O.R. Computer vision analysis of intraoperative video: Automated recognition of operative steps in laparoscopic sleeve gastrectomy. Ann. Surg. 2019, 270, 414.
- Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Syst. Rev. 2021, 10, 1–11.
- McInnes, M.D.; Moher, D.; Thombs, B.D.; McGrath, T.A.; Bossuyt, P.M.; Clifford, T.; Cohen, J.F.; Deeks, J.J.; Gatsonis, C.; Hooft, L. Preferred reporting items for a systematic review and meta-analysis of diagnostic test accuracy studies: The PRISMA-DTA statement. JAMA 2018, 319, 388–396.
- Salameh, J.-P.; Bossuyt, P.M.; McGrath, T.A.; Thombs, B.D.; Hyde, C.J.; Macaskill, P.; Deeks, J.J.; Leeflang, M.; Korevaar, D.A.; Whiting, P. Preferred reporting items for systematic review and meta-analysis of diagnostic test accuracy studies (PRISMA-DTA): Explanation, elaboration, and checklist. BMJ 2020, 370, m2632.
- Goldenberg, M.; Sayegh, A.S.; Cacciamani, G.E. Automated Capture of Intraoperative Adverse Events: A Systematic Review. PROSPERO Int. Prospect. Regist. Syst. Rev. 2022.
- Stam, W.T.; Goedknegt, L.K.; Ingwersen, E.W.; Schoonmade, L.J.; Bruns, E.R.; Daams, F. The prediction of surgical complications using artificial intelligence in patients undergoing major abdominal surgery: A systematic review. Surgery 2021, 171, 1014–1021.
- Russo, G.I.; Sholklapper, T.N.; Cocci, A.; Broggi, G.; Caltabiano, R.; Smith, A.B.; Lotan, Y.; Morgia, G.; Kamat, A.M.; Witjes, J.A. Performance of narrow band imaging (NBI) and photodynamic diagnosis (PDD) fluorescence imaging compared to white light cystoscopy (WLC) in detecting non-muscle invasive bladder cancer: A systematic review and lesion-level diagnostic meta-analysis. Cancers 2021, 13, 4378.
- Whiting, P.F.; Rutjes, A.W.; Westwood, M.E.; Mallett, S.; Deeks, J.J.; Reitsma, J.B.; Leeflang, M.M.; Sterne, J.A.; Bossuyt, P.M. QUADAS-2: A revised tool for the quality assessment of diagnostic accuracy studies. Ann. Intern. Med. 2011, 155, 529–536.
- Chen, J.-W.; Lin, W.-J.; Lin, C.-Y.; Hung, C.-L.; Hou, C.-P.; Tang, C.-Y. An Automatic Bleeding-Rank System for Transurethral Resection of the Prostate Surgery Videos Using Machine Learning. Diagnostics 2021, 11, 1767.
- Morita, S.; Tabuchi, H.; Masumoto, H.; Tanabe, H.; Kamiura, N. Real-time surgical problem detection and instrument tracking in cataract surgery. J. Clin. Med. 2020, 9, 3896.
- Park, S.-H.; Park, H.-M.; Baek, K.-R.; Ahn, H.-M.; Lee, I.Y.; Son, G.M. Artificial intelligence based real-time microcirculation analysis system for laparoscopic colorectal surgery. World J. Gastroenterol. 2020, 26, 6945.
- Su, R.; van der Sluijs, M.; Cornelissen, S.A.; Lycklama, G.; Hofmeijer, J.; Majoie, C.B.; van Doormaal, P.J.; van Es, A.C.; Ruijters, D.; Niessen, W.J. Spatio-temporal deep learning for automatic detection of intracranial vessel perforation in digital subtraction angiography during endovascular thrombectomy. Med. Image Anal. 2022, 77, 102377.
- Zhang, S.; Wu, S.; Shang, S.; Qin, X.; Jia, X.; Li, D.; Cui, Z.; Xu, T.; Niu, G.; Bouakaz, A. Detection and monitoring of thermal lesions induced by microwave ablation using ultrasound imaging and convolutional neural networks. IEEE J. Biomed. Health Inform. 2019, 24, 965–973.
- Zha, X.; Wehbe, L.; Sclabassi, R.J.; Mace, Z.; Liang, Y.V.; Yu, A.; Leonardo, J.; Cheng, B.C.; Hillman, T.A.; Chen, D.A. A deep learning model for automated classification of intraoperative continuous emg. IEEE Trans. Med. Robot. Bionics 2020, 3, 44–52.
- Garcia-Martinez, A.; Vicente-Samper, J.M.; Sabater-Navarro, J.M. Automatic detection of surgical haemorrhage using computer vision. Artif. Intell. Med. 2017, 78, 55–60.
- Wei, H.; Rudzicz, F.; Fleet, D.; Grantcharov, T.; Taati, B. Intraoperative Adverse Event Detection in Laparoscopic Surgery: Stabilized Multi-Stage Temporal Convolutional Network with Focal-Uncertainty Loss. Proc. Mach. Learn. Healthc. Conf. 2021, 149, 1–24.
- Hua, S.; Gao, J.; Wang, Z.; Yeerkenbieke, P.; Li, J.; Wang, J.; He, G.; Jiang, J.; Lu, Y.; Yu, Q. Automatic bleeding detection in laparoscopic surgery based on a faster region-based convolutional neural network. Ann. Transl. Med. 2022, 10, 546.
- Okamoto, T.; Ohnishi, T.; Kawahira, H.; Dergachyava, O.; Jannin, P.; Haneishi, H. Real-time identification of blood regions for hemostasis support in laparoscopic surgery. Signal Image Video Process. 2019, 13, 405–412.
- Jo, K.; Choi, B.; Choi, S.; Moon, Y.; Choi, J. Automatic detection of hemorrhage and surgical instrument in laparoscopic surgery image. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016.
- Kugener, G.; Zhu, Y.; Pangal, D.J.; Sinha, A.; Markarian, N.; Roshannai, A.; Chan, J.; Anandkumar, A.; Hung, A.J.; Wrobel, B.B.; et al. Deep neural networks can accurately detect blood loss and hemorrhage control task success from intraoperative video. Neurosurgery 2022.
- Pangal, D.J.; Kugener, G.; Zhu, Y.; Sinha, A.; Unadkat, V.; Cote, D.J.; Strickland, B.; Rutkowski, M.; Hung, A.; Anandkumar, A. Expert surgeons and deep learning models can predict the outcome of surgical hemorrhage from 1 min of video. Sci. Rep. 2022, 12, 1–10.
- Lecoanet, A.; Vidal-Trecan, G.; Prate, F.; Quaranta, J.-F.; Sellier, E.; Guyomard, A.; Seigneurin, A.; François, P. Assessment of the contribution of morbidity and mortality conferences to quality and safety improvement: A survey of participants’ perceptions. BMC Health Serv. Res. 2016, 16, 1–7.
- Jin, A.; Yeung, S.; Jopling, J.; Krause, J.; Azagury, D.; Milstein, A.; Fei-Fei, L. Tool detection and operative skill assessment in surgical videos using region-based convolutional neural networks. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018.
- Yamazaki, Y.; Kanaji, S.; Matsuda, T.; Oshikiri, T.; Nakamura, T.; Suzuki, S.; Hiasa, Y.; Otake, Y.; Sato, Y.; Kakeji, Y. Automated surgical instrument detection from laparoscopic gastrectomy video images using an open source convolutional neural network platform. J. Am. Coll. Surg. 2020, 230, 725–732.e1.
- Lee, D.; Yu, H.W.; Kwon, H.; Kong, H.-J.; Lee, K.E.; Kim, H.C. Evaluation of surgical skills during robotic surgery by deep learning-based multiple surgical instrument tracking in training and actual operations. J. Clin. Med. 2020, 9, 1964.
- Cacciamani, G.E.; Anvar, A.; Chen, A.; Gill, I.; Hung, A.J. How the use of the artificial intelligence could improve surgical skills in urology: State of the art and future perspectives. Curr. Opin. Urol. 2021, 31, 378–384.
- Checcucci, E.; Autorino, R.; Cacciamani, G.E.; Amparore, D.; De Cillis, S.; Piana, A.; Piazzolla, P.; Vezzetti, E.; Fiori, C.; Veneziano, D.; et al. Artificial intelligence and neural networks in urology: Current clinical applications. Minerva Urol. Nefrol. 2020, 72, 49–57.
- Chen, A.B.; Haque, T.; Roberts, S.; Rambhatla, S.; Cacciamani, G.; Dasgupta, P.; Hung, A.J. Artificial Intelligence Applications in Urology: Reporting Standards to Achieve Fluency for Urologists. Urol. Clin. N. Am. 2022, 49, 65–117.
- Gómez Rivas, J.; Toribio Vázquez, C.; Ballesteros Ruiz, C.; Taratkin, M.; Marenco, J.L.; Cacciamani, G.E.; Checcucci, E.; Okhunov, Z.; Enikeev, D.; Esperto, F.; et al. Artificial intelligence and simulation in urology. Actas Urol. Esp. (Engl. Ed.) 2021, 45, 524–529.
- Hung, A.J.; Chen, A.B.; Cacciamani, G.E.; Gill, I.S. Artificial Intelligence Will (MAY) Make Doctors Expendable (IN GOOD WAYS). Pro. Eur. Urol. Focus 2021, 7, 683–684.
- Anh, N.X.; Nataraja, R.M.; Chauhan, S. Towards near real-time assessment of surgical skills: A comparison of feature extraction techniques. Comput. Methods Programs Biomed. 2020, 187, 105234.
- Zia, A.; Sharma, Y.; Bettadapura, V.; Sarin, E.L.; Essa, I. Video and accelerometer-based motion analysis for automated surgical skills assessment. Int. J. Comput. Assist. Radiol. Surg. 2018, 13, 443–455.
- Azari, D.P.; Frasier, L.L.; Quamme, S.R.P.; Greenberg, C.C.; Pugh, C.; Greenberg, J.A.; Radwin, R.G. Modeling surgical technical skill using expert assessment for automated computer rating. Ann. Surg. 2019, 269, 574.
- Hung, A.J.; Chen, J.; Jarc, A.; Hatcher, D.; Djaladat, H.; Gill, I.S. Development and validation of objective performance metrics for robot-assisted radical prostatectomy: A pilot study. J. Urol. 2018, 199, 296–304.
- Levin, M.; McKechnie, T.; Khalid, S.; Grantcharov, T.P.; Goldenberg, M. Automated methods of technical skill assessment in surgery: A systematic review. J. Surg. Educ. 2019, 76, 1629–1639.
- Aminsharifi, A.; Irani, D.; Pooyesh, S.; Parvin, H.; Dehghani, S.; Yousofi, K.; Fazel, E.; Zibaie, F. Artificial neural network system to predict the postoperative outcome of percutaneous nephrolithotomy. J. Endourol. 2017, 31, 461–467.
- Murff, H.J.; FitzHenry, F.; Matheny, M.E.; Gentry, N.; Kotter, K.L.; Crimin, K.; Dittus, R.S.; Rosen, A.K.; Elkin, P.L.; Brown, S.H. Automated identification of postoperative complications within an electronic medical record using natural language processing. JAMA 2011, 306, 848–855.
- Bruce, J.; Russell, E.; Mollison, J.; Krukowski, Z. The measurement and monitoring of surgical adverse events. Clin. Gov. 2002, 7, 48.
- Hamilton, E.C.; Pham, D.H.; Minzenmayer, A.N.; Austin, M.T.; Lally, K.P.; Tsao, K.; Kawaguchi, A.L. Are we missing the near misses in the OR?—Underreporting of safety incidents in pediatric surgery. J. Surg. Res. 2018, 221, 336–342.
- Seiden, S.C.; Barach, P. Wrong-side/wrong-site, wrong-procedure, and wrong-patient adverse events: Are they preventable? Arch. Surg. 2006, 141, 931–939.
- Elwy, A.R.; Itani, K.M.; Bokhour, B.G.; Mueller, N.M.; Glickman, M.E.; Zhao, S.; Rosen, A.K.; Lynge, D.; Perkal, M.; Brotschi, E.A. Surgeons’ disclosures of clinical adverse events. JAMA Surg. 2016, 151, 1015–1021.
- Alsubaie, H.; Goldenberg, M.; Grantcharov, T. Quantifying recall bias in surgical safety: A need for a modern approach to morbidity and mortality reviews. Can. J. Surg. 2019, 62, 39.
- Feldman, L.; Barkun, J.; Barkun, A.; Sampalis, J.; Rosenberg, L. Measuring postoperative complications in general surgery patients using an outcomes-based strategy: Comparison with complications presented at morbidity and mortality rounds. Surgery 1997, 122, 711–720.
- Hutter, M.M.; Rowell, K.S.; Devaney, L.A.; Sokal, S.M.; Warshaw, A.L.; Abbott, W.M.; Hodin, R.A. Identification of surgical complications and deaths: An assessment of the traditional surgical morbidity and mortality conference compared with the American College of Surgeons-National Surgical Quality Improvement Program. J. Am. Coll. Surg. 2006, 203, 618–624.
Author (Year) | Study Type, Data Source | Dataset | Type of Procedure, Surgical Specialty | Adverse Event, Medium | Type of AI | AI Training/Ground-Truth Establishment | Validation | Outcome/Comparison to Ground-Truth or Conventional Parameter |
---|---|---|---|---|---|---|---|---|
Chen et al. [17] (2021) | Retrospective Data source: Recorded videos of 50 different TURP procedures | 287 video clips from complete recording videos of 50 different TURP procedures 150 videos training data (10% for validation); 137 videos testing data | TURP, urology | Bleeding, video | ResUNet for segmentation (neural network) KNN, NB, Random Forest, SVM for video classification | 3 experienced urologists graded video clips 0–3 based on visual clarity | Validation completed (limited information), unique data for testing stage | KNN: highest performing AI classification model; Improved when compared to ground-truth after optimizing video |
Morita et al. [18] (2020) | Retrospective Data source: Recordings of cataract surgeries performed at Saneikai Tsukazaki Hospital | 425 video recordings of cataract surgery 310 training data (57 with problems), 15 validation data (5 problems), 100 test data (50 with problems) | Cataract surgery, ophthalmology | Vitreous prolapse, capsule rupture, damage to iris, iris prolapse, rupture of the zonule of the zinn, dropped nucleus, video | Inception V3 (neural network) | Annotations of surgical problems in video of cataract surgery | Validation completed (limited information), unique data for testing stage | High problem detection in critical phase of cataract surgery; detected problem faster than ophthalmologist 42/44 (95%) times |
Park et al. [19] (2020) | Prospective Data source: Patients undergoing laparoscopic surgery for colorectal cancer at Pusan National University Yangsan Hospital | 50 training videos (10,000 ICG curves from 200 different locations in the ICG videos) 15 testing videos | Laparoscopic surgery for colorectal cancer, general surgery | Microperfusion, Indocyanine green (ICG) curves | Self-organizing map (neural network) | Training ICG curves were classified into 25 most common patterns, associated with risk of inadequate perfusion | Cross-validation, unique data for testing stage | Compared to T 1/2max, TR, and RS, AUC higher (0.842 vs. 0.734, 0.750, 0.677) and equal or higher for most other statistics |
Su et al. [20] (2022) | Retrospective Data source: 3 large digital subtraction angiography (DSA) image series databases | 4429 patients from 3 databases; 85 perforations, 233 non-perforations in study | Endovascular therapy, interventionalist | Intracranial vessel perforation, DSA runs | Spatial-temporal networks (CNN, RNN) | Experienced radiologist reviewed DSA images for all perforation cases and annotated locations | Ten-fold cross-validation | AI performed at similar level as expert radiologist |
Zhang et al. [21] (2019) | Retrospective Data source: 82 groups of ablation experiments from 32 ex vivo liver tissues | 1640 ultrasound data matrices of thermal lesions: 1400 for training, 240 for testing | Microwave ablation, n/s | Thermal injuries, ultrasound images | CNN | Optical images of tissues sections used as ground-truths | Validated (limited information) | AUC for AI higher than conventional B-mode images |
Zha et al. [22] (2020) | Prospective Data source: EMG data recorded continuously during thyroid surgery | 5 patients undergoing thyroid surgery One patient model (85% for training, 15% for testing) Cross-testing (4 for training, 1 for testing) | Thyroid surgery, n/s | Abnormal EMG signals, intraoperative neurophysiological monitoring | CNN, LSTM | Expert neurophysiologists classified EMGs | Unclear validation, unique data for testing stage | AI outperformed other baseline methods
Garcia-Martinez et al. [23] (2017) | Retrospective Data source: Non-specific laparoscopic videos; in vitro laboratory system | 23 in vivo laparoscopic training videos (17 bleeding) In vitro training videos with 5 different configurations 25 in vitro images for testing 32 in vivo images for testing | Various (cholecystectomies, pelvic surgeries, total mesorectal excisions, radical hysterectomies, pancreatectomy, gastrectomy, aortic lymphadenectomy, retroperitoneal dissections, nephroureterectomies, and colectomies) | Bleeding, video | Computer vision algorithm (open source computer vision and machine learning software library OpenCV) | Developed algorithm after analyzing series of images for blood detection based on pixel ratios | Cross-validation of pixels to obtain threshold for bleeding, unique data for testing stage | Compared to two previous algorithms for blood pixel classification; in vitro bleeding classification performed better than in vivo bleeding classification
Wei et al. [24] (2021) | Retrospective Data Source: Operating room at St. Michael’s Hospital in Toronto, Canada, using the OR Black Box ® | 130 laparoscopic videos | Laparoscopic surgery for colorectal cancer, general surgery | Bleeding/thermal injury, video | CNN | Videos reviewed and annotated by three trained surgeons, labeling blood, bleeding, burn, and thermal injury | 5-fold cross-validation to select best epoch and threshold, unclear if used unique data for testing | AI outperformed InceptionV3: AUROC 0.74 vs. 0.80 in bleeding detection; 0.83 vs. 0.93 in thermal injury detection; average precision 0.24 vs. 0.36 in bleeding; 0.38 vs. 0.56 in thermal injury detection |
Hua et al. [25] (2022) | Retrospective Data source: Laparoscopic surgery video recorded at Peking Union Medical College Hospital | 12 bleeding video clips (10 laparoscopic surgeries) | Laparoscopic surgery, general surgery | Bleeding point detection, video | RCNN | Ground-truth areas of bleeding point marked by 2 senior surgeons | No validation explanation | Introduced temporal component that improved bleeding detection compared to previous systems |
Okamoto et al. [26] (2019) | Retrospective Data source: Non-specific laparoscopic surgical videos | 10 videos of patients undergoing laparoscopic surgery | Laparoscopic surgery, n/s | Bleeding, video | SVM | Ground-truth established by annotations | Cross-validation, unique data for testing stage | High outcome measures when compared to ground-truths |
Jo et al. [27] (2016) | Retrospective Data source: Non- specific laparoscopic surgical videos | 4 testing videos | Robot-assisted laparoscopy, n/s | Bleeding, video | Original algorithm | Established threshold for hemorrhage candidate areas | No validation explanation, likely unique data for testing stage | No comparison identified |
Kugener et al. [28] (2022) | Retrospective Data source: SOCAL | 123 training videos, 20 testing videos | Internal carotid artery injury repair, neurosurgery | Bleeding, video | Deep neural network, LSTM | Automated and annotated versions | Validation of model | RMSE higher compared to two control methods
Pangal et al. [29] (2022) | Retrospective Data source: SOCAL | 127 training videos, 20 testing videos | Internal carotid artery injury repair, neurosurgery | Bleeding, video | Deep Neural Network, LSTM | Blood loss measured for ground-truth | Validated SOCALNet predictions | SOCALNet met or surpassed expert prediction performance |
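Most studies in the table score their detectors against expert ground-truth annotations using AUROC, sensitivity, and specificity (e.g., the AUC comparisons of Park et al. [19] and Wei et al. [24]). As a minimal, purely illustrative sketch of how such figures are computed for a frame-level bleeding detector, the snippet below uses made-up toy labels and scores, not data from any included study:

```python
# Illustrative only: scoring a detector's per-frame outputs against
# expert ground-truth annotations. Labels and scores are toy data.

def auroc(y_true, y_score):
    """Probability that a randomly chosen positive frame outscores a
    randomly chosen negative frame (ties count half)."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(y_true, y_score, threshold):
    """Sensitivity and specificity at a fixed operating threshold."""
    tp = sum(1 for t, s in zip(y_true, y_score) if t == 1 and s >= threshold)
    fn = sum(1 for t, s in zip(y_true, y_score) if t == 1 and s < threshold)
    tn = sum(1 for t, s in zip(y_true, y_score) if t == 0 and s < threshold)
    fp = sum(1 for t, s in zip(y_true, y_score) if t == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical per-frame ground truth (1 = bleeding annotated by reviewers)
# and hypothetical model confidence scores for the same frames
y_true  = [0, 0, 0, 1, 1, 0, 1, 1, 0, 1]
y_score = [0.1, 0.3, 0.2, 0.8, 0.7, 0.4, 0.9, 0.35, 0.2, 0.85]

area = auroc(y_true, y_score)
sensitivity, specificity = sens_spec(y_true, y_score, threshold=0.5)
```

Reporting all three values matters because a detector can trade sensitivity against specificity simply by moving the threshold, whereas AUROC summarizes performance across all thresholds.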
Algorithm Type | Citation Using Algorithm |
---|---|
Trees and boosting (Random Forest) | 17 |
Support vector machine | 17, 26 |
Naïve Bayes | 17 |
K nearest neighbor | 17 |
Artificial neural network | 17, 18, 19, 20, 21, 22, 24, 25, 28, 29 |
Computer vision algorithm | 23, 26, 27 |
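Among the classical learners in the table above, Chen et al. [17] found k-nearest neighbor (KNN) the highest-performing model for bleeding-grade classification of TURP video clips. The sketch below shows the KNN idea on hypothetical per-clip features (red-pixel ratio and mean brightness are assumptions for illustration, not the features used in [17]):

```python
# Illustrative only: a minimal k-nearest-neighbour classifier of the kind
# Chen et al. [17] found best-performing for bleeding-grade classification.
# The (red-pixel ratio, mean brightness) features and grades are made up.
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Label the query point by majority vote among its k nearest
    training points (Euclidean distance in feature space)."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# (red-pixel ratio, mean brightness) -> bleeding grade 0 (clear) .. 2 (severe)
train = [
    ((0.05, 0.9), 0), ((0.10, 0.8), 0),
    ((0.30, 0.6), 1), ((0.35, 0.5), 1),
    ((0.60, 0.3), 2), ((0.70, 0.2), 2),
]

grade = knn_predict(train, (0.32, 0.55))  # two of three neighbours are grade 1
```

KNN needs no training phase beyond storing labeled examples, which may explain its appeal for small surgical-video datasets; the neural networks in the table instead learn feature representations directly from pixels.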
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Eppler, M.B.; Sayegh, A.S.; Maas, M.; Venkat, A.; Hemal, S.; Desai, M.M.; Hung, A.J.; Grantcharov, T.; Cacciamani, G.E.; Goldenberg, M.G. Automated Capture of Intraoperative Adverse Events Using Artificial Intelligence: A Systematic Review and Meta-Analysis. J. Clin. Med. 2023, 12, 1687. https://doi.org/10.3390/jcm12041687