Healthcare
  • Systematic Review
  • Open Access

27 February 2024

Risk Management and Patient Safety in the Artificial Intelligence Era: A Systematic Review

1 Department of Anatomical, Histological, Forensic and Orthopaedic Sciences, Sapienza University of Rome, 00161 Rome, Italy
2 Complex Intercompany Structure of Forensic Medicine, 85100 Potenza, Italy
3 Department of Medical and Surgical Sciences, University Magna Graecia of Catanzaro, 88100 Catanzaro, Italy
4 Regional Hospital “San Carlo”, 85100 Potenza, Italy

Abstract

Background: Healthcare systems represent complex organizations within which multiple factors (physical environment, human factors, technological devices, quality of care) interconnect to form a dense network whose imbalance is potentially able to compromise patient safety. In this scenario, the need for hospitals to expand reactive and proactive clinical risk management programs is easily understood, and artificial intelligence (AI) fits well in this context. This systematic review aims to investigate the state of the art regarding the impact of AI on clinical risk management processes. To simplify the analysis of the review outcomes and to enable standardized comparisons with subsequent studies, the findings of the present review are grouped according to the applicability of AI to the prevention of the different incident type groups defined by the International Classification for Patient Safety (ICPS). Materials and Methods: On 3 November 2023, a systematic review of the literature was carried out according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines using the SCOPUS and Medline (via PubMed) databases. A total of 297 articles were identified. After the selection process, 36 articles were included in the present systematic review. Results and Discussion: The studies included in this review allowed for the identification of three main “incident type” domains: clinical process, healthcare-associated infection, and medication. Another relevant application of AI in clinical risk management concerns incident reporting. Conclusions: This review highlighted that AI can be applied transversely in various clinical contexts to enhance patient safety and facilitate the identification of errors. It appears to be a promising tool to improve clinical risk management, although its use requires human supervision and cannot completely replace human skills. To facilitate the analysis of the review outcomes and to enable comparison with future systematic reviews, it was deemed useful to refer to a pre-existing taxonomy, the ICPS, for the identification of adverse events. However, the results also highlighted the usefulness of AI not only for risk prevention in clinical practice, but also for improving the use of an essential risk identification tool, namely incident reporting. For this reason, the taxonomy of the areas of application of AI to clinical risk processes should include an additional class relating to risk identification and analysis tools.

1. Introduction

Healthcare systems represent complex organizations [] within which multiple interconnected factors (physical environment, human factors, technological devices, quality of care) form a dense network whose imbalance is potentially able to compromise patient safety []. The latter, defined as “the absence of preventable harm to a patient and reduction of risk of unnecessary harm associated with healthcare to an acceptable minimum” [], is a fundamental principle of healthcare and is part of the patient’s rights. According to the World Health Organization (WHO), the chance of being harmed in an aviation accident is one in a million, while the chance of a patient being harmed during healthcare is one in three hundred []. As specified by the Global Patient Safety Action Plan 2021–2030, patient safety incidents are a growing problem and one of the major causes of death and disability worldwide []. In this scenario, the need for hospitals to expand clinical governance programs is easily understood. Clinical governance is defined as the system through which healthcare organizations improve the quality of care and guarantee high standards of care, striving for excellence [].
One of the main pillars of this healthcare quality system is clinical risk management, which refers to the set of proactive and reactive clinical tools, procedures, and processes used to detect, monitor, reduce, and prevent potential risks and errors in clinical practice to safeguard patient safety []. Risk assessment instruments include incident reporting [], reviews of medical records, safety walk-arounds, administrative data obtained from hospital discharge forms, patient complaints, and information derived from claims litigation. These methods allow for the identification of potential or actual problems that may cause or have caused an adverse event for the patient or healthcare workers. Risk can be managed using several approaches, including Failure Mode and Effects Analysis (FMEA) and Failure Mode, Effects and Criticality Analysis (FMECA) [,], morbidity and mortality review, clinical auditing [], significant event auditing, the London Protocol, the SHELL model, and root cause analysis []. The information collected after identifying and analyzing the weaknesses present in the clinical care process informs the introduction of future risk prevention strategies aimed at improving the quality of care. This objective is achieved through the introduction or implementation of procedures and protocols, by ensuring continuous training for healthcare workers, and by introducing new technologies [].
Nowadays, the introduction and development of new technologies in the healthcare sector is transforming medical services and offers the opportunity to minimize harm. Artificial intelligence (AI), the application of which has grown exponentially in various sectors in recent times, fits well in this context [].
The term AI, coined in 1956 [], refers to informatics technologies that simulate human behavior and enable devices to perform tasks that usually require skilled human intelligence. It is used in different fields, and over the past few years its application in medicine has been growing []. AI encompasses several domains, such as machine learning (ML), which relies on different algorithms to learn and improve from experience without being explicitly programmed []; deep learning (DL), based on artificial neural networks []; and speech recognition, that is, the ability of a machine to convert a speech signal into a sequence of words, creating an interface between humans and technology []. AI can enhance medical achievements through automated clinical decision support (CDS), which assists health workers in making complex decisions in clinical practice by combining relevant patient information []. This application is facilitated by the increasing diffusion of electronic medical record (EMR) and computerized provider order entry (CPOE) systems []. Moreover, natural language processing (NLP) is a technique able to convert unstructured text (e.g., medical records) into datasets that ML models can readily analyze []. Lastly, AI supports Internet of Medical Things (IoMT) technologies, which are bio-analytical tools capable of collecting, analyzing, and transmitting health data to increase the efficiency of human care [].
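To make the NLP step concrete, the following minimal sketch (purely illustrative and not drawn from any of the included studies) shows how a few invented free-text notes can be converted into a structured numerical matrix and fed to a standard ML classifier; the example notes, labels, and choice of TF-IDF features with logistic regression from scikit-learn are assumptions made only for demonstration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical free-text notes and adverse-event labels, for illustration only
notes = [
    "patient fell while transferring from bed, no head trauma",
    "wrong dose of enoxaparin administered, corrected before harm",
    "surgical site red and warm on day 3, swab taken",
    "routine post-operative course, no complications",
]
labels = ["fall", "medication", "infection", "none"]

# NLP step: convert unstructured text into a structured numerical dataset
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X = vectorizer.fit_transform(notes)          # sparse document-term matrix
print(X.shape)                               # (4 notes, number of extracted terms)

# ML step: a simple classifier learns from the structured representation
model = LogisticRegression(max_iter=1000).fit(X, labels)
print(model.predict(vectorizer.transform(["patient slipped in bathroom"])))
```

The same pattern underlies many of the applications discussed below, where the quality of the learned model depends heavily on the volume and representativeness of the training text.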
Previous studies have reported that AI can improve the quality of healthcare [], although its application still has numerous limitations [].
This systematic review aims to investigate the state of the art regarding the impact of AI on clinical risk management processes. The International Classification for Patient Safety (ICPS), developed by the WHO, provides a taxonomy for the types of healthcare incidents that can occur, grouping them according to common characteristics and facilitating benchmarking of results deriving from multiple sources, both at a national and an international level []. To simplify the analysis of the review outcomes and to enable standardized comparisons with any subsequent studies, the findings of the present review are grouped according to the applicability of AI to the prevention of the different incident type groups defined by the ICPS.

2. Materials and Methods

On 3 November 2023, a systematic review of the literature was carried out according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [] on the SCOPUS and Medline (via PubMed) databases, with the following search strings: (artificial intelligence OR AI) AND (patient safety); (artificial intelligence OR AI) AND (risk management).
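For illustration only, the sketch below shows how the PubMed arm of such a search could be reproduced programmatically through NCBI's public E-utilities API; the exact query syntax, field handling, and date limits are assumptions and would need to be aligned with the strings and filters actually applied in the review.

```python
import requests

# NCBI E-utilities esearch endpoint (public API)
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Illustrative reproduction of one of the review's search strings
params = {
    "db": "pubmed",
    "term": '("artificial intelligence" OR AI) AND "patient safety"',
    "datetype": "pdat",
    "mindate": "2013/01/01",
    "maxdate": "2023/11/03",
    "retmax": 500,
    "retmode": "json",
}

response = requests.get(ESEARCH, params=params, timeout=30)
result = response.json()["esearchresult"]
print("records found:", result["count"])
print("first PMIDs:", result["idlist"][:10])
```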

2.1. Inclusion and Exclusion Criteria

The inclusion criteria were as follows: case report; original article; short survey; article in English; human study; medical and nursing field; full text available; publication date between 1 January 2013 and 3 November 2023. A ten-year interval was chosen to focus the review on the most recent studies.
The exclusion criteria were as follows: articles not in English; abstract; editorial; review; erratum; book chapter; note; conference paper. All articles focused on other topics were excluded.

2.2. Quality Assessment and Critical Appraisal

M.F. and G.B. independently evaluated the entire texts of the articles. The articles on which there was a disagreement were discussed with the senior investigator, R.L.R., for the final decision.

2.3. Risk of Bias

The main risk of bias was linked to the keywords selected for the search strings. The kappa interobserver agreement coefficient between the two reviewers was therefore calculated and showed “almost perfect agreement” (0.80) [].
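As a sketch of how such an agreement statistic can be obtained, the snippet below computes Cohen's kappa from two hypothetical reviewers' include/exclude screening decisions using scikit-learn; the decision vectors are invented for illustration and happen to yield a kappa of 0.80.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical screening decisions by the two reviewers (1 = include, 0 = exclude)
reviewer_mf = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1]
reviewer_gb = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1]

# Cohen's kappa corrects raw percentage agreement for agreement expected by chance
kappa = cohen_kappa_score(reviewer_mf, reviewer_gb)
print(f"kappa = {kappa:.2f}")
```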

2.4. Characteristics of Eligible Studies

A total of 297 articles were identified; 25 duplicate articles were removed, and 21 articles did not meet the inclusion criteria. After the selection process, 36 articles were included in the present systematic review.

3. Results and Discussion

Of the 297 articles found, 36 met the inclusion criteria (Figure 1).
Figure 1. PRISMA flow diagram of this systematic review.
The main features of each article included are summarized in Table 1.
Table 1. Main results of the systematic review.

3.1. Patient Safety Domains

In January 2009, the World Health Organization published a technical report providing the International Classification for Patient Safety (ICPS), a conceptual framework set out to allow for the codification of patient safety issues []. Specifically, “incident type” is one of the ten high-level classes comprising the framework, useful for categorizing patient safety incidents (clinical process, documentation, healthcare-associated infection, medication, blood, nutrition, oxygen, medical device, behavior, patient accidents, infrastructure, resources) [].
Based on the aforementioned classification, results were grouped into ICPS patient safety domains to facilitate analysis from a risk management perspective. The studies included in this review allowed for the identification of three main domains: clinical process, healthcare-associated infection, and medication. Furthermore, the topic of incident reporting was discussed in an additional paragraph, since thirteen studies focus on this issue.

3.1.1. Clinical Process

In recent times, AI has become increasingly prominent in the healthcare field, mainly in diagnosing, managing, treating, and screening pathologies [,,,]. Consequently, it is intuitive that the use of AI, from a clinical risk management perspective, mainly concerns these aspects of the clinical process. Applying the ICPS classification, the use of AI in studies aimed at preventing the misinterpretation of radiographic investigations [,,] or of the operating field [] translates into the potential advantage of avoiding inadequate treatments and procedures, as well as incorrect interventions.
Automated voice recognition software [] and data analysis [] have proven valuable in recognizing pathologies such as stroke and cancer, facilitating early detection and avoiding a delay in treatment. CDS systems [,] or intelligent checklists [] can avoid harm to patients deriving from inadequate treatment or non-conformity to guidelines and good clinical practices.

3.1.2. Healthcare-Associated Infections

Healthcare-associated infections (HCAIs) represent a major public health problem, with significant impacts on patients []. According to the Agency for Healthcare Research and Quality, HCAIs, and in particular sepsis, represent one of the ten leading causes of death in the United States []. Hence, as mortality in septic patients increases proportionally with the delay of antibiotic treatment, early prediction is crucial for timely interventions.
Machine learning algorithms applied to datasets can map several variables to predict the risk of surgical site infections and sepsis [,] that could otherwise go undetected by healthcare workers, leading to a considerable reduction of infection-related morbidity and mortality. However, if the ML system is deployed on a dataset different from the one on which it was developed, so-called dataset shift can occur, degrading the AI’s performance. For this reason, a clinician’s supervision is always necessary to evaluate any discrepancy between the clinical evaluation and the AI prediction, or external validation is needed to ascertain the general applicability of the ML system [].
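The external-validation step described above can be sketched on purely synthetic data: a model developed at one site is re-evaluated on data from a second site with a different relationship between features and outcome, and a marked drop in discrimination would flag possible dataset shift. All features, labels, and sites below are simulated assumptions, not data from the cited studies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

def simulate_site(n, temp_weight, wbc_weight):
    """Simulate patients with two vital-sign features and a binary sepsis label."""
    temp = rng.normal(37.5, 1.0, n)      # body temperature
    wbc = rng.normal(11.0, 3.0, n)       # white blood cell count
    logits = temp_weight * (temp - 37.5) + wbc_weight * (wbc - 11.0) - 1.0
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)
    return np.column_stack([temp, wbc]), y

# Development site: infection risk driven mostly by temperature
X_dev, y_dev = simulate_site(3000, temp_weight=1.5, wbc_weight=0.2)
# Deployment site: a different relationship between features and outcome
X_ext, y_ext = simulate_site(3000, temp_weight=0.2, wbc_weight=1.5)

X_tr, X_te, y_tr, y_te = train_test_split(X_dev, y_dev, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

auc_internal = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
auc_external = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
print(f"internal AUC: {auc_internal:.2f}  external AUC: {auc_external:.2f}")
# A substantially lower external AUC suggests dataset shift and the need for
# clinician oversight, recalibration, or retraining before deployment.
```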

3.1.3. Medication

An adverse drug event (ADE) consists of a medical therapy error that causes harm to the patient. It is estimated that this type of adverse event affects approximately 1 out of every 30 patients in health care [].
Medication errors are due to different factors, including the human factor []. A very common human factor-related error in clinical practice is the prescription error []. One of the best-known examples of human factor-related prescription error is that of look-alike or sound-alike (LASA) drugs, in which the error occurs because of orthographic or phonetic similarity, or similar packaging, between drugs [,]. AI could be useful in clinical practice to prevent so-called “look-alike” errors due to similar packaging between different medications by applying deep learning to drug images [,].
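A minimal sketch of this deep learning approach, assuming a hypothetical folder of drug-package photographs organized into one subdirectory per product (not the datasets used in the cited studies), might look as follows in Keras; a real system would rely on far larger datasets, transfer learning, and rigorous external validation.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical dataset layout: drug_packages/train/<product_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "drug_packages/train", image_size=(224, 224), batch_size=32)
num_classes = len(train_ds.class_names)

# Small convolutional network that learns to tell look-alike packages apart
model = tf.keras.Sequential([
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```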
Furthermore, errors due to the simultaneous prescription of drugs that have the same effect can be avoided through the use of CDS systems, which generate an alarm capable of alerting the healthcare professional []. However, these systems require high accuracy, a low alert burden, and few false positives to avoid generating further stress in the healthcare worker, which would contribute to fueling the risk of human factor-related errors [].
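As a toy illustration of the alerting logic (not the implementation evaluated in the cited study), the sketch below flags the co-prescription of two drugs with an overlapping anticoagulant effect; the drug-to-class lookup is a small hypothetical table, whereas a real CDS system would draw on a curated drug terminology integrated with the EMR and prescribing workflow.

```python
# Hypothetical drug-to-class lookup; a real CDS draws on a curated terminology
ANTICOAGULANT_CLASS = {
    "enoxaparin": "low molecular weight heparin",
    "dalteparin": "low molecular weight heparin",
    "apixaban": "direct oral anticoagulant",
    "rivaroxaban": "direct oral anticoagulant",
}

def duplicate_anticoagulation_alert(active_prescriptions):
    """Return an alert string if two anticoagulant classes are co-prescribed."""
    classes = {ANTICOAGULANT_CLASS[drug] for drug in active_prescriptions
               if drug in ANTICOAGULANT_CLASS}
    if len(classes) > 1:
        return ("ALERT: concurrent prescription of "
                + " and ".join(sorted(classes))
                + " - review for therapeutic duplication")
    return None

print(duplicate_anticoagulation_alert(["enoxaparin", "apixaban", "paracetamol"]))
print(duplicate_anticoagulation_alert(["enoxaparin", "paracetamol"]))
```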
Several studies have demonstrated that over half of ADEs occur during drug ordering []. To promote the best pharmaceutical care, it is essential to carry out a medication review, which is a critical evaluation of the drugs taken by the patient to assess potential interactions and consequent adverse reactions. However, this part of clinical practice is often not exhaustive and takes a long time []. Machine learning combined with CDS systems, or applied to data extracted from electronic health records, is able to support pharmacovigilance activities and appears to be safe in performing medication reviews [,,]. In addition, natural language processing applied to clinical notes can determine the reason for a pharmaceutical prescription in order to verify its appropriateness [].
Computerized provider order entry (CPOE) systems reduce the risk of misinterpretation of the pharmacological order, as the integration of these systems with electronic health records promotes coordination of drug ordering and contextual collaboration between healthcare staff [,,]. On the other hand, the interaction of the healthcare worker with the CPOE has proven to be a source of error [,], potentially resulting in an ADE. In this regard, the prevention of these adverse events can be promoted through the use of machine learning systems capable of identifying the factors predisposing to medication ordering errors [].

3.1.4. Incident Reporting Systems

In healthcare facilities, incident reporting systems are essential for managing clinical risk by notifying providers of adverse events []. Incident reporting consists of healthcare professionals reporting adverse events, near misses, risks, and potentially unsafe conditions [], including falls, HCAIs, transfusion incidents [], and aggression involving patients and operators []. By using databases, healthcare facilities can identify, map, and analyze the adverse events that occur in order to prevent them from recurring. One of the limitations of this method concerns the inexperience of the different categories of healthcare professionals who carry out the reporting []. Several studies have noted that the absence of a codified terminology is one of the main obstacles [,], resulting in an under-analysis of the event and, consequently, an inability to learn from it. Given the high volume of data collected by the IT systems dedicated to this purpose, the use of free text for reporting reduces effectiveness because the data are difficult to aggregate []. Of the 36 articles included in this systematic review, 13 [,,,,,,,,,,,,] investigate applications of AI to improve the efficiency of incident reporting systems. These studies predominantly focus on the possibility of standardizing reported events according to their type and severity. Furthermore, machine learning systems can evaluate the reporting rates of adverse events and estimate the risk of under-reporting [], or analyze the contributing factors that predispose to their occurrence [].
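As a hedged sketch of this kind of classification (using invented example reports rather than data from the cited studies), the snippet below trains two simple text classifiers that share the same TF-IDF representation, one predicting the incident type and one predicting the severity of a report.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Invented free-text incident reports with their type and severity labels
reports = [
    "patient found on floor beside bed after unwitnessed fall",
    "insulin given at double the prescribed dose, patient monitored",
    "central line site purulent, bloodstream infection suspected",
    "wrong patient wristband printed, noticed before any intervention",
]
incident_type = ["patient accident", "medication", "infection", "documentation"]
severity = ["moderate", "severe", "severe", "near miss"]

# Shared structured representation of the free-text reports
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(reports)

# One classifier per output, sharing the same text features
type_clf = LogisticRegression(max_iter=1000).fit(X, incident_type)
severity_clf = LogisticRegression(max_iter=1000).fit(X, severity)

new_report = vectorizer.transform(["patient slipped near toilet, bruised hip"])
print(type_clf.predict(new_report), severity_clf.predict(new_report))
```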
The use of AI in this regard has a dual purpose. On the one hand, it reduces the workload of risk management staff, allowing them to dedicate themselves to other activities that improve patient safety [,]. On the other hand, the conversion of unstructured data into structured information has proven effective in identifying situations that are potentially fatal or capable of causing serious harm [], prioritizing adverse events with significant consequences for patients []. The exclusive application of AI in real-world settings requires further studies, since part of a reported event can relate to more than one incident type []. In addition, the use of abbreviations or acronyms requires manual review [,]. At present, AI systems are not able to fully replace manual review [], and the machine can only act as a support to risk management experts.

4. Conclusions

This review highlighted that AI can be applied transversely in various clinical contexts to enhance patient safety and facilitate the identification of errors. To facilitate the analysis of the present review’s outcomes and to enable comparison with future systematic reviews, it was deemed useful to refer to a pre-existing taxonomy for the identification of adverse events.
For this purpose, the ICPS classification was considered convenient, as its “incident type” class groups patient safety incidents into defined categories. The main fields of application, according to the codification mentioned above, concern the prevention of errors in clinical processes, medication errors, and the development of HCAIs.
Additionally, the results of the present study highlighted the usefulness of AI not only for risk prevention in clinical practice, but also for improving the use of an essential risk identification tool, namely incident reporting. It follows that the ICPS classification could be limiting for the analysis of the application of AI to clinical risk management systems, as it relates only to the clinical aspects of healthcare risk. For this reason, the taxonomy of the areas of application of AI to clinical risk processes should include an additional class relating to risk identification and analysis tools.
The advantages of using AI in risk management systems translate into a reduction in the workload of the risk manager, who can devote more time to developing procedures and pathways to prevent mistakes, and of the healthcare staff, who can spend more time with patients. However, in addition to diminishing existing risks, AI may also introduce new ones, such as dataset shift, or false positive alerts that increase cognitive stress and thereby enable human error. To conclude, AI appears to be a promising tool to improve clinical risk management, although its use necessarily requires human supervision and cannot completely replace human skills.

Author Contributions

Conceptualization, M.F. and G.B.; methodology, R.L.R.; software, A.M.; validation, R.L.R., A.M. and I.A.; formal analysis, G.V. and I.A.; investigation, M.F. and A.D.F.; resources, N.D.F. and A.D.F.; data curation, M.F., G.B. and G.V.; writing—original draft preparation, M.F. and G.B.; writing—review and editing, G.V. and N.D.F.; visualization, R.L.R. and P.F.; supervision, R.L.R., A.M. and P.F.; project administration, P.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. McDaniel, R.R.J.; Driebe, D.J.; Lanham, H.J. Health care organizations as complex systems: New perspectives on design and management. Adv. Health Care Manag. 2013, 15, 3–26. [Google Scholar] [CrossRef]
  2. Braithwaite, J.; Runciman, W.B.; Merry, A.F. Towards safer, better healthcare: Harnessing the natural properties of complex sociotechnical systems. Qual. Saf. Health Care 2009, 18, 37–41. [Google Scholar] [CrossRef]
  3. World Health Organization. Fact Sheets. Patient Safety. Available online: https://www.who.int/news-room/facts-in-pictures/detail/patient-safety (accessed on 2 January 2024).
  4. Lima Júnior, A.J.; de Zanetti, A.C.B.; Dias, B.M.; Bernardes, A.; Gastaldi, F.M.; Gabriel, C.S. Occurrence and preventability of adverse events in hospitals: A retrospective study. Rev. Bras. Enferm. 2023, 76, e20220025. [Google Scholar] [CrossRef] [PubMed]
  5. Macfarlane, A.J.R. What is clinical governance? BJA Educ. 2019, 19, 174–175. [Google Scholar] [CrossRef]
  6. Stoumpos, A.I.; Kitsios, F.; Talias, M.A. Digital Transformation in Healthcare: Technology Acceptance and Its Applications. Int. J. Environ. Res. Public Health 2023, 20, 3407. [Google Scholar] [CrossRef]
  7. Benn, J.; Koutantji, M.; Wallace, L.; Spurgeon, P.; Rejman, M.; Healey, A.; Vincent, C. Feedback from incident reporting: Information and action to improve patient safety. BMJ Qual. Saf. 2009, 18, 11–21. [Google Scholar] [CrossRef] [PubMed]
  8. La Russa, R.; Fazio, V.; Ferrara, M.; Di Fazio, N.; Viola, R.V.; Piras, G.; Ciano, G.; Micheletta, F.; Frati, P. Proactive Risk Assessment through Failure Mode and Effect Analysis (FMEA) for Haemodialysis Facilities: A Pilot Project. Front. Public Health 2022, 10, 823680. [Google Scholar] [CrossRef] [PubMed]
  9. Micheletta, F.; Ferrara, M.; Bertozzi, G.; Volonnino, G.; Nasso, M.; La Russa, R. Proactive Risk Assessment through Failure Mode and Effect Analysis (FMEA) for Perioperative Management Model of Oral Anticoagulant Therapy: A Pilot Project. Int. J. Environ. Res. Public Health 2022, 19, 16430. [Google Scholar] [CrossRef]
  10. Paton, J.Y.; Ranmal, R.; Dudley, J. Clinical audit: Still an important tool for improving healthcare. Arch. Dis. Child.-Educ. Pract. 2015, 100, 83–88. [Google Scholar] [CrossRef]
  11. Shaqdan, K.; Aran, S.; Daftari Besheli, L.; Abujudeh, H. Root-Cause Analysis and Health Failure Mode and Effect Analysis: Two Leading Techniques in Health Care Quality Assessment. J. Am. Coll. Radiol. 2014, 11, 572–579. [Google Scholar] [CrossRef]
  12. Bellandi, T.; Albolino, S.; Tartaglia, R.; Bagnara, S. Human factors and ergonomics in patient safety management. In Handbook of Human Factors and Ergonomics in Health Care and Patient Safety, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2012; pp. 671–690. [Google Scholar]
  13. Ellahham, S.; Ellahham, N.; Simsekler, M.C.E. Application of Artificial Intelligence in the Health Care Safety Context: Opportunities and Challenges. Am. J. Med. Qual. 2019, 35, 341–348. [Google Scholar] [CrossRef]
  14. Hamet, P.; Tremblay, J. Artificial intelligence in medicine. Metabolism 2017, 69, S36–S40. [Google Scholar] [CrossRef]
  15. Bohr, A.; Memarzadeh, K. The rise of artificial intelligence in healthcare applications. In Artificial Intelligence in Healthcare; Academic Press: Waltham, MA, USA, 2020; pp. 25–60. [Google Scholar]
  16. Alzubi, J.; Nayyar, A.; Kumar, A. Machine Learning from Theory to Algorithms: An Overview. J. Phys. Conf. Ser. 2018, 1142, 012012. [Google Scholar] [CrossRef]
  17. Sarker, I.H. Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications and Research Directions. SN Comput. Sci. 2021, 2, 420. [Google Scholar] [CrossRef]
  18. Benkerzaz, S.; Elmir, Y.; Dennai, A. A Study on Automatic Speech Recognition. J. Inf. Technol. Rev. 2019, 10, 77–85. [Google Scholar] [CrossRef]
  19. Van Cauwenberge, D.; Van Biesen, W.; Decruyenaere, J.; Leune, T.; Sterckx, S. “Many roads lead to Rome and the Artificial Intelligence only shows me one road”: An interview study on physician attitudes regarding the implementation of computerised clinical decision support systems. BMC Med. Ethics 2022, 23, 50. [Google Scholar] [CrossRef]
  20. Sutton, R.T.; Pincock, D.; Baumgart, D.C.; Sadowski, D.C.; Fedorak, R.N.; Kroeker, K.I. An overview of clinical decision support systems: Benefits, risks, and strategies for success. NPJ Digit. Med. 2020, 3, 17. [Google Scholar] [CrossRef] [PubMed]
  21. Harrison, C.J.; Sidey-Gibbons, C.J. Machine learning in medicine: A practical introduction to natural language processing. BMC Med. Res. Methodol. 2021, 21, 158. [Google Scholar] [CrossRef] [PubMed]
  22. Al-Turjman, F.; Nawaz, M.H.; Ulusar, U.D. Intelligence in the Internet of Medical Things era: A systematic review of current and future trends. Comput. Commun. 2020, 150, 644–660. [Google Scholar] [CrossRef]
  23. Akinrinmade, A.O.; Adebile, T.M.; Ezuma-Ebong, C.; Bolaji, K.; Ajufo, A.; Adigun, A.O.; Mohammad, M.; Dike, J.C.; Okobi, O.E. Artificial Intelligence in Healthcare: Perception and Reality. Cureus 2023, 15, e45594. [Google Scholar] [CrossRef] [PubMed]
  24. Phelps, G.; Cooper, P. Can artificial intelligence help improve the quality of healthcare. J. Hosp. Manag. Health Policy 2020, 4, 29. [Google Scholar] [CrossRef]
  25. Runciman, W.; Hibbert, P.; Thomson, R.; Van Der Schaaf, T.; Sherman, H.; Lewalle, P. Towards an International Classification for Patient Safety: Key concepts and terms. Int. J. Qual. Health Care 2009, 21, 18–26. [Google Scholar] [CrossRef]
  26. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Syst. Rev. 2021, 10, 1–11. [Google Scholar] [CrossRef]
  27. Viera, A.J.; Garrett, J.M. Understanding interobserver agreement: The kappa statistic. Fam. Med. 2005, 37, 360–363. [Google Scholar] [PubMed]
  28. Drozdov, I.; Dixon, R.; Szubert, B.; Dunn, J.; Green, D.; Hall, N.; Shirandami, A.; Rosas, S.; Grech, R.; Puttagunta, S.; et al. An Artificial Neural Network for Nasogastric Tube Position Decision Support. Radiol. Artif. Intell. 2023, 5, e220165. [Google Scholar] [CrossRef] [PubMed]
  29. Bowness, J.S.; Macfarlane, A.J.R.; Burckett-St Laurent, D.; Harris, C.; Margetts, S.; Morecroft, M.; Phillips, D.; Rees, T.; Sleep, N.; Vasalauskaite, A.; et al. Evaluation of the impact of assistive artificial intelligence on ultrasound scanning for regional anaesthesia. Br. J. Anaesth. 2023, 130, 226–233. [Google Scholar] [CrossRef] [PubMed]
  30. Hameed, M.S.; Laplante, S.; Masino, C.; Khalid, M.U.; Zhang, H.; Protserov, S.; Hunter, J.; Mashouri, P.; Fecso, A.B.; Brudno, M.; et al. What is the educational value and clinical utility of artificial intelligence for intraoperative and postoperative video analysis? A survey of surgeons and trainees. Surg. Endosc. 2023, 37, 9453–9460. [Google Scholar] [CrossRef]
  31. Datar, M.; Ramakrishnan, S.; Chong, J.; Montgomery, E.; Goss, T.F.; Coca, S.G.; Vassalotti, J.A. A Kidney Diagnostic’s Impact on Physician Decision-making in Diabetic Kidney Disease. Am. J. Manag. Care 2022, 28, 654–661. [Google Scholar] [CrossRef]
  32. Scholz, M.L.; Collatz-Christensen, H.; Blomberg, S.N.F.; Boebel, S.; Verhoeven, J.; Krafft, T. Artificial intelligence in Emergency Medical Services dispatching: Assessing the potential impact of an automatic speech recognition software on stroke detection taking the Capital Region of Denmark as case in point. Scand. J. Trauma. Resusc. Emerg. Med. 2022, 30, 36. [Google Scholar] [CrossRef]
  33. Torrente, M.; Sousa, P.A.; Hernández, R.; Blanco, M.; Calvo, V.; Collazo, A.; Guerreiro, G.R.; Núñez, B.; Pimentao, J.; Sánchez, J.C.; et al. An Artificial Intelligence-Based Tool for Data Analysis and Prognosis in Cancer Patients: Results from the Clarify Study. Cancers 2022, 14, 4041. [Google Scholar] [CrossRef]
  34. Scala, A.; Loperto, I.; Triassi, M.; Improta, G. Risk Factors Analysis of Surgical Infection Using Artificial Intelligence: A Single Center Study. Int. J. Environ. Res. Public Health 2022, 19, 10021. [Google Scholar] [CrossRef] [PubMed]
  35. Brown, A.; Cavell, G.; Dogra, N.; Whittlesea, C. The impact of an electronic alert to reduce the risk of co-prescription of low molecular weight heparins and direct oral anticoagulants. Int. J. Med. Inform. 2022, 164, 104780. [Google Scholar] [CrossRef] [PubMed]
  36. Festor, P.; Jia, Y.; Gordon, A.C.; Aldo Faisal, A.; Habli, I.; Komorowski, M. Assuring the safety of AI-based clinical decision support systems: A case study of the AI Clinician for sepsis treatment. BMJ Health Care Inform. 2022, 29, e100549. [Google Scholar] [CrossRef] [PubMed]
  37. Levivien, C.; Cavagna, P.; Grah, A.; Buronfosse, A.; Courseau, R.; Bézie, Y.; Corny, J. Assessment of a hybrid decision support system using machine learning with artificial intelligence to safely rule out prescriptions from medication review in daily practice. Int. J. Clin. Pharm. 2022, 44, 459–465. [Google Scholar] [CrossRef] [PubMed]
  38. Wang, Y.; Coiera, E.; Runciman, W.; Magrabi, F. Using multiclass classification to automate the identification of patient safety incident reports by type and severity. BMC Med. Inform. Decis. Mak. 2017, 17, 84. [Google Scholar] [CrossRef] [PubMed]
  39. Evans, H.P.; Anastasiou, A.; Edwards, A.; Hibbert, P.; Makeham, M.; Luz, S.; Sheikh, A.; Donaldson, L.; Carson-Stevens, A. Automated classification of primary care patient safety incident report content and severity using supervised machine learning (ML) approaches. Health Inform. J. 2020, 26, 3123–3139. [Google Scholar] [CrossRef] [PubMed]
  40. Wang, Y.; Coiera, E.; Magrabi, F. Using convolutional neural networks to identify patient safety incident reports by type and severity. J. Am. Med. Inform. Assoc. 2019, 26, 1600–1608. [Google Scholar] [CrossRef] [PubMed]
  41. Ozonoff, A.; Milliren, C.E.; Fournier, K.; Welcher, J.; Landschaft, A.; Samnaliev, M.; Saluvan, M.; Waltzman, M.; Kimia, A.A. Electronic surveillance of patient safety events using natural language processing. Health Inform. J. 2022, 28, 14604582221132429. [Google Scholar] [CrossRef]
  42. Wang, Y.; Coiera, E.; Magrabi, F. Can unified medical language system–based semantic representation improve automated identification of patient safety incident reports by type and severity? J. Am. Med. Inform. Assoc. 2020, 27, 1502–1509. [Google Scholar] [CrossRef]
  43. Fong, A.; Adams, K.T.; Gaunt, M.J.; Howe, J.L.; Kellogg, K.M.; Ratwani, R.M. Identifying health information technology related safety event reports from patient safety event report databases. J. Biomed. Inform. 2018, 86, 135–142. [Google Scholar] [CrossRef]
  44. Fong, A.; Adams, K.; Samarth, A.; McQueen, L.; Trivedi, M.; Chappel, T.; Grace, E.; Terrillion, S.; Ratwani, R.M. Assessment of Automating Safety Surveillance from Electronic Health Records: Analysis for the Quality and Safety Review System. J. Patient Saf. 2021, 17, e524–e528. [Google Scholar] [CrossRef]
  45. Lu, W.; Jiang, W.; Zhang, N.; Xue, F. A Deep Learning-Based Text Classification of Adverse Nursing Events. J. Healthc. Eng. 2021, 2021, 9800114. [Google Scholar] [CrossRef] [PubMed]
  46. King, C.R.; Abraham, J.; Fritz, B.A.; Cui, Z.; Galanter, W.; Chen, Y.; Kannampallil, T. Predicting self-intercepted medication ordering errors using machine learning. PLoS ONE 2021, 16, e0254358. [Google Scholar] [CrossRef]
  47. Fong, A.; Howe, J.L.; Adams, K.T.; Ratwani, R.M. Using active learning to identify health information technology related patient safety events. Appl. Clin. Inform. 2017, 8, 35–46. [Google Scholar] [CrossRef] [PubMed]
  48. Barmaz, Y.; Ménard, T. Bayesian Modeling for the Detection of Adverse Events Underreporting in Clinical Trials. Drug Saf. 2021, 44, 949–955. [Google Scholar] [CrossRef] [PubMed]
  49. Zhou, S.; Kang, H.; Yao, B.; Gong, Y. Analyzing Medication Error Reports in Clinical Settings: An Automated Pipeline Approach. AMIA Annu. Symp. Proc. 2018, 2018, 1611–1620. [Google Scholar]
  50. Wong, Z.S.Y.; So, H.Y.; Kwok, B.S.C.; Lai, M.W.S.; Sun, D.T.F. Medication-rights detection using incident reports: A natural language processing and deep neural network approach. Health Inform. J. 2020, 26, 1777–1794. [Google Scholar] [CrossRef] [PubMed]
  51. Yang, J.; Wang, L.; Phadke, N.A.; Wickner, P.G.; Mancini, C.M.; Blumenthal, K.G.; Zhou, L. Development and Validation of a Deep Learning Model for Detection of Allergic Reactions Using Safety Event Reports Across Hospitals. JAMA Netw. Open 2020, 3, E2022836. [Google Scholar] [CrossRef]
  52. Ting, H.W.; Chung, S.L.; Chen, C.F.; Chiu, H.Y.; Hsieh, Y.W. A drug identification model developed using deep learning technologies: Experience of a medical center in Taiwan. BMC Health Serv. Res. 2020, 20, 312. [Google Scholar] [CrossRef]
  53. Tabaie, A.; Sengupta, S.; Pruitt, Z.M.; Fong, A. A natural language processing approach to categorise contributing factors from patient safety event reports. BMJ Health Care Inform. 2023, 30, e100731. [Google Scholar] [CrossRef]
  54. Zhao, J.; Henriksson, A.; Asker, L.; Boström, H. Predictive modeling of structured electronic health records for adverse drug event detection. BMC Med. Inform. Decis. Mak. 2015, 15, S1. [Google Scholar] [CrossRef]
  55. Corny, J.; Rajkumar, A.; Martin, O.; Dode, X.; Lajonchère, J.P.; Billuart, O.; Bézie, Y.; Buronfosse, A. A machine learning-based clinical decision support system to identify prescriptions with a high risk of medication error. J. Am. Med. Inform. Assoc. 2020, 27, 1688–1694. [Google Scholar] [CrossRef]
  56. Lee, H.; Mansouri, M.; Tajmir, S.; Lev, M.H.; Do, S. A Deep-Learning System for Fully-Automated Peripherally Inserted Central Catheter (PICC) Tip Detection. J. Digit. Imaging 2018, 31, 393–402. [Google Scholar] [CrossRef]
  57. Li, Y.; Salmasian, H.; Harpaz, R.; Chase, H.; Friedman, C. Determining the reasons for medication prescriptions in the EHR using knowledge and natural language processing. AMIA Annu. Symp. Proc. 2011, 2011, 768–776. [Google Scholar] [PubMed]
  58. You, Y.S.; Lin, Y.S. A Novel Two-Stage Induced Deep Learning System for Classifying Similar Drugs with Diverse Packaging. Sensors 2023, 23, 7275. [Google Scholar] [CrossRef] [PubMed]
  59. De Bie, A.J.R.; Mestrom, E.; Compagner, W.; Nan, S.; van Genugten, L.; Dellimore, K.; Eerden, J.; van Leeuwen, S.; van de Pol, H.; Schuling, F.; et al. Intelligent checklists improve checklist compliance in the intensive care unit: A prospective before-and-after mixed-method study. Br. J. Anaesth. 2021, 126, 404–414. [Google Scholar] [CrossRef] [PubMed]
  60. Segal, G.; Segev, A.; Brom, A.; Lifshitz, Y.; Wasserstrum, Y.; Zimlichman, E. Reducing drug prescription errors and adverse drug events by application of a probabilistic, machine-learning based clinical decision support system in an inpatient setting. J. Am. Med. Inform. Assoc. 2019, 26, 1560–1565. [Google Scholar] [CrossRef] [PubMed]
  61. Zhu, Y.; Simon, G.J.; Wick, E.C.; Abe-Jones, Y.; Najafi, N.; Sheka, A.; Tourani, R.; Skube, S.J.; Hu, Z.; Melton, G.B. Applying Machine Learning Across Sites: External Validation of a Surgical Site Infection Detection Algorithm. J. Am. Coll. Surg. 2021, 232, 963–971.e1. [Google Scholar] [CrossRef]
  62. Lind, M.L.; Mooney, S.J.; Carone, M.; Althouse, B.M.; Liu, C.; Evans, L.E.; Patel, K.; Vo, P.T.; Pergam, S.A.; Phipps, A.I. Development and Validation of a Machine Learning Model to Estimate Bacterial Sepsis among Immunocompromised Recipients of Stem Cell Transplant. JAMA Netw. Open 2021, 4, e214514. [Google Scholar] [CrossRef]
  63. World Health Organization. The Conceptual Framework for the International Classification for Patient Safety. Available online: https://www.who.int/publications/i/item/WHO-IER-PSP-2010.2 (accessed on 31 December 2023).
  64. Jiang, F.; Jiang, Y.; Zhi, H.; Dong, Y.; Li, H.; Ma, S.; Wang, Y.; Dong, Q.; Shen, H.; Wang, Y. Artificial intelligence in healthcare: Past, present and future. Stroke Vasc. Neurol. 2017, 2, 230–243. [Google Scholar] [CrossRef]
  65. He, J.; Baxter, S.L.; Xu, J.; Xu, J.; Zhou, X.; Zhang, K. The practical implementation of artificial intelligence technologies in medicine. Nat. Med. 2019, 25, 30–36. [Google Scholar] [CrossRef] [PubMed]
  66. Yu, K.-H.; Beam, A.L.; Kohane, I.S. Artificial intelligence in healthcare. Nat. Biomed. Eng. 2018, 2, 719–731. [Google Scholar] [CrossRef] [PubMed]
  67. Haque, M.; Sartelli, M.; McKimm, J.; Abu Bakar, M. Health care-associated infections—An overview. Infect. Drug Resist. 2018, 11, 2321–2333. [Google Scholar] [CrossRef] [PubMed]
  68. Agency for Healthcare Research and Quality. Health Care-Associated Infections. Available online: https://www.ahrq.gov/professionals/quality-patient-safety/patient-safety-resources/resources/hais/index.html (accessed on 31 December 2023).
  69. La Russa, R.; Maiese, A.; Viola, R.V.; De Matteis, A.; Pinchi, E.; Frati, P.; Fineschi, V. Searching for highly sensitive and specific biomarkers for sepsis: State of the art in post-mortem diagnosis of sepsis through immunoistochemical analysis. Int. J. Immunopathol. Pharmacol. 2019, 33, 2058738419855226. [Google Scholar] [CrossRef] [PubMed]
  70. Marti, M.; Neri, M.; Bilel, S.; Di Paolo, M.; La Russa, R.; Ossato, A.; Turillazzi, E. MDMA alone affects sensorimotor and prepulse inhibition responses in mice and rats: Tips in the debate on potential MDMA unsafety in human activity. Forensic Toxicol. 2019, 37, 132–144. [Google Scholar] [CrossRef]
  71. Emmerton, L.M.; Rizk, M.F.S. Look-alike and sound-alike medicines: Risks and “solutions”. Int. J. Clin. Pharm. 2012, 34, 4–8. [Google Scholar] [CrossRef]
  72. Kaushal, R.; Shojania, K.G.; Bates, D.W. Effects of computerized physician order entry and clinical decision support systems on medication safety: A systematic review. Arch. Intern. Med. 2003, 163, 1409–1416. [Google Scholar] [CrossRef]
  73. Potts, A.L.; Barr, F.E.; Gregory, D.F.; Wright, L.; Patel, N.R. Computerized physician order entry and medication errors in a pediatric critical care unit. Pediatrics 2004, 113, 59–63. [Google Scholar] [CrossRef] [PubMed]
  74. Cordero, L.; Kuehn, L.; Kumar, R.R.; Mekhjian, H.S. Impact of computerized physician order entry on clinical practice in a newborn intensive care unit. J. Perinatol. Off. J. Calif. Perinat. Assoc. 2004, 24, 88–93. [Google Scholar] [CrossRef] [PubMed]
  75. Robberechts, A.; De Petter, C.; Van Loon, L.; Rydant, S.; Steurbaut, S.; De Meyer, G.; De Loof, H. Qualitative study of medication review in Flanders, Belgium among community pharmacists and general practitioners. Int. J. Clin. Pharm. 2021, 43, 1173–1182. [Google Scholar] [CrossRef] [PubMed]
  76. Abraham, J.; Kannampallil, T.G.; Jarman, A.; Sharma, S.; Rash, C.; Schiff, G.; Galanter, W. Reasons for computerised provider order entry (CPOE)-based inpatient medication ordering errors: An observational study of voided orders. BMJ Qual. Saf. 2018, 27, 299–307. [Google Scholar] [CrossRef] [PubMed]
  77. Reckmann, M.H.; Westbrook, J.I.; Koh, Y.; Lo, C.; Day, R.O. Does computerized provider order entry reduce prescribing errors for hospital inpatients? A systematic review. J. Am. Med. Inform. Assoc. 2009, 16, 613–623. [Google Scholar] [CrossRef] [PubMed]
  78. Santoro, P.; La Russa, R.; Besi, L.; Volonnino, G.; dell’Aquila, M.; De Matteis, A.; Maiese, A. The forensic approach to plastic bag suffocation: Case reports and review of the literature. Med.-Leg. J. 2019, 87, 214–220. [Google Scholar] [CrossRef] [PubMed]
  79. Kohn, L.T.; Corrigan, J.M.; Donaldson, M.S. To Err Is Human: Building a Safer Health System; Institute of Medicine (US) Committee on Quality of Health Care in America: Washington, DC, USA, 2000; ISBN 0-309-06837-1.
  80. Shaw, R.; Drever, F.; Hughes, H.; Osborn, S.; Williams, S. Adverse events and near miss reporting in the NHS. Qual. Saf. Health Care 2005, 14, 279. [Google Scholar] [CrossRef] [PubMed]
  81. Ferorelli, D.; Solarino, B.; Trotta, S.; Mandarelli, G.; Tattoli, L.; Stefanizzi, P.; Bianchi, F.P.; Tafuri, S.; Zotti, F.; Dell’Erba, A. Incident Reporting System in an Italian University Hospital: A New Tool for Improving Patient Safety. Int. J. Environ. Res. Public Health 2020, 17, 6267. [Google Scholar] [CrossRef] [PubMed]
  82. Mele, F.; Buongiorno, L.; Montalbò, D.; Ferorelli, D.; Solarino, B.; Zotti, F.; Carabellese, F.F.; Catanesi, R.; Bertolino, A.; Dell’Erba, A.; et al. Reporting Incidents in the Psychiatric Intensive Care Unit: A Retrospective Study in an Italian University Hospital. J. Nerv. Ment. Dis. 2022, 210, 622–628. [Google Scholar] [CrossRef]
  83. Oweidat, I.; Al-Mugheed, K.; Alsenany, S.A.; Abdelaliem, S.M.F.; Alzoubi, M.M. Awareness of reporting practices and barriers to incident reporting among nurses. BMC Nurs. 2023, 22, 231. [Google Scholar] [CrossRef] [PubMed]
  84. Stavropoulou, C.; Doherty, C.; Tosey, P. How effective are incident-reporting systems for improving patient safety? A systematic literature review. Milbank Q. 2015, 93, 826–866. [Google Scholar] [CrossRef]
  85. Elkin, P.L.; Johnson, H.C.; Callahan, M.R.; Classen, D.C. Improving patient safety reporting with the common formats: Common data representation for patient safety organizations. J. Biomed. Inform. 2016, 64, 116–121. [Google Scholar] [CrossRef]
  86. Ong, M.-S.; Magrabi, F.; Coiera, E. Automated categorisation of clinical incident reports using statistical text classification. Qual. Saf. Health Care 2010, 19, e55. [Google Scholar] [CrossRef]
  87. Piccioni, A.; Cicchinelli, S.; Saviano, L.; Gilardi, E.; Zanza, C.; Brigida, M.; Tullo, G.; Volonnino, G.; Covino, M.; Franceschi, F.; et al. Risk Management in First Aid for Acute Drug Intoxication. Int. J. Environ. Res. Public Health 2020, 17, 8021. [Google Scholar] [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
