Legal and Regulatory Framework for AI Solutions in Healthcare in EU, US, China, and Russia: New Scenarios after a Pandemic

Simple Summary: We offer an overview of the state of regulation of AI in healthcare in the European Union, the United States of America, China, and the Russian Federation.

In this scenario, the issues discussed in this paper acquire particular importance and urgency. They display the complexity of AI-based healthcare and highlight the need to develop policies and legal strategies that carefully consider the multiple dimensions of the integration process, as well as the need for multidisciplinary efforts to coordinate, validate, and monitor the development and integration of AI tools in healthcare [19]. Challenges such as organizational and technical barriers to health data use, the debate about data ownership and privacy protection, the regulation of data sharing and the cybersecurity surrounding it, and accountability issues will have to be addressed as soon as possible.
This paper explores the status of legal and regulatory frameworks for healthcare AI in the European Union (EU), the U.S., China, and the Russian Federation, analyzing challenges, hurdles, and opportunities. The results are particularly significant, as the COVID-19 pandemic is triggering an unprecedented surge in the development of and demand for digital and AI technologies worldwide.

Organizational and Technical Barriers for the Adoption of AI in the Medical Field
Both ML and DL technologies require the availability of large amounts of comprehensive, verifiable datasets; integration into clinical workflows; and compliance with regulatory frameworks [1,23]. With improved global connectivity via the internet and cloud-based technologies, data access and distribution have become easier, with both beneficial and malicious outcomes [25]. Adequately regulated integration of health and disease data will provide unprecedented opportunities in the management of medical information at the interface of patients, physicians, hospitals, policymakers, and regulatory institutions. However, despite the pervasive enthusiasm about the potential of AI-based healthcare, only a few healthcare organizations have the data infrastructure required to collect the sensitive patient data needed to train AI algorithms [26]. Consequently, published AI success stories fit the local population and/or the local practice patterns of these organizations and should not be expected to be directly applicable to other cohorts [27] (i.e., an AI algorithm trained on one specific population is not expected to have the same accuracy when applied elsewhere) [28].
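The cohort-transfer caveat can be illustrated with a toy sketch (hypothetical data and variable names, not drawn from any study cited here): a diagnostic threshold fitted to one cohort loses accuracy on a cohort whose biomarker distribution is shifted.

```python
# Illustrative only: a toy "diagnostic rule" fitted to one cohort loses
# accuracy on a cohort with a shifted biomarker baseline (dataset shift).
cohort_a = [(40, 0), (45, 0), (50, 0), (60, 1), (65, 1), (70, 1)]  # (biomarker, disease)
cohort_b = [(55, 0), (58, 0), (62, 0), (75, 1), (80, 1), (85, 1)]  # shifted baseline

def accuracy(data, threshold):
    # Fraction of cases where "biomarker >= threshold" matches the label.
    return sum((x >= threshold) == bool(y) for x, y in data) / len(data)

def fit_threshold(data):
    # Pick the cut-off that maximizes accuracy on the training cohort.
    candidates = sorted({x for x, _ in data})
    return max(candidates, key=lambda t: accuracy(data, t))

t = fit_threshold(cohort_a)
print(accuracy(cohort_a, t))  # perfect on the training cohort: 1.0
print(accuracy(cohort_b, t))  # degraded on the shifted cohort
```

The rule is perfectly accurate on the cohort it was fitted to, yet misclassifies cases in the shifted cohort, which is exactly why local validation is required before deployment elsewhere.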

Telehealth: A Boon Redefining Medicine for the 21st Century or a Short-Term Fix during the COVID-19 Pandemic?
Telehealth (or telemedicine as it is sometimes called) is literally "healing at a distance", with the provider in one location and the patient somewhere else [29]. Although the concept of telemedicine has existed for decades, developments in technology galvanized the ability to provide this service on a large-scale basis. In the U.S., historically, the largest hurdles to universal adoption of telehealth were twofold: first, insurance reimbursement was lacking; and second, the intricate and inconsistent web of state laws barred out-of-state doctors from practicing medicine across state lines.
With one fell swoop, the COVID-19 pandemic temporarily chipped away at these barriers. The U.S. Congress enacted legislation allowing the U.S. Department of Health and Human Services to issue waivers for telemedicine under Section 1135 of the Social Security Act. Additionally, former President D. Trump issued a Proclamation Declaring a National Emergency Concerning the Novel Coronavirus Disease Outbreak under the U.S. National Emergencies Act. The Emergency Proclamation authorizes HHS to offer additional waivers designed to increase providers' ability to treat the anticipated influx of ill patients.
The U.S. Centers for Medicare and Medicaid Services (CMS) now allows Medicare to cover telehealth visits and pay for such visits at the same rates as traditional, in-person visits. Private health insurance carriers quickly followed suit.
This waiver had a profound effect on the delivery of telehealth services in the U.S. University of California, San Diego Health (UCSDH), for example, had a long history of performing telemedicine on a limited basis before the pandemic. Its telemedicine infrastructure provided care to other remote centers for telestroke and telepsychiatry, amounting to up to 15 service lines in the past ten years. Through a small-scale pilot project, UCSDH provided 870 ambulatory home telemedicine video visits over the course of three years. Because this foundation, though limited in scope, was in place, when the waivers for telemedicine were issued, UCSDH was able to leverage its experience to quickly provide wide telemedicine services during the pandemic. Over a 5-month period, UCSDH conducted over 119,500 ambulatory telemedicine evaluations (a remarkable increase from the pre-COVID-19 waiver period) [30].
The CMS also issued several nationwide blanket emergency waivers available throughout the duration of the pandemic, including a waiver of federal regulation 42 CFR 485.608(d), which required that critical access hospital (CAH) staff be certified in accordance with federal, state, and local laws and regulations. Under this waiver, out-of-state providers need no longer be in the same state as the patients to whom they provide telehealth services, if permitted by state law [31].
Each state has its own laws governing telemedicine, and the state in which the patient is located controls whether an out-of-state doctor must be licensed in that state at the time of the telehealth visit. While 40 states have issued waivers modifying their licensure requirements for telehealth visits by out-of-state physicians, the waivers are only effective during the pandemic [32].
Forty-nine state boards still require physicians engaging in telemedicine to be licensed in the state in which the patient is located [33]. One potential solution was advanced by the Interstate Medical Licensure Compact Commission (IMLCC). The IMLCC has an expedited pathway to licensure for qualified physicians seeking to obtain multiple licenses. Twenty-four states, Guam, and the District of Columbia enacted legislation to join the Compact. Still, with fewer than half of the states belonging to the Compact, there is still a long way to go for a permanent fix to this problem. While penalties vary from state to state, there are significant civil, professional, and even criminal licensure consequences for violating state telemedicine laws.
Several other challenges also remain with the delivery of telemedicine services in the U.S. Telemedicine is widening the existing gap in access to care. One recent study found that patients over 65 years old have the lowest odds of using telemedicine services, and that Black and Hispanic patients have lower odds of using these services than their White or Asian counterparts [34]. Additional concerns remain involving patient privacy. The HHS has waived penalties against providers that fail to comply with many of the U.S. HIPAA Privacy Rules until the Emergency Proclamation is rescinded, i.e., throughout the pandemic [35].
While we may be awed by technological advancements that allow for more medical services to be delivered remotely, long after the pandemic is over, Americans will continue to be challenged by the legal issues raised by telemedicine. The biggest impediment may be a real belief by state medical associations that telemedicine might adversely affect doctor incomes by allowing out-of-state providers to compete, resulting in continued rules restricting this highly efficient method of delivering medical services.

Regulatory Issues and Policy Initiatives
In the last few years, governments have started to promote data sharing [36]. For instance, anonymized benchmarking datasets with annotated diagnoses have been created to provide reference standards [37,38]. Existing examples of data-sharing efforts include biobanks and international consortia for medical imaging databases, such as the Cancer Imaging Archive (TCIA) [39], the Visual Concept Extraction Challenge in Radiology Project [40], the Cardiac Atlas Project [41], the U.K. Biobank [42], and the Kaggle Data Science Bowl [25], the latter of which represents a valuable step in the direction of an open-access database of anonymized medical images coupled with histology, clinical history, and genomic signatures.
Despite those hopeful examples, the amount of data sharing required for widespread adoption of AI technologies across different health systems demands still greater effort. It will probably depend more on the socioeconomic context of the health system in question than on the technology itself, which has already been shown to be available and ready.
Once AI in healthcare is fully institutionalized and its rules are defined, it may be difficult to change those rules. To prevent this, state regulation and supervision should remain flexible and proactive [26].
The role of the government in the legal discipline of AI-based medical systems must manifest itself in the following activities:
• securing patients' medical privacy;
• creating regulatory sandboxes and experimental legal regimes;
• supervising medical organizations that use AI-based medical solutions;
• certifying software engineers for the development of such systems;
• certifying AI-based medical systems and confirming their quality and effectiveness;
• avoiding uniformity in the process of AI-based medical systems development;
• providing state funding in the form of grants, subsidies, etc.

Legal and Regulatory Framework in the EU
The "Medical Device Regulation" (MDR) was initially set to apply from 26 May 2020, but on 23 April 2020, the EU Council and the EU Parliament postponed the date of application of most of the MDR's provisions by one year, until 26 May 2021. On the other hand, the "In Vitro Diagnostic Medical Device Regulation" (IVDR) will apply, as initially provided, starting from 26 May 2022 [43].
The MDR and IVDR do not substantially alter the purposes of previous sets of EU laws. First, like the previous Directives [44,45], the new Regulations aim to:
• harmonize the single market by granting uniform standards for the quality and safety of medical devices;
• classify medical devices and in vitro diagnostics based on their risk profiles, requiring different, specific assessment procedures for each classification;
• highlight the responsibilities of notified bodies and competent authorities.
The main reasons behind the regulatory change consisted of divergent interpretations of the previous Directives [44,45], incidents concerning product performance, and lack of control of notified bodies. Thus, the legislation's revision was required to reach high standards of product quality and safety concerning evolving technologies, including AI, and to reconsolidate the EU's leading role in the medical-devices field [37]. The new Regulations should ensure a consistently high level of health protection and safety for EU citizens using AI-based products; the free and fair trade of the products throughout the EU; and the adaptation of EU legislation to the significant technological and scientific progress in the AI-based medical device sector over the last 20 years [43].
The scope of the new legislation includes a wider range of products, extends liability in relation to defective products, strengthens the requirements for clinical data and traceability of devices, increases clinical investigation requirements and manages risk to ensure patient safety, reinforces surveillance and management of medical devices as well as the lifecycle of in vitro diagnostic medical devices, and, finally, improves transparency relating to the use of personal data.
According to the new legislation, software, whether as a component of a wider medical device or standing alone, qualifies as a medical device without further distinction.
The General Data Protection Regulation (GDPR) has applied in the EU since 25 May 2018. This legislation is a suitable instrument to regulate AI because it has an extended territorial scope and grants wide rights to data subjects, providing, overall, more rights to citizens with respect to information about the use of their personal data and assigning clear responsibilities to people and entities using personal data [23,46,47].
The GDPR established rules to strengthen citizens' rights as regards consent to the collection, use, and sharing of their personal data [23]. The regulation specifies that consent must be explicit and unambiguous and that data controllers must demonstrate that a person has given consent (in other words, the burden of proof rests with them). Consent must be informed, which means it must be requested in intelligible and easily accessible forms, using clear and plain language. In addition, patients should be informed of how to withdraw consent prior to giving it.
Under the GDPR, patients have the right to access their own medical records and health data when they are being processed (i.e., with remote access). However, the GDPR does not make clear that access must be provided for free and even allows data controllers to charge a fee for administrative costs if data subjects ask for the data more than once [23].

Legal and Regulatory Framework in the U.S.
In the U.S., the 21st Century Cures Act [48] of 2016 defined the medical device as a tool "intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man or other animals, or intended to affect the structure or any function of the body of man or other animals" [49].
The FDA categorizes medical devices into three classes according to their use and risk (the higher the risk, the stricter the control) and regulates them accordingly.
The black-box nature of AI applications will make it difficult for the FDA to approve all the new medical devices that are quickly being developed, given the volume of innovation and the complex nature of the testing and verification involved. For instance, the introduction of computer-assisted detection (CAD) software for mammography [49] required many years and extensive lobbying to obtain FDA clearance for use as a second screening reader [50]. FDA clearance is even harder to obtain for an AI system that does not need human supervision and cannot be compared to predicate medical devices, as it would act as a replacement for radiologists. Therefore, AI systems are usually presented today as mere support for physicians rather than as tools that substitute for them [7,51-54].
The FDA and the International Medical Device Regulators Forum (IMDRF) recently determined that AI technologies differ from traditional medical devices. The IMDRF is a voluntary group of medical device regulators, including the EU, the U.S., Canada, Australia, Brazil, China, Japan, Russia, Singapore, and South Korea, that works toward harmonizing international medical device regulation. The collaboration between the IMDRF and the FDA defined a new category called "Software as a Medical Device" (SaMD), pointing out the need for an updated regulatory framework [25,54] that acknowledges the safety challenges AI systems face in complex environments, i.e., periods of learning (during which the system's behavior may be unpredictable) that may result in significant variation in the system's performance [54]. These organizations recommended a continuous iterative process based on real-world performance data and stated that low-risk SaMDs may not require independent review [25].
According to Thierer et al. [55], there are two main approaches to regulating new technologies. The "precautionary approach" gives some limits (or sometimes outright bans) to certain applications because of their potential risks: this means that these systems are never tested because of what could happen in the worst-case scenarios. On the contrary, the "permission-less innovation approach" allows experimentation to proceed freely; the issues that do arise are addressed as they emerge.

Legal and Regulatory Framework in China
Although COVID-19 has accelerated ongoing digital healthcare trends, in China, the regulation of AI is still developing [56].
China's regulatory body for life sciences products, the National Medical Products Administration (NMPA), certifies that products meet the requisite standards. However, once market approval is granted, there is little post-market oversight because of the continuous nature of software development and because software updates can so readily be pushed out to the public. For AI products that change second to second as the product adapts to a variety of inputs, the challenge is even more urgent.
Nevertheless, with a number of recent announcements and guidelines over the past few years, the NMPA has demonstrated a maturing approach [57].
The role of the NMPA Legal Agent, the key contact for the NMPA for each registered device, has correspondingly increased in importance. Like the EU and U.S., China requires a local entity before market clearance applications are accepted, but this need not be the manufacturer itself. A local distributor (that thereby tends to gain inordinate power over the foreign manufacturer) or a third-party service provider may also perform this role [56].
The NMPA has issued various guidelines relating to AI in an effort to catch up with the FDA, which approved an AI-based diabetes-related device in 2018. Furthermore, in 2019 the NMPA issued the "Technical Guideline on AI-Aided Software", which clarified that for AI software, version naming rules should cover both algorithm-driven and data-driven software updates and should list all typical scenarios for major software updates [57].
For AI devices, the NMPA recognized that the risk-based method is the guiding principle in determining whether and when a product change needs to be filed.
Therefore, whether an AI device update requires product change approval depends on statistical significance: where the software maintains its effect based on the original application, there is no need to obtain preapproval.
In China, the following fast-track pathways are available [56]:
• Innovative approval has a number of criteria to be satisfied, most relevantly that the product has significant clinical application value and a national patent and that no similar products are already present on the market [56].
• Priority review relates to the treatment of rare diseases using devices with significant application value.
• Emergency approval is for public health crises; this was relevant in 2020 to face the COVID-19 pandemic, but such applications were no longer accepted in 2021.

Legal and Regulatory Framework in the Russian Federation
In the Russian Federation, the government has taken on the role of an observer and has not raced ahead of developers with supervisory and regulatory measures. Striving instead to form an institutional basis for a wide range of AI development and application, Russia has kept up a steady output of strategies, roadmaps, and standards. These documents have a defined, hierarchical structure. The National Program "Digital Economy of the Russian Federation" [58] describes the main directions, tasks, and goals for the development of the digital economy; AI is mentioned in this document solely in the context of regulation and lawmaking.
The Decree of the President of the Russian Federation of 9 May 2017, No. 203, "On the Strategy for the Information Society Development in the Russian Federation for 2017-2030" [59] proclaimed the necessity and importance of intensified work in the field of digital technologies, including AI. The Decree of the President of the Russian Federation of 10 October 2019, No. 490, "On the Development of Artificial Intelligence in the Russian Federation", together with the "National Strategy for the Artificial Intelligence Development" for the period up to 2030 [60], listed a number of AI solutions in healthcare, such as the creation of prediction models, reduction of the risks and negative effects of pandemics, preventive screening, diagnostics based on medical images, automation, and increasing the accuracy and effectiveness of physicians.
In the Federal Law No 323-FZ of 21 November 2011, "On the Fundamentals of Healthcare in the Russian Federation", a "medical device" was defined as "any tools, equipment, devices, materials and other products used for medical purposes, necessary accessories and software" (article 38) [61]. Therefore, any AI solution, used independently or in combination with other medical devices, must be registered as a medical device, passing through clinical testing and acceptance according to article 36.1 of said Federal Law. The Russian supervisory authority-the Federal Service for Supervision of Healthcare (Roszdravnadzor)-requires technical and clinical tests as well as examination of the safety, quality, and effectiveness of all medical devices prior to their use and sale.
Moreover, according to the Federal Law 152-FZ of 27 July 2006, "On Personal Data" [62], it is necessary to obtain consent even for anonymized data. Article 9 of this Federal Law dictated that "the subject of personal data decides on the provision of his personal data and agrees to their processing freely, of his own free will and in his interest". Consent to processing of personal data must be specific, informed, and conscientious. In accordance with Article 91 (part 2) of the Federal Law 323-FZ of 21 November 2011 [61], "On the Fundamentals of Healthcare in the Russian Federation", "processing of personal data in information systems in the healthcare sector is carried out in compliance with the requirements established by the legislation of the Russian Federation in the field of personal data and medical secrecy". Thus, for individuals and institutions dealing with personal data, there are legal risks and limitations that can be regarded as a factor that counteracts medical AI-software development.
Currently, in Russia, developers of medical software are required to register their software with Roszdravnadzor according to regulatory documents and standards, which are as yet unavailable for AI-based solutions. However, the authorities are taking measures to remedy this situation. The state standard "Artificial Intelligence Systems in Clinical Medicine. Part 1. Clinical tests" will regulate the methodological basis of the clinical test process, the procedures for conducting clinical tests, accuracy indicators, and audits and quality control of medical AI systems. The first part of this standard was released in August 2020 and is available from the Federal Agency for Technical Regulation and Metrology (Rosstandart). The other six parts are: "Technical test program and methodology"; "Application of quality management to retraining programs. Algorithm change protocol"; "Assessment and control of performance parameters"; "Requirements for the structure and application of a dataset for training and testing algorithms"; "General requirements for operation"; and "Life cycle processes".
According to the current regulation, it is exceedingly difficult to obtain subjects' consent to subsequent personal data processing. Therefore, it is difficult to use AI technologies for medical purposes, as they involve the analysis of information about thousands of patients and thus require many consents for the processing of personal data. In July 2020, it was proposed to exempt the processing of personal data within experimental legal regimes from the norms of the federal laws "On Communications", "On Personal Data", and "On the Basics of Protecting Citizens' Health" [63]. However, while the usage of regulatory sandboxes and experimental regimes may represent a solution for understanding the problems, risks, and benefits of AI-based medical software, many experts regarded this proposal as highly ambiguous and as creating risks that would be hard to evaluate and prevent.

Ownership and Control of the Data
When using personal data such as the health information of patients, AI algorithms need to comply with regulatory frameworks. Accordingly, such data would need to be anonymized, or at least pseudo-anonymized, with an informed consent process that includes the possibility of wide distribution [64].
Therefore, the rules of patient privacy, the notions of patient confidentiality, and cybersecurity measures will be increasingly important in healthcare systems [65]. Currently, healthcare organizations are the owners (and, at the same time, the guardians) of patient data in the healthcare system. However, informed consent from patients would be mandatory should their data be used in a manner not pertaining to their direct care [23].
Some have argued that patients should be the owners of their own health data and should consent to their data being used to develop AI solutions [66], but governance is needed to provide appropriate regulation and surveillance. Both the GDPR in the EU and California's Consumer Privacy Act in the U.S. have legitimately tried to regulate the ownership of health data [23]. Although these regulations are necessary, they may limit the growth of smaller healthcare providers and technology organizations.
The GDPR requires informed consent before any collection of personal data, but it allows processing of anonymized health data without explicit patient consent in the interest of health care in the EU [23].
In the last decade, new issues have arisen that complicate the health data ownership scenario. The healthcare system is in a slow transition from a hospital-centric to a more patient-centric data model [67]. This slow pace hinders the integration of new information acquired through health wearables, i.e., devices that consumers can wear to collect data about their personal health and exercise. Moreover, open data sharing has resulted in huge collections of data available in the cloud, which can be used by anyone to train and validate their algorithms [25,40], with the risk of disconnected and non-standardized cloud solutions [25,68].
With the GDPR, healthcare operators and regulatory bodies are called to closely protect patient data [23]. The development of huge health datasets including wide ranges of clinical/imaging data and pathologic information across multiple institutions for the development of AI algorithms will require a reexamination of issues surrounding patient privacy and informed consent. However, Article 23 of the GDPR allows member states to restrict data subject rights, as well as the principles outlined in Article 5, by way of a legislative measure that respects the essence of fundamental rights and freedoms. These restrictions, if they are embodied in necessary and proportionate measures, should aim to safeguard "important objectives of general public interest including monetary, budgetary and taxation matters, public health and social security". With the COVID-19 pandemic, processing personal data was necessary to take appropriate measures to contain the spread of the virus and mitigate its effects. In such a scenario, relevant personal data can be processed in accordance with both Articles 6(1)(d) and (e) of the GDPR because they are necessary either to protect the vital interest of individuals or to safeguard the public interest or the exercise of official authority vested in the controller. Notably, Recital 46 of the GDPR explicitly mentions the monitoring of epidemics as a circumstance in which data processing may serve both important grounds of public interest and the vital interests of data subjects. Nevertheless, specific safeguards should be implemented because of the sensitivity of these categories of data. Among the possible safeguards, policymakers should take measures aimed at: (a) limiting access to the data, (b) establishing stricter retention times, (c) training staff, (d) minimizing the amount of processed data, and (e) keeping records of any related decision-making process.
The ownership of health data is also part of the discussion on the application of different ownership rules to original, deidentified, anonymized, and processed data [69]. Once again, only collaboration among patients, healthcare operators, and policymakers will be able to prevent the risks of inappropriate use of sensitive datasets, inaccurate or inappropriate disclosures, and limitations in deidentification techniques.

The Problem of Anonymization
In healthcare, a balance between privacy and a better user experience is required. AI algorithms should be able to learn from patients' data without storing their personally identifiable information. Therefore, anonymization, or at least deidentification (true anonymization is an irreversible process that is not easily achievable), must be performed to generate such datasets, with all personal health information removed [70].
However, current anonymization and deidentification techniques are still substandard [71]. There are currently no certifications available for anonymization tools or methods because no known method can guarantee 100% data protection. If data are made anonymous, their information content is inevitably reduced and distorted.
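The trade-off between protection and information content can be seen in a minimal sketch (hypothetical records and field names) of two standard techniques: generalizing quasi-identifiers, as in k-anonymity, and replacing direct identifiers with salted hashes (pseudonymization, which, being reversible in principle by whoever holds the salt, still counts as personal data under the GDPR).

```python
import hashlib

# Hypothetical patient records: (age, ZIP code, diagnosis).
records = [
    (34, "10115", "diabetes"),
    (37, "10117", "asthma"),
    (52, "10435", "diabetes"),
    (58, "10437", "hypertension"),
]

def generalize(record):
    """Coarsen quasi-identifiers: age -> decade band, ZIP -> 3-digit prefix.
    This is the information loss inherent to anonymization: the exact
    values can no longer be recovered from the released data."""
    age, zip_code, diagnosis = record
    decade = (age // 10) * 10
    return (f"{decade}-{decade + 9}", zip_code[:3] + "**", diagnosis)

def pseudonymize(patient_id, salt):
    """Replace a direct identifier with a salted hash. Reversible by
    whoever holds the salt, hence pseudonymized, not anonymized."""
    return hashlib.sha256((salt + patient_id).encode()).hexdigest()[:12]

released = [generalize(r) for r in records]
print(released[0])  # ('30-39', '101**', 'diabetes')
```

Even this toy example shows why anonymized releases distort content: exact ages and locations are gone, and any analysis downstream can only work with the coarsened bands.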
In data anonymization, the conflict between security and usability means that so far, no European data protection authority has extensively evaluated or even certified technologies or methods for data anonymization outside of specific use cases [72]. Collaboration among different institutions is crucial when sharing data to perform ML or DL studies that, by definition, are based on big data.

Data Protection and Cybersecurity Implications
The legal obligation to protect the privacy of data, especially health data, is a crucial priority, as the circulation of confidential information in huge numbers of copies among many unregulated companies is increasingly risky.
As access to vast amounts of medical data is needed to train AI algorithms [73], policies should prevent the collection of illicit or unverified sensitive data [74]. Although data privacy concerns keep growing, we still face a lack of unique and clear rules on data protection and cybersecurity [75].
The concept of physician-patient confidentiality requires that a doctor withhold medical information in line with the patient's wishes as long as this poses no risk to the patient or others [24]. Once a medical choice based on AI algorithms is integrated into clinical care, withholding information from the digital data impairs the validity of algorithm-driven medical practice. The privacy of such health data must be protected against both external cyberattacks and the very bodies collecting it.
Per the EU Cybersecurity Directive [76], EU member states must respect certain requirements to ensure that health operators take appropriate measures to minimize the impact of incidents and to preserve service continuity (Articles 14(2) and 16(2)) (Table 1). Moreover, according to Articles 14(3) and 16(3), supervisory authorities must be notified of incidents without undue delay [75,76]. In the U.S., the Health Insurance Portability and Accountability Act (HIPAA) is a compliance focus for health information [54], defining standards to protect patients' data and health information that apply to all healthcare providers, including insurers.
Cybersecurity is dealt with by the FDA, and providers must report only a limited number of risks their devices present and the actions taken to decrease vulnerability [54].
Considering that the amount of data and the number of AI applications can only grow, regulatory actions regarding cybersecurity will face continuous challenges [74]. Instead of resorting to government overregulation, a technological solution to these cybersecurity challenges is most necessary, as data protection can no longer rely on current technologies that allow personal data to spread at a large and uncontrolled scale [77]. Blockchain technology, typically implemented as open-source software, may allow the creation of large, decentralized, and safe public databases containing ordered records arranged in a block structure [78]. The blocks are stored digitally in nodes, using the computers of the blockchain network members themselves, and the information on all transactions is stored in those nodes [79]. Although blockchain technology is most famously used in the field of finance (i.e., cryptocurrencies), its usefulness is extending to other fields, including health data [79]. Blockchain can be used to validate the provenance of data and facilitate its distribution without compromising its quality. As the blocks are practically impossible to change, nothing can be deleted or modified without leaving a mark, which is crucial in the case of sensitive data such as medical information. Unfortunately, the flip side of the coin is that to obtain greater security, privacy is compromised: patients would need to accept sharing their sensitive data without a central authority to decide what is right or wrong.
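The tamper-evidence property underlying these claims can be sketched in a few lines (illustrative only; real blockchain systems add consensus, digital signatures, and network-wide distribution, and the record contents here are hypothetical):

```python
import hashlib
import json

def block_hash(content):
    # Hash the block's content, which includes the previous block's hash,
    # so that altering any earlier record invalidates every later link.
    return hashlib.sha256(json.dumps(content, sort_keys=True).encode()).hexdigest()

def append_block(chain, record):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "record": record}
    block["hash"] = block_hash({"prev": prev, "record": record})
    chain.append(block)

def verify(chain):
    """Re-derive every hash; tampering with any stored record breaks
    the chain from that block onward."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        if block_hash({"prev": block["prev"], "record": block["record"]}) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, {"patient": "pseudo-1a2b", "event": "MRI uploaded"})
append_block(chain, {"patient": "pseudo-1a2b", "event": "diagnosis recorded"})
assert verify(chain)
chain[0]["record"]["event"] = "tampered"  # silent edit attempt
assert not verify(chain)  # detected: the modification left a mark
```

This is what "impossible to modify anything without leaving a mark" means operationally: the edit itself succeeds, but every honest node re-deriving the hashes immediately detects that the chain no longer validates.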

Accountability and Liability
Alongside AI regulations and data protection issues, there are other legal implications of AI and its use in healthcare, namely, accountability and liability.
Modern medicine is strongly shaped around a multidisciplinary approach. It involves not only medical professionals with different fields of expertise and levels of experience, but also professionals from radically different backgrounds, such as biomedical engineers and medical physicists. Multidisciplinary teamwork has greatly enhanced, both in quality and quantity, the level of healthcare provided to patients. That said, this approach also comes with its unique shortcomings, which include uncoordinated administrative support, insufficient circulation of relevant information, and an excessive focus on a professional's own specific point of view, to the detriment of a holistic, comprehensive evaluation of the patient's case.
Unavoidably, those shortcomings pose a significant challenge not only from a clinical perspective but also from a juridical one, as they alter the ordinary criteria that regulate the assessment of medical liability, which, especially when it comes to criminal liability, are traditionally shaped with reference to the individual rather than to a team of different professionals.
To address this challenge, legal systems have developed a number of consolidated principles aimed at assessing medical liability while taking into account both the different roles and levels/fields of expertise of each professional and the common duties that fall upon all the members of a medical team simply because they work toward the same goal (i.e., the well-being of the patient).
Those principles revolve around the so-called "principle of trust", first theorized by the German doctrine under the name "Vertrauensgrundsatz" [80], which is specifically designed to regulate (from a liability perspective) the various possible forms of collaboration among two or more human professionals. This brings us back to the core topic of this paper: what happens if one of the members of a medical team is an AI-based software or machine? To what extent can the principles that regulate the collaboration among human colleagues be applied when one of those colleagues is not human at all? Is there room-in the juridical mind as much as the clinical one-to conceive a "principle of trust" with reference to a form of intelligence that does not have a human nature? These questions might seem very abstract, but the answers that will be developed within each legal system will certainly lead to very tangible consequences, not only for hospitals and medical professionals but for the companies that produce and market clinical tools based on AI and for the agencies and public authorities that regulate the use of these tools.
For instance, in most European legal systems, the principles that govern the assessment of professional liability within a medical team provide that each member of the team, consistently with their own role and level/field of expertise, has a specific duty to challenge the decisions made by their colleagues anytime they have reasons to believe that those decisions could be detrimental to the patient's wellbeing, particularly if they are aware of any circumstance that would lead them to doubt their colleagues' reliability (among others, overtiredness, inexperience, or lack of information about a particular patient).
If we were to apply the same criteria to the hypothesis of collaboration between a human professional and AI software, it would be crucial to establish the inherent value attributed to the opinion expressed by this kind of device, also considering that, in most cases, the human professional has no visibility into the reasoning behind such an opinion, as AI devices can neither explain nor elaborate on the outcomes of their analysis.
As long as the opinion expressed by the AI device aligns with that of the human professional, there is no particular issue, as the AI device acts merely as a confirmation of a previously existing conviction (even though one could argue that the medical professional might feel comforted in a wrong decision and be less inclined to seek consultation with their human colleagues).
On the other hand, if the opinion expressed by the AI device differs from the opinion of the medical professional, the situation becomes more complicated. Taking the situation to its extreme consequences, ultimately, the human professional needs to choose whether to trust the AI device over their own judgment, thereby taking advantage of the full potential of this technical innovation-even though they will be exposed to the liability arising from any mistake committed by the device-or to trust their own judgment over that of the AI device, thereby avoiding any potential liability for a mistake committed by the device. This is not an easy choice, and not one that medical professionals should be left to face alone. It is crucial that both hospitals and professional associations take an active step toward their employees and members by offering specific instruments (such as guidelines, protocols, and training programs) that can help medical professionals to understand the functioning of the AI devices they use and, therefore, to better assess the reliability of the opinions offered by those same devices and resolve possible discrepancies.
Moreover, as soon as AI devices start making autonomous decisions about the management of patients, ceasing to be only a support tool, problems will arise as to whether their developers can be held accountable for those decisions. As a matter of fact, errors in AI arise mainly when confounding variables in the training datasets, rather than actual symptoms, are correlated with pathologic entities. When AI devices make decisions, those decisions are based on a combination of the collected data and the algorithms the devices are built on (and what they have learned). The conclusions of AI algorithms may be unpredictable for humans [81] because, while humans consider only the intuitive options, AI can evaluate every potential scenario and detail, leaving humans with a decision not derived from a common basis [82,83]. Therefore, it is worth considering whether, when something fails following a decision made by an AI application, it might be the developer of that device, rather than the medical staff who relied on its opinion, that should be considered at fault.
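The confounding-variable failure mode can be made concrete with a small synthetic sketch of our own (not taken from any real study). A naive model that simply selects the feature most correlated with the diagnosis in its training data will latch onto a confound, here a hypothetical `portable_scanner` flag, and lose its apparent accuracy as soon as it is applied to a population where that correlation does not hold:

```python
import random

random.seed(0)

def make_data(n, confound_corr):
    """Synthetic patients: 'symptom' is the true signal (80% accurate);
    'portable_scanner' correlates with illness only as strongly as
    confound_corr (e.g., sicker patients were imaged at the bedside)."""
    data = []
    for _ in range(n):
        sick = random.random() < 0.5
        symptom = sick if random.random() < 0.8 else not sick
        scanner = sick if random.random() < confound_corr else not sick
        data.append({"symptom": symptom, "portable_scanner": scanner, "sick": sick})
    return data

def best_single_feature(train):
    """A deliberately naive 'model': pick whichever single feature
    agrees with the label most often in the training data."""
    feats = ("symptom", "portable_scanner")
    return max(feats, key=lambda f: sum(row[f] == row["sick"] for row in train))

def accuracy(feature, data):
    return sum(row[feature] == row["sick"] for row in data) / len(data)

train = make_data(2000, confound_corr=0.95)  # confound dominates the signal
test = make_data(2000, confound_corr=0.5)    # confound uninformative elsewhere

chosen = best_single_feature(train)  # the model "learns" the confound
print(chosen, accuracy(chosen, train), accuracy(chosen, test))
```

In the training cohort the confound looks like the best predictor, while on the second cohort its accuracy collapses to chance; the genuine symptom feature, by contrast, would generalize. This is exactly why AI success stories tied to one population or practice pattern cannot be assumed to transfer elsewhere.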
Without some clear guidance and a proper understanding of the potential and limits of the increasingly advanced AI systems that are now being implemented in many hospitals, it can be very difficult for medical professionals to get to know those devices and, therefore, to build real confidence in the support they offer.
Furthermore, specific guidance issued by a reliable source (be it a hospital or a professional association) could also represent a useful reference from a legal point of view, as abiding by such guidance may, to a certain extent, shield medical professionals from the criminal and civil liability potentially arising from malpractice claims and complaints. Of course, there will always be clinical cases whose complexity and peculiarity make it impossible to rely on existing guidelines. Nevertheless, guidelines and protocols represent the most common term of reference for courts and authorities that are required to assess the potential malpractice liability of medical professionals, even more so when the cases brought to their attention involve a significant degree of complexity (e.g., because of the number of professionals that handled the same case, or because of the involvement of an AI device).
Therefore, proper guidance could both help medical professionals exploit the full potential of AI devices and protect them against the setbacks of that same technology from a legal standpoint. This could greatly enhance the level of confidence with which both professionals and courts look at the introduction of AI devices in the medical field, as well as the level of trust that patients themselves put in this kind of nonhuman intelligence.
Medical liability cases, like medical practice itself, essentially revolve around the patient or the patient's family. Therefore, educating patients about the potential benefits of the use of AI devices is just as important, from a legal perspective, as increasing the sensitivity of courts and authorities toward this very same subject.
In conclusion, although the complexity of AI makes it unavoidable that some of its inner workings will always appear to be a black box [74], that is not enough to keep liability out of the question. Because, over the coming years, AI devices are bound to play an increasingly crucial role in the healthcare scenario, the issue of accountability for AI-based decisions will need to be properly addressed by competent authorities, always keeping in mind the core ethical principles of the medical profession: to respect patients and to do good for them [24].

Conclusions
Some laws and policies regulating AI in healthcare, such as the GDPR, have only recently entered into force. Although, in the short term, such policies may delay AI implementation in healthcare, in the long term they will facilitate it by promoting public trust and patient engagement. With an appropriate and up-to-date legal and regulatory framework for healthcare all over the world, well-governed use of AI can be helpful and powerful for both healthcare providers and patients; conversely, poor application of AI may be dangerous. Patients, physicians, and policymakers must work to find a balance that provides security, privacy protection, and ethical use of sensitive information to ensure both humane and regulated management of patients.
Although technological advancement will continue to create new situations for which policymakers will be called upon to craft new laws and ethical standards, physicians and healthcare workers should never forget whom they serve and should therefore strictly adhere to their oath, "primum non nocere" (first, do no harm). For this reason, the ownership and control of data, and the related accountability and responsibility, need to be assessed and clarified to realize the potential of AI across health systems in a respectful and ethical way.