Article

Custodian of Autonomous AI Systems in the UAE: An Adapted Legal Framework

by
Mohamed Morsi Abdou
Private Law Department, College of Law, University of Kalba, Kalba 11115, United Arab Emirates
Submission received: 3 November 2025 / Revised: 14 December 2025 / Accepted: 17 December 2025 / Published: 25 December 2025

Abstract

The existence of a legal framework for Artificial Intelligence (AI) systems is of great importance for the growth and development of this advanced technology, especially given the growing sense of legal insecurity that may arise from potential irreparable harm. The issue of legal liability for AI systems is therefore one of the most critical topics that legal literature should address. This paper critically examines the tempting analogy between the liability of custodians and the liability of operators of AI systems under UAE law. It seeks to address the resulting legal gap by offering suggestions and examples of the legal requirements necessary to establish appropriate liability rules for AI, arguing that the gap can be filled by improving the provisions on custodian liability in UAE law. The analysis focuses on three main concerns: (i) proposing an expansion of the concept of thingness; (ii) discussing the challenges of applying legal custodianship; and (iii) concluding that autonomous AI systems are inherently dangerous. In this context, it is particularly important to analyse the specific requirements for operating advanced AI systems, including mandatory registration and insurance. The article concludes that applying the custodian liability provisions to the operators of AI systems protects third parties from potential damage on the one hand, while, on the other, specific regulations governing the operation of these systems encourage investment in this vital field.

1. Introduction

Recent reports indicate that the economic impact of AI is enormous, with some predicting that investments will reach between $14 trillion and $50 trillion by 2025 (Mizrahi 2019). AI systems have become the focal point of significant investment from companies seeking to acquire and develop this technology, which facilitates access to competitive advantage within the liberalised global trading environment and leads to substantial profits. These companies are led by Google and Amazon, which have already begun an arms race in AI techniques by aggressively recruiting researchers in the field, establishing research centres, and purchasing startups in this vital area (Scherer 2016).
In essence, AI is “the intelligence of machines and the branch of computer science that aims to create it” (Kallem 2012); it can be integrated into various physical instruments and robotic entities capable of performing a wide range of tasks, representing a significant trend in technological development (van Genderen 2018).
The interest in this advanced technology stems from the ability of AI systems to identify solutions that humans may overlook (Calo 2015). It is even possible for AI to function as an inventor (Abdou 2024). This capability arises from their ability to learn independently through experience and interaction (Martinez 2019) and to think deeply by analysing data available internally or on the internet. They can then make their own decisions independently, thereby surpassing the human ability to solve problems (Yanisky-Ravid and Liu 2018).
As a notable example, AI systems can recognise similarities and patterns that may be challenging for humans to detect (Yanisky-Ravid and Liu 2018). Similarly, AI can provide significant services to insurers by processing vast amounts of data and proposing effective solutions (Touzain 2023), or play a significant role in maritime transport through the use of smart cargo ships (Abdou and Alqydi 2024).
Furthermore, as the role of intelligent machines has expanded in our contemporary world, it has become appropriate to examine the legal and ethical risks that may arise from the use of AI systems and to set legal restrictions on their use. Whether we embrace it or not, this technology is likely to advance rapidly—most likely faster than our capacity to enact laws that govern it (Weaver 2014). Consequently, as will be shown, it is crucial to ensure that AI is utilised ethically and in a manner that upholds human rights.
This has prompted the European Commission to emphasise that AI systems are developing rapidly but present numerous potential risks (Européenne Commission 2020). These risks include uncertainty in decision-making processes based on AI systems (Varošanec 2022) and the potential for discrimination based on race, sex, colour, or other grounds that AI systems may overlook. They also include an increased invasion of the privacy of individuals interacting with AI systems and the ease with which these systems can be exploited for criminal purposes.
It is therefore critical to manage the risks associated with AI to prevent potential negative consequences for individuals and society (Bensamoun 2023). Consequently, tort liability for the operation of AI systems is a highly debated topic in legal literature (see, for example: Latil 2024; Wendehorst 2022; Wendehorst 2020; Borges 2019; Kingston 2016). The ongoing controversy regarding the recognition of legal personality for AI systems has overshadowed the question of civil liability arising from harm caused by their operation, highlighting the urgent need to modernise the rules governing civil liability. This modernisation is essential to prevent the judiciary from encountering a gap between the traditional provisions governing civil liability—in both contract and tort—and the identification of those responsible for the actions and conduct of AI systems.
Existing legal frameworks pose challenges to effectively managing liability concerns related to the implementation of autonomous AI systems, especially due to the unpredictability arising from their autonomous decision-making capabilities. The frequency of autonomous vehicle incidents (Figure 1) illustrates the inherent risks of AI operation, supporting the need for custodial liability frameworks. The “liability gap” in this context is widening due to the lack of control exerted by the producers, coders, or users over the autonomous AI system (Mendoza-Caminade 2016; Matthias 2004). It is therefore appropriate for a specific natural or legal person, referred to as the operator, to assume responsibility for the deployment of an AI system, regardless of whether that natural or legal person was involved in its creation.
Therefore, addressing fault liability for harm caused by AI systems presents challenges within the current legal framework. The application of traditional civil law rules does not adequately address acts or omissions that result in harm to other parties. However, several comparative judicial decisions1 have emerged in disputes concerning damages arising from autonomous AI. These rulings highlight the potential for attributing liability in such cases by tracing the act or omission back to a “specific human agent”, such as the operators,2 on the basis that these entities could have anticipated and prevented the harmful conduct of the AI (Payas v. Adventist Health Sys./Sunbelt Inc. 2018; Gonzalez Prod. Sys., Inc. v. Martinrea Int’l Inc. 2016; LEI Packaging, LLC v. Emery Silfurtun Inc. 2015; Cristono Almonte v. Averna Vision & Robotics, Inc. 2015). The referenced cases indicate a judicial inclination to assign liability to identified humans capable of anticipating and alleviating harm. Regardless of contextual variations, these decisions collectively highlight a shared principle: when harm emanates from intricate systems or autonomous operations, liability is frequently assigned to those capable of supervising, controlling, or averting detrimental outcomes, thereby reinforcing the idea that accountability in novel technological contexts is inherently connected to human intervention.
Currently, there are no specific regulations in the United Arab Emirates (UAE) governing liability for harm caused by the operation of AI systems. Therefore, the judiciary can adapt existing general rules on liability for harmful acts and apply them to relevant disputes.
This paper examines the challenges presented by AI systems under the existing civil liability framework in the United Arab Emirates. Although the UAE Civil Transactions Law provides general principles for tort liability, including Article 282,3 which establishes that the perpetrator of an act causing harm may be a natural or legal person, these provisions appear insufficient to address situations in which harm results from autonomous AI. The difficulty lies in the fact that autonomous AI systems do not fall neatly within the category of natural or legal persons, making the application of existing tort liability rules problematic. This illustrates that the current civil liability framework is ill-equipped to respond effectively to the challenges posed by emerging technologies.
Thus, the present study focuses on the potential application of the provisions on the liability of custodians of things requiring special care, as set out in Article 316 of the UAE Civil Transactions Law, which states: “Whoever is in charge of a thing whose supervision requires special care in order to prevent it from causing damage, or of a machine, is liable for the damage caused by these things or machines, save that which could not have been averted, and without prejudice to any special provisions in this respect”, to compensate for harm caused by the operation of AI systems (Figure 2). The study also examines limiting the liability of AI system operators to encourage investment in these technologies. Limiting liability is viewed as an exception to general civil liability rules, aiming to protect investors from the significant risks associated with operating AI systems and to encourage insurance coverage for related civil claims. This legal limitation, however, excludes damages arising from intentional misconduct, thereby guaranteeing that unlawful or unethical conduct is not incentivised.

2. Application of the Custodianship Provisions to the Operator of AI System

The legal framework of AI systems in the UAE will promote an open, fair, and competitive ecosystem and marketplace for AI and related technologies, so that small developers and entrepreneurs can continue to drive innovation4.
As mentioned above, there is currently no law in the UAE specifically regulating AI. However, the concept of an AI system can be inferred from the Federal Decree-Law on Electronic Transactions and Trust Services.5 According to Article 1 of this decree, an Automated Electronic Medium is defined as “an Electronic Information System that operates automatically and autonomously, in whole or in part, without the intervention by any natural person at the time of operation or response”. Moreover, the concept of AI can also be inferred from the definition of the Autonomous Driving System in Law No. 9 of 2023, Regulating the Operation of Autonomous Vehicles in the Emirate of Dubai.6 According to Article 2 of this law, the Autonomous Driving System means “[a] system consisting of a set of devices and software approved by the manufacturer of an Autonomous Vehicle, which enables interaction between the Autonomous Vehicle and Road elements and controls its movement without any human intervention.”
As a notable definition, the EU Artificial Intelligence Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. 7
From a doctrinal perspective, Artificial Intelligence (AI) refers to electronically executed processes that extend beyond the mere execution of predefined instructions. AI encompasses techniques designed to enhance the accuracy, efficiency, and scalability of machine performance in complex or large-scale data environments, potentially augmenting or even supplanting human capabilities. Its applications increasingly rely on advanced analytical methods, including machine learning, natural language processing, image recognition, neural networks, and deep learning, enabling the rapid and sophisticated execution of complex tasks or processes (Soyer and Tettenborn 2022).
Based on the above, it can be emphasised that AI systems should, in all cases, possess a certain level of independence for the operator to be held legally liable for any damages caused.
Regarding the operator, and to ensure legal certainty, it should be clarified that any person who exploits an AI system should be considered its operator and, consequently, should assume all relevant obligations. It is therefore appropriate to begin the discussion by asking whether custodian liability can be applied to the AI operator.
Initially, custodian liability under UAE law—comparable to that outlined in the French Civil Code8—is classified as objective (“no-fault”) liability, based on the rule that the burden accompanies the benefit, rather than being contingent upon presumed fault (Sharjah Roads and Transport Authority v. Al-Futtaim Motors and Machinery 2019). According to Article 316 of the Federal Decree-Law Concerning the Issuance of the Civil Transactions Law of the UAE, if damage is caused by a thing whose supervision requires special care, the custodian will be held liable if the affected party can establish a causal link between the damage and the thing.
This legal basis for tort liability is particularly relevant given the special nature of AI systems. Demonstrating fault on the part of the designers, developers, or users of AI systems, as well as establishing a causal link to the damage suffered by a third party, proves to be quite challenging due to the inherent characteristics of AI systems, including their learning methods and usage conditions (Latil 2024). In other words, given the specific characteristics of AI, including autonomy, complexity, and opacity (often referred to as the “black box” effect), it may be excessively difficult or costly for affected parties to identify the responsible individual and provide evidence of the conditions required to win their cases (Pierre 2023).
Hence, it is sufficient for the affected party to prove that the AI system under the operator’s control is the cause of the damage, without having to prove any fault of the operator. However, this approach may encounter two legal obstacles (Lachièze 2020; Mendoza-Caminade 2016): 1—AI systems do not possess the characteristic of thingness, since they are not merely physical entities embodied as mechanical machines but are represented by intangible elements (an algorithmic structure). 2—Custodianship, as a pillar of the custodian’s liability, is incompatible with the independence of an intelligent system that operates without control and supervision, as the system operator does not truly have control over it.
Before discussing these legal obstacles, it is essential to note that the custodian’s liability for things was a legislative response to the technological developments and industrial innovations of the last century (Al-Sanhouri 1952). It is therefore necessary to revisit this concept in light of the evolving technology of AI.
Presently, it is no longer jurists and legislators who drive the development of legal norms; rather, it is innovators and their inventions that have revealed the inadequacy of traditional legal norms in addressing the liability of operators of autonomous AI systems. In other words, it is time to reconsider the provisions of the custodian’s liability for things to keep pace with the tremendous technological developments that have transformed the very concept of “thingness” itself.

2.1. Expanding the Interpretation of the Concept of “Thing”

Taken as a whole, the term “thing” from a legal perspective refers to everything that can be the subject of financial rights and managed in relation to those rights. However, classical legal literature argues that “things” are legally confined to physical objects, whether movable or immovable (Al-Sanhouri 1952), living or inanimate (Wagner 2011). As a result, harm caused by intangible things, such as AI systems primarily based on algorithms, is excluded from the scope of the custodian’s liability, even if these systems manifest physically as smart robots.
This, of course, raises the question: what about damage caused by intangible things, such as AI systems? In this regard, contemporary legal literature—rightly—argues that limiting the term “thingness” to physical entities such as machines is a result of an unjustified mental obstacle, as practical reality has proven the existence of things of an intangible nature (Tricoire 2008; Lucas 2001). Likewise, the distinction between thing-ness (form) and about-ness (substance) is often criticised, as the boundaries of things are as semantically complex as their substance, making a strict focus on form versus content or boundaries versus meaning potentially unhelpful in understanding their legal functions (Madison 2017).
In sum, incorporating about-ness into thing-ness implies that regulating artificial intelligence requires viewing AI systems not solely as physical entities but also in relation to their functions and decisions. This approach enables the allocation of legal liability and the efficient regulation of AI systems. Moreover, it is consistent with Article 316 of the Civil Transactions Law of the UAE, which makes no distinction between material and intangible things when regulating the liability of the custodian of things.
In an interesting paper related to this study, the author examines the legal issues that arise when legal liability is assigned for damage caused to a patient by a medical diagnosis influenced by one of IBM’s AI systems, known as “Watson” (Mazeau 2018).
The author argues that such cases expose significant gaps in traditional civil liability frameworks, as the autonomy of AI decision-making complicates the identification of fault and causation. This paper provides a detailed analysis of how traditional liability frameworks are challenged when AI systems participate in clinical decision-making, addressing key issues such as fault attribution, causation, and the limits of existing legal doctrines. The author emphasises that the semi-autonomous nature of AI complicates the identification of responsible parties, raising critical questions about accountability in AI-assisted medical practice.
This analysis is particularly relevant to the present research because it illustrates the broader challenge of attributing responsibility in contexts where AI systems act as quasi-autonomous agents, capable of producing harmful outcomes without direct human intervention. By referencing this example, the current study underscores the necessity of adapting existing legal doctrines, such as the custodian’s liability, to emerging AI technologies.
In light of this legal problem, there is no objection to applying the custodian’s liability for things to damage resulting from the use of AI systems. This is particularly relevant since the UAE legislator did not explicitly limit the term to material things; it merely uses the term “things”, and the rule is that a general term retains its broad interpretation unless restricted by legal text. Moreover, classical legal interpretations may have confined Article 316 to material things because, at the time the Civil Transactions Law was enacted in 1985, it was difficult to foresee the existence of intangible things such as autonomous AI systems capable of performing legal actions (e-agents) or positive material actions that could cause harm to others (surgical robots and self-driving mechanisms).

2.2. Legal Custodianship as an Alternative to Actual Custodianship: A Suitable Solution

Classical legal literature argues that actual control over a thing fulfils the condition of custodianship, based on the assumption that the owner of the thing is its custodian. Since ownership constitutes a material authority over a thing, it is established only over tangible things, allowing the owner to exercise all legal powers granted by ownership rights (Roubier 1954; Al-Sanhouri n.d.). This interpretation thus precludes the application of the custodian liability provisions under UAE law to things of an intangible nature, including AI systems.
Clearly, unless the custodian has intentional and autonomous control over the thing, he or she will not be held liable for damage caused by it (Al-Sanhouri 1952). Likewise, actual custodianship in UAE case law entails the authority to use, direct, and control a thing for personal benefit (Abu Dhabi Court of Cassation 2016). Hence, the courts’ requirement of effective control, exercised through the physical powers of use, direction, and control over the guarded thing, makes it impossible to apply the provisions of custodian liability to autonomous AI systems.
Contrary to this view, developments in the concept of “property rights” have decoupled ownership from physical control and traditional possession; legal control that gives the owner direct authority over the thing is now sufficient (Pélissier 2001; Kamina 1996). Consequently, there is now legal recognition of intellectual property rights, wherein legislators acknowledge the intangible nature of the thing subject to the right and allow property rights over intangible assets (Revet 2005).
Modern legal literature argues that the ownership of intangible things is founded on three interconnected elements: 1—The existence of a legal authority that protects the intangible thing and obligates others to refrain from actions that would prevent its exploitation; 2—The authority to use or refrain from using the thing; 3—The authority to dispose of the thing (Rebel 1995; Wagner 2011).
Based on this conceptual framework, it is pertinent to examine whether these legal elements of ownership can be extended to AI systems. Clearly, the inventor of an AI system has the authority to operate it, or to refrain from operating it, as intended. The inventor can also seek judicial recourse to defend the AI system against external threats, as it is considered an asset, and has the option to sell the system or to destroy it by disclosing the innovation to society. Accordingly, it might be argued, with due consideration, that AI systems are intangible things that can be owned and legally controlled.
Extending this reasoning further, claims that intangible things elude the rules of liability for custodians reflect a simplistic understanding of control over things. If we accept that possession of intangible things arises from mental control rather than merely physical acts of exploitation (Schmidt-Szaleweski and Pierre 1974), it follows that the owner exercises control through deliberate actions, preservation, and exploitation (Revet 2005).
This principle is supported by judicial practice, as the French Court of Cassation has affirmed that intangible things can indeed be possessed (French Court of Cassation 2006). Thus, some landmark judgments by the Cour de Cassation have interpreted the rules on the custody of things as providing the legal basis for imposing liability on custodians for damages caused by objects of any nature (De Mot and Visscher 2014).
Building on these ideas, the concept of custodianship in UAE law must be expanded by resorting to the theory of legal custody, which legally obliges the custodian to preserve the thing and prevent it from becoming a source of harm to others (control over the risks of operating the system), even where he or she lacks physical control over the intangible thing.
However, if the UAE judiciary applies an extended interpretation of the concept of custodianship under UAE law to artificial intelligence systems, it must consider that the legislator conditions the imposition of legal liability on the custodian of things upon the requirement that the things entrusted demand special care (Art. 316 of the Civil Transactions Law). This can reasonably be envisaged for autonomous AI systems, where the risks arising from their operation are difficult to predict because of the degree of autonomy they possess.
As a notable example, robotic traffic systems, such as AI traffic signals, autonomously regulate traffic based on external data, including traffic volume and intensity, pedestrian movement, and other external factors. Such an AI system poses inherent risks and requires special attention from the traffic department (the operator), as a sudden failure in one of its sensors could potentially lead to a humanitarian disaster. This example shows that autonomous AI systems are risky by their nature and therefore require special care, one of the conditions for the liability of the custodian of things.
In this regard, and in support of the author’s view, there are always risks associated with operating autonomous AI systems. The EU Artificial Intelligence Act9 focused on high-risk AI systems such as those used in the management and operation of critical infrastructure, education and vocational training, assistance in legal interpretation and application of the law, and law enforcement. It follows that the European legislator is convinced that autonomous AI systems are inherently dangerous and that their operation entails varying levels of risk, whether high or low.
Moreover, the liability of an AI system custodian requires that its operation results in damage to a third party10. This damage must arise from the positive intervention of the AI system; the operator is not liable for damage caused to others by the AI system without such positive intervention. For example, if an AI robot is placed in a specific and clearly marked location at the entrance of a company to direct the public to the various departments where services are offered, and a person entering the premises collides with the robot and is injured, that individual cannot seek compensation from the company. The damage did not result from any positive intervention by the AI robot but from the injured party’s own actions, and the custodian of an AI system is not liable where the damage results from the injured party’s conduct.
Additionally, an AI system may positively intervene in causing damage without making physical contact with the injured party. For example, a self-driving vehicle might make an abrupt manoeuvre to avoid an object that suddenly appears on the road, causing the driver of an oncoming car to swerve off the road to avoid a collision and ultimately crash. In this scenario, the driver is undoubtedly entitled to compensation from the operator of the autonomous vehicle, which intervened positively in a manner that contributed to the damage, even though there was no physical contact with the injured party.

3. Requirements for the Safe and Legal Operation of AI Systems

The effectiveness of the legal framework governing the responsibilities of custodians of AI systems depends, in part, on the establishment of strict legal controls for identifying the operator of these systems. However, in some circumstances, the liability of the AI custodian may not adequately compensate for the damage caused by the operation of these systems. Consequently, additional legal provisions may be needed to ensure the safe operation of AI systems.

3.1. Having an Effective Mechanism to Identify the Custodian of AI Systems: Mandatory Registration of AI Systems

The mandatory registration system for AI systems obliges manufacturers to create an identification code for each system, consisting of a unique numerical sequence “similar to the chassis numbers of cars”. This facilitates easy access for licensed parties to all information related to each AI system, whether before, during, or after its operational period, and thereby ensures complete transparency and compliance with the relevant laws and regulations governing its manufacture and operation.
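To make this concrete, the registration scheme described above can be modelled as a simple registry record keyed by a unique identifier. The following Python sketch is purely illustrative: the field names, the ID format, and the registry structure are the author’s assumptions for exposition, not requirements drawn from any existing UAE regulation.

```python
from dataclasses import dataclass, field
from datetime import date
import uuid


@dataclass
class AIRegistryRecord:
    """Hypothetical registry entry for an AI system (illustrative only)."""
    manufacturer: str
    operator: str    # the natural or legal person liable as custodian
    risk_class: str  # e.g., "high" or "low", mirroring the EU AI Act tiers
    deployed_on: date
    # Unique identification code, analogous to a car's chassis number.
    system_id: str = field(default_factory=lambda: f"AI-{uuid.uuid4().hex[:12].upper()}")


# A licensed authority could resolve any incident to a registered operator:
registry: dict[str, AIRegistryRecord] = {}

record = AIRegistryRecord("RoboCorp", "Dubai Traffic Dept.", "high", date(2025, 1, 15))
registry[record.system_id] = record
print(registry[record.system_id].operator)  # -> "Dubai Traffic Dept."
```

The essential point of the sketch is that every deployed system maps to exactly one identifiable operator, which is what allows liability to be assigned initially, as discussed below.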
The importance of mandatory registration in a specialised registry supervised by a specific government agency for AI systems stems from two reasons:
First, mandatory registration would help avoid the “loss of responsibility” arising from the operation of the system. In other words, the autonomy of AI systems and their capacity to perform actions or tasks without direct supervision may lead to challenges in identifying the system operator unless these systems are mandatorily registered. Mandatory registration will thus enable the identification of the operator of the AI system, allowing the operator to be held initially responsible for any damages caused by the system’s operation. This responsibility is objective and rooted in the principle of risk-bearing associated with the operation of the AI system. The operator may subsequently deny responsibility by demonstrating that an external cause, over which they had no control, led to the harm. Such external causes could include a manufacturing defect, a programming error, or the fault of the affected party themselves.
Second, mandatory registration will enable competent authorities to exercise control and supervision over AI activities, especially since many of these activities relate to vital issues. Therefore, there must be a legislative framework that regulates the registration process, including penalties for violators.
This aligns with the EU Artificial Intelligence Act, which requires the registration of high-risk AI systems in the EU database referred to in Article 71. This requirement ensures that these systems comply with the Act’s specified requirements and subjects them to control in order to protect the fundamental rights stipulated in EU law. The Act imposes a monitored risk management system, data governance, specific documentation, traceability of operations (by logging), transparency of the system by providing information, human supervision, and, finally, a guarantee of accuracy, robustness, and security (Bensamoun 2023).

3.2. Compulsory Insurance Against Liability Arising from the Operation of AI Systems

It is widely recognised that the main purpose of the legislator’s approval of compulsory insurance is to enable the affected party to obtain compensation for damages incurred due to the insured risk, while simultaneously safeguarding the insured’s assets from the burdens of civil liability. In some cases, compulsory insurance is permitted by the UAE legislator. According to the Federal Decree-Law Regulating Insurance Activities, the Board of Directors of the Central Bank may impose compulsory insurance for specific risks under any regulations that outline the controls and conditions of insurance and other related provisions.11
This legislative approach reflects a proactive mechanism to ensure compensation for victims in situations where the liability of the custodian of things, as provided under Article 316 of the UAE Civil Transactions Law, may not apply to damages arising from the operation of autonomous AI systems. This is particularly important in disputes where the conditions of custodian liability are met but its application is excluded by force majeure, the act of a third party, or the fault of the injured party.
Within this framework, Article 316 of the UAE Civil Transactions Law establishes the custodian’s liability for things as a form of presumed-fault liability that approximates strict liability, while not formally categorised as such. The injured person is relieved of the obligation to prove fault, as liability is legally presumed when harm is caused by a thing requiring special care or by a mechanical device under the custodian’s supervision. This liability is not unconditional: the custodian may rebut the presumption by proving the existence of an external cause, such as force majeure, the act of a third party, or the fault of the injured party.
This legal classification is especially important for autonomous AI systems. While these systems can run autonomously without direct human involvement, the operator maintains a level of legal and operational oversight of their deployment and risk management (Zech 2021). Thus, applying Article 316 to AI-related harm establishes a liability framework close to strict liability while nevertheless permitting the operator to evade liability in extraordinary circumstances. This intrinsic constraint necessitates additional legal measures, such as mandatory insurance and guarantee funds, to secure adequate compensation for victims of AI-related harm.
Therefore, compulsory insurance can play a fundamental role in AI governance in the future. It can address the risks associated with operating AI systems across various fields and mitigate the growing concerns about electronic fraud, privacy violations, discriminatory practices, and other economic and social issues that may arise from increased reliance on AI systems (Eling 2019). Some authors also suggest implementing liability insurance for designers of AI products (Kumar and Nagle 2020).
For example, consider a robot in a restaurant that serves plates and decides on its own, by analysing real-time data about its environment, when to start or stop moving and when to put down or pick up dishes. In one incident, the robot falls over, breaking expensive dishes and damaging the restaurant’s carpet (a first-party loss that directly affects the restaurant proprietor). In another incident, the robot falls again, the plates strike a customer, damaging her costly dress and burning her skin (a third-party loss that implicates the restaurant owner in liability) (Faure and Li 2022). Enforcing obligatory AI liability insurance in these circumstances would safeguard the operator against financial losses resulting from damage to their own assets while providing restitution for third parties adversely affected by the AI system, thus achieving an equilibrium of protection for both the business and affected individuals.
In the domain of AI systems, grounds for exoneration are likely to occur more frequently because of the autonomous, self-learning, and often opaque character of AI decision-making. Consequently, relying solely on custodial liability may leave victims unprotected, particularly in cases where demonstrating causation is difficult. Compulsory insurance therefore arises as an essential supplementary mechanism rather than a replacement for civil liability. Imposing mandatory insurance on AI operators, who have a level of legal and economic authority over the risks posed by AI systems, enables the legal framework to provide adequate compensation for affected parties while upholding the principles of custodial liability. In this context, mandatory insurance serves a corrective and stabilising function, harmonising conventional liability rules with the realities of autonomous technologies and strengthening the overarching regulatory framework for artificial intelligence.
Initially, mandatory insurance could be imposed on operators of high-risk AI systems in the UAE, in line with the risk-based classification approach adopted in the European AI Act.12 Over time, and in light of accumulated practical experience, the scope of such insurance coverage could gradually be extended to AI systems more generally, regardless of their level of risk. In any event, the obligation to pay the costs of mandatory insurance unequivocally lies with the operator of the artificial intelligence system, eliminating the need for convoluted alternatives such as conferring legal personality upon the intelligent system so that it can cover the insurance expenses independently (Borghetti 2019).
Notably, AI insurance represents an innovative insurance model, as it addresses risks that are not covered by traditional cyber insurance. While cyber insurance may cover certain risks associated with operating AI systems—such as system downtime, digital attacks, and privacy breaches—it is unlikely to cover liability for bodily harm or property damage caused by AI systems (Kumar and Nagle 2020). To illustrate, aiSure is a product offered by Munich Re13 that delivers tailored coverage for contractual obligations, legal responsibilities, and financial claims arising from malfunctions, discrimination, intellectual property infringement, hallucinations, and regulatory penalties associated with AI.
For instance, the professional services firm PwC Luxembourg has divided the risks of operating AI systems into six categories (PwC 2019). Section one (performance risks) includes the following: 1—The risks of technical errors and performance instability, which may render the system unable to deliver consistent and adequate performance or cause it to handle situations incorrectly, inflicting physical or psychological harm on those dealing with it. 2—The risk of bias, where the AI system can be manipulated in ways that reinforce existing racial or social biases, leading to psychological harm (for example, when a person searches for the word “hands” on Google’s smart image engine, most of the hands shown are white). 3—The risks of opaqueness and explainability, where it will be difficult for some people to understand the actions of AI systems in situations where they are required to interact with them; this lack of clarity may lead to potential harm. Thus, an algorithmic credit scoring system that erroneously denies qualified applicants may lead to compensation lawsuits, while an AI medical system that produces inaccurate diagnoses may result in malpractice litigation (Tekale and Enjam 2024).
Section two (security risks) includes the following: 1—The cyber intrusion risks, where AI is used to disrupt financial transactions (such as global stock exchanges) by attacking vital infrastructure and paralysing government systems. 2—The privacy risks, where AI is used to track and analyse every step people take in the real or virtual world.
Section three (the risks of losing control of AI Systems). Some advanced AI systems could pose a threat to humanity if a malfunction occurs in their central control system, leading to a situation where there is no effective solution to stop the autonomous decisions they generate.
Section four (societal risks) includes the following: 1—The risks of the proliferation of autonomous weapons, where AI systems can be utilised as lethal tools that endanger human lives when operated by irresponsible individuals. 2—The risks of inequity in access to the advantages that AI systems provide, owing to the material differences that determine who can benefit from these systems.
Section five (Ethical risks) arises from programming AI systems without specific values or moral principles that align with human ethics. This lack of alignment may lead to AI systems making decisions that do not reflect human expectations. For example, if a self-driving car encounters a sudden traffic situation where it must choose between hitting a mother and her child on the side of the road—where the likelihood of saving the car’s passengers is high—or crashing into a wall on the other side of the road, where the chances of passenger survival are low, how will it make a decision without human emotions? In other words, scenarios involving self-driving car accidents have recently been compared to the primary ethical dilemmas associated with the “Trolley Problem” (Nyholm and Smids 2016).
Section six (economic risks) includes the following: 1—The job displacement risks, where AI systems can replace human labour in many industries, potentially leading to increased unemployment in society. 2—An increase in economic monopolies, where a country or company controls AI technology and exploits its dominant position to economically dominate others. 3—The liability risk, where the incorrect manufacture or operation of AI systems can lead to both material and moral damage to others, resulting in legal liability.
Despite these different categories of potential damages associated with AI systems, it is difficult to determine whether all of them qualify for insurance coverage. However, the performance risks, security risks, and liability risks remain the most appropriate for mandatory insurance, in contrast to ethical risks, which cannot be insured.
Accordingly, the author might argue that several justifications support imposing mandatory insurance on AI operators: (1) Avoiding or reducing the negative financial impacts of operating AI systems would encourage the companies involved to accelerate the pace of innovation in this vital field. (2) Operators will be keen to adopt more reliable AI systems, given the inverse relationship between the value of insurance premiums and the degree of risk arising from the operation of the AI system. (3) Third parties will be encouraged to deal with AI systems by insurance coverage that guarantees them appropriate compensation if they suffer damage as a result. Consequently, well-structured AI insurance products are expected to reduce the uncertainty linked to liability risk for manufacturers, thus encouraging innovation, competition, adoption, and trust in beneficial technical advancements.
Despite these advantages, developing effective AI liability insurance remains challenging due to several complexities. For an insurer, understanding the scope and likelihood of liability during the insurance period is essential for calculating premiums; however, given that the boundaries of AI-related liability remain highly uncertain, accurately assessing risk exposure is extremely difficult (Faure and Li 2022).
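The insurer’s difficulty can be made concrete with the standard expected-loss pricing identity. In the sketch below, the premium is the expected loss plus a loading that grows with uncertainty about the boundaries of AI liability; the figures and the linear loading rule are assumptions chosen for illustration, not actuarial guidance from the cited literature.

```python
def ai_liability_premium(claim_prob: float, avg_loss: float,
                         uncertainty: float, base_loading: float = 0.2) -> float:
    """Toy premium: expected loss plus a loading that rises with legal uncertainty.

    `uncertainty` is a 0..1 score for how ill-defined the liability boundary is;
    the wider that boundary, the larger the safety loading the insurer charges.
    """
    expected_loss = claim_prob * avg_loss
    loading = base_loading + uncertainty  # assumption: linear uncertainty surcharge
    return expected_loss * (1 + loading)


# A well-understood risk vs. an AI risk with highly uncertain liability limits:
print(ai_liability_premium(0.01, 100_000, uncertainty=0.1))  # -> 1300.0
print(ai_liability_premium(0.01, 100_000, uncertainty=0.9))  # -> 2100.0
```

The point is not the numbers but the mechanism: unresolved liability boundaries translate directly into higher premiums, which is why clarifying custodian liability matters economically.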
In a notable study, the researcher suggested the establishment of an AI Disaster Insurance Programme (AIDIP) as a risk-based compensation scheme. Participation would be obligatory for creators of AI models trained beyond a specified effective compute threshold. The programme’s foundation is a risk-adjusted indemnity fee that developers are required to pay for each training run. It removes insurers from the chain and enables the government to directly shape the incentives of risk-generating parties. Developers may present their own risk probability assessments, which are then evaluated by independent experts and integrated with AI safety research. On the basis of these assessments, risk-based insurance rates are adjusted, establishing a feedback mechanism that keeps premiums aligned with the actual dangers associated with AI systems. Moreover, the initiative promotes cooperation between the government and developers (Trout 2025).
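As a rough reading of the AIDIP mechanism just described, the sketch below charges a risk-adjusted fee only for training runs above a compute threshold and blends the developer’s own risk estimate with the independent expert assessment. Every threshold, rate, and weight here is a hypothetical placeholder chosen by the present author, not a figure from Trout (2025).

```python
COMPUTE_THRESHOLD_FLOP = 1e25   # hypothetical effective-compute trigger
BASE_RATE = 0.001               # hypothetical fee per dollar of training cost


def aidip_fee(training_flop: float, training_cost: float,
              developer_risk: float, expert_risk: float) -> float:
    """Risk-adjusted indemnity fee for one training run (illustrative).

    The developer's own risk estimate is blended with the independent
    expert assessment, creating the feedback loop the programme relies on.
    """
    if training_flop < COMPUTE_THRESHOLD_FLOP:
        return 0.0  # below the threshold, participation is not mandatory
    blended_risk = 0.5 * developer_risk + 0.5 * expert_risk  # assumed equal weights
    return training_cost * BASE_RATE * (1 + blended_risk)


# A run above the threshold where experts judge the risk higher than the developer:
print(aidip_fee(5e25, 10_000_000, developer_risk=0.2, expert_risk=0.6))  # -> 14000.0
```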
The AIDIP framework links liability to insurance, financially protecting AI operators while compensating third parties harmed by AI systems. It is submitted that this proposal deserves attention; it appears well suited to addressing the high-risk harms of artificial intelligence, including those that may threaten humanity. Insurance companies, however, will still play a prominent role in covering low- or medium-risk AI risks in the future.

3.3. Compensating the Injured Party Decisively

Indeed, the custodian of an AI system can evade liability by proving an interruption of the causal relationship, such as the occurrence of a force majeure event. Additionally, insurance companies assess the risks covered by their policies; if the operation of the AI system causes damage that is not covered by the insurance policy, the insurance company will not be obligated to compensate the insured. Some legal literature also suggests that certain AI systems pose risks that are clearly uninsurable, such as AI systems that manipulate human behaviour (Touzain 2023). Consequently, it is prudent to establish a government guarantee fund designed to cover damages resulting from the operation of AI systems for which civil liability cannot be established and which are not covered by mandatory insurance.
It is worth mentioning that guarantee funds operate on the principle of “automatic compensation”, which entitles the injured party to receive compensation as soon as the damage occurs, without the need to seek a judicial ruling establishing the right to compensation (Abed 2011). This legal mechanism thus ensures that appropriate compensation is provided for damages not covered by civil liability rules, such as those resulting from disasters, and that the injured party or their heirs are compensated immediately after the incident, without pursuing legal action.
This solution has been adopted in South Korean legislation.14 The Minister of Trade, Industry, and Energy may authorise an agency known as “the investment risk guarantee agency” to operate a business that receives money from an intelligent robot investment company in return for an undertaking to compensate that company for a certain amount of the losses it may sustain while investing in any business activity. This coverage accordingly extends to compensation for damages to third parties that may occur due to the operation of AI robots, as a risk of investing in this promising field.
Based on the information provided, the availability of a claim against a guarantee fund for damage caused by the operation of AI systems will depend heavily on the facts of each case. Nevertheless, several common characteristics of the relevant guarantee funds can be identified, as follows (see the sketch after this paragraph): 1—The financing of guarantee funds for compensation arising from the operation of autonomous AI systems will be the responsibility of their operators. 2—The law regulating the guarantee fund must specify the cases in which compensation is due. Compensation will be disbursed from the fund where the risk falls outside the scope of the compulsory insurance policy, where the damage exceeds the legal insurance amount, where responsibility cannot be attributed to a human responsible for the intelligent system that caused the damage, or where the responsible operator has proven that the causal link between the operation of the AI system and the damage was broken by an external cause, such as force majeure. 3—Once all the legal requirements are met, the beneficiary receives compensation without resorting to the judiciary.
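The disbursement conditions enumerated above amount to a simple decision procedure, sketched below in Python. The condition names paraphrase the text, the rule that any one of them suffices reflects the enumeration just given, and the function is illustrative only.

```python
def fund_pays(outside_insurance_scope: bool, damage_exceeds_cover: bool,
              no_identifiable_operator: bool, causal_link_broken: bool) -> bool:
    """Guarantee fund pays automatically if any listed gap in coverage applies."""
    return any([outside_insurance_scope, damage_exceeds_cover,
                no_identifiable_operator, causal_link_broken])


# Example: the operator proved force majeure, so custodial liability fails,
# but the injured party is still compensated without litigation.
print(fund_pays(False, False, False, causal_link_broken=True))  # -> True
```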
Finally, it is important to emphasise that this proposal is currently feasible in light of the Federal Decree-Law Regulating Insurance Activities in the UAE, which authorises the Central Bank to establish guarantee funds. The Central Bank may establish funds with an autonomous legal personality for the purpose of protecting policyholders, beneficiaries, and aggrieved persons. A resolution issued by the CBUAE’s Board of Directors shall specify the method for forming such funds, their objectives, their mechanism of finance, the risks they cover, the benefits they provide when such risks occur, the methods of their termination, and the provisions for their liquidation.15

3.4. Limitation of Liability for AI Operation Claims

The limitation of legal liability is an exception to the general principles of civil liability, which stipulate that all of a debtor’s assets serve as collateral for their debts (Imad and Abdou 2021). Under this exception, an individual is held liable only within specific limits established by the legislator. A notable example is the international legislator’s restriction of liability for maritime claims,16 which is based on various practical considerations that may also apply to claims arising from the operation of AI systems. These considerations include the significant risks associated with investments in both the maritime sector and autonomous AI technologies. Similarly, there should be defined minimum and maximum liability amounts for operators in respect of each material or physical harm caused by AI systems.
Arguably, the importance of limiting the legal liability of AI system operators can be understood from two perspectives: (A) The significant risks associated with investing in this technological field can result in substantial losses for operators, potentially leading them to withdraw from the sector unless their liability for operational damages is restricted. (B) Encouraging insurance companies to provide coverage for civil liability arising from the operation of AI systems presupposes that the operator’s legal liability has a clear and defined limit. In contrast, the rule of limited liability should be excluded where the damage caused by the system results from intentional misconduct. Therefore, if an AI robot is intentionally operated incorrectly and causes damage to a third party, there is no justification for adhering to the rules of limited legal liability; otherwise, this rule would become a reward for an operator who intentionally damages others, which is both illegal and unethical.
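The proposed limitation can be expressed as a clamp between legislated floor and ceiling amounts, with intentional misconduct lifting the cap entirely. In the sketch below, the monetary figures are placeholders invented for illustration; only the clamp-plus-exception structure is taken from the text.

```python
MIN_LIABILITY = 50_000      # hypothetical legislated floor per harm
MAX_LIABILITY = 5_000_000   # hypothetical legislated ceiling per harm


def operator_liability(assessed_damage: float, intentional_misconduct: bool) -> float:
    """Clamp the operator's liability, except where misconduct was intentional."""
    if intentional_misconduct:
        return assessed_damage  # limitation excluded: full liability applies
    return max(MIN_LIABILITY, min(assessed_damage, MAX_LIABILITY))


print(operator_liability(12_000_000, intentional_misconduct=False))  # -> 5000000
print(operator_liability(12_000_000, intentional_misconduct=True))   # -> 12000000
```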

4. Conclusions

In most countries around the world, there is no specific legal regulation governing AI systems. However, the importance of these technologies in our modern world cannot be denied. Legal literature must therefore address the most pressing legal issues that will arise from the increasing reliance on them in the future. In that regard, this study focused on the practical challenge of determining who is responsible for harms arising from the operation of AI systems. The aim is to develop the legal provisions on the liability of custodians of things under UAE law so that they apply to the operators of autonomous AI systems. This approach will enable the UAE to take a leading role in AI and harness the technology’s potential to address some of society’s most challenging problems.
In summary, the study demonstrates through critical analysis that applying the custodianship provisions of UAE law to operators of autonomous AI systems—by expanding the concept of “thing” and relying on legal control rather than actual possession as the criterion for custodianship—provides a suitable solution for achieving legal certainty regarding the operator’s liability for operational damages. However, the safe operation of autonomous AI systems requires additional legal measures, such as mandatory registration and insurance, and, most significantly, a guarantee fund covering damages resulting from the operation of autonomous AI in specific cases.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available in National Highway Traffic Safety Administration, part of the U.S. Department of Transportation at https://www.nhtsa.gov/laws-regulations/standing-general-order-crash-reporting#69376, Figure 1, p. 3.

Acknowledgments

During the preparation of this manuscript, the author used QuillBot (premium version) for text editing. The author has reviewed and edited the output and takes full responsibility for the content of this publication.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Abdou, Mohamed Morsi. 2024. The problem of legal recognition of an artificial intelligence system as an inventor—Comparative study. Journal of Law 1: 317–58. [Google Scholar]
  2. Abdou, Mohamed Morsi, and Shikha Alqydi. 2024. The issue of classifying smart pontoons as ships according to UAE Maritime Law. University of Sharjah (UoS) Journal of Law Sciences 21: 1–26. [Google Scholar]
  3. Abed, Fayed A. A. 2011. Automatic Compensation for Damages through Insurance and Guarantee Funds—A Comparative Study in Egyptian and French Law. Helwan Law Journal for Legal and Economic Studies 25: 11. [Google Scholar]
  4. Abu Dhabi Court of Cassation. 2016. Case Number: 590, UAE. February 1. [Google Scholar]
  5. Al-Sanhouri, Abdelrazaq. 1952. The Mediator in Explanation of the New Civil Law, The Theory of Commitment in General, 3rd ed. Cairo: Egyptian Universities Publishing House Press. [Google Scholar]
  6. Al-Sanhouri, Abdelrazaq. n.d. Explanation of the Civil Law, Property Rights with a Detailed Statement of Things and Money. Beirut: Arab Heritage Revival House.
  7. Bensamoun, Alexandra. 2023. Intelligence artificielle—Maîtriser les risques de l’intelligence artificielle: Entre éthique, responsabilisation et responsabilité. La Semaine Juridique Edition Générale 5: 181. [Google Scholar]
  8. Borghetti, Jean-Sébastien. 2019. Civil Liability for Artificial Intelligence: What Should its Basis Be? La Revue des Juristes de Sciences Po 17: 94–102. [Google Scholar]
  9. Borges, Georg. 2019. New Liability Concepts: The Potential of Insurance and Compensation Funds. In Liability for Artificial Intelligence and the Internet of Things. Baden-Baden: Nomos Verlagsgesellschaft mbH & Co. KG, pp. 145–63. [Google Scholar]
  10. Calo, Ryan. 2015. Robotics and the Lessons of Cyberlaw. California Law Review 103: 538. [Google Scholar]
  11. Cristono Almonte v. Averna Vision & Robotics, Inc. 2015. Case Number: 11–CV–1088 EAW, United States District Court, W.D. New York, 31 August 2015. Available online: https://www.casemine.com/judgement/us/5914fc14add7b049349b1637 (accessed on 21 April 2025).
  12. De Mot, Jeff, and Louis Visscher. 2014. Custodian Liability. New York: Springer. [Google Scholar]
  13. Eling, Martin. 2019. How Insurance Can Mitigate AI Risks. Available online: https://www.brookings.edu/articles/how-insurance-can-mitigate-ai-risks/ (accessed on 16 September 2024).
  14. Européenne Commission. 2020. Intelligence Artificielle, Une Approche Européenne Axée Sur L’excellence et la Confiance. Bruxelles: Office des Publications de l’Union Européenne. [Google Scholar]
  15. Faure, Michael, and Shu Li. 2022. Artificial Intelligence and (Compulsory) Insurance. Journal of European Tort Law 13: 1–24. [Google Scholar] [CrossRef]
  16. French Court of Cassation. 2006. Case Number: 04-15995, Paris, 25 April 2006. Available online: https://www.legifrance.gouv.fr/juri/id/JURITEXT000007499275 (accessed on 21 April 2025).
  17. Gonzalez Prod. Sys., Inc. v. Martinrea Int’l Inc. 2016. Available online: https://law.justia.com/cases/federal/district-courts/michigan/miedce/2:2013cv11544/279694/232/ (accessed on 21 April 2025).
  18. Imad, Al-Din Ahmed, and Mohamed Abdou. 2021. Maritime Law of the United Arab Emirates, 1st ed. Sharjah: University of Sharjah. [Google Scholar]
  19. Kallem, Sreekanth Reddy. 2012. Artificial Intelligence Algorithms. Journal of Computer Engineering 6: 1–8. [Google Scholar] [CrossRef]
  20. Kamina, Pascal. 1996. L’utilisation Finale en Propriété Intellectuelle. Doctoral Dissertation, Université de Poitiers, Poitiers, France. [Google Scholar]
  21. Kingston, John K. C. 2016. Artificial Intelligence and Legal Liability. In Research and Development in Intelligent Systems XXXIII. SGAI 2016. Edited by M. Bramer and M. Petridis. Cham: Springer. [Google Scholar] [CrossRef]
  22. Kumar, Ram Shankar Siva, and Frank Nagle. 2020. The Case for AI Insurance. Harvard Business Review Digital Article, April 29. [Google Scholar]
  23. Lachièze, Christophe. 2020. Intelligence Artificielle: Quel modèle de responsabilité? Dalloz IP/IT 12: 665. [Google Scholar]
  24. Latil, Arnaud. 2024. Droit de l’intelligence artificielle (droit international et droit européen). JurisClasseur Communication Fasc. 988: 6. [Google Scholar]
  25. LEI Packaging, LLC v. Emery Silfurtun Inc. 2015. LEI Packaging, LLC v. Emery Silfurten Incorporated et al, No. 0:2015cv02446 - Document 174 (D. Minn. 2017). Available online: https://law.justia.com/cases/federal/district-courts/minnesota/mndce/0:2015cv02446/148896/174/ (accessed on 21 April 2025).
  26. Lucas, A. 2001. La Responsabilité du Fait des Choses Immatérielles [Responsibility for the Fact of Immaterial Things]. In Études offertes à Pierre Catala. Le droit privé français à la fin du XXe siècle. Paris: Litec, pp. 817–826. [Google Scholar]
  27. Madison, Michael J. 2017. IP Things as Boundary Objects: The Case of the Copyright Work. Laws 6: 13. [Google Scholar] [CrossRef]
  28. Martinez, Rex. 2019. Artificial Intelligence: Distinguishing Between Types & Definitions. Nevada Law Journal 19: 1027. [Google Scholar]
  29. Matthias, Andreas. 2004. The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology 6: 175. [Google Scholar] [CrossRef]
  30. Mazeau, Laurène. 2018. Intelligence artificielle et responsabilité civile: Le cas des logiciels d’aide à la décision en matière médicale [Artificial intelligence and civil liability: The case of medical decision support software]. Revue Pratique de la Prospective et de L’innovation 1: 38–43. [Google Scholar]
  31. Mendoza-Caminade, Alexandra. 2016. Le droit confronté à l’intelligence artificielle des robots: Vers l’émergence de nouveaux concepts juridiques? Recueil Dalloz 8: 445. [Google Scholar]
  32. Mizrahi, Charles. 2019. The Economic Impact of AI Projected To Be Over $14 Trillion. Available online: https://banyanhill.com/economic-impact-ai-14-trillion/ (accessed on 21 April 2025).
  33. Nyholm, Sven, and Jilles Smids. 2016. The Ethics of Accident-Algorithms for Self-Driving Cars: An Applied Trolley Problem? Ethical Theory and Moral Practice 19: 1275–89. [Google Scholar] [CrossRef]
  34. Payas v. Adventist Health Sys./Sunbelt, Inc. 2018. District Court of Appeal of Florida. Available online: https://law.justia.com/cases/florida/second-district-court-of-appeal/2018/16-3615.html (accessed on 21 April 2025).
  35. Pélissier, A. 2001. Possession et Meubles Incorporels. Paris: s.l.:Dalloz-Sirey. [Google Scholar]
  36. Pierre, Philippe. 2023. Responsabilité civile et intelligence artificielle: Une proposition de directive européenne a minima? Responsabilité Civile et Assurances 1: 2. [Google Scholar]
  37. PwC. 2019. Gaining National Competitive Advantage Through Artificial Intelligence (AI). Available online: https://www.pwc.lu/en/advisory/digital-tech-impact/technology/gaining-national-competitive-advantage-through-ai.html (accessed on 19 September 2024).
  38. Rebel, Christopher. 1995. The case for a federal trade secret act. Harvard Journal of Law & Technology 8: 427. [Google Scholar]
  39. Revet, Thierry. 2005. Propriété et droits réels. Revue Trimestrielle de Droit Civil 4: 807. [Google Scholar]
  40. Roubier, P. 1954. Le Droit de la Propriété Industrielle [Industrial Property law]. Paris: Sirey. [Google Scholar]
  41. Scherer, Matthew U. 2016. Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies. Harvard Journal of law & Technology 29: 354. [Google Scholar]
  42. Schmidt-Szaleweski, Joanna, and Jean-Luc Pierre. 1974. Le droit du breveté entre la demande et la délivrance du titre. In Mélanges en l’honneur du Professeur Daniel Bastian. Paris: Librairie Techniques, p. 77. [Google Scholar]
  43. Sharjah Roads and Transport Authority v. Al-Futtaim Motors and Machinery. 2019. Federal Supreme Court, Case number: 585, UAE. January 22. [Google Scholar]
  44. Soyer, Baris, and Andrew Tettenborn. 2022. Artificial intelligence and civil liability—Do we need a new regime? International Journal of Law and Information Technology 30: 385–97. [Google Scholar] [CrossRef]
  45. Tekale, Komal Manohar, and Gowtham Reddy Enjam. 2024. AI Liability Insurance: Covering Algorithmic Decision-Making Risks. International Journal of AI, BigData, Computational and Management Studies 5: 151–59. [Google Scholar] [CrossRef]
  46. Touzain, Antoine. 2023. Les perspectives liées à l’intelligence artificielle—Au titre des perspectives du droit des assurances au quart du XXIe siècle. Bulletin Juridique des Assurances 88: 2. [Google Scholar]
  47. Tricoire, Emmanuel. 2008. La responsabilité du fait des choses immatérielles [Responsibility for immaterial things]. In Mélanges en l’honneur de Philippe Le Tourneau. Paris: Dalloz, pp. 983–1002. [Google Scholar]
  48. Trout, Cristian. 2025. Insuring Uninsurable Risks from AI: Government as Insurer of Last Resort. Paper presented at Generative AI and Law Workshop at the International Conference on Machine Learning (ICML 2024), Vienna, Austria, July 22–27. [Google Scholar]
  49. van Genderen, Robert H. 2018. Do We Need New Legal Personhood in the Age of Robots and AI? In Robotics, AI and the Future of Law. Singapore: Springer Publishers, pp. 15–50. [Google Scholar]
  50. Varošanec, Ida. 2022. On the path to the future: Mapping the notion oftransparency in the EU regulatory framework for AI. International Review of Law, Computers & Technology 36: 95–117. [Google Scholar]
  51. Wagner, Gerhard. 2011. Custodian’s Liability in European Private Law. In Handbook of European Private Law. Edited by Jürgen Basedow, Klaus J. Hopt and Reinhard Zimmermann. Amsterdam: Elsevier. Available online: https://ssrn.com/abstract=1766138 (accessed on 18 February 2025).
  52. Wendehorst, Christiane. 2020. Strict Liability for AI and other Emerging Technologies. Journal of European Tort Law 11: 150–80. [Google Scholar] [CrossRef]
  53. Wendehorst, Christiane. 2022. Liability for Artificial Intelligence: The Need to Address Both Safety Risks and Fundamental Rights Risks. In The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Edited by Silja Voeneky, Philipp Kellmeyer, Oliver Mueller and Wolfram Burgard. Cambridge: Cambridge University Press, pp. 187–209. [Google Scholar]
  54. Weaver, John Frank. 2014. We Need to Pass Legislation on Artificial Intelligence Early and Often. Available online: https://slate.com/technology/2014/09/we-need-to-pass-artificial-intelligence-laws-early-and-often.html (accessed on 8 March 2024).
  55. Yanisky-Ravid, Shlomit, and Xiaoqiong (Jackie) Liu. 2018. When Artificial Intelligence Systems Produce Inventions: The 3A Era and an Alternative Model for Patent Law. Cardozo Law Review 39: 2215–63. [Google Scholar] [CrossRef]
  56. Zech, Herbert. 2021. Liability for AI: Public Policy Considerations. ERA-Forum 22: 147–58. [Google Scholar] [CrossRef]
Notes
1. In accordance with the general rules of contractual and civil liability, the US judiciary has rendered multiple rulings on liability for damage caused by artificial intelligence systems. These rulings address, among other things, manufacturer liability for manufacturing defects or negligent maintenance, owner or operator liability for the misuse of a robot, and user liability for improper operation or operation outside the intended purpose.
2. "Operator" means a provider, product manufacturer, deployer, authorised representative, importer or distributor: Article 3 of the EU Artificial Intelligence Act.
3. Article 282 of the UAE Civil Transactions Code: "Any harm done to another shall render the actor, even though not a person of discretion, liable to make good the harm".
4. The United Arab Emirates has launched a comprehensive National Artificial Intelligence Strategy 2031, which sets a clear vision to transform the country into a global leader in artificial intelligence by investing in key human capital and priority sectors. The strategy outlines objectives to build a fertile ecosystem for AI development, attract international collaboration, and support innovation within the UAE, thereby fostering an environment conducive to technological growth and the practical implementation of AI solutions.
5. Article (1) of Federal Decree-Law No. (46) of 2021 on Electronic Transactions and Trust Services.
6. Article (1) of Law No. (9) of 2023 Regulating the Operation of Autonomous Vehicles in the Emirate of Dubai.
7. Article 3(1) of the Artificial Intelligence Act.
8. Article 1242, paragraph 1 of the French Civil Code: "One is liable not only for damage caused by one's own actions, but also for damage caused by the actions of persons for whom one is responsible, or by things in one's care".
9. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).
10. Recital (5) of the Artificial Intelligence Act provides that, "depending on the circumstances regarding its specific application, use, and level of technological development, AI may generate risks and cause harm to public interests and fundamental rights that are protected by Union law. Such harm might be material or immaterial, including physical, psychological, societal or economic harm".
11. Article (5) of the Federal Decree-Law No. (48) of 2023 Regulating Insurance Activities.
12. Article (6) of the EU AI Act: classification rules for high-risk AI systems.
13. aiSure™, Munich Re, Insure AI. Available online: https://www.munichre.com/en/solutions/for-industry-clients/insure-ai.html (accessed on 16 December 2025).
14. Article (27) of the Intelligent Robots Development and Distribution Promotion Act (South Korea), Act No. 9014 of 28 March 2008, as amended by Act No. 11690 of 23 March 2013.
15. Article (7) of the Federal Decree-Law No. (48) of 2023 Regulating Insurance Activities.
16. See the Convention on Limitation of Liability for Maritime Claims (LLMC). Available online: https://treaties.un.org/doc/Publication/UNTS/Volume%201456/volume-1456-I-24635-English.pdf (accessed on 7 November 2025).
Figure 1. Self-Driving Car Accidents (2021–2025). Source: National Highway Traffic Safety Administration, U.S.
Figure 2. Custodian Liability in UAE Law. Source: Author's own elaboration.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
