The moral and legal interpretation of autonomous vehicles’ decision-making mechanisms raises not only technical but also profound normative questions. The study shows that classic moral dilemmas in philosophy (especially the trolley problem) are still relevant, but they can only be applied to a limited extent to the complexity of real-life traffic situations. At the same time, analyzing these dilemmas helps sensitize engineers and legislators involved in development to the nature of moral decisions. In the field of algorithmic decision-making, it is becoming clear that encoding legal norms and moral preferences involves inevitable compromises and raises the question of reevaluating the moral content of law. Technological approaches ultimately point in two directions: a balance must be found between minimalism, which seeks to avoid dilemmas, and the risky but necessary undertaking of ethical programming—all while establishing a legal framework that ensures the social acceptance of algorithmic decisions.
3.1. The Legacy of the Doctrine of Double Effect and the Trolley Problem—From Philosophy to Software?
The examination of the inevitable negative consequences of an action has long been a subject of debate in philosophy [
4]. St. Thomas Aquinas’s thought can be considered a precursor to the trolley dilemma, as he held that human actions are governed by a “golden rule”: bonum ex integra causa, malum ex quovis defectu (an act is good only if it is good in every respect, and evil if it is defective in any respect) [
5]. Nevertheless, he also focused on situations where there is no right answer because the problem always causes some inevitable evil [
6]. The philosopher held that killing committed in self-defense can sometimes be permissible, or more precisely, that killing an attacker is justifiable. This case is unique because it results in two effects: one primarily good and the other primarily bad—the good effect corresponds to the actor’s intention (saving their own life), while the bad effect is unintended but unavoidable (praeter intentionem). In moral theology, this principle is known as the doctrine of double effect (DDE), which states that a harmful consequence may be morally acceptable if (1) the act itself is morally good or at least neutral; (2) the harmful effect is not intended, only foreseen; (3) the good is not achieved by means of the harmful effect; and (4) there is a proportionate moral reason behind the decision [
5,
7]. This “test” later became an important milestone in casuistry, offering a moral account of life-and-death dilemmas. Once separated from moral theology, the doctrine gained ground in secular ethics, where ethicists used it to address problems such as the killing of innocents (how to justify an action whose unintended consequence is the death of another person). Death is therefore a bad effect that is unintended but inevitable [
8,
9].
One of the best-known (but now much-debated) frameworks for the ethical assessment of autonomous vehicles is the trolley dilemma. This thought experiment originally appeared in Philippa Foot’s 1967 paper, which discussed the moral application of the principle of double effect in relation to abortion [
10]. The basic problem is whether it is morally justifiable to sacrifice one person’s life in exchange for saving several others. Foot’s example is the classic switchman case: a runaway tram is heading towards five people, but if we switch the points, there is only one person standing on the other track—and he will die. The main question is: should we actively intervene and kill one person to save five, or should we remain passive and simply let the tram kill the five people “on its own” [
11]? Foot’s theory emphasizes the distinction between negative and positive duties: it is not the same to harm someone through active action as it is to fail to help someone through inaction.
Foot’s thought experiment was most influentially developed by Judith Jarvis Thomson, who created new versions: for example, pushing the fat man standing on the bridge, or the “loop” case, in which sacrificing the man on the loop track directly stops the tram, thus saving the other five [
12]. With these examples, the author points out that the moral judgment of using someone as a means differs from that of someone “just” becoming a victim of events—even if the outcome is numerically the same. Different versions of the problem (such as the “Loop” and “Transplant” cases) demonstrate that our intuitive moral judgment varies significantly depending on whether death occurs actively or passively. For example, many people consider pushing a fat man off a bridge to be more morally reprehensible than diverting the tram with a switch, even if the outcome is the same [
13,
14].
In the literature, Lin pointed out that nowadays we should imagine the classic thought experiment not with trams, but with self-driving cars [
15]. Shortly thereafter, several unfortunate accidents involving Uber and Tesla cars operating in self-driving mode confirmed that the trolley dilemma is a useful tool for expressing the tension between the boundaries of law, morality, and technology [
16]. The main question remains the same in relation to the trolley case and traffic emergencies involving self-driving cars: what is the right thing to do, and how do we judge death as an unintended consequence? The difference between intelligent vehicles and traditional ones is that self-driving cars do not depend on human cooperation; the machine itself “decides” through algorithms. For designers of self-driving software, such decision-making constraints have now become a reality. In recent years, there has been growing criticism questioning the applicability of the trolley dilemma in the context of AI-based transportation systems. Criticisms most often concern the thought experiment’s level of abstraction, its lack of real-world relevance, and the social distortions it brings to the surface.
It would be impossible to summarize all the criticisms, so we will now provide a brief overview of the most relevant ones. It is clear from the above description that trolley dilemmas are generally limited to binary decisions, but real-life traffic situations are much more complex and involve multiple variables (e.g., speed, braking distance, traffic violations, environmental conditions, etc.). The thought experiment does not authentically reflect the reality of road decisions, as there are countless dynamic factors on the roads and, as a result, several possible maneuvers may arise [
17]. Compared to the example reduced to two fixed outcomes, in reality the specific outcome of dangerous situations is uncertain, the effectiveness of emergency maneuvers is not guaranteed, and it is impossible to calculate in advance who would be injured [
18]. Other studies have found that dilemma situations are statistically extremely rare in modern autonomous systems. The focus of development is not on ethical choices, but on collision avoidance, i.e., preventing trolley dilemma-type situations [
19].
The results of the highly influential Moral Machine Experiment were published in 2018, after nearly 10 million people worldwide took part in an online dilemma questionnaire. The project confirmed that moral decisions are strongly influenced by culture, social status, and personal values. There were differing results in terms of whether people were willing to sacrifice the elderly for the young, pedestrians for passengers, or rule-followers for rule-breakers [
20], and it also pointed out that certain preferences for “socially useful” roles can be observed (e.g., respondents valued the life of a doctor more highly than that of a homeless person). In fact, Edmonds had already touched on this years earlier when, in relation to the classic trolley dilemma (without self-driving cars), he argued that moral intuition is not universal but culturally determined and even socially and psychologically variable [
14].
The social distortions arising in connection with the resolution of paradigmatic cases raise some rather sensitive issues—concerns already voiced in connection with the Moral Machine. In situations where the system “assigns value” to human lives (e.g., young vs. old), there is a risk that the decision-making logic will become discriminatory, even if unintentionally. Such “coded morality” is not only ethically questionable, but may also be legally contestable, especially in light of fundamental rights principles. All this leads to the conclusion that the thought experiment distracts attention from issues of institutional and social justice in the field of transportation [
21]. However, the “social contract” that prevails in transportation must include not only moral but also political legitimacy: people need to know what principles a car will use to “make decisions” and they need to be able to agree to them as social actors [
22].
The common denominator of these perspectives is the claim that the thought experiment misrepresents high-stakes ethical decisions and describes ethical decision-making in a misleading way [
23]—despite this, however, the methodological role of the trolley dilemma remains significant. Today, the literature tends to view the thought experiment as a means of raising awareness of moral considerations and giving them space in the design of algorithms [
24]. The thought experiment should not be seen as a model of decision-making practice, but rather as a means of clarifying various moral beliefs, thereby providing clear information that can be used in practice. The case of the runaway train and the other case variants may be extreme, but mathematics and physics are not without extreme examples either [
25]. Exploring dilemmas is therefore a useful exercise even if they do not offer concrete and reassuring solutions, as they show what types of questions cannot be answered by technology or law alone. For this reason, the further question raised by the analysis is this: what patterns and norms guide the programming?
3.2. Algorithmic Decision-Making and Legal-Moral Coding
How the software built into self-driving cars “decides” is not only an ethical issue, but also a legal one. It is clear that the vehicle is programmed to be rule-abiding: during the learning process, the self-driving vehicle acquires the legal regulations governing the world of transportation and must judge real-life scenarios on that basis. Its participation in traffic involves numerous machine “decisions,” which are formulated and implemented with the help of the applicable transportation legislation.
The question is what methods are available for vehicles to acquire traffic know-how. The main goal of these developments is to reduce traffic accidents as self-driving vehicles spread, thereby improving road safety and social mobility. To this end, various learning methods and algorithmic techniques are used during development to enable the vehicle to perceive its environment and make safe navigation decisions [
26]. Deep learning, mostly implemented with convolutional neural networks (CNNs), is particularly noteworthy; these networks are used for image processing and interpreting the environment, and are extremely helpful in, for example, obstacle, vehicle, and pedestrian detection [
27]. Supervised learning works with labeled data sets, which can be useful for detecting traffic signs or recognizing objects [
28]. In reinforcement learning (RL), the self-driving vehicle learns what the optimal decision is in a simulated environment, receiving continuous feedback based on its actions. This can be an important method, especially for learning more complex traffic situations [
29]. It is well known that AVs are equipped with various sensors, so during the learning process the system learns, through sensor fusion, to combine and use data from these sensors (camera, LiDAR, radar, etc.) while driving [
30]. In addition to the myriad of methods, it is also important to mention the area of decision-making and control, which is best taught to the vehicle through behavior trees; this can be supplemented by deep learning, mainly for the purpose of properly handling dynamic traffic situations [
31]. It is likewise not possible to provide an exhaustive list of methods here, but the point is that their combination enables self-driving cars to perceive all dynamic elements in their environment and react to unexpected situations. During development, it is also important that the vehicle be able to learn from data that reflects real, live environments, which is why experts use various simulation tests [
32].
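To make the interplay of these methods more tangible, the following minimal sketch (in Python) illustrates how a behavior-tree-style decision layer might sit on top of fused perception data; the node names, the blackboard fields, and the thresholds are purely illustrative assumptions and do not describe any production system.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Minimal behavior-tree primitives: a Sequence succeeds only if all of its
# children succeed; a Selector succeeds at the first child that succeeds,
# which encodes a priority ordering among behaviors.
@dataclass
class Node:
    name: str
    tick: Callable[[Dict], bool]

def sequence(name: str, children: List[Node]) -> Node:
    return Node(name, lambda bb: all(c.tick(bb) for c in children))

def selector(name: str, children: List[Node]) -> Node:
    return Node(name, lambda bb: any(c.tick(bb) for c in children))

# Leaf nodes read from a shared "blackboard" that, in a real stack, would be
# filled by the perception and sensor-fusion modules (camera, LiDAR, radar).
obstacle_close = Node("obstacle_close",
                      lambda bb: bb["nearest_obstacle_m"] < bb["braking_distance_m"])
emergency_brake = Node("emergency_brake",
                       lambda bb: bb.update({"command": "BRAKE_MAX"}) or True)
follow_lane = Node("follow_lane",
                   lambda bb: bb.update({"command": "KEEP_LANE"}) or True)

# Priority ordering: handle an imminent obstacle first, otherwise keep the lane.
root = selector("drive", [
    sequence("handle_obstacle", [obstacle_close, emergency_brake]),
    follow_lane,
])

blackboard = {"nearest_obstacle_m": 12.0, "braking_distance_m": 18.0}
root.tick(blackboard)
print(blackboard["command"])  # -> BRAKE_MAX
```

In a real system, the leaf conditions would be fed by the learned perception components discussed above, and deep learning could supplement the tree for dynamic traffic situations, as the cited literature suggests.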
Taking all this into account, it is at first glance difficult to imagine teaching vehicles that learn in this way such high-level moral commands as might arise in a trolley dilemma-like situation (e.g., “never intentionally kill an innocent person”; “sacrificing one human life is morally preferable to causing the deaths of five people by merely braking”). The aforementioned MIT research has already pointed out that moral norms and preferences are heterogeneous [
33], so opinions differ considerably as to whose life it is right to save (depending on geographical, cultural, age, social, and other factors). It is very difficult to incorporate these into a single algorithm or to select a consensus moral principle, as there is no “universal morality” about what is right in these extreme traffic situations. Human moral decision-making is also context-dependent and vague, and resists algorithmization. Philosophers have long debated the fundamental questions (mainly utilitarian and deontological ethicists “clash”) but have not arrived at a clear moral “formula.” Moral preferences cannot therefore be translated into a formal specification [
34]. If we try to do so anyway, there is a risk that the program will become too rigid and blindly apply moral prescriptions, ignoring the specific circumstances of the case. In such situations, the software of an autonomous vehicle cannot exercise flexible judgment like a human being.
In addition to this “moral coding,” let us look at the situation regarding the acquisition of traffic rules. As already mentioned, traffic is primarily based on legal regulations, which can be taught to self-driving software using machine learning algorithms. This study does not deal with the algorithmic learning of these rules (although this is undoubtedly an exciting and complex research question) but focuses on the question of what role these rules play in traffic in relation to or alongside moral norms. In extreme situations such as the trolley problem, the question arises as to why the self-driving car “decided” as it did, i.e., why the maneuver caused the death of person X or Y. The difficulty is that we are faced with a lack of explainability: if a deep-learning-based system such as self-driving software causes an accident, it is difficult to answer the question of why afterwards. Machine decision-making has many special characteristics (and here again, the differences can be formulated in comparison to human decision-making): data-centric operation is based on a different grammar, and its building blocks are information and behavior (rather than meaning and action); there is no community practice to aid decision-making—the machine can only resort to the interpretation given during programming; the machine does not know the freedom of deliberation; software does not have the same capacity for abstraction as humans; and, when applying legal language and legal rules, the machine does not recognize the narratives and meta-texts behind the rules, which often help humans navigate the maze of argumentation and decision-making [
35,
36]. Researchers are therefore calling for the application of AI design principles such as “Explainable AI”, i.e., explainable, verifiable algorithms [
37], and “value alignment”, i.e., aligning AI values with human values [
38]. However, there has been no breakthrough in this area either: “fully ethical” machine development remains an unsolved problem, and compromises will likely be necessary during the design process.
In modern constitutional states, it is a fundamental principle that the law does not necessarily sanction all moral evils and does not prescribe all moral goods [
39]—there is a certain separation (e.g., bad thoughts are not punishable, self-sacrifice is not mandatory, etc.). In the case of autonomous vehicles, however, it seems that programming forces the law to take a moral stance. The legal question is: is it right for the law to “codify morality” through machine decisions? Some believe it is, because this guarantees socially acceptable behavior and public trust. This view is reminiscent of the natural law tradition, according to which law reflects ethical values [
40]. Others, however, warn that the law can become inflexible if it prescribes morality too strictly: what if, in a specific case, the legally prescribed algorithm leads to a worse outcome than human judgment would? For example, imagine an extreme situation where the law prohibits the sacrifice of any life, so the car cannot swerve—but as a result, five people die. In this case, the morality prescribed by law has paradoxically caused greater harm. This is, of course, an extreme example, but it shows that an inflexible ethical code carries risks [
41].
There is a view that when an algorithm decides on life and death, it is in fact an exercise of political power. If this decision is made by the legislator (e.g., stating that the software cannot kill the passenger under any circumstances), then the moral choice becomes part of the democratic process. However, if engineers at private companies decide on the code, then there is a democratic deficit: private actors determine whose life takes priority in a crisis. This also affects social justice, as companies have significant economic and political power and do not make decisions under the same conditions of “mutual vulnerability” as individual drivers would. A multinational company can afford to lobby for its own interests or shift responsibility, while an ordinary driver cannot [
42]. It is therefore important that representatives of society (legislators) are involved in determining the principles governing algorithms—this will ensure some kind of collective responsibility [
43]. Of course, the task is not easy: politicians are reluctant to take responsibility for “deciding who the car should kill,” as this is unpopular. However, they can at least provide a framework at the level of basic principles.
In clarifying the philosophical roots of the thought experiment, we also touched upon the possible discrepancy between what is legally right and what is morally right [
44]. From a legal theory perspective, the problem of “tragic choices” arises: the law generally seeks to avoid condoning the killing of an innocent person, even in an emergency. This is a deeply rooted principle, with the aim of preventing instrumentalization (treating anyone as a means to an end). At the same time, from a moral point of view, we may sometimes consider the lesser evil acceptable (see utilitarianism). In the relationship between law and morality, the question in relation to autonomous cars is: to what extent can the law allow for “moral utilitarianism” [
45]? Legal systems generally take a deontological stance: it is forbidden to kill and it is also forbidden to differentiate between lives. The task of legal theory here is to clarify the extent to which the public good (saving more lives) can override individual rights (the prohibition of—intentionally—taking a life). Although there is no simple answer to this question, modern legal systems are unanimously based on the principle of equal dignity, so it is likely that the protection of individual rights will continue to be of paramount importance in most legal cultures [
46]. Therefore, the law will tend to restrict what the software can do (negative prescriptions: what is not allowed) rather than positively prescribe a strategy of “killing the fewer to save the more.”
The above line of reasoning supports the view that the legal and moral justification of actions, i.e., the relationship between law and morality, takes on a new dimension in the case of autonomous vehicles. The law is forced to take a stand on moral dilemmas and strike a balance between protecting individuals from machine decisions and enabling the life-saving potential of technology to be exploited. From a legal theory perspective, this issue also involves the law giving legitimacy to algorithmic decisions—after all, if citizens know that the machine operates on the basis of principles laid down in law (and thus agreed upon by the public), they are more likely to accept the result, even if it is tragic. If, on the other hand, the decision-making mechanism is unclear or driven by private interests, it undermines the rule of law and public trust [
47].
The question is: where do regulations concerning the decision-making algorithms of self-driving vehicles stand today, and is there a starting point that responds in some way to the dilemmas discussed above? One of the most significant current developments in the regulatory framework for the moral decisions of autonomous vehicles is the European Union’s regulation on artificial intelligence, the AI Act. The aim of the regulation is to provide legally binding, horizontally applicable rules for the development, placing on the market, and use of artificial intelligence systems. The regulation is based on a four-tier risk classification system that distinguishes between prohibited (unacceptable-risk), high-risk, limited-risk, and minimal-risk AI systems. Autonomous vehicles fall into the high-risk category, which entails strict compliance, traceability, and documentation requirements. This means that manufacturers must meet strict requirements: they must carry out extensive risk management procedures, ensure human oversight, transparency, safe design, and accountability after a failure. The AI Act also prohibits certain specific practices, such as the use of AI that seriously violates fundamental rights or subliminally manipulates people [
48].
A central procedural safeguard introduced by the AI Act for high-risk systems is the requirement of human oversight [
48]. This requirement is not only technical but also has legal and ethical significance. The requirement for human control recognizes that algorithmic decision-making is not free from moral and social value judgments, and therefore it is necessary to preserve human responsibility. The regulation is not neutral on this point: transparency, accountability, and respect for fundamental rights as principles stem from the European human rights tradition and represent a distinctly normative orientation. The regulation not only serves security and reliability but also seeks to integrate the principles of the rule of law and human rights into the operation of AI systems [
49].
While these obligations are often discussed as legal or institutional requirements, in practice they also demand concrete technical integration into the software and system architecture of autonomous vehicles. Traceability, for instance, must be ensured through multi-layered decision-logging systems that go beyond traditional event data recorders. AI Act–compliant traceability requires documenting the inputs and intermediate states of perception and planning modules, retaining the parameters that influence decision trees or neural network outputs, and maintaining an audit trail that enables independent post-incident reconstruction [
50]. Likewise, human oversight must be operationalized through supervisory control layers: at lower levels of automation via driver monitoring systems that assess attention and readiness to intervene, and at higher automation levels through “human-on-the-loop” mechanisms allowing remote operators to enforce safe fallback modes or override certain system behaviors [
51]. Transparency requires modular, well-documented system design, including clear specification of the operational design domain (ODD), fallback strategies, and the normative assumptions embedded in the software’s decision-making constraints [
52]. Together, these mechanisms illustrate how the AI Act’s procedural safeguards translate directly into technical design choices.
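By way of illustration only, a decision-logging layer of the kind sketched below could support such traceability; the record fields, the JSON-lines format, and the file path are hypothetical design choices, not requirements taken from the AI Act or from any existing AV platform.

```python
import json
import time
from dataclasses import dataclass, asdict
from typing import Any, Dict, List

@dataclass
class DecisionRecord:
    """One planning cycle: what the system saw, which options it weighed, what it chose."""
    timestamp: float
    perception_summary: Dict[str, Any]   # e.g., detected objects with confidence scores
    candidate_maneuvers: List[str]       # options generated by the planner
    chosen_maneuver: str
    constraints_applied: List[str]       # legal/safety rules that pruned or ranked options
    model_version: str                   # ties the decision to a specific software state

class AuditTrail:
    """Append-only JSON-lines log intended to support post-incident reconstruction."""
    def __init__(self, path: str):
        self.path = path

    def append(self, record: DecisionRecord) -> None:
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical usage inside the planning loop:
trail = AuditTrail("decision_log.jsonl")
trail.append(DecisionRecord(
    timestamp=time.time(),
    perception_summary={"pedestrian": 0.94, "vehicle_ahead": 0.88},
    candidate_maneuvers=["brake", "keep_lane", "swerve_left"],
    chosen_maneuver="brake",
    constraints_applied=["no_lane_change_below_confidence_threshold"],
    model_version="planner-v0.1-example",
))
```

An independent investigator could later replay such records to reconstruct which options the planner considered and which constraints removed them.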
However, the legal regulation does not provide a direct answer to the content of moral decisions that may be made by autonomous vehicles. The AI Act does not define moral algorithms, does not encode the correct answer to dilemmas, and does not seek to create moral artificial intelligence. Instead, it prescribes procedural and institutional safeguards and requires that social values, fundamental rights, and the minimization of harm be taken into account in AI development. The emphasis is therefore not on the moral basis of individual decisions, but on the framework for legitimizing decisions and ensuring that they comply with the norms accepted by European society [
53].
3.3. Technological Development Trends: Avoiding Dilemmas vs. Ethical Programming
Designers of autonomous vehicles generally follow two principal strategies when confronting situations that might involve unavoidable harm: they either seek to avoid the necessity of making morally charged decisions, or they attempt to implement explicit ethical decision rules into the vehicle’s decision architecture [
54]. These approaches (often labelled, respectively, as dilemma avoidance and ethical programming) appear conceptually distinct, yet in practice they form part of a continuum of technological, legal and moral considerations.
Dilemma avoidance rests on a cautious, minimalist philosophy that aims to prevent the autonomous vehicle from entering morally problematic scenarios in the first place. This strategy involves designing the vehicle to adopt defensive driving principles: maintaining safe following distances, reducing speed in uncertain settings, and behaving conservatively in complex environments [
55]. In critical moments, braking and attempting to stop is treated as the primary response. From a technical perspective, this corresponds to the encoding of hierarchical priorities in the behavior-planning module, where emergency braking is a high-priority or hard constraint, and steering is permitted only when the system can predict with sufficient confidence that the maneuver will not introduce additional, unpredictable risks. This logic makes the vehicle’s actions more predictable and aligns with deontological legal reasoning, which places emphasis on avoiding intentional harm. It also resonates with legal caution and public acceptance: if the system does not actively choose between potential victims, the question of explicit moral responsibility appears less acute. The well-known debate around the so-called “Mercedes rule”—according to which passenger protection was initially suggested as an overriding priority, before being rejected in Germany—shows how sensitive such issues are and how strongly legal norms constrain technical design [
56,
57].
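A minimal sketch of how such a braking-first priority could be expressed in the behavior-planning module is given below; the confidence threshold and the Boolean risk input are invented placeholders meant only to show the shape of the logic.

```python
def emergency_response(braking_avoids_collision: bool,
                       swerve_safety_confidence: float,
                       confidence_threshold: float = 0.95) -> str:
    """Dilemma-avoidance logic: emergency braking is the hard-priority default.
    Steering is permitted only if braking alone cannot avert the collision AND
    the planner is highly confident the swerve adds no new, unpredictable risk."""
    if braking_avoids_collision:
        return "EMERGENCY_BRAKE"
    if swerve_safety_confidence >= confidence_threshold:
        return "SWERVE"
    return "EMERGENCY_BRAKE"  # conservative fallback even when braking may not suffice

# Low confidence in the swerve keeps the conservative in-lane response.
print(emergency_response(braking_avoids_collision=False, swerve_safety_confidence=0.7))
```

The point of the example is not the specific threshold but the asymmetry it encodes: braking never has to be justified, whereas steering does.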
The minimalist strategy also has technological advantages, because the relevant systems (automatic emergency braking, pedestrian detection, and collision avoidance) are already widely used. Ethical decision-making modules, by contrast, are far more complex and less mature. However, dilemma avoidance has inherent limitations. Not all dangerous situations can be resolved through braking alone. In many real-world cases (such as the “tunnel problem”) the direction in which the vehicle steers, even minimally, influences who is injured. A strictly passive strategy may also create secondary harms: a vehicle that stops in the middle of a roadway might cause subsequent collisions, leading to multiple casualties. Critical scenarios are too varied to be reliably managed by a purely conservative algorithmic response [
58]. These limitations highlight why dilemma avoidance, although technologically appealing and normatively cautious, cannot fully eliminate the moral burden of autonomous decision-making.
The opposing school of thought contends that the autonomous vehicle will inevitably face cases in which it must evaluate competing harms, and therefore it is better to prepare for such situations consciously. Ethical programming attempts to encode moral principles directly into the vehicle’s decision-making processes [
59]. Proponents argue that autonomous vehicles have advantages in crisis situations—stable processing, no emotional interference, and precise sensing—which make their decisions more predictable than those of human drivers. Since the absence of explicit rules also amounts to a kind of decision, these scholars argue that an “ethics switch” or “ethics setting” is needed. Some proposals adapt Rawlsian ideas to algorithmic form, emphasizing the protection of the most vulnerable; others rely on principles derived from legal doctrines, such as the requirement that the vehicle may deviate from its normal trajectory only if the deviation does not disproportionately infringe anyone’s fundamental rights [
60]. Further contributions emphasize responsibility allocation. If, for example, a motorcyclist breaks the law by running a red light and collides with an autonomous vehicle, it seems counterintuitive to require the system to hit an innocent cyclist in order to save the motorcyclist and passenger. Responsibility-sensitive approaches therefore attempt to integrate legal norms and fairness into ethical decision-making, arguing that the vehicle should not override the logic of traffic regulations for the sake of utilitarian optimization [
61,
62].
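The following hypothetical sketch illustrates the spirit of such responsibility-sensitive selection; the scenario, the harm estimates, and the simplified treatment of the vehicle’s own passengers are deliberate assumptions introduced for illustration only.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Maneuver:
    name: str
    endangered_parties: List[str]   # road users put at risk by this maneuver
    expected_harm: float            # coarse, illustrative harm estimate

def choose_maneuver(options: List[Maneuver], at_fault_parties: List[str]) -> Maneuver:
    """Responsibility-sensitive selection (illustrative): do not shift danger onto
    road users who did not contribute to the situation; among the remaining
    options, minimize expected harm."""
    def endangers_innocents(m: Maneuver) -> bool:
        # Ego passengers are excluded here purely to keep the example small.
        return any(p not in at_fault_parties and p != "ego_passengers"
                   for p in m.endangered_parties)

    permissible = [m for m in options if not endangers_innocents(m)]
    candidates = permissible or options   # if no option spares innocents, fall back to all
    return min(candidates, key=lambda m: m.expected_harm)

# Hypothetical scenario: a motorcyclist ran a red light; swerving into a
# law-abiding cyclist is filtered out, so braking in lane is preferred.
options = [
    Maneuver("brake_in_lane", ["motorcyclist", "ego_passengers"], expected_harm=0.4),
    Maneuver("swerve_right", ["cyclist"], expected_harm=0.3),
]
print(choose_maneuver(options, at_fault_parties=["motorcyclist"]).name)  # -> brake_in_lane
```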
Yet ethical programming quickly encounters significant technical and conceptual challenges. Implementing complex moral doctrines, such as the doctrine of double effect, requires the system to distinguish between intended and merely foreseen harm, to assess proportionality, and to determine whether a given harm is a means or a side effect. These distinctions may be represented in symbolic rule-based or hybrid architectures, but are beyond the capabilities of end-to-end neural networks, which lack any representation of intention. Even in hybrid systems, real-time decision-making faces severe constraints: the vehicle has only milliseconds to act in emergency conditions, during which it cannot infer moral salience, vulnerability or responsibility. Moreover, tools designed to support ethical alignment in system development—notably Value Sensitive Design (VSD) and Explainable AI (XAI)—face substantial limitations. VSD requires designers to identify relevant values in advance; however, autonomous vehicles operate in highly dynamic environments where such values cannot reliably be interpreted by the system [
63]. XAI, meanwhile, cannot generate meaningful real-time explanations under the temporal constraints of driving, and post hoc explanations often fail to capture the true causal pathways in high-dimensional neural models. As a result, ethical programming often collapses into simplified harm-minimization models, which conflict with legal norms grounded in human dignity and the prohibition of instrumentalization [
64].
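For illustration, the kind of symbolic check a hybrid architecture would need might look like the sketch below; crucially, every input to the function is itself a high-level moral judgment (intention, means versus side effect, proportionality) that, as argued above, the vehicle cannot reliably derive from sensor data in real time.

```python
from dataclasses import dataclass

@dataclass
class ActionAssessment:
    act_permissible_in_itself: bool   # condition 1: the act is not wrong independently of its effects
    harm_is_intended: bool            # condition 2: harm must be foreseen only, not intended
    harm_is_means_to_good: bool       # condition 3: the good must not be achieved through the harm
    proportionality_score: float      # condition 4: 0..1, how strongly the good outweighs the harm

def double_effect_permits(a: ActionAssessment, proportionality_threshold: float = 0.5) -> bool:
    """Naive symbolic rendering of the doctrine of double effect. Each field is a
    moral judgment that no perception or planning module can supply on its own."""
    return (a.act_permissible_in_itself
            and not a.harm_is_intended
            and not a.harm_is_means_to_good
            and a.proportionality_score >= proportionality_threshold)
```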
A further challenge arises from the physical and computational limitations of autonomous vehicle sensor systems. Even if an AV were equipped with a sophisticated moral framework, its ability to apply moral principles would remain limited by what it can perceive. LiDAR point clouds deteriorate in rain, fog or snow; camera systems struggle with glare, shadows, low light, and occlusion; and radar, while robust, lacks the granularity required to distinguish between vulnerable road users and inanimate objects [
65]. Sensor fusion pipelines introduce temporal delays, meaning that the system processes the world with an inherent latency that can reach dozens of milliseconds—an interval that is highly significant in fast-moving traffic scenarios. These constraints make it exceedingly difficult for the system to reconstruct the morally relevant features of a situation: it cannot consistently determine how many individuals are present, which of them are especially vulnerable, who bears responsibility for creating the situation, or whether a given obstacle is a person, an animal or an object [
66]. Moral theories assume access to such distinctions, but AV systems cannot reliably obtain them. As a consequence, many hypothetical ethical choices are not technically implementable, because the system lacks the perceptual grounding necessary to execute them. This discrepancy between moral abstraction and sensor-based reality underscores the fundamental challenge of expecting machine morality in environments where the underlying information is uncertain, incomplete, or ambiguous.
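A simple way to see the consequence of these perceptual limits is the following hypothetical gating check: any “morally informed” maneuver presupposes confident, fresh classifications of the road users involved, and in degraded conditions that precondition simply fails. The thresholds and the data structure are illustrative assumptions, not measured values.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DetectedObject:
    label: str          # e.g., "pedestrian", "cyclist", "unknown"
    confidence: float   # classifier confidence, 0..1
    age_ms: float       # time elapsed since the underlying sensor measurement

def perception_supports_moral_reasoning(objects: List[DetectedObject],
                                        min_confidence: float = 0.9,
                                        max_age_ms: float = 50.0) -> bool:
    """Illustrative gate: a morally loaded maneuver is only meaningful if the system
    can (a) classify the road users involved with high confidence and (b) act on
    data that is not already stale. Thresholds here are arbitrary placeholders."""
    if not objects:
        return False
    return all(o.confidence >= min_confidence and o.age_ms <= max_age_ms
               for o in objects)

# Hypothetical degraded-weather frame: low-confidence, slightly stale detections.
frame = [DetectedObject("pedestrian", 0.62, 80.0), DetectedObject("unknown", 0.41, 80.0)]
print(perception_supports_moral_reasoning(frame))  # -> False: fall back to conservative braking
```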
These technological strategies both reflect and reinforce underlying legal-philosophical commitments. Dilemma avoidance corresponds to a deontological orientation, which prioritizes negative duties and rejects decisions that instrumentalize individuals. Ethical programming, meanwhile, presupposes a form of utilitarian reasoning, insofar as it evaluates competing harms and seeks to reduce total negative outcomes [
67]. However, neither strategy is fully compatible with the regulatory approach of the EU AI Act. The Act does not take a stance on substantive moral doctrines; instead, it imposes procedural safeguards such as traceability, human oversight, transparency and risk management. These requirements demand technical operationalization—decision logging, supervisory control layers and modular explainability—yet avoid endorsing any particular moral theory. Consequently, the Act simultaneously constrains and shapes the development of both dilemma avoidance and ethical programming, without providing substantive guidance on moral prioritization.
Technological development therefore proceeds between two imperfect extremes: one seeks to remove moral decision-making from the equation, while the other attempts to articulate and encode moral rules [
68]. In practice, a combination of the two may prove most realistic. Advanced autonomous systems might be designed to avoid ethically fraught situations as far as possible, while simultaneously incorporating a limited, socially legitimized and legally compliant framework to guide decisions when such situations cannot be prevented. Ultimately, this challenge leads back to the broader question of how law, technology and morality intersect—and to the need for a normative model that is both technically feasible and democratically legitimate.