Article

According to Whose Morals? The Decision-Making Algorithms of Self-Driving Cars and the Limits of the Law

1 Department of Legal Theory, Széchenyi István University, Áldozat utca 12, H-9026 Győr, Hungary
2 Department of Road and Rail Vehicles, Audi Hungaria Faculty of Vehicle Engineering, Széchenyi István University, Egyetem tér 1, H-9026 Győr, Hungary
* Author to whom correspondence should be addressed.
Future Transp. 2026, 6(1), 5; https://doi.org/10.3390/futuretransp6010005
Submission received: 11 November 2025 / Revised: 16 December 2025 / Accepted: 19 December 2025 / Published: 27 December 2025

Abstract

The emergence of self-driving vehicles raises not only technological but also profound moral and legal challenges, especially when the decisions made by these vehicles can affect human lives. The aim of this study is to examine the moral and legal dimensions of algorithmic decision-making and their codifiability, approaching the issue from the perspective of the classic trolley dilemma and the principle of double effect. Using a normative-analytical method, it explores the moral models behind decision-making algorithms, the possibilities and limitations of legal regulation, and the technological and ethical dilemmas of artificial intelligence development. One of the main theses of the study is that in the case of self-driving cars, the programming of moral decisions is not merely a theoretical problem, but also a question requiring legal and social legitimacy. The analysis concludes that, given the nature of this borderline area between law and ethics, it is not always possible to avoid such dilemmas, and therefore it is necessary to develop a public, collective, principle-based normative framework that establishes the social acceptability of algorithmic decision-making.

1. Introduction

The advent of autonomous vehicles is transforming not only the technological foundations of transportation but also its normative, legal and societal parameters. Unlike traditional vehicles, which operate as extensions of human agency, autonomous systems introduce a qualitatively different form of decision-making in which algorithms act—sometimes unavoidably—on matters involving significant risks to human life. As machine perception and planning increasingly replace human judgment, questions arise about how these systems behave in critical situations, how their decisions should be evaluated, and what forms of legal and moral responsibility can be attributed to them.
The progression toward higher levels of automation, particularly at SAE Levels 3–5 [1], challenges long-established assumptions about accountability, agency and the role of human discretion in traffic [2]. In scenarios where a collision cannot be avoided, the system may influence who is harmed, to what extent, and under what conditions [3]. Although such outcomes result from algorithmic processes rather than human intention, they nonetheless possess clear normative significance. This study therefore examines a central practical dilemma: how should an autonomous vehicle “decide” when it faces an unavoidable accident in which human lives are at stake, and how can the resulting harm—or its avoidance—be interpreted within moral and legal frameworks?
Technological development in this domain cannot be separated from normative inquiry. Emerging debates highlight that autonomous vehicle decisions depend not only on engineering design, sensor accuracy and system optimization, but also on the underlying value structures embedded—implicitly or explicitly—into algorithmic behavior. Classic philosophical tools such as the trolley dilemma and the doctrine of double effect remain useful not as literal blueprints for programming, but as analytical instruments for uncovering the moral assumptions and expectations that inform public and legal discourse. At the same time, the regulatory landscape, particularly through the European Union’s AI Act, increasingly requires transparency, human oversight and procedural accountability, indicating that the law recognizes the normative stakes of algorithmic decision-making even without prescribing substantive moral rules.
Accordingly, this study addresses three core research questions: (1) whether autonomous systems are capable of making morally relevant distinctions under real-world sensor and perception constraints; (2) how legal and normative frameworks—especially the AI Act—shape, authorize or limit algorithmic decision-making; and (3) whether a unified model can be developed that integrates moral theory, technical feasibility and regulatory oversight into a coherent normative–technical framework. Unlike previous literature, which typically examines ethical programming, legal standards or engineering limitations in isolation, this paper provides an integrated analysis combining normative theory, sensor-level constraints, ethics-by-design approaches, explainability tools and institutional mechanisms such as education, auditing and certification. Its key contribution lies in proposing a comprehensive ecosystem model for the moral governance of autonomous vehicles, offering a foundation for future engineering and regulatory practice.

2. Research Framework and Analysis Strategy

The study was conducted within a normative-analytical methodological framework, which lies at the intersection of legal theory, moral philosophy, and technological ethics. The aim of the study is not to collect empirical data or draw statistical conclusions, but to critically reconstruct and systematize theoretical concepts, dilemmas, and arguments. The main focus is on mapping the normative structures behind algorithmic decision-making, with particular emphasis on philosophical doctrines relating to the protection of human life and moral responsibility.
The analysis applies a qualitative method of analyzing thought experiments, with particular emphasis on different versions of the trolley problem and the principle of double effect, which serve as normative models for understanding the moral decisions of autonomous vehicles. Through the philosophical and legal interpretation of these conceptual constructs, the normative dilemmas raised by algorithmic decisions can be reconstructed. The methodology is also closely related to the casuistry approach (i.e., the practical ethical tradition of analyzing moral cases), which allows for the theoretical examination and comparison of paradigmatic cases.
The study is based on English-language academic literature, including philosophical classics and analyses by contemporary authors on ethics and legal theory. In addition, relevant documents from the European Union’s legal framework regulating artificial intelligence (AI Act) and a theoretical interpretation of regulatory principles have been included. The methodological approach aims to explore the convergence of these normative and institutional perspectives and to present practical legal and social responses to theoretical dilemmas in a structured manner.
The approach also takes into account that autonomous systems are not merely technological constructs but enter a new normative space through their social institutionalization. Accordingly, the study aims to provide a kind of meta-level framework for understanding what ethical foundations and legal mechanisms can legitimize the decision-making capabilities of such systems, especially when they are forced to make life-and-death choices.

3. Normative Discussion

The moral and legal interpretation of autonomous vehicles’ decision-making mechanisms raises not only technical but also profound normative questions. The study shows that classic moral dilemmas in philosophy (especially the trolley problem) are still relevant, but they can only be applied to a limited extent to the complexity of real-life traffic situations. At the same time, analyzing these dilemmas helps sensitize engineers and legislators involved in development to the nature of moral decisions. In the field of algorithmic decision-making, it is becoming clear that encoding legal norms and moral preferences involves inevitable compromises and raises the question of reevaluating the moral content of law. Technological approaches ultimately point in two directions: a balance must be found between minimalism, which seeks to avoid dilemmas, and the risky but necessary undertaking of ethical programming—all while establishing a legal framework that ensures the social acceptance of algorithmic decisions.

3.1. The Legacy of the Doctrine of Double Effect and the Trolley Problem—From Philosophy to Software?

The examination of the inevitable negative consequences of an action has long been a subject of debate in philosophy [4]. St. Thomas Aquinas can be regarded as a precursor of the trolley dilemma, as he held that human action follows a “golden rule”: bonum ex integra causa, malum ex quovis defectu [5]. Nevertheless, he also examined situations in which there is no right answer because every available option causes some inevitable evil [6]. Aquinas argued that killing in self-defense can sometimes be permissible, or more precisely, that killing an attacker is justifiable. This case is unique because it produces two effects, one primarily good and the other primarily bad: the good effect is what the actor intends (saving their own life), while the bad effect is unintended but unavoidable (praeter intentionem). In moral theology, this principle is known as the doctrine of double effect (DDE), which holds that a harmful consequence may be morally acceptable if: 1. the act itself is morally good or at least neutral; 2. the harmful effect is not intended, only foreseen; 3. the good is not achieved by means of the harmful effect; and 4. there is a proportionate moral reason behind the decision [5,7]. This “test” later became an important milestone in casuistry, through which life-and-death dilemmas could be given a moral explanation. Detached from moral theology, the doctrine spread into secular ethics, where it was used to address problems such as the killing of innocents: how to justify an action whose unintended consequence is the death of another person. Death is thus a bad effect that is unintended but inevitable [8,9].
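The conjunctive structure of the four DDE conditions can be made explicit in a short schematic sketch. This is purely didactic, a restatement of the list above as a boolean conjunction; the function and parameter names are ours, and nothing here is proposed as an actual ethics module for a vehicle.

```python
def double_effect_permissible(act_is_good_or_neutral: bool,
                              harm_is_intended: bool,
                              good_achieved_via_harm: bool,
                              proportionate_reason: bool) -> bool:
    """Schematic restatement of the four DDE conditions: a harmful
    consequence is acceptable only if all four hold simultaneously."""
    return (act_is_good_or_neutral          # 1. the act itself is good or neutral
            and not harm_is_intended        # 2. the harm is foreseen, not intended
            and not good_achieved_via_harm  # 3. the good is not achieved via the harm
            and proportionate_reason)       # 4. a proportionate moral reason exists
```

The sketch makes visible what the later sections argue in prose: the hard part is not the conjunction itself, but deciding how any of these predicates could be evaluated by a machine at all.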
One of the best-known (but now much-debated) frames for the ethical assessment of autonomous vehicles is the trolley dilemma. This thought experiment originally appeared in Philippa Foot’s 1967 paper, which discussed the moral application of the principle of double effect in relation to abortion [10]. The basic problem is whether it is morally justifiable to sacrifice one person’s life in exchange for saving several others. Foot’s example is the classic switchman case: a runaway tram is heading towards five people, but if we switch the points, there is only one person standing on the other track—and he will die. The main question is: should we actively intervene and kill one person to save five, or should we be passive and simply let the train kill the five people “on its own” [11]? Foot’s theory emphasizes the distinction between negative and positive duties: it is not the same to harm someone through active action as it is to fail to help someone through inaction.
The most important developer of Foot’s thought experiment was Judith Jarvis Thomson, who created new versions: for example, pushing the fat man standing on the bridge, or the “loop” case, in which sacrificing the man on the loop track directly stops the tram, thus saving the other five [12]. With these examples, Thomson points out that using someone as a means is judged differently from someone “merely” becoming a victim of events, even when the outcome is numerically the same. Variants of the problem (such as the “Loop” and “Transplant” cases) demonstrate that our intuitive moral judgment varies significantly depending on whether death is brought about actively or passively. For example, many people consider pushing the fat man off the bridge more morally reprehensible than pulling the switch, even if the outcome is the same [13,14].
In the literature, Lin pointed out that nowadays we should imagine the classic thought experiment not with trams, but with self-driving cars [15]. Shortly thereafter, several unfortunate accidents involving Uber and Tesla cars operating in self-driving mode confirmed that the trolley dilemma is a useful tool for expressing the tension between the boundaries of law, morality, and technology [16]. The main question remains the same in relation to the trolley case and traffic emergencies involving self-driving cars: what is the right thing to do, and how do we judge death as an unintended consequence? The difference between intelligent vehicles and traditional vehicles is that self-driving cars do not need to cooperate with humans; the machine itself “decides” through algorithms. For designers of self-driving software, decision-making constraints have now become a reality. In recent years, there has been growing criticism questioning the applicability of the trolley dilemma in the context of AI-based transportation systems. Criticisms most often concern the level of abstraction of the thought experiment, its irrelevance, and the social distortions it brings to the surface.
It would be impossible to summarize all the criticisms, so we will now provide a brief overview of the most relevant ones. It is clear from the above description that trolley dilemmas are generally limited to binary decisions, but real-life traffic situations are much more complex and involve multiple variables (e.g., speed, braking distance, traffic violations, environmental conditions, etc.). The thought experiment does not authentically reflect the reality of road decisions, as there are countless dynamic factors on the roads and, as a result, several possible maneuvers may arise [17]. Compared to the example reduced to two fixed outcomes, in reality the specific outcome of dangerous situations is uncertain, the effectiveness of emergency maneuvers is not guaranteed, and it is impossible to calculate in advance who would be injured [18]. Other studies have found that dilemma situations are statistically extremely rare in modern autonomous systems. The focus of development is not on ethical choices, but on collision avoidance, i.e., preventing trolley dilemma-type situations [19].
The results of the highly influential Moral Machine Experiment were published in 2018, after nearly 10 million people worldwide took part in an online dilemma questionnaire. The project confirmed that moral decisions are strongly influenced by culture, social status, and personal values. There were differing results in terms of whether people were willing to sacrifice the elderly for the young, pedestrians for passengers, or rule-followers for rule-breakers [20], and it also pointed out that certain preferences for “socially useful” roles can be observed (e.g., the life of a doctor is more valuable than that of a homeless person). In fact, Edmonds had already touched on this years earlier when, in relation to the trolley dilemma without self-driving cars, he argued that moral intuition is not universal but culturally determined and even socially and psychologically variable [14].
The social distortions arising in connection with the resolution of paradigmatic cases raise some rather sensitive issues—these have already been raised in connection with the Moral Machine. In situations where the system “assigns value” to human lives (e.g., young vs. old), there is a risk that the decision-making logic will become discriminatory, even if unintentionally. Such “coded morality” is not only ethically questionable, but may also be legally contestable, especially in light of fundamental rights principles. All this leads us to the conclusion that the thought experiment distracts attention from issues of institutional and social justice in the field of transportation [21]. However, the “social contract” that prevails in transportation must include not only moral but also political legitimacy: people need to know what principles a car will use to “make decisions” and they need to be able to agree to them as social actors [22].
The common denominator of these perspectives can also be seen in the fact that the thought experiment mistakenly presents high-stakes ethical decisions and describes ethical decision-making in a misleading way [23]—despite this, however, the methodological role of the trolley dilemma remains significant. Today, the literature tends to view the thought experiment as a means of raising awareness of moral considerations and giving them space in the design of algorithms [24]. The thought experiment should not be seen as a model of decision-making practice, but rather as a means of clarifying various moral beliefs, thereby providing clear information that can be used in practice. The case of the runaway train and the other case variants may be extreme, but mathematics and physics are not without extreme examples either [25]. Exploring dilemmas is therefore a useful exercise even if they do not offer concrete and reassuring solutions, as they show what types of questions cannot be answered by technology or law alone. For this reason, the analysis raises a further question: what patterns and norms are used in programming?

3.2. Algorithmic Decision-Making and Legal-Moral Coding

How the software built into self-driving cars “decides” is not only an ethical issue, but also a legal one. The vehicle is designed to be rule-abiding: during the learning process, the self-driving vehicle acquires the legal regulations governing transportation and must judge real-life scenarios on that basis. Its participation in traffic involves numerous machine “decisions,” which are formulated and implemented with reference to the applicable transportation legislation.
The question is what methods are available for vehicles to acquire traffic know-how. The main goal of development is to reduce traffic accidents as self-driving vehicles spread, thereby improving road safety and social mobility. To this end, various learning methods and algorithmic techniques are used to enable the vehicle to sense its environment and navigate safely [26]. Deep learning is particularly noteworthy, mostly implemented with convolutional neural networks (CNNs); these are used for image processing and interpreting the environment, and are extremely helpful in, for example, obstacle, vehicle, and pedestrian detection [27]. Supervised learning works with labeled data sets, which can be useful for detecting traffic signs or recognizing objects [28]. In reinforcement learning (RL), the self-driving vehicle learns the optimal decision in a simulated environment, receiving continuous feedback on its actions; this is an important method especially for learning more complex traffic situations [29]. AVs are also equipped with various sensors, so during the learning process the system learns, via sensor fusion, to combine and use data from these sensors (camera, LiDAR, radar, etc.) while driving [30]. In addition to this myriad of methods, the area of decision-making and control deserves mention; it is best taught to the vehicle through behavior trees, which can be supplemented by deep learning, mainly to handle dynamic traffic situations properly [31]. An exhaustive list of methods is not possible here, but the point is that their combination enables self-driving cars to perceive the dynamic elements of their environment and react to unexpected situations. During development, it is also important that the vehicle be able to learn from real, live environmental data, so experts use various simulation tests [32].
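Of the techniques listed above, the behavior tree is the one most directly concerned with decision-making, so a minimal sketch may help illustrate how priority-ordered behaviors are composed. This is a toy implementation under our own assumptions; the node types (Selector, Sequence, Condition, Action) are standard behavior-tree concepts, but the specific behaviors, thresholds, and blackboard keys are hypothetical and not drawn from any real AV stack.

```python
from typing import Callable, List

# Minimal behavior-tree sketch. A Selector tries its children in priority
# order until one succeeds; a Sequence succeeds only if all children do.
class Node:
    def tick(self, blackboard: dict) -> bool:
        raise NotImplementedError

class Condition(Node):
    def __init__(self, predicate: Callable[[dict], bool]):
        self.predicate = predicate
    def tick(self, blackboard: dict) -> bool:
        return self.predicate(blackboard)

class Action(Node):
    def __init__(self, name: str):
        self.name = name
    def tick(self, blackboard: dict) -> bool:
        blackboard["decision"] = self.name  # record the chosen behavior
        return True

class Sequence(Node):
    def __init__(self, children: List[Node]):
        self.children = children
    def tick(self, blackboard: dict) -> bool:
        return all(child.tick(blackboard) for child in self.children)

class Selector(Node):
    def __init__(self, children: List[Node]):
        self.children = children
    def tick(self, blackboard: dict) -> bool:
        return any(child.tick(blackboard) for child in self.children)

# Hypothetical priority ordering: braking for a close obstacle outranks
# ordinary lane keeping.
tree = Selector([
    Sequence([Condition(lambda bb: bb["obstacle_distance_m"] < 10.0),
              Action("emergency_brake")]),
    Action("keep_lane"),
])

bb = {"obstacle_distance_m": 6.0}
tree.tick(bb)
print(bb["decision"])  # emergency_brake
```

The point of the structure is that priorities are explicit and inspectable, which is precisely the property that the later discussion of transparency and auditability depends on.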
Taking all this into account, at first glance it is difficult to imagine teaching vehicles that learn in this way such high-level moral commands (e.g., “never intentionally kill an innocent person”; “taking one human life is morally preferable to causing the deaths of five people by braking”), which may arise in a trolley dilemma-like situation. The aforementioned MIT research has already pointed out that moral norms and preferences are heterogeneous [33], so opinions differ considerably as to which human life is right to save (depending on geographical, cultural, age, social, and other factors). It is very difficult to incorporate these into a single algorithm or to select a consensus moral principle, as there is no “universal morality” about what is right in these extreme traffic situations. Human moral decision-making is also context-dependent, vague, and cannot be algorithmized. Philosophers have long debated fundamental questions (mainly utilitarian and deontological ethicists “clash”) but have not arrived at a clear moral “formula.” Moral preferences cannot therefore be translated into a formal specification [34]. If we try to do so anyway, there is a risk that the program will become too rigid and blindly apply moral prescriptions, ignoring the specific circumstances of the case. In such situations, the software of an autonomous vehicle cannot exercise flexible judgment like a human being.
In addition to this “moral coding,” let us look at the situation regarding the acquisition of traffic rules. As already mentioned, traffic is primarily based on legal regulations, which can be taught to self-driving software using machine learning algorithms. This study does not deal with the algorithmic learning of these rules (although this is undoubtedly an exciting and complex research question) but focuses on the question of what role these rules play in traffic in relation to or alongside moral norms. In extreme situations such as the trolley problem, the question arises as to why the self-driving car “decided” as it did, i.e., why did the maneuver cause the death of person X or Y? The difficulty is that we are faced with a lack of explainability: if a deep learning system similar to self-driving software causes an accident, it is difficult to answer the question of why afterwards. Machine decision-making has many special characteristics (and here again, the differences can be formulated in comparison to human decision-making): data-centric operation is based on a different grammar, and its building blocks are information and behavior (rather than meaning and action); there is no community practice to aid decision-making—the machine can only resort to the interpretation given during programming; the machine does not know the freedom of deliberation; software does not have the same capacity for abstraction as humans; legal language and the application of legal rules do not recognize narratives and the meta-texts behind the rules, which often help humans navigate the maze of argumentation and decision-making [35,36]. Researchers are therefore calling for the application of AI design principles such as “Explainable AI”, i.e., explainable, verifiable algorithms [37], and “value alignment”, i.e., aligning AI values with human values [38]. 
However, there has been no breakthrough in this area either: “fully ethical” machine development remains an unsolved problem, and compromises will likely be necessary during the design process.
In modern constitutional states, it is a fundamental principle that the law does not necessarily sanction all moral evils and does not prescribe all moral goods [39]—there is a certain separation (e.g., bad thoughts are not punishable, self-sacrifice is not mandatory, etc.). In the case of autonomous vehicles, however, it seems that programming forces the law to take a moral stance. The legal question is: is it right for the law to “codify morality” through machine decisions? Some believe it is, because this guarantees socially acceptable behavior and public trust. This view is reminiscent of the naturalistic tradition of law, according to which law reflects ethical values [40]. Others, however, warn that the law can become inflexible if it prescribes morality too strictly: what if, in a specific case, the legally prescribed algorithm leads to a worse outcome than human judgment would? For example, imagine an extreme situation where the law prohibits the sacrifice of any life, so the car cannot swerve—but as a result, five people die. In this case, the morality prescribed by law has paradoxically caused greater harm. This is, of course, an extreme example, but it shows that an inflexible ethical code carries risks [41].
There is a view that when an algorithm decides on life and death, it is in fact an exercise of political power. If this decision is made by the legislator (e.g., stating that the software cannot kill the passenger under any circumstances), then the moral choice becomes part of the democratic process. However, if engineers at private companies decide on the code, then there is a democratic deficit: private actors determine whose life takes priority in a crisis. This also affects social justice, as companies have significant economic and political power and do not make decisions under the same conditions of “mutual vulnerability” as individual drivers would. A multinational company can afford to lobby for its own interests or shift responsibility, while an ordinary driver cannot [42]. It is therefore important that representatives of society (legislators) are involved in determining the principles governing algorithms—this will ensure some kind of collective responsibility [43]. Of course, the task is not easy: politicians are reluctant to take responsibility for “deciding who the car should kill,” as this is unpopular. However, they can at least provide a framework at the level of basic principles.
In clarifying the philosophical roots of the thought experiment, we also touched upon the possible discrepancy between what is legally right and what is morally right [44]. From a legal theory perspective, the problem of “tragic choices” arises: the law generally seeks to avoid condoning the killing of an innocent person, even in an emergency. This is a deeply rooted principle, with the aim of preventing instrumentalization (treating anyone as a means to an end). At the same time, from a moral point of view, we may sometimes consider the lesser evil acceptable (see utilitarianism). In the relationship between law and morality, the question in relation to autonomous cars is: to what extent can the law allow for “moral utilitarianism” [45]? Legal systems generally take a deontological stance: it is forbidden to kill and it is also forbidden to differentiate between lives. The task of legal theory here is to clarify the extent to which the public good (saving more lives) can override individual rights (the prohibition of—intentionally—taking a life). Although there is no simple answer to this question, modern legal systems are unanimously based on the principle of equal dignity, so it is likely that the protection of individual rights will continue to be of paramount importance in most legal cultures [46]. Therefore, the law will tend to restrict what the software can do (negative prescriptions: what is not allowed) rather than positively prescribe a “kill the lesser” strategy.
The above line of reasoning supports the view that the legal and moral justification of actions, i.e., the relationship between law and morality, takes on a new dimension in the case of autonomous vehicles. The law is forced to take a stand on moral dilemmas and strike a balance between protecting individuals from machine decisions and enabling the life-saving potential of technology to be exploited. From a legal theory perspective, this issue also involves the law giving legitimacy to algorithmic decisions—after all, if citizens know that the machine operates on the basis of principles laid down in law (and thus agreed upon by the public), they are more likely to accept the result, even if it is tragic. If, on the other hand, the decision-making mechanism is unclear or driven by private interests, it undermines the rule of law and public trust [47].
The question is: where do regulations concerning the decision-making algorithms of self-driving vehicles stand today, and is there a starting point that responds in some way to the dilemmas discussed above? One of the most significant current developments in the regulatory framework for the moral decisions of autonomous vehicles is the European Union’s regulation on artificial intelligence, the AI Act. The aim of the regulation is to provide legally binding, horizontally applicable regulation for the development, marketing, and use of artificial intelligence systems. The regulation is based on a four-tier risk classification system that distinguishes between prohibited, high-risk, restricted, and low-risk AI systems. Autonomous vehicles fall into the high-risk category, which entails strict compliance, traceability, and documentation requirements. This means that manufacturers must meet strict requirements: they must carry out extensive risk management procedures, ensure human oversight, transparency, safe design, and accountability after a failure. The AI Act also prohibits certain specific practices, such as the use of AI that seriously violates fundamental rights or subliminally manipulates people [48].
A central procedural safeguard introduced by the AI Act for high-risk systems is the requirement of human oversight [48]. This requirement is not only technical but also has legal and ethical significance. The requirement for human control recognizes that algorithmic decision-making is not free from moral and social value judgments, and therefore it is necessary to preserve human responsibility. The regulation is not neutral on this point: transparency, accountability, and respect for fundamental rights as principles stem from the European human rights tradition and represent a distinctly normative orientation. The regulation not only serves security and reliability but also seeks to integrate the principles of the rule of law and human rights into the operation of AI systems [49].
While these obligations are often discussed as legal or institutional requirements, in practice they also demand concrete technical integration into the software and system architecture of autonomous vehicles. Traceability, for instance, must be ensured through multi-layered decision-logging systems that go beyond traditional event data recorders. AI Act–compliant traceability requires documenting the inputs and intermediate states of perception and planning modules, retaining the parameters that influence decision trees or neural network outputs, and maintaining an audit trail that enables independent post-incident reconstruction [50]. Likewise, human oversight must be operationalized through supervisory control layers: at lower levels of automation via driver monitoring systems that assess attention and readiness to intervene, and at higher automation levels through “human-on-the-loop” mechanisms allowing remote operators to enforce safe fallback modes or override certain system behaviors [51]. Transparency requires modular, well-documented system design, including clear specification of the operational design domain (ODD), fallback strategies, and the normative assumptions embedded in the software’s decision-making constraints [52]. Together, these mechanisms illustrate how the AI Act’s procedural safeguards translate directly into technical design choices.
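The idea of an audit trail that enables independent post-incident reconstruction can be sketched concretely. The following is an illustrative hash-chained decision log, assuming a design in which each record commits to the previous one so that tampering with any earlier entry invalidates the chain; the class and field names are ours and are not taken from the AI Act or any standard.

```python
import hashlib
import json
import time

class DecisionLog:
    """Illustrative append-only decision log for post-incident audit:
    each entry stores the hash of the previous entry, forming a chain."""

    def __init__(self):
        self.entries = []

    def record(self, module: str, inputs: dict, output: str) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = {"ts": time.time(), "module": module,
                   "inputs": inputs, "output": output,
                   "prev_hash": prev_hash}
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        self.entries.append({**payload, "hash": digest})

    def verify(self) -> bool:
        """Recompute every hash; any modified or reordered entry fails."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            if hashlib.sha256(json.dumps(body, sort_keys=True)
                              .encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

A real system would additionally need to log at the perception and planning layers the article describes (sensor inputs, intermediate states, planner parameters), but the sketch shows how tamper-evidence, a precondition of independent reconstruction, can be a property of the data structure itself.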
However, legal regulation does not directly prescribe the content of the moral decisions that autonomous vehicles may make. The AI Act does not define moral algorithms, does not encode the correct answer to dilemmas, and does not seek to create moral artificial intelligence. Instead, it prescribes procedural and institutional safeguards and requires that social values, fundamental rights, and the minimization of harm be taken into account in AI development. The emphasis is therefore not on the moral basis of individual decisions, but on the framework that legitimizes decisions and ensures that they comply with the norms accepted by European society [53].

3.3. Technological Development Trends: Avoiding Dilemmas vs. Ethical Programming

Designers of autonomous vehicles generally follow two principal strategies when confronting situations that might involve unavoidable harm: they either seek to avoid the necessity of making morally charged decisions, or they attempt to embed explicit ethical decision rules in the vehicle’s decision architecture [54]. These approaches (often labelled, respectively, as dilemma avoidance and ethical programming) appear conceptually distinct, yet in practice they form part of a continuum of technological, legal and moral considerations.
Dilemma avoidance rests on a cautious, minimalist philosophy that aims to prevent the autonomous vehicle from entering morally problematic scenarios in the first place. This strategy involves designing the vehicle to adopt defensive driving principles: maintaining safe following distances, reducing speed in uncertain settings, and behaving conservatively in complex environments [55]. In critical moments, braking and attempting to stop is treated as the primary response. From a technical perspective, this corresponds to the encoding of hierarchical priorities in the behavior-planning module, where emergency braking is a high-priority or hard constraint, and steering is permitted only when the system can predict with sufficient confidence that the maneuver will not introduce additional, unpredictable risks. This logic makes the vehicle’s actions more predictable and aligns with deontological legal reasoning, which places emphasis on avoiding intentional harm. It also resonates with legal caution and public acceptance: if the system does not actively choose between potential victims, the question of explicit moral responsibility appears less acute. The well-known debate around the so-called “Mercedes rule”—according to which passenger protection was initially suggested as an overriding priority, before being rejected in Germany—shows how sensitive such issues are and how strongly legal norms constrain technical design [56,57].
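The hierarchical priority just described—emergency braking as the default, high-priority response, with steering permitted only above a confidence threshold—can be sketched as follows. The function name and the threshold value are illustrative assumptions, not an actual manufacturer policy.

```python
def select_emergency_response(brake_clears_hazard: bool,
                              steer_confidence: float,
                              steer_confidence_threshold: float = 0.95) -> str:
    """Minimal sketch of a dilemma-avoidance hierarchy: braking is treated
    as a hard, high-priority constraint; an evasive steering maneuver is
    allowed only when the system predicts with sufficient confidence that
    it introduces no additional, unpredictable risk."""
    if brake_clears_hazard:
        return "brake"
    if steer_confidence >= steer_confidence_threshold:
        return "brake_and_steer"
    # Conservative fallback: even if braking alone may not fully avoid
    # the collision, the vehicle does not gamble on an uncertain swerve.
    return "brake"
```

The conservatism is visible in the final branch: when the steering prediction is uncertain, the system never actively chooses between potential victims, which is precisely what makes this strategy legally and publicly more palatable.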
The minimalist strategy also has technological advantages, because the relevant systems (automatic emergency braking, pedestrian detection, and collision avoidance) are already widely used. Ethical decision-making modules, by contrast, are far more complex and less mature. However, dilemma avoidance has inherent limitations. Not all dangerous situations can be resolved through braking alone. In many real-world cases (such as the “tunnel problem”) the direction in which the vehicle steers, even minimally, influences who is injured. A strictly passive strategy may also create secondary harms: a vehicle that stops in the middle of a roadway might cause subsequent collisions, leading to multiple casualties. Critical scenarios are too varied to be reliably managed by a purely conservative algorithmic response [58]. These limitations highlight why dilemma avoidance, although technologically appealing and normatively cautious, cannot fully eliminate the moral burden of autonomous decision-making.
The opposing school of thought contends that the autonomous vehicle will inevitably face cases in which it must evaluate competing harms, and therefore it is better to prepare for such situations consciously. Ethical programming attempts to encode moral principles directly into the vehicle’s decision-making processes [59]. Proponents argue that autonomous vehicles have advantages in crisis situations—stable processing, no emotional interference, and precise sensing—which make their decisions more predictable than those of human drivers. Since the absence of explicit rules also amounts to a kind of decision, these scholars argue that an “ethics switch” or “ethics setting” is needed. Some proposals adapt Rawlsian ideas to algorithmic form, emphasizing the protection of the most vulnerable; others rely on principles derived from legal doctrines, such as the requirement that the vehicle may deviate from its normal trajectory only if the deviation does not disproportionately infringe anyone’s fundamental rights [60]. Further contributions emphasize responsibility allocation. If, for example, a motorcyclist breaks the law by running a red light and collides with an autonomous vehicle, it seems counterintuitive to require the system to hit an innocent cyclist in order to save the motorcyclist and passenger. Responsibility-sensitive approaches therefore attempt to integrate legal norms and fairness into ethical decision-making, arguing that the vehicle should not override the logic of traffic regulations for the sake of utilitarian optimization [61,62].
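The responsibility-sensitive idea in the motorcyclist example—that the vehicle may not shift risk onto road users who bear no responsibility for the situation—can be sketched as a simple rule. The data structure and rule are illustrative assumptions, loosely inspired by the responsibility-sensitive approaches cited above, not an implementation of any published model.

```python
from dataclasses import dataclass


@dataclass
class Trajectory:
    endangered_parties: frozenset  # road users put at risk by this trajectory
    expected_harm: float           # abstract harm estimate in [0, 1]


def responsibility_sensitive_choice(default: Trajectory,
                                    alternative: Trajectory,
                                    at_fault: frozenset) -> Trajectory:
    """Illustrative rule: deviate from the normal trajectory only if the
    alternative does not newly endanger anyone who is not responsible
    for creating the situation."""
    newly_endangered = alternative.endangered_parties - default.endangered_parties
    if newly_endangered - at_fault:
        # The deviation would endanger an innocent party: keep the default,
        # even if the alternative would minimize total harm.
        return default
    if alternative.expected_harm < default.expected_harm:
        return alternative
    return default


# The motorcyclist example: swerving toward an innocent cyclist is blocked.
default = Trajectory(frozenset({"motorcyclist"}), expected_harm=0.8)
swerve = Trajectory(frozenset({"cyclist"}), expected_harm=0.3)
choice = responsibility_sensitive_choice(default, swerve, frozenset({"motorcyclist"}))
```

Note how the rule deliberately overrides utilitarian optimization: the swerve has lower expected harm, yet the fairness constraint rejects it.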
Yet ethical programming quickly encounters significant technical and conceptual challenges. Implementing complex moral doctrines, such as the doctrine of double effect, requires the system to distinguish between intended and merely foreseen harm, to assess proportionality, and to determine whether a given harm is a means or a side effect. These distinctions may be represented in symbolic rule-based or hybrid architectures, but are beyond the capabilities of end-to-end neural networks, which lack any representation of intention. Even in hybrid systems, real-time decision-making faces severe constraints: the vehicle has only milliseconds to act in emergency conditions, during which it cannot infer moral salience, vulnerability or responsibility. Moreover, tools designed to support ethical alignment in system development—notably Value Sensitive Design (VSD) and Explainable AI (XAI)—face substantial limitations. VSD requires designers to identify relevant values in advance; however, autonomous vehicles operate in highly dynamic environments where such values cannot reliably be interpreted by the system [63]. XAI, meanwhile, cannot generate meaningful real-time explanations under the temporal constraints of driving, and post hoc explanations often fail to capture the true causal pathways in high-dimensional neural models. As a result, ethical programming often collapses into simplified harm-minimization models, which conflict with legal norms grounded in human dignity and the prohibition of instrumentalization [64].
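To make concrete why the doctrine of double effect presupposes representations that end-to-end networks lack, the classic conditions can be written as explicit predicates, as a symbolic rule-based architecture might evaluate them. This is a didactic sketch under strong assumptions: the fields (`harm_intended`, `harm_is_means`, the numeric proportionality estimate) are exactly the quantities the article argues a real system cannot reliably obtain.

```python
from dataclasses import dataclass


@dataclass
class ActionEffect:
    harm: float            # magnitude of the harmful effect
    benefit: float         # magnitude of the good effect
    harm_is_means: bool    # is the harm the means to the good effect?
    harm_intended: bool    # is the harm intended rather than merely foreseen?


def permissible_under_double_effect(e: ActionEffect,
                                    proportionality_ratio: float = 1.0) -> bool:
    """Sketch of the doctrine's conditions as symbolic predicates.
    A rule-based system could test these flags; an end-to-end neural
    network has no representation of 'intention' to test in the first place."""
    if e.harm_intended:
        return False   # the harm must not be intended
    if e.harm_is_means:
        return False   # the harm must be a side effect, not the means
    # Proportionality: the good effect must outweigh the foreseen harm.
    return e.benefit >= proportionality_ratio * e.harm
```

The sketch is trivial to evaluate once the flags exist; the entire difficulty, as the paragraph above notes, lies in populating them in milliseconds from sensor data.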
A further challenge arises from the physical and computational limitations of autonomous vehicle sensor systems. Even if an AV were equipped with a sophisticated moral framework, its ability to apply moral principles would remain limited by what it can perceive. LiDAR point clouds deteriorate in rain, fog or snow; camera systems struggle with glare, shadows, low light, and occlusion; and radar, while robust, lacks the granularity required to distinguish between vulnerable road users and inanimate objects [65]. Sensor fusion pipelines introduce temporal delays, meaning that the system processes the world with an inherent latency that can reach dozens of milliseconds—an interval that is highly significant in fast-moving traffic scenarios. These constraints make it exceedingly difficult for the system to reconstruct the morally relevant features of a situation: it cannot consistently determine how many individuals are present, which of them are especially vulnerable, who bears responsibility for creating the situation, or whether a given obstacle is a person, an animal or an object [66]. Moral theories assume access to such distinctions, but AV systems cannot reliably obtain them. As a consequence, many hypothetical ethical choices are not technically implementable, because the system lacks the perceptual grounding necessary to execute them. This discrepancy between moral abstraction and sensor-based reality underscores the fundamental challenge of expecting machine morality in environments where the underlying information is uncertain, incomplete, or ambiguous.
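The perceptual-grounding problem can be stated as a precondition check: before any ethical branch of a planner can run, the system would have to verify that the morally relevant features are available at all. The function and its thresholds are illustrative assumptions meant to show the gating logic, not empirically validated values.

```python
def morally_relevant_features_available(class_confidence: float,
                                        count_confidence: float,
                                        pipeline_latency_ms: float,
                                        max_latency_ms: float = 50.0,
                                        min_confidence: float = 0.9) -> bool:
    """Checks whether the perceptual preconditions for applying a moral
    rule are met: reliable classification (person vs. object), a reliable
    count of individuals, and a world model that is not too stale.
    If any check fails, an 'ethical' decision rule has nothing reliable
    to operate on, and a conservative fallback is the only honest option."""
    return (class_confidence >= min_confidence
            and count_confidence >= min_confidence
            and pipeline_latency_ms <= max_latency_ms)
```

In adverse weather, where classification confidence drops and fusion latency grows, such a gate would fail routinely—which is the technical form of the article's claim that many hypothetical ethical choices are simply not implementable.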
These technological strategies both reflect and reinforce underlying legal-philosophical commitments. Dilemma avoidance corresponds to a deontological orientation, which prioritizes negative duties and rejects decisions that instrumentalize individuals. Ethical programming, meanwhile, presupposes a form of utilitarian reasoning, insofar as it evaluates competing harms and seeks to reduce total negative outcomes [67]. However, neither strategy is fully compatible with the regulatory approach of the EU AI Act. The Act does not take a stance on substantive moral doctrines; instead, it imposes procedural safeguards such as traceability, human oversight, transparency and risk management. These requirements demand technical operationalization—decision logging, supervisory control layers and modular explainability—yet avoid endorsing any particular moral theory. Consequently, the Act simultaneously constrains and shapes the development of both dilemma avoidance and ethical programming, without providing substantive guidance on moral prioritization.
Technological development therefore proceeds between two imperfect extremes: one seeks to remove moral decision-making from the equation, while the other attempts to articulate and encode moral rules [68]. In practice, a combination of the two may prove most realistic. Advanced autonomous systems might be designed to avoid ethically fraught situations as far as possible, while simultaneously incorporating a limited, socially legitimized and legally compliant framework to guide decisions when such situations cannot be prevented. Ultimately, this challenge leads back to the broader question of how law, technology and morality intersect—and to the need for a normative model that is both technically feasible and democratically legitimate.

4. Discussion

The preceding analysis has shown that the moral and legal dilemmas associated with autonomous vehicles cannot be resolved by isolating technological, normative, or regulatory components. Rather, these dimensions mutually shape and constrain one another. The classical philosophical tension between deontological and utilitarian models appears in technical form through the contrast between dilemma avoidance and ethical programming, while the limits of perception systems and the procedural orientation of the EU AI Act further restrict what can be expected from machine decision-making in morally charged situations. These observations indicate that neither purely moral nor purely technical perspectives offer an adequate account of how autonomous vehicles should be designed and governed. A more comprehensive framework is required—one that can integrate normative principles, engineering constraints, and institutional safeguards (this integrated framework is illustrated in Figure 1).
A key insight emerging from the analysis is that meaningful moral decision-making presupposes perceptual accuracy. Ethical algorithms can operate only on the basis of information that is actually available to the system; they cannot take vulnerability, responsibility, or proportionality into account if these features are not reliably detectable by sensors. This gap between moral theory and perceptual reality suggests that the traditional framing of autonomous vehicle ethics (centered on hypothetical trolley-like dilemmas) must be broadened to incorporate the epistemic limits of machine perception. In this sense, the constraints identified in sensor fusion, latency, environmental sensitivity and classification accuracy are not merely engineering challenges but foundational limits on the feasibility of implementing substantive moral doctrines in real time.
At the same time, legal regulation does not resolve these challenges. The EU AI Act introduces risk-based obligations, emphasizing traceability, transparency, human oversight and accountability, yet it refrains from endorsing substantive moral principles for decision-making in unavoidable harm scenarios. This procedural orientation reflects a broader normative hesitation: democratic legal systems are reluctant to encode value-laden decisions about whose life should be protected in which circumstances. Regulators therefore tend to impose structural safeguards (documentation, monitoring, certification) rather than mandating particular ethical outcomes. The result is a framework that constrains system development, but leaves unresolved the deeper normative questions concerning how ethical priorities should be represented or adjudicated within an autonomous vehicle’s architecture.
Against this background, it is helpful to conceptualize the emerging field as an integrated normative–technical ecosystem rather than a collection of isolated components. Such an ecosystem involves several interdependent layers. At its foundation lies a normative system, which includes societal values, human rights standards and legal prohibitions that set boundaries for permissible algorithmic behavior. These values do not dictate specific decisions but provide the constraints within which technological design must operate. Building on this, ethics-by-design methodologies, such as Value Sensitive Design, offer a structured way to incorporate these constraints into system development from the earliest design stages. They encourage engineers to identify relevant values, articulate trade-offs, and build systems that reflect these commitments, even though their effectiveness is limited by the uncertainties and contextual variations inherent in driving environments.
A further component is explainability and transparency, which serve as bridges between normative commitments and operational systems. XAI techniques can support post hoc auditability and verification even if they cannot provide fully faithful real-time explanations. These methods are essential for establishing public trust, enabling regulators to assess compliance, and allowing independent experts to reconstruct decision pathways in the aftermath of incidents. Yet XAI remains only one part of a broader ecosystem: genuine accountability also requires institutional mechanisms for review, verification, and certification.
For this reason, the final layer of the integrated model consists of ethics education, auditing, and certification systems, which translate normative and technical considerations into enforceable practices. Engineers, developers and testing personnel must receive training that familiarizes them with the ethical and legal implications of system design decisions. Auditing practices (both internal and external) ensure that the system meets required safety and transparency standards, while certification bodies evaluate the alignment between technical performance and normative frameworks, including compliance with the obligations imposed by the EU AI Act. Together, these mechanisms create a feedback loop: audits identify shortcomings, which inform future design revisions; certification requirements encourage manufacturers to internalize normative constraints; and education fosters professional responsibility and anticipatory ethical thinking.
Understanding these elements as a unified ecosystem makes it possible to see how the normative, technical and institutional components of autonomous vehicle development interact. The normative system provides the values and prohibitions that define acceptable design choices; ethics-by-design approaches operationalize these values during development; XAI contributes to monitoring, verification and public transparency; and education, auditing and certification embed these practices in stable institutional structures. This integrated model does not eliminate the underlying tensions between deontological and utilitarian reasoning, nor does it resolve the perceptual limits that constrain real-time moral decision-making. However, it offers a more realistic and comprehensive framework for governing autonomous systems—one that aligns legal safeguards, engineering practices and social expectations without demanding that autonomous vehicles perform ethically perfect decision-making in situations where such decisions may not be technically possible.

5. Conclusions

The development of autonomous vehicles is not only a technological milestone but also an unprecedented normative challenge, raising questions about the moral and legal legitimacy of algorithmic decision-making. This study set out to examine these tensions within the conceptual framework of classic philosophical dilemmas (most prominently the trolley problem and the principle of double effect) and to explore how legal systems can contribute to the normative structuring of machine behavior. Positioned at the intersection of moral philosophy, legal theory and technological ethics, the analysis adopted a normative-analytical approach to illuminate the complex interplay between ethical principles and the technical realities of autonomous systems.
A central finding of the study is that algorithmic decisions cannot be reduced to purely technical operations. Whether developers aim to avoid dilemmas through conservative behavioral strategies or attempt to encode moral priorities explicitly, every autonomous system embodies normative assumptions that influence real-world outcomes. Although classical thought experiments do not translate directly into programming rules, they remain valuable tools for revealing the moral intuitions, legal expectations and social concerns that inform public debates on autonomous mobility.
At the same time, the research shows that current autonomous systems do not—and, given technological constraints, cannot—operate on the basis of fully articulated moral algorithms. In practice they rely on optimization strategies rooted in physics, sensor fusion and probabilistic modeling, which often fail to capture morally relevant distinctions such as vulnerability, responsibility or proportionality. The limitations of LiDAR, cameras, radar and perception pipelines mean that the machine’s representation of its environment is necessarily incomplete, making real-time moral evaluation technically infeasible in many scenarios.
Legal regulation, particularly the European Union’s AI Act, acknowledges this normative dimension but responds primarily through procedural safeguards. Rather than instructing systems on how to resolve unavoidable harm scenarios, the Act emphasizes traceability, transparency, human oversight and accountability. This procedural orientation reflects the broader reluctance of democratic legal systems to encode substantive moral doctrines into technical artifacts, while nevertheless establishing the institutional structures within which system development must occur.
These observations suggest that meaningful governance of autonomous vehicles requires a broader, integrative framework. The study argued that neither dilemma avoidance nor ethical programming alone can provide an adequate response. Instead, a socially legitimized normative structure is needed—one that aligns legal prohibitions, human rights values, engineering constraints and institutional mechanisms. Such a framework can be conceptualized as an ecosystem in which normative principles guide system design through ethics-by-design methodologies; explainability tools support oversight and auditability; and education, auditing and certification bind these elements into stable professional and regulatory practices.
Ultimately, autonomous vehicles will not merely be technological actors moving through physical space but also normative actors embedded within legal and moral orders. One of the defining challenges of the coming decades will be to integrate their decisions into a framework that preserves human dignity, ensures public trust and fosters responsible technological innovation. This requires not only advances in engineering and regulation but also a collective social effort to articulate and maintain the values that should guide autonomous systems in their interactions with the world.

Author Contributions

Conceptualization, L.P.; methodology, L.P.; validation, I.L.; formal analysis, L.P.; resources, L.P. and I.L.; writing—original draft preparation, L.P. and I.L.; writing—review and editing, L.P. and I.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. SAE International. SAE Standard J3016_201806; Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. SAE International: Warrendale, PA, USA, 2018.
  2. Hubbard, W. Drivers of effective laws for automated vehicles. Villanova Law Rev. 2025, 70, 115. [Google Scholar] [CrossRef]
  3. Cascetta, E.; Cartenì, A.; Di Francesco, L. Do autonomous vehicles drive like humans? A Turing approach and an application to SAE automation Level 2 cars. Transp. Res. Part C 2022, 134, 103499. [Google Scholar] [CrossRef]
  4. Aristotle. Nikomakhoszi Etika; Európa: Budapest, Hungary, 1987. [Google Scholar]
  5. Aquinas, T. Summa Theologiae I–II. 2017. Available online: https://www.newadvent.org/summa/ (accessed on 1 June 2025).
  6. Cavanaugh, T.A. Aquinas’s account of double effect. Philosophy 1997, 33, 107–121. [Google Scholar] [CrossRef][Green Version]
  7. Mangan, J.T. An historical analysis of the principle of double effect. Theol. Stud. 1949, 10, 41–61. [Google Scholar] [CrossRef]
  8. Di Nucci, E. Aristotle and double effect. J. Anc. Philos. 2014, 8, 20–48. [Google Scholar] [CrossRef]
  9. Di Nucci, E. Eight Arguments Against Double Effect. 2014. Available online: https://philpapers.org/archive/DINEAA.pdf (accessed on 1 June 2025).
  10. Černý, D. The Principle of Double Effect: A History and Philosophical Defense; Routledge: London, UK, 2020. [Google Scholar]
  11. Foot, P. The problem of abortion and the doctrine of double effect. In Ethical Theory: An Anthology; Shafer-Landau, R., Ed.; Wiley: Hoboken, NJ, USA, 2013; pp. 536–542. [Google Scholar]
  12. Thomson, J.J. The trolley problem. Yale Law J. 1985, 94, 1395–1415. Available online: https://openyls.law.yale.edu/server/api/core/bitstreams/2549174b-4728-45bf-bf64-01384116e48c/content (accessed on 18 December 2025). [CrossRef]
  13. Pődör, L. “Trolleyology” and autonomous vehicles—Moral and legal questions on the application of the doctrine of double effect. Cent. Eur. Pap. 2021, 9, 69–82. [Google Scholar] [CrossRef]
  14. Edmonds, D. Would You Kill the Fat Man? Princeton University Press: Princeton, NJ, USA, 2014. [Google Scholar]
  15. Lin, P. Why ethics matters for autonomous cars. In Autonomous Driving: Technical, Legal and Social Aspects; Maurer, M., Gerdes, C.J., Lenz, B., Winner, H., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; pp. 69–85. [Google Scholar]
  16. Nyholm, S. The ethics of crashes with self-driving cars: A roadmap, I. Philos. Compass 2018, 13, e12507. [Google Scholar] [CrossRef]
  17. Nyholm, S.; Smids, J. The ethics of accident-algorithms for self-driving cars: An applied trolley problem? Ethical Theory Moral Pract. 2016, 19, 1275–1289. [Google Scholar] [CrossRef]
  18. Himmelreich, J. Never mind the trolley: The ethics of autonomous vehicles in mundane situations. Ethical Theory Moral Pract. 2018, 21, 669–684. [Google Scholar] [CrossRef]
  19. Geisslinger, M.; Poszler, F.; Betz, J.; Lütge, C.; Lienkamp, M. Autonomous driving ethics: From trolley problem to ethics of risk. Philos. Technol. 2021, 34, 1033–1055. [Google Scholar] [CrossRef]
  20. Awad, E.; Dsouza, S.; Kim, R.; Schulz, J.; Henrich, J.; Shariff, A.; Bonnefon, J.; Rahwan, I. The Moral Machine experiment. Nature 2018, 563, 59–64. [Google Scholar] [CrossRef]
  21. Kirchmair, L. How to regulate moral dilemmas involving self-driving cars: The 2021 German Act on autonomous driving, the trolley problem, and the search for a role model. Ger. Law J. 2023, 24, 1184–1208. [Google Scholar] [CrossRef]
  22. Wu, S.S. Autonomous vehicles, trolley problems, and the law. Ethics Inf. Technol. 2020, 22, 1–13. [Google Scholar] [CrossRef]
  23. Crain, G. Three shortcomings of the trolley method of moral philosophy. J. Ethics Soc. Philos. 2023, 26, 420–443. [Google Scholar] [CrossRef]
  24. Paulo, N. The trolley problem in the ethics of autonomous vehicles. Philos. Q. 2023, 73, 1046–1066. [Google Scholar] [CrossRef]
  25. Stoner, I.; Swartwood, J. In defense of the trolley method. J. Ethics Soc. Philos. 2025, 30, 509–518. [Google Scholar] [CrossRef]
  26. Dhaif, Z.S.; El Abbadi, N.K. A review of machine learning techniques utilised in self-driving cars. Iraqi J. Comput. Sci. Math. 2024, 5, 205–219. [Google Scholar] [CrossRef]
  27. Bachute, M.R.; Subhedar, J.M. Autonomous driving architectures: Insights of machine learning and deep learning algorithms. Mach. Learn. Appl. 2021, 6, 100164. [Google Scholar] [CrossRef]
  28. Taherdoost, H. Beyond supervised: The rise of self-supervised learning in autonomous systems. Information 2024, 15, 491. [Google Scholar] [CrossRef]
  29. Dinneweth, J.; Boubezoul, A.; Mandiau, R.; Espié, S. Multi-agent reinforcement learning for autonomous vehicles: A survey. Auton. Intell. Syst. 2022, 2, 27. [Google Scholar] [CrossRef]
  30. Yeong, D.J.; Velasco-Hernandez, G.; Barry, J.; Walsh, J. Sensor and sensor fusion technology in autonomous vehicles: A review. Sensors 2021, 21, 2140. [Google Scholar] [CrossRef] [PubMed]
  31. Conejo, C.; Puig, V.; Morcego, B.; Navas, F.; Milanés, V. Behavior trees in functional safety supervisors for autonomous vehicles. IEEE Trans. Intell. Transp. Syst. 2024, in press. [Google Scholar] [CrossRef]
  32. Hu, J.; Wang, Y.; Cheng, S.; Xu, J.; Wang, N.; Fu, B.; Ning, Z.; Li, J.; Chen, H.; Feng, C.; et al. A survey of decision-making and planning methods for self-driving vehicles. Front. Neurorobot. 2025, 19, 1451923. [Google Scholar] [CrossRef]
  33. Bago, B.; Kovacs, M.; Protzko, J.; Nagy, T.; Kekecs, Z.; Palfi, B.; Adamkovic, M.; Adamus, S.; Albalooshi, S.; Albayrak-Aydemir, N.; et al. Situational factors shape moral judgements in the trolley dilemma in Eastern, Southern and Western countries in a culturally diverse sample. Nat. Hum. Behav. 2022, 6, 880–895. [Google Scholar] [CrossRef]
  34. Héder, M. A contractarian ethical framework for developing autonomous vehicles. ERCIM News 2019, 118, 46–47. [Google Scholar]
  35. MacCormick, N.; Summers, R.S. (Eds.) Interpreting Statutes: A Comparative Study; Routledge: London, UK, 1991. [Google Scholar]
  36. Sadegh, M.; Klös, V.; Vogelsang, A. Cases for explainable software systems: Characteristics and examples. In Proceedings of the 2021 IEEE 29th International Requirements Engineering Conference Workshops (REW), Notre Dame, IN, USA, 20–24 September 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 181–187. [Google Scholar] [CrossRef]
  37. Mankodiya, H.; Jadav, D.; Gupta, R.; Tanwar, S.; Hong, W.C.; Sharma, R. OD-XAI: Explainable AI-based semantic object detection for autonomous vehicles. Appl. Sci. 2022, 12, 5310. [Google Scholar] [CrossRef]
  38. Gabriel, I.; Ghazavi, V. The challenge of value alignment: From fairer algorithms to AI safety. In Oxford Handbook of Digital Ethics; Véliz, C., Ed.; Oxford University Press: Oxford, UK, 2023. [Google Scholar] [CrossRef]
  39. Hart, H.L.A. Positivism and the separation of law and morals. Harv. Law Rev. 1958, 71, 593–629. [Google Scholar] [CrossRef]
  40. Cecchini, D.; Brantley, S.; Dubljevic, V. Moral judgment in realistic traffic scenarios: Moving beyond the trolley paradigm for ethics of autonomous vehicles. AI Soc. 2023, 40, 1037–1048. [Google Scholar] [CrossRef]
  41. Li, L.; Zhang, J.; Wang, S.; Zhou, Q. A study of common principles for decision-making in moral dilemmas for autonomous vehicles. Behav. Sci. 2022, 12, 344. [Google Scholar] [CrossRef]
  42. Smith, G.J.D. The politics of algorithmic governance in the black box city. Big Data Soc. 2020, 7, 2053951720935983. [Google Scholar] [CrossRef]
  43. Zhan, H.; Wan, D. Ethical considerations of the trolley problem in autonomous driving: A philosophical and technological analysis. World Electr. Veh. J. 2024, 15, 404. [Google Scholar] [CrossRef]
  44. Sandel, M.J. Justice: What’s the Right Thing to Do? Farrar, Straus and Giroux: New York, NY, USA, 2009. [Google Scholar]
  45. Schurr, A.; Moran, S. The presence of automation enhances deontological considerations in moral judgments. Comput. Hum. Behav. 2023, 140, 107590. [Google Scholar] [CrossRef]
  46. Liu, P.; Liu, J. Selfish or utilitarian automated vehicles? Deontological evaluation and public acceptance. Int. J. Hum.–Comput. Interact. 2021, 37, 1231–1242. [Google Scholar] [CrossRef]
  47. Feess, E.; Muehlheusser, G. Autonomous vehicles: Moral dilemmas and adoption incentives. Transp. Res. Part B 2024, 181, 102894. [Google Scholar] [CrossRef]
  48. Regulation (EU) 2024/1689 of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act). Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689 (accessed on 1 December 2025).
Figure 1. An integrated normative governance model for autonomous vehicle decision-making.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
