Article

A Comprehensive Analysis of Safety Failures in Autonomous Driving Using Hybrid Swiss Cheese and SHELL Approach

by Benedictus Rahardjo 1,*, Samuel Trinata Winnyarto 2, Firda Nur Rizkiani 2 and Taufiq Maulana Firdaus 3

1 Business Engineering Program, Industrial Engineering Department, BINUS ASO School of Engineering, Bina Nusantara University, Jakarta 11480, Indonesia
2 Department of Industrial Management, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
3 Department of Information System, Telkom University, Bandung 40257, Indonesia
* Author to whom correspondence should be addressed.
Future Transp. 2026, 6(1), 21; https://doi.org/10.3390/futuretransp6010021
Submission received: 2 December 2025 / Revised: 13 January 2026 / Accepted: 13 January 2026 / Published: 15 January 2026

Abstract

The advancement of automated driving technologies offers potential safety and efficiency gains, yet safety remains the primary barrier to higher-level deployment. Failures in automated driving systems rarely result from a single technical malfunction. Instead, they emerge from coupled organizational, technical, human, and environmental factors, particularly in partial and conditional automation where human supervision and intervention remain critical. This study systematically identifies safety failures in automated driving systems and analyzes how they propagate across system layers and human–machine interactions. A qualitative case-based analytical approach is adopted by integrating the Swiss Cheese model and the SHELL model. The Swiss Cheese model is used to represent multilayer defensive structures, including governance and policy, perception, planning and decision-making, control and actuation, and human–machine interfaces. The SHELL model structures interaction failures between liveware and software, hardware, environment, and other liveware. The results reveal recurrent cross-layer failure pathways in which interface-level mismatches, such as low-salience alerts, sensor miscalibration, adverse environmental conditions, and inadequate handover communication, align with latent system weaknesses to produce unsafe outcomes. These findings demonstrate that autonomous driving safety failures are predominantly socio-technical in nature rather than purely technological. The proposed hybrid framework provides actionable insights for system designers, operators, and regulators by identifying critical intervention points for improving interface design, operational procedures, and policy-level safeguards in autonomous driving systems.

1. Introduction

The development of automated driving technology represents a significant milestone in the evolution of intelligent transportation systems. Several pressing societal and environmental challenges motivate this development, including traffic accidents, congestion, energy consumption, and emissions. Automated driving technologies have therefore been widely recognized as a promising approach to improving transportation safety and efficiency [1]. Vehicles equipped with such technologies, commonly referred to as autonomous or automated vehicles (AVs), can assist human drivers or perform driving tasks independently depending on the level of automation [2]. Control decisions, such as acceleration, deceleration, lane changes, and parking, may be executed either by the human driver or by the automated system. The effectiveness of these decisions depends fundamentally on the vehicle’s ability to perceive its surrounding environment, including pedestrians, cyclists, other vehicles [3,4], traffic signals, and context-specific zones such as school areas [5]. Based on the distribution of monitoring and control responsibilities between human drivers and automated systems, vehicles are commonly classified as non-automated, partially automated, highly automated, or fully automated.
A substantial body of research has examined AVs as a means of improving economic and environmental outcomes in transportation systems [6]. Anticipated benefits include enhanced road safety, given that a large proportion of traffic fatalities are attributed to human error [7], improved travel experience through reduced driving workload and time savings [8], autonomous parking capabilities [9], and increased accessibility for populations such as people with disabilities and young users [10]. Empirical studies have reported that human factors including distraction, speeding, and impaired driving contribute to most traffic crashes [11], leading to expectations that automation could significantly reduce accident rates. Indeed, partially automated technologies such as forward collision warnings, lane departure alerts, and adaptive lighting systems have been estimated to reduce crash-related injuries and fatalities by up to 33% [12]. AV technologies are also expected to enhance social inclusion by enabling independent mobility for individuals who are unable to drive conventional vehicles, thereby improving quality of life and social participation [13].
Beyond safety and accessibility, AVs have been associated with potential reductions in fuel consumption and emissions through improved fleet management [14,15,16], congestion mitigation [17], and more efficient parking arrangements [18]. Additional benefits include stress reduction for commuters [19] and overall traffic flow improvements that may benefit both automated and non-automated vehicles [20,21]. Energy savings can be achieved through smoother acceleration and deceleration profiles, optimized speed management, reduced travel time, and lighter vehicle designs, which are particularly advantageous for electric vehicles enabled by automated driving technologies [22]. Furthermore, advanced lateral control schemes for AVs have been shown to yield improvements in pavement sustainability [23].
Despite these anticipated benefits, fully autonomous vehicles remain far from widespread commercial deployment, with safety concerns consistently identified as the dominant barrier. Public perception studies indicate significant variation in attitudes toward AV safety, influenced by demographic and cultural factors. For example, young, highly educated individuals tend to express greater optimism regarding AV safety [24], while comparative studies have shown more pessimistic perceptions in Western Europe than in several Asian countries [25]. Safety concerns have also been shown to directly affect users’ willingness to adopt automated driving technologies [26]. Importantly, the introduction of automation does not eliminate human involvement but instead transforms the human role into supervision, monitoring, and emergency intervention, particularly in partially and conditionally automated driving contexts [2].
As automation levels increase, regulatory frameworks must redefine the responsibilities of human drivers and system designers, especially in situations involving system malfunctions or disengagements [27]. Questions such as whether a human driver should continuously monitor the environment or immediately regain control following system failure remain unresolved [11]. In practice, AV safety is shaped by a complex combination of technological and social factors, including automation level definitions, regulatory policies, vehicle design, road and traffic conditions, weather, and human–machine interaction quality. Consequently, isolated technical improvements are insufficient to ensure safety; rather, a systematic and comprehensive risk mitigation approach is required.
Given the increasing deployment of automated driving systems and the growing availability of accident and disengagement data, it is urgent to critically examine existing evidence on AV-related safety failures. Although considerable effort has been devoted to advancing automated driving technologies, updated and integrative safety analyses remain limited, particularly those that address how failures propagate across system layers and human–machine interfaces. A thorough understanding of the underlying causes of system failures is essential for engineers, regulators, and other stakeholders involved in AV development and deployment.
Therefore, the objective of this study is to systematically identify potential safety failures in automated driving systems and analyze the interactions among system components. To achieve this goal, this study proposes a comprehensive risk analysis framework that integrates two well-established safety and human factors models: the Swiss Cheese model and the SHELL model. The Swiss Cheese model is employed to represent multilayer system defenses and latent conditions, while the SHELL model is used to structure human–system interaction failures across software, hardware, environment, and liveware dimensions. By combining these perspectives, this study aims to provide a structured understanding of how safety failures emerge in automated driving systems. The main contributions of this study are as follows:
  • A hybrid safety analysis framework is proposed by integrating the Swiss Cheese Model (layered defenses and holes) with the SHELL model (interface-based ergonomics: Software–Hardware–Environment–Liveware–Liveware) to analyze AV failures as socio-technical phenomena rather than isolated malfunctions.
  • A cross-layer interpretation is developed to explain how interface mismatches (e.g., Software–Liveware or Environment–Liveware) can create or exacerbate vulnerabilities within specific layers, thereby increasing the likelihood of multi-layer alignment pathways leading to unsafe outcomes.
  • A structured synthesis of AV safety failure factors is provided to support clearer diagnosis and communication for engineering, operational, and regulatory stakeholders, and to generate targeted recommendations grounded in the hybrid analysis logic.
The literature on AV and risk management is reviewed in Section 2. The integration of the well-known Swiss Cheese and SHELL models related to Situational Awareness (SA) is comprehensively analyzed in Section 3. A case-based analysis is presented in Section 4. An in-depth review and analysis of potential safety failures in AVs are investigated in Section 5. The final section discusses the results of the AV safety study and future research opportunities.

2. Related Works

This section outlines the key concepts and prior works relevant to this study. It begins with an overview of autonomous vehicle technologies and automation levels, followed by risk management approaches, and concludes with the development of a hybrid analytical framework combining the Swiss Cheese and SHELL models. The purpose of this section is to establish the theoretical and methodological foundation used to analyze safety failures in autonomous vehicle systems.

2.1. Autonomous Vehicles (AVs)

Autonomous Vehicles (AVs) have great potential to improve driving efficiency through lighter and safer vehicles and by emphasizing the concepts of shared mobility and on-demand services [28]. Although this usage can bring substantial benefits, it can also produce undesirable side effects, making it important to manage the introduction of AVs to achieve effective results [29]. It is important to define AVs correctly so that regulators can minimize the impact on traditional road users, including pedestrians, bicycle riders, and even construction workers. Automated vehicles have different automation levels, which are closely related to their safety. Automated vehicles vary in degree depending on the complexity of the technology used, the perception range of the surroundings, and whether a human or the vehicle system participates in the driving process. A five-level automation hierarchy was defined by the United States National Highway Traffic Safety Administration (NHTSA) for automotive engineering [30]. The automation levels in this system are numbered 0 to 4. There is no automation at level 0, where drivers are in complete control of the vehicle. At the highest level of automation, level 4, the vehicle is fully autonomous and capable of monitoring external conditions and performing all driving operations on its own. Most AV development activity in this field has fallen under level 3, limited self-driving automation, in which drivers may sometimes need to take over driving. Following early guidance from the Society of Automotive Engineers (SAE), NHTSA has in recent years adopted the more widely accepted SAE definition of AVs, which SAE continuously updates [31]. In SAE’s automotive automation standard, vehicles are categorized into six levels, from 0 (no automation) to 5 (full automation). The levels represent the degree of automation required of a system after the human factor has been considered.
Automotive manufacturers, regulators, and policymakers have widely adopted the six levels of driving automation defined by SAE [32,33]. Based on the interaction between the human driver and the automation system, the following driving tasks are divided across these automation levels: (i) executing steering and throttle control, (ii) monitoring the driving environment, (iii) performing the fallback of the Dynamic Driving Task (DDT), and (iv) assessing the capabilities of different autonomous driving models. In levels 0–2, the human driver performs part or all of the DDT, whereas levels 3–5 represent conditional, high, and full automation, in which the system performs the entire DDT; at level 3, the driver must remain ready to resume driving when requested. Current development of AVs is largely based on detailed descriptions of these levels of vehicle automation. Table 1 shows the six levels of SAE driving automation [34].
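As an illustration of this division of responsibility, the SAE levels can be sketched as a small lookup table. The level names follow SAE J3016, but the "performer"/"fallback" shorthand below is our own simplification for illustration, not SAE terminology:

```python
# Simplified view of SAE J3016 driving-automation levels: who primarily
# performs the dynamic driving task (DDT), and who serves as fallback.
SAE_LEVELS = {
    0: ("No Automation",          "driver", "driver"),
    1: ("Driver Assistance",      "driver", "driver"),
    2: ("Partial Automation",     "driver", "driver"),
    3: ("Conditional Automation", "system", "driver"),  # driver is DDT fallback
    4: ("High Automation",        "system", "system"),
    5: ("Full Automation",        "system", "system"),
}

def ddt_performer(level: int) -> str:
    """Return who primarily performs the DDT at a given SAE level."""
    name, performer, fallback = SAE_LEVELS[level]
    return performer
```

The table makes the level 3 transition point explicit: it is the first level at which the system performs the DDT while the human remains the fallback, which is precisely where supervision and handover failures concentrate.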
Different organizations define automation levels differently, which implies that both human operators and vehicle systems can be involved in the driving process at different levels. Consequently, AVs can pose different safety concerns depending on their size, capacity, and technologies. In no-automation, partial automation, or high automation modes, AV safety may be compromised by the interaction between the operator and the machine. When AVs operate in fully automated modes, the reliability of hardware and software becomes crucial. The more autonomous systems are incorporated into vehicles, the more complex they become, posing challenges for safety, reliability, and stability. A theoretical analysis of AV errors is therefore indispensable for understanding the current AV safety situation and predicting future AV safety levels.

2.2. Risk Management

In the deployment of AVs, safety challenges extend beyond those faced by traditional automobiles. Without human oversight, AVs are solely responsible for perception, decision-making, and control in a dynamic environment. It is therefore critical that any system faults or misjudgments be addressed immediately, as they can cause fatal accidents involving passengers, pedestrians, and other road users [35]. Several high-profile accidents involving AV prototypes have underscored the importance of improving the safety and resilience of these systems. These incidents demonstrate that errors are caused not only by faulty perception or wrong planning, but also by latent software defects, sensor degradation, or unexpected interactions between subsystems [36]. The key to addressing these challenges is to incorporate safe and secure system design into every layer of the design process, from perception algorithms and control policies through architectural redundancy and testing protocols. The principle of fail-operational behavior has become a central component of AV design [37].
AVs operate in open-world environments with a high level of uncertainty, incomplete information, and unstructured events, thereby requiring real-time risk assessment models [38], robust fault detection and diagnosis systems [39], and resilient decision-making frameworks [40]. A robust validation and verification strategy is equally important, as it can evaluate AVs across a broad range of operational scenarios, beyond a simple miles-based metric [41]. AV systems must therefore not only be safe and reliable, but also be accepted by society, regulated, and responsibly integrated into public infrastructure, ensuring social acceptance and regulatory approval. Moreover, the European Union has established the Guidelines for Trustworthy AI, which define seven essential requirements, including human agency and oversight, technological robustness and safety, privacy and data governance, transparency, accountability, diversity and fairness, as well as societal and environmental well-being. These principles are intended to guide the responsible design, development, and deployment of AI-based systems, including autonomous driving technologies [42]. In this study, special emphasis is placed on works that identify AV risks based on qualitative characteristics. Despite their many advantages, the growing presence of AVs in the market has raised concerns about the potential risks, uncertainties, and problems associated with their deployment [43].
Recent studies have increasingly highlighted that many safety challenges in AVs are closely related to the use of advanced AI-based perception, decision making, and control systems [44,45]. As AVs rely heavily on machine learning and deep neural networks, failures may arise not only from hardware malfunctions but also from data bias, model opacity, distributional shifts, and unexpected interactions between AI components and human users [44]. To address these challenges, recent research has emphasized the role of explainable artificial intelligence, AI failure taxonomies, and runtime monitoring methods to improve the transparency, diagnosability, and trustworthiness of AV systems [45]. These AI-related safety methods provide critical insights into how latent failures emerge and propagate across system layers, reinforcing the need for systematic, multi-layer safety analysis frameworks.
As AVs have become more mainstream in recent years, interest in their social acceptance and in the assessment of their risk categories has grown steadily. Several literature reviews have examined trust in autonomous vehicles, analyzed the potential causes of accidents associated with them, and classified the risks derived from uncertainty [46]. To explain the causal relationship between AV security risk factors and their importance, some researchers have used structural equation modelling [47]. Park and Park [48] and Meyer et al. [49] also used risk assessment methods to evaluate security risks associated with AVs. In contrast, the novelty of the present research lies in its approach, which considers all potential risks and explores their relationships. Our study examines AVs from a wide range of perspectives, including human, environmental, technical, and security considerations. To conduct this analysis, this study reviewed a comprehensive body of literature.

3. Hybrid Analytical Framework Using Swiss Cheese and SHELL Models

This study employs a qualitative case-based research design that aims to analyze safety failures in AV systems by integrating two well-established models: the Swiss Cheese model and the SHELL model. The Swiss Cheese model is the most widely used accident analysis model. In this model, the concept of layers of safety defense is used, where holes in the layers represent weaknesses or deficiencies due to latent errors, e.g., organizational errors and environmental factors. When the holes in each layer of defense align, a hazard can develop into an accident. Most observed accidents are not caused by a single failure within a system; they result from a combination of several failures at once [50]. In Figure 1, the Swiss Cheese model illustrates four key concepts: active failures, latent conditions, defensive barriers, and holes.
The SHELL model is a method for describing how interactive systems behave and how they relate to human factors [51]. In the SHELL model, humans are considered a vital, integral, and indispensable component of production and service; the human is therefore placed at the center of the model and connected to the other elements. The model emphasizes the interfaces between the central person and the other four components (software, hardware, environment, and liveware) rather than the components themselves. The SHELL model has five elements, as shown in Figure 2. These elements are as follows:
  • Hardware consists of the physical and non-human components of a system, such as the machinery and tools utilized by operators and vehicles.
  • Software refers to all nonphysical, intangible aspects of a system, such as the rules, policies, and procedures of an organization.
  • Environment includes both internal and external factors. In terms of the internal environment, several factors affect the working environment, including climate, temperature, vibrations, and noise. In the external environment, factors such as social, political, and economic influence are reflected.
  • Liveware (peripheral) refers to the other humans in the system and covers teamwork, communication, leadership, and related interpersonal factors.
  • Liveware (central) is the human at the center of the SHELL model and its core component.
Figure 2. The SHELL Model.
This hybrid approach provides a comprehensive view of safety risks by capturing systemic defense-layer failures through the Swiss Cheese model and human-system-environment interactions through the SHELL model. Rather than using primary data collection methods such as surveys or experiments, this research draws on secondary data sources, including official accident investigation reports, scholarly articles, and technical publications related to real-world AV incidents. This hybrid approach also enables a multidimensional exploration of failures in AV systems by analyzing how system interactions, such as those between humans and automation or between software and hardware, can create vulnerabilities when layers of defense fail to operate effectively. By applying this dual-model approach to real-world accident cases, this study provides structured insight into the sequence of failures and identifies potential intervention points for enhancing AV safety performance.
The Swiss Cheese model is used to map how failures can penetrate the layers of system defenses intended to prevent accidents. In the context of AVs, the four main layers analyzed are perception (sensors and data input), planning (navigation algorithms), control (system execution), and humans (drivers or system supervisors). Each layer has potential weaknesses, acting as the cheese holes that, if aligned, could lead to systemic failure. To complement this analysis, the SHELL model is used, which focuses on the interaction between humans (liveware) and four main elements: software (S), hardware (H), environment (E), and other humans (L). In the context of AVs, these interactions can occur between the driver and the autopilot system (L-S), the driver and hardware such as the steering wheel or pedals (L-H), the driver and road or weather conditions (L-E), and between vehicle occupants (L-L).
Identifying failures at these interaction levels helps explain why the system is not performing as intended, even though it appears normal from a technological perspective.
A human factors analysis is then conducted to examine how the driver or user’s perception, understanding, and prediction influence the course of the incident. The model used is Endsley’s SA, which is divided into three stages: perception, comprehension, and projection. In the perception stage, the focus is on the extent to which the system user can recognize important elements of the surrounding environment, such as the presence of other vehicles, traffic signs, or warnings from the system. The second stage is comprehension, which involves processing and interpreting the information obtained within the context of the overall situation. Failures at this stage often occur when the driver misinterprets system warnings or over-relies on the capabilities of the AV. The third stage, projection, relates to the ability to predict the consequences of current actions. In many cases, prediction failures lead to delays or errors in making appropriate decisions, such as failing to take control when the autopilot system encounters complex situations. Recent human factors research in intelligent vehicles highlights driver SA as a critical determinant of safe transitions between automated and manual control, with implications for interface design, trust, and engagement [52,53]. This SA analysis strengthens the findings from the Swiss Cheese and SHELL models by adding a cognitive dimension to the causes of failure.
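For coding purposes, the three SA stages can be represented as a simple enumeration with a toy keyword rule for tagging failure descriptions; the keyword rules and example phrasings are illustrative assumptions, not part of Endsley's model:

```python
from enum import Enum

class SAStage(Enum):
    """Endsley's three stages of situational awareness."""
    PERCEPTION = 1     # recognizing key elements (vehicles, signs, system alerts)
    COMPREHENSION = 2  # interpreting those elements in context
    PROJECTION = 3     # predicting near-future consequences of the situation

def sa_failure_stage(description: str) -> SAStage:
    """Toy classifier mapping a coded failure description to an SA stage
    (keyword rules are illustrative only)."""
    d = description.lower()
    if "misinterpret" in d or "overreliance" in d:
        return SAStage.COMPREHENSION
    if "predict" in d or "anticipate" in d:
        return SAStage.PROJECTION
    return SAStage.PERCEPTION
```

In practice such tagging was done manually from incident narratives; the sketch only shows how the three-stage vocabulary structures the coding.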
Based on the integrated results of the Swiss Cheese, SHELL, and SA approaches as shown in Figure 3, this study identifies recurrent critical failure paths in AV systems. To strengthen the explanation of how these models are combined, this study explicitly integrates five Swiss Cheese layers—governance and policy, perception, planning and decision-making, control and actuation, and human-related operational roles—with four SHELL interfaces: Liveware-Software, Liveware-Hardware, Liveware-Environment, and Liveware-Liveware. A cross-layer mapping approach was applied, in which failures observed in real-world AV incident data were categorized by both system layer and human-system interaction interface. For instance, perception failures were linked with L-H issues such as sensor degradation or miscalibration, while communication breakdowns between driver and system were associated with L-S challenges such as low-salience alerts or mode confusion. This mapping reveals how concurrent weaknesses across systemic layers and human-related interactions can align and result in safety incidents. By combining these models with Endsley’s SA framework, the analysis further incorporates cognitive dimensions such as the operator’s ability to perceive, comprehend, and project risk in real time.
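A minimal sketch of this cross-layer mapping, populated only with failure modes named above (the layer and interface labels are our shorthand, and the dictionary is illustrative rather than exhaustive):

```python
# Cross-layer mapping: (Swiss Cheese layer, SHELL interface) -> failure modes.
CROSS_LAYER_MAP = {
    ("perception", "L-H"): ["sensor degradation", "sensor miscalibration"],
    ("perception", "L-E"): ["adverse weather", "degraded road visibility"],
    ("human-machine interface", "L-S"): ["low-salience alerts", "mode confusion"],
    ("human-machine interface", "L-L"): ["inadequate handover communication"],
}

def failures_at(layer: str, interface: str) -> list:
    """Look up the candidate failure modes coded for a layer/interface pair."""
    return CROSS_LAYER_MAP.get((layer, interface), [])
```

Representing the mapping as explicit (layer, interface) pairs makes the alignment argument concrete: an incident becomes critical when its coded failures occupy several pairs simultaneously.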
These findings are then used to formulate risk mitigation strategies. These strategies are designed to close failure loopholes at each layer of defense, improve the quality of interactions between system elements, and refine user interfaces and training to maintain SA. Mitigation recommendations focus on three levels: technical, procedural, and human. At the technical level, improvements include enhancing the system detection capabilities, redesigning the interface, and providing more intuitive feedback. At the procedural level, updating user instructions and strengthening system override protocols are the primary focus. Meanwhile, from the human perspective, it is crucial to provide training that emphasizes understanding the limits of autonomous systems and scenarios where human intervention is most needed. With mitigation strategies that directly target the root of the problem, it is hoped that future AV systems will be safer and more reliable.

4. Case-Based Analysis

This study adopts a qualitative multiple-case research design based on secondary documentary data to examine safety failures in autonomous driving systems. Rather than aiming for statistical generalization, the case-based approach seeks analytical generalization by identifying recurrent failure mechanisms and interaction patterns across real-world incidents.

4.1. Case Selection and Data Sources

Cases were sourced from the U.S. NHTSA Standing General Order crash reporting database, supplemented by publicly available official accident investigation reports and technical disclosures where applicable. The temporal scope of the analysis covers incidents reported between October 2024 and October 2025. Cases were included if they met the following criteria:
  • Involvement of vehicles operating with an Automated Driving System (ADS) or SAE level 2 Advanced Driver Assistance System (ADAS) engaged at or immediately prior to the incident;
  • Availability of narrative descriptions detailing system behavior and human involvement;
  • Relevance to safety failures involving perception, decision-making, control, or human–machine interaction.
Cases with insufficient documentation or purely mechanical failures unrelated to automation were excluded.
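The inclusion criteria and the exclusion rule above can be expressed as a simple filter; the field names below are illustrative and do not correspond to NHTSA's actual reporting schema:

```python
def include_case(case: dict) -> bool:
    """Apply the three inclusion criteria and the exclusion rule to one
    candidate crash report (field names are illustrative assumptions)."""
    automation_ok = case.get("automation") in {"ADS", "L2_ADAS"}
    documented = bool(case.get("narrative"))          # narrative description exists
    safety_relevant = case.get("failure_domain") in {
        "perception", "decision-making", "control", "human-machine interaction"}
    mechanical_only = case.get("mechanical_only", False)
    return automation_ok and documented and safety_relevant and not mechanical_only
```

Encoding the screening logic this way makes the case selection reproducible: every retained case satisfies all three criteria, and every excluded case fails at least one or is a purely mechanical failure.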
As part of its Standing General Order, the NHTSA mandates that manufacturers and operators of vehicles equipped with automated driving systems or SAE level 2 advanced driver assistance systems report crashes to the agency. Through the General Order, the NHTSA receives timely and transparent notifications from manufacturers and operators of real-world accidents involving ADS and level 2 ADAS vehicles. These data enable NHTSA to conduct further investigations and enforce vehicle safety laws in the event of crashes raising safety concerns about ADS or level 2 ADAS technologies. If NHTSA discovers a safety defect in a vehicle, it will take the necessary action to remove unsafe vehicles from public roads or remediate them as needed.
In the General Order, the NHTSA examined whether manufacturers of ADS and level 2 ADAS, including prototypes of those vehicles and equipment, were meeting their statutory obligations to ensure that their vehicles and equipment are free from defects that pose unreasonable risks to motor vehicle safety. Before the General Order was implemented, NHTSA had limited sources of crash notifications from manufacturers, including developers, which resulted in a high level of inconsistency and delay. Crashes are categorized according to the type of automation system engaged at the time of the crash. Under previous versions of the General Order, some reporting entities classified onboard automation systems as ADS when they were in fact level 2 ADAS. The NHTSA is working with reporting entities to correct this information and to improve data quality going forward.
In prior General Orders, ADS crashes were presented in the chart shown in Figure 4 using the ADS-equipped data field. A comparison of reported crashes involving level 2 ADAS-equipped vehicles is presented in Figure 5. To illustrate overall temporal patterns, linear trendlines were fitted to monthly crash counts using ordinary least squares regression. These trendlines are intended solely for descriptive visualization and do not imply causal relationships. The third amended General Order streamlined the reporting requirements of the prior orders to maintain a focus on safety. ADS and ADAS crashes are now presented in the charts according to the type of automation system engaged prior to crash onset and at the time of the incident, respectively. The level 2 ADAS chart shows crashes in which the reporting entity identified a level 2 ADAS as the automation system involved in the incident report. If multiple versions of a report are available for a particular entity, the values of the most recent report are used. Occasionally, more than one report may be submitted for a single crash, resulting in duplicate counts when reports cannot be matched with sufficient information. NHTSA data show that crashes involving both ADS and level 2 ADAS-equipped vehicles tend to increase over time, indicating that automation has not sufficiently reduced crash frequency.

4.2. Analytical Procedure

The analysis followed a structured two-stage coding process. In the first stage, each case was analyzed using the Swiss Cheese model by mapping observed failures onto five defensive layers: governance and policy, perception, planning and decision-making, control and actuation, and human–machine interface. This step identified where systemic defenses were weakened or breached.
In the second stage, failures identified at each layer were further examined using the SHELL model to classify interface-level breakdowns across Liveware-Software, Liveware-Hardware, Liveware-Environment, and Liveware-Liveware interactions. This enabled a detailed examination of how human-system mismatches contributed to failure propagation. Following within-case analysis, a cross-case synthesis was conducted to identify recurring alignment pathways and failure patterns. These patterns form the empirical basis for the hybrid Swiss Cheese-SHELL framework proposed in this study.
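The two-stage coding process might be represented with a simple data structure; the case identifiers and codes below are illustrative placeholders, not the study's actual coding scheme.

```python
from collections import Counter
from dataclasses import dataclass, field

LAYERS = {"governance", "perception", "planning", "control", "hmi"}
INTERFACES = {"L-S", "L-H", "L-E", "L-L"}

@dataclass
class CodedIncident:
    case_id: str
    layers_breached: set = field(default_factory=set)     # stage 1: Swiss Cheese
    interface_failures: set = field(default_factory=set)  # stage 2: SHELL

def cross_case_patterns(incidents):
    """Tally recurring (layer, interface) alignments across cases."""
    tally = Counter()
    for inc in incidents:
        for layer in inc.layers_breached & LAYERS:
            for iface in inc.interface_failures & INTERFACES:
                tally[(layer, iface)] += 1
    return tally

# Illustrative placeholder cases, not the study's actual coding.
cases = [
    CodedIncident("C1", {"perception", "hmi"}, {"L-H", "L-S"}),
    CodedIncident("C2", {"perception", "planning"}, {"L-H", "L-E"}),
]
patterns = cross_case_patterns(cases)
```

The cross-case tally mirrors the synthesis step: pairs with high counts correspond to the recurring alignment pathways discussed in Section 5.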

5. Discussion

This section discusses the key patterns of safety failures identified through the hybrid Swiss Cheese–SHELL analysis. By examining real-world AV incidents, we highlight how technical weaknesses and human-system interactions contribute to these failures, and how the findings can inform practical safety improvements.

5.1. An Integrated Swiss Cheese—SHELL Framework

First, we use the Swiss Cheese model to study failures in AVs. The goal is to characterize how multilayer barrier failures align to produce autonomous driving safety incidents, identify the most common alignment pathways, and prioritize the highest-impact mitigations. The Swiss Cheese model applied to AV failures is depicted in Figure 6. Each slice that must be checked for holes in every incident is discussed below.
  • Governance and policy
The Department of Motor Vehicles of the State of California reports to the public the safety risks exposed by accidents and disengagements during on-road testing [54]. When an AV disengages during an on-road test, this does not necessarily lead to a traffic accident, but it represents a risk event that requires the human operator to step in and take control of the vehicle [55]. Apple classifies disengagement events into two general types: manual takeover and software disengagement. A manual takeover occurs when the AV operator determines that manual control is necessary instead of automated control. In some cases, these events result from actual driving conditions, such as emergency vehicles, construction zones, or unexpected objects on or near the road. A software disengagement occurs when an issue with perception, motion planning, control, or communication is detected. For instance, if a sensor is unable to adequately detect and track an object in the surrounding environment, a human driver will take over driving. Disengagement will also occur whenever the decision layer fails to generate a motion plan or the actuator fails to respond in a timely or appropriate manner. Nevertheless, different manufacturers may define disengagement events differently, so for some companies the reported disengagement events may be incomplete. Significant differences remain, primarily due to the varying maturity of autonomous technology, although the possibility that differing definitions of disengagement during on-road testing contribute to them cannot be ruled out. Policymakers can play a significant role in defining disengagement events in a way that covers perception errors, decision errors, action errors, and system faults, among others.
  • Perception
A perception layer acquires data from multiple sensors to build a real-time picture of the environment on which decisions are based [56]. Sensor technology plays a major role in determining the development of AVs, especially its complexity, reliability, suitability, and maturity [57]. Various sensors can be used to detect the environment, including light detection and ranging (LiDAR) sensors, cameras, radars, ultrasonic sensors, contact sensors, and global positioning systems, each sensing technology having its own functions and capabilities [58]. Perception of other road users, traffic signals, and other hazards is compromised if AVs misperceive their status, location, or movement, or fail to detect potential hazards.
A variety of factors, including hardware, software, and communication, can cause a perception error. Sensing technology plays an important role in perception; therefore, perception errors can originate in hardware, including the sensors themselves. In particular, when sensors fail or degrade, the vehicle's perception is compromised, confusing decision-making and creating risky driving conditions. Considering this, the development of reliable and fault-tolerant sensing technology may provide a solution to these problems. Perception errors may also derive from software malfunctions that affect the decision and action layers, potentially leading to mission failure or safety issues. As AVs approach full automation, communication errors will become increasingly important. In some cases, communication errors arise between the AVs and the corresponding infrastructure [59], other road users [58], and the internet [60]. Modern transportation systems require effective interpersonal communication [61]. Communication is the foundation of AVs, enabling coordinated movements and ensuring the safety of all road users, including pedestrians, bicyclists, and construction workers [62]. Communication methods include gestures, facial expressions, and vehicular devices, and cultural, contextual, and experiential factors all play a role in how these messages are comprehended. These factors also represent some of AV technology's most significant challenges [61].
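As a minimal illustration of fault-tolerant sensing, redundant range readings can be fused by median voting so that a single failed sensor cannot corrupt the estimate; the threshold and readings here are invented for the sketch.

```python
import statistics

def fused_range(readings, max_valid=200.0):
    """Fuse redundant range readings (metres) by median voting.

    Readings outside the plausible envelope are discarded first, so a
    single failed or occluded sensor cannot drag the fused estimate.
    """
    valid = [r for r in readings if 0.0 < r <= max_valid]
    if not valid:
        # All sensors failed: the automation must hand back control.
        raise ValueError("all sensors invalid; request human takeover")
    return statistics.median(valid)

# One faulty sensor reports an implausible 999 m; fusion ignores it.
estimate = fused_range([42.1, 41.8, 999.0])
```

Real perception stacks use far richer fusion, but the sketch captures the design principle named above: redundancy plus plausibility checks limits the damage from a single hardware fault.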
  • Planning and decision
All information processed in the perception layer is analyzed by the decision layer. Decisions are made and actions are taken based on the information generated by the decision layer [63]. The decision-making system relies upon SA for both short- and long-term planning. Several tasks are included in short-term planning, including trajectory generation, obstacle avoidance, event management, and maneuver management [64]. In the meantime, route planning and mission planning play a vital role in long-term planning [65,66].
There are two main causes of decision errors: system failures and human error. Effective use of an AV requires that it take over driving or warn the driver whenever necessary, with a minimal number of false alarms and acceptable performance, particularly with respect to safety [67]. A significant reduction in false alarm rates can be achieved while maintaining adequate accuracy and safety as AV technology improves over time [68]. If the algorithm cannot detect all hazards effectively and efficiently, the safety of the AV is compromised. When drivers take over from the automated vehicle, several seconds may elapse before they fully regain control [69]. This adds uncertainty to the safe operation of the AV.
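A short worked example shows why those few seconds of takeover delay matter: the vehicle keeps moving at its current speed for the whole delay. The speed and delay values below are illustrative.

```python
def blind_distance(speed_kmh, takeover_delay_s):
    """Distance (metres) travelled while control is handed to the driver.

    The vehicle keeps moving at speed v for the full delay t, so
    d = v * t, with v converted from km/h to m/s.
    """
    v_ms = speed_kmh / 3.6
    return v_ms * takeover_delay_s

# Illustrative values: at 100 km/h, a 3 s takeover delay covers roughly 83 m.
d = blind_distance(100.0, 3.0)
```

Over such a distance the traffic situation can change substantially, which is exactly the uncertainty the handover literature cited above is concerned with.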
  • Control and actuation
The reliability of an AV system depends on its architecture, hardware, and software. It is important to note, however, that AV architecture is highly dependent upon the level of automation, and therefore, AV safety may vary from one stage to another. It is also possible to find variations in AV architecture even at the same automation level across studies. A typical AV comprises a sensor-based perception system, an algorithm-based decision system, an actuator-based actuation system, and the interconnections between systems [70]. As a rule of thumb, all components of an AV should function well to ensure its safety.
The action controller operates the steering wheel, throttle, or brake of a conventional vehicle after the decision layer transmits the command [71]. Moreover, the actuators monitor feedback variables and, based on the feedback information, make new actuation decisions.
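The feedback-based actuation described above can be sketched as one cycle of a proportional controller; the gain and limits are arbitrary illustrative values, not those of any production AV.

```python
def speed_control_step(target_ms, measured_ms, kp=0.5,
                       min_cmd=-1.0, max_cmd=1.0):
    """One cycle of a feedback actuation loop (proportional control).

    The decision layer supplies target_ms; the actuator measures the
    feedback variable (current speed) and issues a new throttle/brake
    command each cycle, clamped to the actuator's physical limits.
    """
    error = target_ms - measured_ms
    command = kp * error
    return max(min_cmd, min(max_cmd, command))

# Vehicle slightly below target speed: a small positive throttle command.
cmd = speed_control_step(target_ms=20.0, measured_ms=19.0)
```

The clamping step matters for safety analysis: if the commanded correction exceeds what the actuator can physically deliver, the control layer's defense is partially breached even when the decision layer is correct.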
  • Human–machine interface
Human–machine interfaces (HMIs) for vehicles have typically served to support human dominance of the driving task. Even though modern driving assistance systems allow vehicles to take over control in some situations, the typical HMI has not changed radically in the past few decades. As deep learning neural network technologies become more prevalent in automotive applications, multimodal communications between drivers and vehicles can be enabled effectively [72]. An intuitive multimodal HMI supports smooth switching between human manual control and automated operation, allowing drivers to interact with AVs easily. Unlike the steering wheel, however, there is no standard multimodal HMI for AVs. It should also be noted that multimodal communications differ from buttons and knobs, which are less susceptible to cultural variations across countries. Original Equipment Manufacturers (OEMs) in the automotive industry can distribute a typical HMI across many countries and markets with very little adaptation, but they need to adapt multimodal HMIs to cultural influences, driving habits, social cognition, and the legal traffic system. Design methodologies for HMI systems at various levels of automation are described in detail in [18]. In partially automated AVs (SAE level 2) as well as conditionally automated AVs (SAE level 3), a multimodal HMI is intended not only to reduce driver fatigue but also to maintain a level of driver engagement that ensures the driver can take over control quickly when switching occurs. Multimodal HMI systems therefore operate along a trade-off curve between these two design goals.
Figure 6. Hybrid Swiss Cheese and SHELL Approach.
Following the safety failure analysis performed with the Swiss Cheese model, we use the SHELL model to systematically identify, code, classify, and prioritize human-system interface failures that contribute to safety incidents in AVs. Human actors, such as safety drivers and other road users, are at the center of the conceptual framing, serving as liveware. The four interfaces of the SHELL model are discussed as follows.
  • Liveware-Software
The liveware-software (L-S) interface refers to the interactions, communication, and dependencies between human actors (liveware) and the software components of an autonomous driving system. Software here covers the user interface, the automation logic, tool chains, development pipelines, and telemetry and diagnostics systems. L-S failures occur when software fails to appropriately present, manage, or support human interaction with those functions, or when human expectations or actions misalign with what the software is doing. Table 2 shows common ways in which L-S failures manifest in AV safety incidents.
When the software interface fails as an L-S hole, it reduces the effectiveness of one of the defensive layers in the Swiss Cheese model, such as the HMI layer or the human-monitoring barrier. That hole must then align with failures in other layers, such as perception and planning, for a safety failure to occur. For instance, poor alert design combined with sensor occlusion and misprediction can lead to a collision because the human neither saw the pedestrian in time nor was sufficiently alerted or able to intervene.
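The hole-alignment idea can be made concrete with a toy probability calculation, assuming (unrealistically) independent layer breaches; the per-layer probabilities are invented for illustration.

```python
from math import prod

def alignment_probability(breach_probs):
    """Probability that every defensive layer is breached at once,
    under a simplifying independence assumption; coupled layers in
    real systems make the true figure higher."""
    return prod(breach_probs)

# Invented per-exposure breach probabilities: HMI alert missed,
# sensor occluded, planner misprediction.
p_align = alignment_probability([0.05, 0.02, 0.10])
```

Even with these small invented probabilities the joint figure is non-zero over many exposures, and the independence assumption is precisely what latent socio-technical couplings violate, which is why the hybrid analysis looks for correlated holes.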
  • Liveware-Hardware
The liveware-hardware (L-H) interface refers to the interactions, dependencies, and potential mismatches between human actors (liveware) and the physical, mechanical, or electronic components of an AV system. Hardware includes sensors, actuators, mechanical components and mounting elements, calibration and alignment, and maintenance infrastructure or tools. If hardware malfunctions, degrades, is poorly maintained or installed, or is badly calibrated, human operators must respond. Failures at this interface can degrade perception, lead to misinterpretation of the environment, and delay or prevent corrective action. Table 3 shows the most common types of hardware-related failures in AV settings.
In the Swiss Cheese model, L-H failures are holes in the hardware, perception, or actuation layer defenses. For instance, a misaligned LiDAR-camera extrinsic calibration (a hardware hole) can cause perception to misclassify or mislocalize an object (a further hole in the perception layer). The hardware error, combined with a planning module that does not account for uncertainty, amplifies the effect of the aligned holes. If the human operator or safety driver is not notified, or misses the notification due to user-interface issues, that layer of defense also fails, letting the hazard through. L-H holes therefore often trigger or magnify failures in higher layers, such as perception, prediction, planning, and control. Analyses that neglect hardware issues may underestimate the chance of hole alignment leading to collisions or near-misses.
  • Liveware-Environment
The liveware-environment (L-E) interface concerns how human operators, such as safety drivers, interact with, are affected by, or must adapt to the environmental conditions under which autonomous driving systems operate. Environment in this case spans ambient conditions, road geometry and infrastructure, traffic complexity, and regulatory, legal or societal context. The human component (liveware) must perceive, understand, anticipate, and respond effectively, given the environmental context. If the environment degrades or presents surprises, those human processes can fail. Table 4 shows typical failure modes in L-E, how they happen, and how they can undermine safety in autonomous driving systems.
Within the Swiss Cheese model, L-E holes often act as enablers or amplifiers: they weaken upstream defenses or worsen the consequences of other holes. For instance, under heavy rain (an environment hole), sensor perception (a hardware or software layer) degrades, causing the system to miss pedestrians; if the human safety driver is fatigued or the alert interface is low in salience, the takeover is delayed or absent, and a collision can result. Poor road geometry (curves) and faded lane markings degrade lane detection (perception), low lighting (environment) worsens visibility, and the planning layer may assume more confidence than is warranted, producing system misalignments and operator misinterpretations. Another example of an L-E hole is regulatory or temporary signage: construction zones may not be encoded in maps, so the system is unaware of the change, and even if the human sees it, the system's behavior may lag or misinterpret it. Thus, L-E holes often set the stage for active failures elsewhere or force human operators to work under higher workload or degraded SA, increasing the probability of error.
  • Liveware-Liveware
The liveware-liveware (L-L) interface refers to human-to-human interactions that are relevant to safety in autonomous driving systems. These human actors may include a safety or fallback driver, remote operators or teleoperators, operations or fleet management staff, engineers or developers maintaining the system, first responders or regulators, and other road users such as pedestrians, human-driven vehicles, and cyclists, especially when informal communication (gestures and eye contact) matters. L-L covers communication, coordination, shared SA, handover protocols, expectations, organizational culture, rules of engagement, escalation procedures, and informal behavioral norms among these agents. Table 5 shows typical failure modes in L-L.
L-L holes often serve as latent or active contributors to multilayer failure alignment in the Swiss Cheese model, and several pathways recur. Automation layers (software or perception) may detect a hazard, but if the remote operator or safety driver is unaware of it, for example after a poor handover, that layer fails because human intervention is late or incorrect. Hardware degradation or environmental conditions degrade system performance; if operators or maintenance teams do not communicate the issue to safety drivers, the hole persists. Regulatory or cultural lapses at the organizational or governance layer may leave roles vague, so no one is clearly in charge during incidents. Informal communication, such as gestures and expectations among road users, interacts with the environment and perception: if other road users misinterpret AV behavior, or human drivers do not understand it, risky situations arise.
While the Swiss Cheese model and the SHELL model have been widely applied independently in safety-critical domains, such as aviation and healthcare, their combined application has not been formally proposed in the context of autonomous driving systems. This study intentionally integrates the two models to address the complex socio-technical nature of AV safety failures. The Swiss Cheese model provides a system-level perspective by illustrating how latent failures align across multiple defensive layers, including governance, perception, planning, control, and human–machine interface. However, it does not explicitly capture the detailed mechanisms of human–system interaction failures. Conversely, the SHELL model focuses on interface-level mismatches between humans and software, hardware, environment, and other humans, but lacks a structured representation of how these failures propagate across system defenses. By integrating the two models, this hybrid approach enables a unified analysis that links interface-level human factors to system-level failure pathways, offering a more comprehensive and empirically grounded understanding of autonomous driving safety incidents. This integration emerged directly from the analysis of real-world AV incident data, where failures were rarely isolated and instead resulted from the alignment of technical, organizational, and human factors. Table 6 presents the core operational contribution of the hybrid framework by explicitly mapping SHELL interface failures to the Swiss Cheese model layers, the nature of the resulting vulnerability, and their safety implications. This mapping transforms the conceptual integration into an applied analytical tool.
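In the spirit of the mapping in Table 6, a simplified sketch might encode which defensive layers each SHELL interface failure tends to weaken; the specific pairings below are an illustrative reduction, not the paper's full table.

```python
# Illustrative reduction of the Table 6 mapping: which Swiss Cheese
# layers each SHELL interface failure tends to weaken. The full table
# also records the vulnerability type and safety implication.
SHELL_TO_LAYERS = {
    "L-S": {"hmi", "planning"},         # alert design, automation logic
    "L-H": {"perception", "control"},   # sensor calibration, actuators
    "L-E": {"perception", "planning"},  # weather, road geometry, maps
    "L-L": {"governance", "hmi"},       # handovers, roles, procedures
}

def layers_at_risk(interface_failures):
    """Union of defensive layers weakened by the observed SHELL failures."""
    at_risk = set()
    for iface in interface_failures:
        at_risk |= SHELL_TO_LAYERS.get(iface, set())
    return at_risk

risk = layers_at_risk({"L-H", "L-E"})
```

Queried this way, the mapping behaves as the applied analytical tool the text describes: given the interface failures coded for an incident, it returns the layers whose defenses should be inspected first.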

5.2. Research Contributions

AVs represent a transformative technology with the potential to reshape mobility, but their safe deployment remains a critical concern. Understanding how and why safety failures occur in AVs, from both technical and human–organizational standpoints, is essential for building trust, improving regulations, and reducing real-world risks. Based on the detailed analysis presented above, this study makes the following key contributions:
  • A structured hybrid safety analysis framework
This study develops and applies a hybrid Swiss Cheese–SHELL framework that systematically links system-level defense layers with human–system interaction failures, enabling a more comprehensive understanding of autonomous driving safety incidents.
  • Explicit mapping of failure propagation across AV defense layers
There are five layers identified in the Swiss Cheese model. By decomposing autonomous driving into governance, perception, planning and decision, control and actuation, and human–machine interface layers, this work demonstrates how failures propagate and align across layers, rather than occurring as isolated faults.
  • Extension of the SHELL model to autonomous driving contexts
There are four interfaces analyzed in the SHELL model. The study adapts the classical SHELL model to autonomous driving by detailing how Liveware–Software, Liveware–Hardware, Liveware–Environment, and Liveware–Liveware interactions contribute to safety failures, supported by concrete failure modes observed in real-world incidents.
  • Identification of human involvement beyond direct driving tasks
The analysis highlights that human contribution to AV safety failures extends beyond on-board driving, encompassing remote operators, maintenance teams, organizational communication, and interactions with other road users, emphasizing the socio-technical nature of AV systems.
  • Integration of technical, organizational, and human perspectives for mitigation
The findings provide a foundation for targeted mitigation strategies by showing how technical degradation, environmental complexity, interface design, and communication breakdowns jointly influence risk, offering actionable insights for system designers, operators, and regulators.
These contributions are derived from in-depth analysis of real-world AV incident data sourced from NHTSA. The study reveals that safety failures often stem from the alignment of latent failures across governance, perception, planning, control, and HMI layers. Moreover, human factors—such as unclear alerting, sensor misalignment, or remote operation breakdowns—play a major role in these incidents, demonstrating that AV safety is not merely a technical problem but also socio-technical.

6. Conclusions

This study comprehensively analyzes safety failures in autonomous driving systems by integrating the Swiss Cheese and SHELL models to capture both systemic and human–machine interaction risks. The hybrid approach identified critical alignment pathways across multiple defensive layers, such as governance and policy, perception, planning and decision, control and actuation, and human–machine interface. This shows how failures in each can interact to cause accidents. By applying the SHELL model, the analysis further revealed that liveware interactions with software, hardware, environment, and other humans serve as key determinants of failure probability. The combined approach demonstrated that most failures in AVs are not isolated technological malfunctions, but rather the result of complex interdependencies among human, organizational, and technical systems.
The integration of the Swiss Cheese and SHELL models provides a novel, multidimensional approach for understanding and mitigating AV safety failures. Unlike traditional analyses that focus solely on technical or human factors, this study bridges both domains to highlight how latent organizational weaknesses, interface design flaws, and environmental uncertainties collectively influence AV reliability. This approach contributes to the field by offering a structured diagnostic method that can be adopted by regulators, engineers, and researchers for comprehensive AV safety evaluations. It also strengthens the theoretical linkage between systemic risk management and human factors engineering in the context of intelligent transportation systems.
Future research should build upon this hybrid approach by incorporating quantitative validation using real-world accident datasets and simulation-based testing. Further investigation is needed into how to integrate adaptive artificial intelligence, human SA, and machine learning reliability metrics to enhance proactive risk detection. Moreover, future studies should explore cross-cultural and regulatory differences that affect human–machine trust and decision-making in autonomous systems. Expanding the hybrid Swiss Cheese and SHELL approach into other domains, such as aerial drones or maritime automation, would also offer broader insights into human-centered safety assurance for next-generation autonomous technologies. Overall, this study lays the groundwork for a human-centered, system-aware methodology to support safer autonomous technologies.

Author Contributions

Investigation, methodology, validation, writing—original draft, B.R.; conceptualization, visualization, S.T.W.; writing—review and editing, F.N.R. and T.M.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original data presented in the study are openly available at https://www.nhtsa.gov/laws-regulations/standing-general-order-crash-reporting (accessed on 2 October 2025).

Acknowledgments

The authors gratefully acknowledge the constructive suggestions provided by both the editors and reviewers.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ADAS  Advanced Driver Assistance Systems
ADS  Autonomous Driving Systems
AVs  Autonomous Vehicles
DDT  Dynamic Driving Task
HMI  Human–machine interfaces
L-E  Liveware-Environment
L-H  Liveware-Hardware
L-L  Liveware-Liveware
L-S  Liveware-Software
NHTSA  National Highway Traffic Safety Administration
OEMs  Original Equipment Manufacturers
SAE  Society of Automotive Engineers

References

  1. Haboucha, C.J.; Ishaq, R.; Shiftan, Y. User preferences regarding autonomous vehicles. Transp. Res. Part C Emerg. Technol. 2017, 78, 37–49.
  2. Favaro, F.M.; Nader, N.; Eurich, S.O.; Tripp, M.; Varadaraju, N. Examining accident reports involving autonomous vehicles in California. PLoS ONE 2017, 12, e0184952.
  3. Millard-Ball, A. Pedestrians, autonomous vehicles, and cities. J. Plan. Educ. Res. 2018, 38, 6–12.
  4. Palmeiro, A.R.; Kint, S.v.d.; Vissers, L.; Farah, H.; Winter, J.C.F.d.; Hagenzieker, M. Interaction between pedestrians and automated vehicles: A wizard of oz experiment. Transp. Res. Part F Traffic Psychol. Behav. 2018, 58, 1005–1020.
  5. Davis, L.C. Optimal merging into a high-speed lane dedicated to connected autonomous vehicles. Phys. A Stat. Mech. Its Appl. 2020, 555, 124743.
  6. Stern, R.E.; Chen, Y.; Churchill, M.; Wu, F.; Monache, M.L.D.; Piccoli, B.; Seibold, B.; Sprinkle, J.; Work, D.B. Quantifying air quality benefits resulting from few autonomous vehicles stabilizing traffic. Transp. Res. Part D Transp. Environ. 2019, 67, 351–365.
  7. Singh, S. Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey. No. DOT HS 812 115. 2018. Available online: https://trid.trb.org/view/1507603 (accessed on 1 October 2025).
  8. Nagy, V.; Horvath, B. The effects of autonomous buses to vehicle scheduling system. Procedia Comput. Sci. 2020, 170, 235–240.
  9. Tian, L.-J.; Sheu, J.-B.; Huang, H.-J. The morning commute problem with endogenous shared autonomous vehicle penetration and parking space constraint. Transp. Res. Part B Methodol. 2019, 123, 258–278.
  10. Lee, Y.-C.; Mirman, J.H. Parents’ perspectives on using autonomous vehicles to enhance children’s mobility. Transp. Res. Part C Emerg. Technol. 2018, 96, 415–431.
  11. Fagnant, D.J.; Kockelman, K. Preparing a nation for autonomous vehicles: Opportunities, barriers and policy recommendations. Transp. Res. Part A Policy Pract. 2015, 77, 167–181.
  12. Anderson, J.M.; Kalra, N.; Stanley, K.D.; Sorensen, P.; Samaras, C.; Oluwatola, O.A. Autonomous Vehicle Technology: A Guide for Policymakers; Rand Corporation: Santa Monica, CA, USA, 2016.
  13. Bennett, R.; Vijaygopal, R.; Kottasz, R. Willingness of people who are blind to accept autonomous vehicles: An empirical investigation. Transp. Res. Part F Traffic Psychol. Behav. 2020, 69, 13–27.
  14. Zhang, T.Z.; Chen, T.D. Smart charging management for shared autonomous electric vehicle fleets: A puget sound case study. Transp. Res. Part D Transp. Environ. 2020, 78, 102184.
  15. Zhang, L.; Chen, F.; Ma, X.; Pan, X. Fuel economy in truck platooning: A literature overview and directions for future research. J. Adv. Transp. 2020, 2020, 2604012.
  16. Lokhandwala, M.; Cai, H. Siting charging stations for electric vehicle adoption in shared autonomous fleets. Transp. Res. Part D Transp. Environ. 2020, 80, 102231.
  17. Overtoom, I.; Correia, G.; Huang, Y.; Verbraeck, A. Assessing the impacts of shared autonomous vehicles on congestion and curb use: A traffic simulation study in the Hague, Netherlands. Int. J. Transp. Sci. Technol. 2020, 9, 195–206.
  18. Ge, X.; Li, X.; Wang, Y. Methodologies for evaluating and optimizing multimodal human-machine-interface of autonomous vehicles. In Proceedings of the WCX World Congress Experience 2018, Detroit, MI, USA, 10–12 April 2018.
  19. Koopman, P.; Wagner, M. Autonomous vehicle safety: An interdisciplinary challenge. IEEE Intell. Transp. Syst. Mag. 2017, 9, 90–96.
  20. Xu, C.; Ding, Z.; Wang, C.; Li, Z. Statistical analysis of the patterns and characteristics of connected and autonomous vehicle involved crashes. J. Saf. Res. 2019, 71, 41–47.
  21. Nemeth, B.; Bede, Z.; Gaspar, P. Control strategy for the optimization of mixed traffic flow with autonomous vehicles. IFAC-Pap. 2019, 52, 227–232.
  22. Phan, D.; Bab-Hadiashar, A.; Lai, C.Y.; Crawford, B.; Hoseinnezhad, R.; Jazar, R.N.; Khayyam, H. Intelligent energy management system for conventional autonomous vehicles. Energy 2020, 191, 116476.
  23. Chen, F.; Song, M.; Ma, X. A lateral control scheme of autonomous vehicles considering pavement sustainability. J. Clean. Prod. 2020, 256, 120669.
  24. Hulse, L.M.; Xie, H.; Galea, E.R. Perceptions of autonomous vehicles: Relationships with road users, risk, gender and age. Saf. Sci. 2018, 102, 1–13.
  25. Moody, J.; Bailey, N.; Zhao, J. Public perceptions of autonomous vehicle safety: An international comparison. Saf. Sci. 2020, 121, 634–650.
  26. Lee, J.; Lee, D.; Park, Y.; Lee, S.; Ha, T. Autonomous vehicles can be shared, but a feeling of ownership is important: Examination of the influential factors for intention to use autonomous vehicles. Transp. Res. Part C Emerg. Technol. 2019, 107, 411–422.
  27. Mordue, G.; Yeung, A.; Wu, F. The looming challenges of regulating high level autonomous vehicles. Transp. Res. Part A Policy Pract. 2020, 132, 174–187.
  28. Silva, Ó.; Cordera, R.; González-González, E.; Nogués, S. Environmental impacts of autonomous vehicles: A review of the scientific literature. Sci. Total Environ. 2022, 830, 154615.
  29. Legacy, C.; Ashmore, D.; Scheurer, J.; Stone, J.; Curtis, C. Planning the driverless city. Transp. Rev. 2019, 39, 84–102.
  30. Richards, D.; Stedmon, A. To delegate or not to delegate: A review of control frameworks for autonomous cars. Appl. Ergon. 2016, 53, 383–388.
  31. SAE Standard J3016_202104; SAE International Recommended Practice, Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. SAE: Warrendale, PA, USA, 2021.
  32. Paddeu, D.; Shergold, I.; Parkhurst, G. The social perspective on policy towards local shared autonomous vehicle services (LSAVS). Transp. Policy 2020, 98, 116–126.
  33. Garavello, M.; Goatin, P.; Liard, T.; Piccoli, B. A multiscale model for traffic regulation via autonomous vehicles. J. Differ. Equ. 2020, 269, 6088–6124.
  34. United States Department of Transportation. Automated Vehicles for Safety; United States Department of Transportation: Washington, DC, USA, 2019.
  35. Jeong, Y. Fault detection with confidence level evaluation for perception module of autonomous vehicles based on long short term memory and Gaussian Mixture Model. Appl. Soft Comput. 2023, 149, 111010.
  36. Madala, K.; Do, H. Functional safety hazards for machine learning components in autonomous vehicles. In Proceedings of the 2021 4th IEEE International Conference on Industrial Cyber-Physical Systems (ICPS), Victoria, BC, Canada, 10–12 May 2021; pp. 225–230.
  37. Tan, H.; Zhao, F.; Zhang, W.; Liu, Z. An evaluation of the safety effectiveness and cost of autonomous vehicles based on multivariable coupling. Sensors 2023, 23, 1321.
  38. Yang, W.; Li, C.; Zhou, Y. A path planning method for autonomous vehicles based on risk assessment. World Electr. Veh. J. 2022, 13, 234.
  39. Liu, Z.; Liu, X.; Li, Q.; Zhang, Z.; Gao, C.; Tang, F. Strategies for coordinated merging of vehicles at ramps in new hybrid traffic environments. Sustainability 2025, 17, 4522.
  40. Li, G.; Yang, Y.; Zhang, T.; Qu, X.; Cao, D.; Cheng, B.; Li, K. Risk assessment based collision avoidance decision-making for autonomous vehicles in multi-scenarios. Transp. Res. Part C Emerg. Technol. 2021, 122, 102820.
  41. Katrakazas, C.; Quddus, M.; Chen, W.H. A new integrated collision risk assessment methodology for autonomous vehicles. Accid. Anal. Prev. 2019, 127, 61–79.
  42. Cannarsa, M. Ethics Guidelines for Trustworthy AI. In The Cambridge Handbook of Lawyering in the Digital Age; DiMatteo, L.A., Janssen, A., Ortolani, P., de Elizalde, F., Cannarsa, M., Durovic, M., Eds.; Cambridge Law Handbooks: Cambridge, UK; Cambridge University Press: Cambridge, UK, 2021; pp. 283–297.
  43. Yang, K.; Li, B.; Shao, W.; Tang, X.; Liu, X.; Wang, H. Prediction failure risk-aware decision-making for autonomous vehicles on signalized intersections. IEEE Trans. Intell. Transp. Syst. 2023, 24, 12806–12820.
  44. Joshi, H.C.; Kumar, S. Artificial intelligence failures in autonomous vehicles: Causes, implications, and prevention. Computer 2024, 57, 18–30.
  45. Atakishiyev, S.; Salameh, M.; Yao, H.; Goebel, R. Explainable artificial intelligence for autonomous driving: A comprehensive overview and field guide for future research directions. IEEE Access 2024, 12, 101603–101625.
  46. Yang, K.; Tang, X.; Li, J.; Wang, H.; Zhong, G.; Chen, J.; Cao, D. Uncertainties in onboard algorithms for autonomous vehicles: Challenges, mitigation, and perspectives. IEEE Trans. Intell. Transp. Syst. 2023, 24, 8963–8987.
  46. Yang, K.; Tang, X.; Li, J.; Wang, H.; Zhong, G.; Chen, J.; Cao, D. Uncertainties in onboard algorithms for autonomous vehicles: Challenges, mitigation, and perspectives. IEEE Trans. Intell. Transp. Syst. 2023, 24, 8963–8987. [Google Scholar] [CrossRef]
  47. Prasetio, E.A.; Nurliyana, C. Evaluating perceived safety of autonomous vehicle: The influence of privacy and cybersecurity to cognitive and emotional safety. IATSS Res. 2023, 47, 160–170. [Google Scholar] [CrossRef]
  48. Park, S.; Park, H. PIER: Cyber-resilient risk assessment model for connected and autonomous vehicles. Wirel. Netw. 2022, 30, 4591–4605. [Google Scholar] [CrossRef]
  49. Meyer, S.F.; Elvik, R.; Johnsson, E. Risk analysis for forecasting cyberattacks against connected and autonomous vehicles. J. Transp. Secur. 2021, 14, 227–247. [Google Scholar] [CrossRef]
  50. Thonon, H.; Espeel, F.; Frederic, F.; Thys, F. Overlooked guide wire: A multicomplicated Swiss Cheese Model example. Analysis of a case and review of the literature. Acta Clin. Belg. 2019, 75, 193–199. [Google Scholar] [CrossRef] [PubMed]
  51. Lin, Q.; Wang, D. Facility layout planning with SHELL and Fuzzy AHP method based on human reliability for operating theatre. J. Healthc. Eng. 2019, 1, 8563528. [Google Scholar] [CrossRef]
  52. Capallera, M.; Angelini, L.; Meteier, Q.; Khaled, O.A.; Mugellini, E. Human-vehicle interaction to support driver’s situation awareness in automated vehicles: A systematic review. IEEE Trans. Intell. Veh. 2023, 8, 2551–2567. [Google Scholar] [CrossRef]
  53. Ulahannan, A.; Thompson, S.; Jennings, P.; Birrell, S. Using glance behaviour to inform the design of adaptive HMI for partially automated vehicles. IEEE Trans. Intell. Transp. Syst. 2022, 23, 4877–4892. [Google Scholar] [CrossRef]
  54. Lv, C.; Cao, D.; Zhao, Y.; Auger, D.J.; Sullman, M.; Wang, H.; Dutka, L.M.; Skrypchuk, L.; Mouzakitis, A. Analysis of autopilot disengagements occurring during autonomous vehicle testing. IEEE/CAA J. Autom. Sin. 2018, 5, 58–68. [Google Scholar] [CrossRef]
  55. Dixit, V.V.; Chand, S.; Nair, D.J. Autonomous vehicles: Disengagements, accidents and reaction times. PLoS ONE 2016, 11, e0168054. [Google Scholar] [CrossRef]
  56. Huang, W.L.; Wang, K.; Lv, Y.; Zhu, F.H. Autonomous vehicles testing methods review. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; pp. 163–168. [Google Scholar]
  57. Ilas, C. Electronic sensing technologies for autonomous ground vehicles: A review. In Proceedings of the 8th International Symposium on Advanced Topics in Electrical Engineering (ATEE), Bucharest, Romania, 23–25 May 2013; pp. 1–6. [Google Scholar]
  58. Sarmento, A.; Garcia, B.; Coriteac, L.; Navarenho, L. The autonomous vehicle challenges for emergent market. In Proceedings of the 26th SAE BRASIL International Congress and Display, São Paulo, Brazil, 7–9 November 2017. [Google Scholar]
  59. Gonzalez, D.; Perez, J.; Milanes, V.; Nashashibi, F. A review of motion planning techniques for automated vehicles. IEEE Trans. Intell. Transp. Syst. 2016, 17, 1135–1145. [Google Scholar] [CrossRef]
  60. Gerla, M.; Lee, E.-K.; Pau, G.; Lee, U. Internet of vehicles: From intelligent grid to autonomous cars and vehicular clouds. In Proceedings of the 2014 IEEE World Forum on Internet of Things (WF-IoT), Seoul, Republic of Korea, 6–8 March 2014; pp. 241–246. [Google Scholar]
  61. Stanciu, S.C.; Eby, D.W.; Molnar, L.J.; Louis, R.M.S.; Zanier, N.; Kostyniuk, L.P. Pedestrians/bicyclists and autonomous vehicles: How will they communicate? Transp. Res. Rec. 2018, 2672, 58–66. [Google Scholar] [CrossRef]
  62. Wang, K.; Li, G.; Chen, J.; Long, Y.; Chen, T.; Chen, L.; Xia, Q. The adaptability and challenges of autonomous vehicles to pedestrians in urban China. Accid. Anal. Prev. 2020, 145, 105692. [Google Scholar] [CrossRef] [PubMed]
  63. Ha, T.; Kim, S.; Seo, D.; Lee, S. Effects of explanation types and perceived risk on trust in autonomous vehicles. Transp. Res. Part F Traffic Psychol. Behav. 2020, 73, 271–280. [Google Scholar] [CrossRef]
  64. Yu, S.; Puchinger, J.; Sun, S. Two-echelon urban deliveries using autonomous vehicles. Transp. Res. Part E Logist. Transp. Rev. 2020, 141, 102018. [Google Scholar] [CrossRef]
  65. Liu, Y.; Whinston, A.B. Efficient real-time routing for autonomous vehicles through Bayes correlated equilibrium: An information design framework. Inf. Econ. Policy 2019, 47, 14–26. [Google Scholar] [CrossRef]
  66. Ryan, C.; Murphy, F.; Mullins, M. Spatial risk modelling of behavioural hotspots: Risk-aware path planning for autonomous vehicles. Transp. Res. Part A Policy Pract. 2020, 134, 152–163. [Google Scholar] [CrossRef]
  67. Ritchie, O.T.; Watson, D.G.; Griffiths, N.; Misyak, J.; Chater, N.; Xu, Z.; Mouzakitis, A. How should autonomous vehicles overtake other drivers? Transp. Res. Part F Traffic Psychol. Behav. 2019, 66, 406–418. [Google Scholar] [CrossRef]
  68. Rupp, J.D.; King, A.G. Autonomous driving—A practical roadmap. In Proceedings of the SAE Convergence 2010, Detroit, MI, USA, 13–15 April 2010. [Google Scholar]
  69. Calvi, A.; D’Amico, F.; Ferrante, C.; Ciampoli, L.B. A driving simulator study to assess driver performance during a car-following maneuver after switching from automated control to manual control. Transp. Res. Part F Traffic Psychol. Behav. 2020, 70, 58–67. [Google Scholar] [CrossRef]
  70. Sauras-Perez, P.; Gil, A.; Gill, J.S.; Pisu, P.; Taiber, J. VoGe: A voice and gesture system for interacting with autonomous cars. In Proceedings of the WCX™ 17: SAE World Congress Experience, Detroit, MI, USA, 4–6 April 2017. [Google Scholar]
  71. Lee, S.; Kim, Y.; Kahng, H.; Lee, S.-K.; Chung, S.; Cheong, T.; Shin, K.; Park, J.; Kim, S.B. Intelligent traffic control for autonomous vehicle systems based on machine learning. Expert Syst. Appl. 2020, 144, 113074. [Google Scholar] [CrossRef]
  72. Yan, B.; Roberts, I.P. Advancements in millimeter-wave radar technologies for automotive systems: A signal processing perspective. Electronics 2025, 14, 1436. [Google Scholar] [CrossRef]
  73. Monsaingeon, N.; Caroux, L.; Langlois, S.; Lemercier, C. Multimodal interface and reliability displays: Effect on attention, mode awareness, and trust in partially automated vehicles. Front. Psychol. 2023, 14, 1107847. [Google Scholar] [CrossRef] [PubMed]
  74. El-Dabaja, S.; McAvoy, D.; Naik, B. Alert! Automated vehicle (AV) system failure—Drivers’ reactions to a sudden, total automation disengagement. In Advances in Human Aspects of Transportation. AHFE 2020; Stanton, N., Ed.; Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2018; Volume 1212. [Google Scholar]
  75. Matos, F.; Bernardino, J.; Durães, J.; Cunha, J. A survey on sensor failures in autonomous vehicles: Challenges and solutions. Sensors 2024, 24, 5108. [Google Scholar] [CrossRef]
  76. Yan, G.; Liu, Z.; Wang, C.; Shi, C.; Wei, P.; Cai, X.; Ma, T.; Liu, Z.; Zhong, Z.; Liu, Y.; et al. OpenCalib: A multi-sensor calibration toolbox for autonomous driving. Softw. Impacts 2022, 14, 100393. [Google Scholar] [CrossRef]
  77. Peng, P.; Pi, D.; Yin, G.; Wang, Y.; Xu, L.; Feng, J. Automatic miscalibration detection and correction of LiDAR and camera using motion cues. Chin. J. Mech. Eng. 2024, 37, 50. [Google Scholar] [CrossRef]
  78. Fursa, I.; Fandi, E.; Musat, V.; Culley, J.; Gil, E.; Teeti, I.; Bilous, L.; Sluis, I.V.; Rast, A.; Bradley, A. Worsening perception: Real-time degradation of autonomous vehicle perception performance for simulation of adverse weather conditions. arXiv 2021, arXiv:2103.02760. [Google Scholar] [CrossRef]
  79. Aloufi, N.; Alnori, A.; Basuhail, A. Enhancing autonomous vehicle perception in adverse weather: A multi objectives model for integrated weather classification and object detection. Electronics 2024, 13, 3063. [Google Scholar] [CrossRef]
  80. Tahir, N.U.A.; Zhang, Z.; Asim, M.; Chen, J.; ELAffendi, M. Object detection in autonomous vehicles under adverse weather: A review of traditional and deep learning approaches. Algorithms 2024, 17, 103. [Google Scholar] [CrossRef]
  81. Kim, J.; Park, B.-j.; Kim, J. Empirical analysis of autonomous vehicle’s LiDAR detection performance degradation for actual road driving in rain and fog. Sensors 2023, 23, 2972. [Google Scholar] [CrossRef]
  82. Kamtam, S.B.; Lu, Q.; Bouali, F.; Haas, O.C.L.; Birrell, S. Network latency in teleoperation of connected and autonomous vehicles: A review of trends, challenges, and mitigation strategies. Sensors 2024, 24, 3957. [Google Scholar] [CrossRef]
  83. Xing, Y.; Chen, L.; Cao, D.; Hang, P. Toward human-vehicle collaboration: Review and perspectives on human-centered collaborative automated driving. Transp. Res. Part C Emerg. Technol. 2021, 128, 103199. [Google Scholar] [CrossRef]
  84. Goodall, N. Non-technological challenges for the remote operation of automated vehicles. Transp. Res. Part A Policy Pract. 2020, 142, 14–26. [Google Scholar] [CrossRef]
  85. Zhanguzhinova, S.; Makó, E.; Borsos, A.; Sándor, Á.P.; Koren, C. Communication between autonomous vehicles and pedestrians: An experimental study using virtual reality. Sensors 2023, 23, 1049. [Google Scholar] [CrossRef]
Figure 1. The Swiss Cheese Model. The bold arrow depicts a failure trajectory passing through the holes in each defensive layer, ultimately leading to an accident. The dashed lines mark missing or failed defenses, i.e., the holes in each layer.
Figure 3. Conceptual Model of the Hybrid Swiss Cheese-SHELL Approach.
Figure 4. Automated Driving System Crashes from October 2024 to October 2025.
Figure 5. Advanced Driver Assistance System Crashes from October 2024 to October 2025.
Table 1. Driving Automation Levels.

| Level | Description | Explanation |
| --- | --- | --- |
| 0 | No automation | All driving tasks are performed by the human operator. |
| 1 | Driver assistance | The human operator controls the vehicle, with driving assisted by the automation system. |
| 2 | Partial driving automation | The vehicle combines automated functions, but the operator remains responsible for monitoring the environment and controlling the driving task. |
| 3 | Conditional driving automation | The automated system performs the driving task, but the human operator must be ready to take over control whenever requested. |
| 4 | High driving automation | The automated system can operate the vehicle on its own when certain conditions are met; a human driver may still be able to take over the controls. |
| 5 | Full driving automation | The vehicle can drive itself under all conditions, although the human driver may still take control in some instances. |
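The level semantics in Table 1 can be made concrete in a short sketch. The enum and helper names below are illustrative assumptions, not identifiers from SAE J3016 or from this study; they simply encode who monitors the environment at each level.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Driving automation levels per Table 1 (SAE J3016 numbering, 0-5)."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def human_must_monitor(level: SAELevel) -> bool:
    # At Levels 0-2 the human operator monitors the environment at all times.
    return level <= SAELevel.PARTIAL_AUTOMATION

def human_is_fallback(level: SAELevel) -> bool:
    # At Level 3 the human is the fallback, expected to take over on request.
    return level == SAELevel.CONDITIONAL_AUTOMATION
```

The monitoring boundary between Levels 2 and 3 is exactly where the takeover failures analyzed in the following tables concentrate.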
Table 2. Liveware-Software Failures in Automated Vehicles.

| Failure Mode | Cause and Mechanism | Contribution to Risk |
| --- | --- | --- |
| Ambiguous or low-salience alerts [73] | Warnings are too subtle, use poor wording or icons, or are hidden in the user interface. | Humans may not notice the alert or misinterpret its urgency, leading to delayed takeovers or no intervention. |
| Alert overload or false alarms [74] | Many irrelevant or false alerts; frequent nuisance warnings. | Humans become desensitized and miss or ignore real alerts. |
| Mode confusion or lack of transparency | Humans do not know which level of automation is active, what the system can or cannot handle, or what the current operational boundary is. | Over-trust or under-trust of the system; poor decision-making when intervention is needed. |
| Poor visualization of planned path or intended behavior | Software does not show what the vehicle is planning to do (e.g., path and obstacle prediction), or does so in a confusing or delayed way. | Humans cannot predict what the system will do, making it harder to anticipate or override unsafe behavior. |
| Delayed or missing diagnostics or telemetry feedback | The system fails to log or display critical data (e.g., sensor degradation, model confidence, errors) in a way the human can monitor. | Failures arrive unexpectedly; latent defects accumulate without detection. |
| Inadequate fail-safe or fallback user interface logic | Software does not degrade gracefully or notify humans early when performance drops. | By the time humans try to intervene, the situation may have worsened beyond safe recovery. |
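One common countermeasure to the low-salience-alert and fallback-logic failures above is a takeover request that escalates through modalities before the vehicle falls back to a minimal-risk maneuver. The sketch below is illustrative only; the thresholds and state names are assumptions, not values prescribed by the cited studies.

```python
def escalate_alert(seconds_since_alert: float, driver_responded: bool) -> str:
    """Escalating takeover-request salience: visual -> auditory -> haptic,
    then a minimal-risk maneuver if the driver never responds.
    Thresholds (2 s / 4 s / 8 s) are illustrative assumptions."""
    if driver_responded:
        return "handover-complete"
    if seconds_since_alert < 2.0:
        return "visual-alert"
    if seconds_since_alert < 4.0:
        return "visual+auditory"
    if seconds_since_alert < 8.0:
        return "visual+auditory+haptic"
    # No response within the takeover budget: stop relying on the human.
    return "minimal-risk-maneuver"
```

The design point is that the interface, not the driver, owns the worst case: if salience escalation fails, the system degrades to a safe stop rather than waiting indefinitely.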
Table 3. Liveware-Hardware Failures in Automated Vehicles.

| Failure Mode | Cause and Mechanism | Contribution to Risk |
| --- | --- | --- |
| Sensor miscalibration, intrinsic and extrinsic [75] | Manufacturing errors; mechanical vibration; thermal changes; shock; mounting shifts; aging. | Misaligned sensor data (e.g., LiDAR points shifted relative to the camera) cause perception errors (false negatives or positives), wrong object localization, and mismatched fusion. |
| Temporal misalignment or unsynchronized sensors [76] | Different sensor clocks; latency; buffering delays; poor timestamping. | Sensor fusion becomes unreliable; data about the same event arrive out of sync, leading to wrong tracking, misestimated motion, and delayed detection. |
| Sensor degradation or obstruction | Dirt, water, ice, or snow on the lens; glare; lens cracks; occlusion; sensor aging; optical degradation; mechanical wear. | Reduced detection quality or blind spots; missed vulnerable road users; increased noise; may force fallback modes (slower speeds, more conservative maneuvers) or lead to collision if the fallback is inadequate. |
| Actuator hardware failure or latency | Mechanical wear; hydraulic issues; electronic failures; degraded brakes; delayed steering actuation; lost or noisy signals. | Even with perfect perception and planning, inability to execute commands safely leads to crashes or failure to avoid hazards. |
| Mechanical mounting or vibration [77] | Loose mountings; chassis vibration; thermal expansion; shocks from rough roads. | Sensor orientation drifts; camera/LiDAR misalignment; physical damage; motion artifacts produce false motion cues or blurred images. |
| Maintenance or hardware health lapses | Infrequent calibration; skipped hardware checks; poor environmental protection; unnoticed damage; delayed replacements. | Latent conditions accumulate; small misalignments grow; coupling with other failures (e.g., perception) increases risk. |
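The miscalibration and temporal-misalignment rows above suggest a simple runtime gate: refuse to fuse sensor data whose timestamps or calibration residuals exceed a budget. The sketch below is a minimal illustration; the thresholds (50 ms clock skew, 3 px reprojection error) are hypothetical, not values from the cited works.

```python
def check_sensor_health(camera_ts: float, lidar_ts: float,
                        reprojection_error_px: float,
                        max_skew_s: float = 0.05,
                        max_reproj_px: float = 3.0) -> list:
    """Return the list of Table 3 failure modes detected for one fused frame.
    An empty list means the frame is safe to fuse; otherwise the caller
    should drop the frame or enter a conservative fallback mode."""
    issues = []
    # Temporal misalignment: the two sensors describe different instants.
    if abs(camera_ts - lidar_ts) > max_skew_s:
        issues.append("temporal-misalignment")
    # Extrinsic miscalibration: LiDAR points no longer project onto the
    # image where the camera sees the same objects.
    if reprojection_error_px > max_reproj_px:
        issues.append("extrinsic-miscalibration")
    return issues
```

Gating like this turns a latent hardware condition into a detected event, which is exactly the kind of hole-closing the Swiss Cheese perception layer needs.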
Table 4. Liveware-Environment Failures in Automated Vehicles.

| Environmental Factor | Failure Mode | Cause and Mechanism | Contribution to Risk |
| --- | --- | --- | --- |
| Lighting, glare, shadows, or low light [78] | Poor visibility; glare saturating camera sensors or misleading optical perception; deep shadows hiding objects. | Both humans and the system may misperceive the scene, missing pedestrians, road markings, and lane edges; human safety drivers are more stressed and slower to detect hazards. | Delayed detection, incorrect judgments, and missed cues increase the risk of collision or near-miss. |
| Adverse weather, such as rain, fog, snow, and mist [79,80] | Reduced sensor performance (optics glare, LiDAR scatter, less accurate radar); wet or slippery surfaces; water spray obstructing vision and sensors. | System perception modules degrade; humans must compensate, but reaction times slow and distances or surface grip may be misjudged. | Higher accident risk from miscalculation, slipping, and false or missed detections; more frequent unexpected conditions. |
| Road geometry or infrastructure irregularities [81] | Poor or faded lane markings; nonstandard signage; switchbacks or curves; tunnels with variable lighting; wet or iced road segments; construction detours. | Mismatch with the human's mental model; the system may not be robust in unfamiliar geometry; the operator may misread the environment. | If lane detection fails, signage is misread, or curvature is poorly anticipated, planning or control errors can produce unsafe maneuvers. |
| Mixed or unpredictable traffic and road user behavior | Unexpected pedestrian crossings; erratic human drivers; non-compliance with rules. | Humans may be surprised; the system must anticipate human behavior; operators need high situation awareness and may need to override or intervene. | Unmodeled behavior creates scenarios the system or operator is unprepared for, leading to collisions. |
| Regulatory or infrastructure mismatch | Signage or lighting not compliant with standards; inconsistent enforcement; legal ambiguity over AV behavior. | Human uncertainty about lawful or expected behavior; the system may struggle to track or verify environmental changes such as temporary signs. | Discrepancies between expected road rules and the actual environment can lead to incorrect decisions by both human and machine. |
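The weather-induced sensing degradation above can be coupled to speed directly: if the stopping distance (reaction plus braking) must fit inside the weather-reduced detection range r, then v·t + v²/(2a) ≤ r, which solves to v = −a·t + √(a²t² + 2ar). The sketch below works through that arithmetic; the degradation factors, reaction time, and deceleration are illustrative assumptions, not measured values from the cited studies.

```python
import math

# Hypothetical range-degradation factors per weather condition (illustrative).
WEATHER_RANGE_FACTOR = {"clear": 1.0, "rain": 0.7, "snow": 0.5, "fog": 0.4}

def effective_detection_range(nominal_range_m: float, weather: str) -> float:
    """Scale the nominal sensing range by the (assumed) weather factor;
    unknown conditions default to the most pessimistic factor."""
    return nominal_range_m * WEATHER_RANGE_FACTOR.get(weather, 0.4)

def safe_speed_cap(nominal_range_m: float, weather: str,
                   reaction_time_s: float = 1.5,
                   decel_mps2: float = 4.0) -> float:
    """Largest speed v (m/s) such that reaction distance plus braking
    distance fits within the degraded detection range:
        v*t + v^2 / (2a) <= r  =>  v = -a*t + sqrt(a^2 t^2 + 2 a r)."""
    r = effective_detection_range(nominal_range_m, weather)
    a, t = decel_mps2, reaction_time_s
    return -a * t + math.sqrt(a * a * t * t + 2.0 * a * r)
```

With a 100 m nominal range, fog (assumed factor 0.4) cuts the cap roughly in half relative to clear conditions, which mirrors the table's point that degraded sensing must be compensated by more conservative operation.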
Table 5. Liveware-Liveware Failures in Automated Vehicles.

| Failure Mode | Cause and Mechanism | Contribution to Risk |
| --- | --- | --- |
| Poor handover or takeover communication [82] | The safety driver or remote operator is not properly briefed when control is transferred; delayed or unclear signals of intent between the automation and the human. | If humans do not know the system state, they may misread what the automation expects of them or fail to act in time. |
| Lack of coordination among teams [83] | Development, operations, maintenance, and teleoperation centers do not share critical information about known sensor degradation, recent software updates, or weather warnings. | Human actors operate under different assumptions; mitigations designed for one purpose miss others; latent conditions persist unnoticed. |
| Unclear responsibilities | Who is in command during a failure? Who monitors what? What are the roles of remote operators, onboard safety drivers, and fleet operations when incidents occur? | Ambiguity leads to delays, overlaps, or gaps in response; human actors may hesitate or take conflicting actions. |
| Communication breakdowns in remote operations [84] | Latency or loss of communications; mismatch between the video feed and the real world; the remote operator has less context; operations staff are not synchronized with what the remote operator sees. | Remote operators may act on outdated or incomplete information, misjudge situations, miscommunicate with other humans, or delay interventions. |
| Informal norms or culture issues | A weak safety culture leads operators to underreport near misses; developers may not document known issues; operators or safety drivers over-trust or under-trust automation based on past beliefs rather than data. | Latent problems persist and safety margins erode; when multiple holes align, failure results. |
| Mismanaged interaction with other road users [85] | Pedestrians expect eye contact; human drivers expect specific behaviors; the AV's behavior is ambiguous; signaling to others is lacking. | Other road users may misinterpret AV behavior, leading to unsafe crossings, collisions, or unexpected maneuvers. |
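A standard mitigation for the handover and responsibility failures above is an explicitly acknowledged transfer of control, so authority never changes hands silently. The minimal protocol sketch below is an illustration of that idea under assumed state names; it is not a protocol from the cited literature.

```python
class HandoverProtocol:
    """Acknowledged control transfer: the current controller stays in
    command until the requested party explicitly confirms the handover,
    so responsibility is never ambiguous mid-transition (Table 5)."""

    def __init__(self):
        self.controller = "automation"  # who currently holds control
        self.pending = None             # who has been asked to take over

    def request_transfer(self, to: str) -> str:
        # Announce the handover; control does NOT change yet.
        self.pending = to
        return f"transfer-requested:{to}"

    def acknowledge(self, by: str) -> bool:
        # Control changes hands only when the ack comes from the party
        # that was actually asked; any other ack is rejected.
        if self.pending == by:
            self.controller = by
            self.pending = None
            return True
        return False
```

Because the automation remains the controller until the acknowledgment arrives, a missed or garbled request degrades to "automation keeps driving" rather than to a gap in which nobody is responsible.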
Table 6. Cross-Mapping of SHELL Interface Failures to Swiss Cheese Defense Layer Vulnerabilities in Autonomous Driving Systems.

| SHELL Interface | SA Level Affected | Failure Mechanism | Swiss Cheese Layer Affected | Hole in Defense Layer | Safety Implication |
| --- | --- | --- | --- | --- | --- |
| Liveware-Software | Level 2 | Low-salience or ambiguous takeover alerts | Human-machine interface | Delayed or absent human intervention | Missed takeover leads to collision or near-miss |
| | | Mode confusion or automation opacity | Planning and decision-making | Overestimation of system capability | Unsafe reliance on automation in edge cases |
| Liveware-Hardware | Level 1 | Sensor miscalibration or temporal misalignment | Perception | Inaccurate object detection or localization | Late or incorrect hazard recognition |
| | | Actuator latency or degradation | Control and actuation | Inability to execute avoidance maneuvers | Failure to brake or steer in time |
| Liveware-Environment | Level 1–2 | Adverse weather or poor lighting | Perception | Reduced sensing reliability | Vulnerable road users undetected |
| | | Non-standard road geometry or construction zones | Planning and decision-making | Incorrect trajectory or maneuver planning | Unsafe lane changes or path selection |
| Liveware-Liveware | Level 3 | Poor handover communication between automation and driver | Human-machine interface | Unclear responsibility during transitions | Hesitation or conflicting control actions |
| | | Organizational communication gaps | Governance and policy | Latent defects persist unmitigated | Recurrent failures across fleet |
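The cross-mapping in Table 6 can also be encoded as data, so that an incident report tagged with a SHELL interface can be traced automatically to the defense layers it threatens. The dictionary below paraphrases the table with hypothetical short labels; it is an illustrative encoding, not a normative taxonomy.

```python
# (SHELL interface, failure mechanism) -> (Swiss Cheese layer, hole in defense)
# Labels are shorthand paraphrases of Table 6, chosen for illustration.
SHELL_TO_LAYER = {
    ("liveware-software", "low-salience-alert"):
        ("human-machine-interface", "delayed or absent human intervention"),
    ("liveware-software", "mode-confusion"):
        ("planning-decision", "overestimation of system capability"),
    ("liveware-hardware", "sensor-miscalibration"):
        ("perception", "inaccurate object detection or localization"),
    ("liveware-hardware", "actuator-latency"):
        ("control-actuation", "inability to execute avoidance maneuvers"),
    ("liveware-environment", "adverse-weather"):
        ("perception", "reduced sensing reliability"),
    ("liveware-environment", "non-standard-geometry"):
        ("planning-decision", "incorrect trajectory or maneuver planning"),
    ("liveware-liveware", "poor-handover"):
        ("human-machine-interface", "unclear responsibility during transitions"),
    ("liveware-liveware", "organizational-gaps"):
        ("governance-policy", "latent defects persist unmitigated"),
}

def affected_layers(interface: str) -> set:
    """Swiss Cheese layers threatened by failures at one SHELL interface."""
    return {layer for (iface, _), (layer, _) in SHELL_TO_LAYER.items()
            if iface == interface}
```

Querying the mapping, e.g. `affected_layers("liveware-liveware")`, makes the paper's central point computable: a single interface-level failure class can open holes in more than one defensive layer at once.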