Abstract
The advancement of automated driving technologies offers potential safety and efficiency gains, yet safety remains the primary barrier to deployment at higher levels of automation. Failures in automated driving systems rarely result from a single technical malfunction. Instead, they emerge from coupled organizational, technical, human, and environmental factors, particularly in partial and conditional automation, where human supervision and intervention remain critical. This study systematically identifies safety failures in automated driving systems and analyzes how they propagate across system layers and human–machine interactions. A qualitative, case-based analytical approach is adopted that integrates the Swiss Cheese model and the SHELL model. The Swiss Cheese model represents multilayered defensive structures, including governance and policy, perception, planning and decision-making, control and actuation, and human–machine interfaces. The SHELL model structures interaction failures between liveware and software, hardware, environment, and other liveware. The results reveal recurrent cross-layer failure pathways in which interface-level mismatches, such as low-salience alerts and inadequate handover communication, combine with sensor miscalibration, adverse environmental conditions, and latent system weaknesses to produce unsafe outcomes. These findings demonstrate that automated driving safety failures are predominantly socio-technical rather than purely technological. The proposed hybrid framework provides actionable insights for system designers, operators, and regulators by identifying critical intervention points for improving interface design, operational procedures, and policy-level safeguards in automated driving systems.