Article

iRisk: Towards Responsible AI-Powered Automated Driving by Assessing Crash Risk and Prevention

by Naomi Y. Mbelekani * and Klaus Bengler
School of Engineering and Design, Technical University of Munich, 85748 Garching, Germany
* Author to whom correspondence should be addressed.
Electronics 2025, 14(12), 2433; https://doi.org/10.3390/electronics14122433
Submission received: 6 May 2025 / Revised: 10 June 2025 / Accepted: 12 June 2025 / Published: 14 June 2025

Abstract

Advanced technology systems and neuroelectronics for crash risk assessment and anticipation may be a promising field for advancing responsible automated driving on urban roads. In principle, there are prospects of an artificially intelligent (AI)-powered automated vehicle (AV) system that tracks the degree of perceived crash risk (as low, mid, or high) and perceived safety. Communicating this information to the user, verbally or nonverbally, should therefore reflect human factor considerations. As humans and vehicle automation systems are both prone to error, we need to design advanced information and communication technologies that monitor risks and act as a mediator when necessary. One possible approach is to design a crash risk classification and management system: a responsible AI that monitors the user’s mental states associated with risk-taking behaviour and communicates this information to the user, in conjunction with the driving environment and AV states. The concept is based on a literature review and industry experts’ perspectives on designing advanced technology systems that support users in preventing crash risk encounters caused by long-term effects. In addition, learning strategies for responsible automated driving on urban roads were designed. This paper thus offers the reader a detailed discussion on conceptualising a safety-inspired ‘ergonomically responsible AI’ concept in the form of an intelligent risk assessment system (iRisk) and an AI-powered Risk information Human–Machine Interface (AI rHMI) as a useful concept for responsible automated driving and safe human–automation interaction.

1. Introduction

In the era of advanced information and communication technology, everything is connected. Humans are asked to manage interactions amongst people, between people and artificial systems, and among systems themselves, with awareness of the context of their environment and operational situations. In information and communication technologies, context awareness refers to the capability to consider the situation of entities, which may be, but are not limited to, users or devices.
In the past years, there has been a major shift from manually driven vehicles (an outmoded or archaic mode of vehicle operation) to automation-assisted driving. Vehicle automation systems (or automated driving systems, ADS [1]) are a promising form of intelligent vehicles and the Internet of Vehicles (IoV), thus signalling and strengthening the next generation of connected car technologies. The transportation industry is going through a major shift (with the aim being fully autonomous or self-driving vehicles) that is transforming the industry and user experiences (UX). However, there are still some major challenges to overcome. One of the many issues raised by this change is that of automation-induced effects over long-term exposure, which may trigger nuanced risk-based behaviours in end-users. Categorically, landborne and airborne vehicle automation (automated driving and flying) comes with new sets of challenges. Fundamentally, we find ourselves exposed to automation-induced risks or risky behavioural adaptations (BAs). Therefore, the focus is placed on risk-based human factors in automated driving and on highlighting the challenges induced by long-term exposure to automation on user behaviour. Furthermore, emphasis is on human–computer interaction (HCI) with multi-actuated intelligent automated vehicles (AVs).
The aim is to provide researchers and practitioners in different and complementary domains with knowledge that confronts long-term effects and BAs by exhibiting AI-powered solutions, for example, generative Artificial Intelligence (AI), Large Language Model (LLM), Deep Learning (DL), and Machine Learning (ML) research [2] from a Human Factors engineering point of view. Accordingly, emphasis is placed on methods and strategies for designing augmented user experiences (UXs), for example [3].
Essentially, the aim is to reduce crash risk behaviour in automated driving contexts, including perceived risk [4,5], and, as a result, to enhance perceived safety in AVs [6] and to anticipate and prevent perceived crash risks on urban roads. These contemporary sets of challenges require contemporary sets of strategies to sustain safety in the long term. Therefore, human-factor intelligence on automated driving is important to consider (see Ref. [7]).

Contribution

This paper contributes to advancing user experience (UX) and responsible automated driving by integrating neurotechnology and AI-powered solutions. It focuses on enhancing driver concentration, driving experience, UX, task engagement, situational awareness (SA), mental focus, and reaction times. Furthermore, it examines the development of user-interactive machine learning (ML) systems for responsible automated driving and analyses perceived safety or risk based on user states during automated driving. In principle, this research contributes to interactive ML from a human factors perspective, thereby bridging the gap between artificial intelligence and human-centred design to support satisfactory UX and responsible automated driving.
A novel framework is introduced, combining an AI-powered risk assessment system (iRisk) with a risk-information human–machine interface (AI rHMI), to improve the safety of AI-driven automated vehicles. The real-time monitoring of automated vehicle (AV) user states using neurotechnology and AI is proposed as a means to encourage responsible behaviour. The system architecture and components are presented, demonstrating the robustness of the concept prototype. The framework is designed to enable responsible AI-powered automated driving by systematically assessing crash risks and implementing preventive measures, in real time. This paper conceptualises a system for responsible automated driving and safe human–automation interaction.
While the term AI is used throughout this paper to describe the system’s capabilities, it is important to note that the methodology primarily employs ML techniques, specifically DL and neural networks. These are subfields of AI concerned with enabling systems to learn from data and improve over time without being explicitly programmed.
The iRisk system and AI rHMI employ critical engineering safety value chains. The approach leverages innovation to accelerate the development of responsible AI tools that address the challenges associated with automated driving. Thus, iRisk and AI rHMI enable a uniquely holistic, agile, and secure UX. Human–automation knowledge attributes include the following:
  • Human Factor Requirements Engineering: De-risking crashes with AI-powered behavioural requirements analysis. Improve the efficiency of interaction between human users and automation systems/software through automated, execution-ready, and comprehensive analysis. Induce transparent processes with a human-centred knowledge tool that extracts knowledge from automated driving crash databases.
  • ADS Engineering and Error Management: Enhance automation architecture decisions and accelerate design maturity with AI-driven information. Optimize efficiency with AI-augmented Design and Process Failure Mode and Effects Analysis (D-FMEA and P-FMEA) and streamlined support strategies.
In recent years, original equipment manufacturers (OEMs) have been increasingly challenged by the rising crash risks associated with their innovations, the growing complexity and customisation of automated driving systems (ADS), and the continuous and rapid evolution of AV technologies. The success of an OEM in the market depends significantly on its ability to swiftly adapt to these changes through efficient innovation and strong competence in AV systems and services. Perceived risk analysis, when conducted using innovative tools, can contribute to success during the New Product Development (NPD) process, particularly when applying methodologies such as Design Failure Mode and Effects Analysis (D-FMEA) and Process Failure Mode and Effects Analysis (P-FMEA).
This paper aims to develop strategies for integrating neurotechnology and AI-powered solutions to enhance AV development and user experience (UX) design, thereby addressing key challenges—especially in light of the continuous growth of automation in contemporary society. To tackle these challenges, benchmarks for the proposed iRisk system and AI rHMI are discussed in Section 2 and Section 3. The limitations are presented in Section 4 and the conclusion is presented in Section 5 as a final remark. To develop resourceful and resilient strategies, it is necessary to adopt advanced learning experience frameworks that go beyond naïve or ad hoc approaches. These should be grounded in novel design strategies aimed at anticipating, qualifying, quantifying, and consulting on risk.
This section aims to evaluate knowledge on perceived risks—specifically concerning crash causation, prediction, and prevention—from a human factor perspective, see Figure 1. Subsequently, it seeks to formulate applicable human-centred and ergonomically responsible AI systems that support responsible automated driving. A brief summary of the ongoing conceptual operationalisation is provided in the following section (Section 3).
The objective is to design responsible systems capable of supporting long-term human–automation interaction in the domain of automated driving. Given that currently available AV systems are either prototypes unsuitable for daily use or commercial products not tailored to users’ specific needs, this research aims to co-design socially responsible AVs that fulfil the following requirements:
  • Set up new repeated-measures and long-term research directions on in-vehicle socially cognitive robots, IoV, etc.
  • Address specific psychological user needs and risk-based human factors.
  • Adhere to the principle of form follows function and align with agile engineering.
  • Be iteratively refined through use in real-world settings and adaptable to evolving contexts.
  • Be usable, learnable, and applicable to different levels of automation.
These design choices aim to develop AV systems that integrate more effectively into human environments and provide tailored functionality, thereby supporting safe interactions and enabling long-term field research through the exploration of extended human–automation interaction. This approach contributes an interdisciplinary and design-driven perspective at the intersection of human factors, computer science, and design. It draws upon expertise in areas such as participatory design, co-design, product design, and hardware and interaction prototyping.

2. Crash Risk Cause Analysis, Prediction, and Prevention: Human Factors and Long-Term Effects

In the context of task-oriented problem-solving, AV users typically adopt a range of approaches to learning how to operate automated driving systems (ADS), in line with the design intention to minimise reliance on extensive manuals [8]. Given the serious implications identified through decades of safety science research, perceived safety has emerged as a critical focus within transportation engineering, where the overarching goal remains the reduction of injuries and fatalities [9]. As a result, sustaining road safety continues to be a highly prioritised objective, driving the development of diverse risk analysis strategies from a range of disciplinary perspectives [10,11,12,13]. Crash data has been widely examined to evaluate safety in urban environments. Notably, reports from the National Highway Traffic Safety Administration (NHTSA) and the National Transportation Safety Board (NTSB) have offered comprehensive insights into recurring patterns and causes of AV-related incidents.
These findings underscore the necessity of considering automated driving crashes through a human-factor lens [14]. For instance, NTSB reports have identified causal factors closely linked to user interaction with automation, such as misuse, overreliance, over-trust, complacency, task disengagement, cognitive workload, distraction, and skill atrophy. In addition to these behavioural elements, operational factors—including reactive and proactive safety mechanisms, behavioural safety strategies, risk hot spots, and safety zones—must also be taken into account. Environmental and system-level factors, such as road type and design, traffic density, and adverse weather conditions (e.g., rain, snow, fog, lighting), further complicate the risk landscape in AV contexts.
These factors (see Figure 2 and Figure 3) are critical to investigate, as they directly relate to crash risk. Notably, crash risk has been defined as a “probabilistic and penalising contingency on a specific class of responses (like driving)” [15] (p. 236), and risk-taking as “making responses to that class” [15] (p. 236). In this context, certain user behaviours toward automation—such as over-trust, complacency, high cognitive workload, fatigue, and limitations in learning—are considered risky, with the primary distinction being the degree of crash risk they entail. As noted by Ref. [15], this risk—defined as the probability of accident involvement—varies between individuals, leading to differing perceptions of safety and danger. Crash risk can be categorised into physical, cognitive, or psychological dimensions.
  • Qualitatively, risk reflects a state of uncertainty or potential threat within the traffic system, which, under specific conditions, may lead to accidents.
  • Quantitatively, risk denotes the probability of danger materialising into an accident and is commonly used to assess safety conditions prior to an incident [12] (a minimal illustration is sketched below).
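To make the quantitative definition above concrete, the following minimal sketch (in Python) maps an estimated crash probability onto the low/mid/high perceived-risk levels that iRisk is intended to track; the thresholds are illustrative assumptions, not calibrated values from this paper.

```python
# Minimal sketch: mapping a quantitative crash-risk estimate onto the
# qualitative low/mid/high levels used in this concept. The thresholds
# are illustrative assumptions, not calibrated values.
RISK_LOW = 0.2    # assumed boundary between low and mid risk
RISK_HIGH = 0.6   # assumed boundary between mid and high risk

def categorise_risk(p_crash: float) -> str:
    """Translate an estimated crash probability into a perceived-risk level."""
    if p_crash < RISK_LOW:
        return "low"
    if p_crash < RISK_HIGH:
        return "mid"
    return "high"

print(categorise_risk(0.35))  # -> "mid"
```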
In response to these challenges, it is critical that both users and automation remain ‘in-the-loop’, rather than being isolated from the driving task.
Situational awareness and engagement—when necessary—are essential for supporting a collaborative human–automation driving experience. This concept informs the development of a collaborative interactive loop, in which both the human driver and the automated system contribute to shared decision-making. The proposed iRisk system and AI rHMI are designed to support this collaborative paradigm by continuously assessing crash risk in real time based on user state, AV state, and driving context. iRisk identifies moments where human intervention or behavioural recalibration is necessary. Simultaneously, AI rHMI provides tailored, context-aware feedback that keeps users engaged, informed, and aware of potential hazards. This dual mechanism helps maintain a dynamic, collaborative relationship between the driver and automation, thereby reducing crash risk and enhancing the overall safety and reliability of automated driving systems.
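As a rough illustration of this collaborative loop, the sketch below (assumed logic, not the authors’ implementation) fuses user state, AV state, and driving context into a risk assessment and derives the kind of feedback AI rHMI might present; all indicator names and thresholds are hypothetical.

```python
# Hedged sketch of the collaborative iRisk / AI rHMI loop: fuse user state,
# AV state, and driving context into a risk level, then turn it into feedback.
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    level: str            # "low" | "mid" | "high"
    needs_takeover: bool

def assess(user_state: dict, av_state: dict, context: dict) -> RiskAssessment:
    # Placeholder fusion rule: combined strong indicators escalate the level.
    drowsy = user_state.get("drowsiness", 0.0) > 0.7
    dense_traffic = context.get("traffic_density", 0.0) > 0.8
    sensor_degraded = av_state.get("sensor_confidence", 1.0) < 0.5
    if drowsy and (dense_traffic or sensor_degraded):
        return RiskAssessment("high", needs_takeover=True)
    if drowsy or dense_traffic or sensor_degraded:
        return RiskAssessment("mid", needs_takeover=False)
    return RiskAssessment("low", needs_takeover=False)

def r_hmi_feedback(a: RiskAssessment) -> str:
    # AI rHMI turns the assessment into context-aware feedback for the user.
    if a.needs_takeover:
        return "Please take over: combined user and traffic risk is high."
    if a.level == "mid":
        return "Stay attentive: risk is elevated."
    return "Automation nominal."

print(r_hmi_feedback(assess({"drowsiness": 0.8}, {"sensor_confidence": 0.9},
                            {"traffic_density": 0.9})))
```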
To address these multifaceted safety challenges, the aim is to integrate human factors into the core of risk assessment and communication. The proposed system thus systematically evaluates user states and contextual risk indicators in real time, leveraging neurotechnology, neuroergonomics, and AI to detect on-road conditions associated with unsafe behaviour, such as cognitive overload or disengagement. Concurrently, it delivers adaptive, personalised risk feedback to users, designed ergonomically to support awareness, trust calibration, and responsible decision-making. By grounding the system design in findings from human factors and safety research, iRisk and AI rHMI contribute to the development of AVs that are not only technically advanced but also behaviourally aligned with human needs, promoting safer and more responsible automated driving.

2.1. The Link to Human Factors and Behavioural Adaptations

Learning experience, mental models, trust, and acceptance are crucial to consider.

2.1.1. Learning Experiences

Reliance on ‘trial and error’, especially in automated driving, may adversely affect safety [13]. This aligns with the ongoing development of AV systems designed to provide intuitive, ‘walk-up-and-use’ experiences. However, this approach often still requires users to consult manuals or seek external support, indicating that full comprehension of AV systems is not immediate. Designers face the challenge of balancing ease of learning with operational efficiency, particularly as users’ skills evolve over time. Many users tend to delay engaging with AV system tutorials until confronted with a “real” driving task, often preferring to scan for high-level information before returning for detailed guidance. This behaviour highlights the need for training approaches that support both quick overviews and in-depth learning.
Some existing tutorials are limited by forced linear progression or intrusive interfaces, reinforcing the importance of human-centred design and the application of human factor principles. Since formal, application-specific training remains rare, there is an increasing demand for learning resources integrated with on-road scenarios to provide relevant, contextualised support.
OEMs typically offer a range of resources, such as manuals, online help, and helplines, to cater to diverse support needs. These include “Getting Started” guides, tutorials, and detailed user manuals. Experienced AV users often draw on a combination of these materials, using them strategically to understand how to operate systems or solve specific problems, demonstrating just-in-time, task-oriented learning strategies. Such users show a well-developed ability to locate relevant information, experiment effectively, and seek appropriate support when necessary. While most do not claim complete expertise, experienced users exhibit practical proficiency with AV systems. This suggests that structured training and guided resources could substantially assist novice users in mastering automation technologies and reducing associated risks.
The effectiveness of safety support systems depends heavily on their design. Automated driving task-oriented learning, often conducted under time constraints, benefits from intuitive and non-intrusive resources. Challenges arise when support systems obscure essential task interfaces or employ unfamiliar terminology, thereby increasing cognitive load and discouraging use. Consequently, accessible and well-structured support materials, such as minimal manuals that facilitate both rapid scanning and deeper exploration, are essential.
Exploratory learning in this context tends to be driven by immediate tasks rather than pure curiosity. Users focus on efficient learning strategies tailored to achieving driving goals, influenced by factors such as time pressure, task urgency, and resource availability. Novice users generally lack consistent learning approaches, in contrast to experienced users who apply structured, context-aware strategies when adapting to new AV technologies. Experienced users typically follow one of two approaches: tutorial-driven learning for unfamiliar systems, or task-oriented exploration when systems resemble those they have used before. They also alternate between “task mode” (focusing on completing driving tasks) and “tool mode” (practising tool use through repetition). Learning preferences are shaped by perceived system complexity, highlighting the need for tailored strategies.
Essentially, consideration must be given to learnability, understandability, operability, and usability. Understanding how, when, where, and why users learn provides a foundation for designing effective, supportive learning experiences that improve safe user experience (UX) with AV technologies.
The iRisk system and AI rHMI are designed with these learning and user experience principles in mind. By continuously monitoring user states such as cognitive workload, attention, and engagement through neurotechnology, iRisk identifies moments where users may require additional support or intervention. This real-time assessment enables AI rHMI to provide adaptive, context-aware risk information and guidance, tailored to the user’s current understanding and situational demands.
This dynamic feedback loop supports just-in-time learning by offering relevant information and alerts precisely when needed, reducing cognitive overload and avoiding intrusive or disruptive notifications. By aligning with users’ natural learning behaviours—such as scanning for high-level information or switching between task and tool modes—the system facilitates a smoother learning curve, encourages safe engagement, and builds trust and acceptance over time.
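One way such just-in-time gating could work is sketched below; the urgency categories, workload threshold, and delivery decisions are illustrative assumptions rather than a specification of AI rHMI.

```python
# Hedged sketch of just-in-time feedback gating: urgent risk information is
# always delivered, while lower-priority guidance is deferred when monitored
# cognitive workload is already high. Threshold and categories are assumed.
HIGH_WORKLOAD = 0.75  # assumed threshold from the user-state monitor

def deliver(message: str, urgency: str, workload: float) -> str:
    if urgency == "critical":
        return f"ALERT NOW: {message}"
    if workload > HIGH_WORKLOAD:
        return f"DEFER: {message}"   # avoid intrusive notifications under overload
    return f"SHOW: {message}"        # timely, context-aware guidance

print(deliver("Pedestrian zone ahead", "advisory", workload=0.9))  # -> DEFER: ...
```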
Moreover, AI rHMI’s human-centred design ensures that risk communication is intuitive and accessible, reinforcing mental models and promoting responsible automation use. This integration fosters an environment where novice users receive structured guidance that complements their evolving skills, while experienced users benefit from tailored support that respects their proficiency, eventually enhancing safety and user satisfaction in automated driving.

2.1.2. Mental Models and Behavioural Adaptations

Trust is recognised as a critical factor in behavioural adaptation models, with the user’s mental model of the operational task playing a central role [22]. Research indicates that users often modify their behaviour in ways unintended by system designers [23]. For example, when using Adaptive Cruise Control (ACC)—a system that maintains a set speed and distance from a lead vehicle—users adapted by engaging more in secondary tasks, paying less attention to driving, and reacting more slowly to hazardous situations [23]. These BAs were found to correlate with personality traits such as sensation seeking and locus of control, where individuals with an internal locus of control responded faster to hazards than those with an external locus of control [23].
Studies on BA to automation have predominantly focused on driving parameters such as time headway, mean speed, and distraction levels [22]. However, these parameters do not have the same impact when investigating BA in higher levels of automated driving [23]. The evolution of mental models, trust, and acceptance is crucial to assess in the context of vehicle automation systems [24]. As noted by Ref. [25], “a stronger orientation in the development and introduction of automated systems, especially considering existing mental models, will increase acceptance and system trust.” Furthermore, Ref. [26] argued that “dependability and usability of systems are not isolated constructs,” emphasising the need for future research into the dynamic relationship between these core attributes.
The importance of considering learning phases and the effects of automation was highlighted by Ref. [27]. It is necessary to examine both procedural and structural effects associated with behavioural adaptation. While a system can be designed to encourage responsible use through ease of learning, it can equally foster irresponsible usage if improperly calibrated. This necessitates identifying the key factors to consider when evaluating learning effects, particularly through the lens of the power law of learning. It also involves understanding the conditions under which users accept (adopt) or reject (disuse) automation over time.
Although these insights are valuable, they also illustrate the challenge in defining learning effects as they relate to crash risks across temporal and situational contexts. This prompts further inquiry into diverse user mechanisms, demographic influences, perceived learning objectives, and contextual factors of use [28]. The ultimate goal is to address the knowledge gap concerning the validity and persistence of learning effects by considering a broad spectrum of outcomes.
The iRisk and AI rHMI systems directly address these challenges by actively monitoring and adapting to the user’s evolving mental models, trust levels, and behavioural adaptations. Through real-time assessment of cognitive and emotional states using neurotechnology, iRisk detects when a user’s trust may be misaligned, either excessively high, leading to complacency, or too low, resulting in underuse or rejection of automation.
The AI rHMI interfaces with users by delivering personalised, context-sensitive risk information and guidance that reinforces accurate mental models and fosters appropriate trust calibration. By providing feedback tailored to the user’s current behaviour and psychological state, the system helps prevent unintended behavioural adaptations that compromise safety, such as distraction or overreliance. Moreover, iRisk and AI rHMI support learning phases by offering adaptive interventions that evolve alongside the user’s experience with the automation system, promoting responsible use and mitigating risks associated with miscalibrated trust or flawed mental models. This dynamic, human-centred approach not only enhances system dependability and usability but also closes the loop between user cognition and automated driving behaviours, thus contributing to safer, more accepted, and user-tailored automated driving experiences.

2.1.3. Trust, Reliance, and Acceptance

Trust is typically conceptualised in terms of automation and reliance within automated driving systems (ADS) [29]. It has been argued that the adaptation process differs between Level 2 (L2) and Level 3 (L3) automation, since, with L2, users “occasionally experience system actions that only affect parts of the driving task while they are otherwise driving manually” [23]. In contrast, at L3, the user’s “role changes fundamentally, removing them from the driving task completely” [23]. Therefore, different awareness processes can be expected at different levels of automation, and as automation matures, BA is likely to occur more rapidly. Consequently, for highly automated vehicles, a faster BA process might be anticipated. When examining acceptance criteria, the predictors of adoption intention or behavioural intention to use include perceived trust, perceived usefulness (PU), perceived ease of use (PEOU), perceived intelligence, perceived safety, and anthropomorphism (for specific systems).
Trust plays a pivotal role in BA to L3 automation, as well as in the overall acceptance of the system [23]. Trust in automation is a key factor influencing system usage and adoption [29,30]. Accordingly, “inappropriate reliance associated with misuse and disuse depends, in part, on how well trust matches the true capabilities of the automation” [30] (p. 56). Trust is also closely linked to misuse, which may be either intended or unintended. Unintended misuse refers to the unintentional exploitation of automation by users, which can lead to harmful system exploitation. Intended misuse, by contrast, involves users knowingly employing the system beyond its intended scope for personal intent or gain, thus deliberately overriding system boundaries.
Factors such as learned habits, cognitive overload, fatigue, complacency, and engagement in non-driving related activities (NDRAs) [31] further influence misuse behaviours. These adverse effects can lead to overuse or inappropriate exploitation of automation, increasing crash risks.
The iRisk and AI rHMI systems are designed to address trust and acceptance issues by continuously monitoring user cognitive and emotional states, including indicators of fatigue, workload, and engagement. By detecting signs of over-trust, complacency, or misuse—whether intended or unintended—iRisk provides timely risk assessments that inform adaptive interventions via AI rHMI.
This dynamic interaction helps recalibrate user trust in real time by delivering context-sensitive feedback and warnings tailored to the user’s current state and behaviour, mitigating risks associated with inappropriate reliance. Furthermore, the systems promote safer behavioural adaptation by supporting responsible engagement with automation, thus enhancing user acceptance while preventing exploitation or neglect of system capabilities. Through this human-centred, neurotechnology-powered approach, iRisk and AI rHMI contribute to fostering appropriate trust calibration, ensuring safer, more reliable automated driving experiences.

2.2. The Link to Regulations, System Factors, Operating Environment

The debate over standards and regulations to address the “do no harm” principle versus the laissez-faire approach commonly adopted by developers and OEMs remains a contentious issue. While a hands-off approach may benefit some OEMs in accelerating innovation, it raises significant concerns regarding the establishment of a level playing field and the creation of minimum performance benchmarks for all stakeholders. The challenge lies in balancing the encouragement of innovative AV design with the necessity of maintaining comprehensibility and safety, particularly for novice or risk-prone users.
The need for varying degrees of regulatory oversight is increasingly recognised as essential to harmonise terminology, operational limitations, and performance specifications, with a central emphasis on safety. Despite potential drawbacks, there is a clear need for a balanced regulatory strategy, one that encourages innovation while avoiding unnecessary restrictions. A promising development in this regard is the incorporation of human factor requirements engineering into AV system design, representing a forward-thinking step towards formalising standards and regulations in the field.
The evolving landscape of vehicle automation has brought regulatory and governance issues to the forefront. One of the core debates centres on whether AV regulations should be based primarily on the evaluation of behaviour, risk, benefits, and outcomes, or on the development process itself. The North American perspective prioritises the quality of services delivered by AV systems, often downplaying the importance of the design process. In contrast, the European viewpoint emphasises the novelty and integrity of the development process, asserting that it should inform regulatory criteria.
Within this discourse, advocates of a precautionary regulatory stance argue for recognising and addressing the uncertainties inherent in emerging technologies. They caution against a purely outcome-based regulatory model, which may fail to detect critical issues during development. The concept of substantial equivalence—comparing new AV systems with their predecessors—further complicates regulatory processes, especially where disagreements about the nature and implications of innovations arise. This can lead to controversy, particularly in labelling and classification.
The issue of interpretability standards has also emerged as a focal point. Diverse stakeholders, including OEMs, regulators, users, and civil society organisations, often hold differing views on what constitutes adequate interpretability. What may be an acceptable explanation for an engineer may not meet the expectations of a sceptical NGO or the general public. Moreover, in scenarios involving extensive, multidimensional datasets, offering clear and intelligible explanations may be impractical or impossible. This creates a fundamental tension between the pursuit of efficiency and the need for interpretability, one that hampers both structural learning and broader societal understanding of AVs.
The political dimensions of interpretability are especially salient in domains such as image recognition for highly automated vehicles. In such cases, algorithms can exhibit behaviour that is perplexing even to their creators. Unsupervised machine learning, particularly in deep neural networks, presents significant opacity. Instances such as Google’s misclassification of human images, where engineers were unable to explain the error, exemplify the limitations of current human understanding of autonomous learning systems. One Netflix executive aptly described algorithmic insights as akin to “a ghost in the machine.” Despite years of research into unsupervised learning, understanding how deep neural networks interpret visual data remains incomplete.
Early governance concerns in vehicle automation also touch on the ethical implications of delegating critical decision-making to algorithms. This has raised complex questions about the responsibilities of developers and OEMs. With the rise of deep learning, the potential to outsource life-critical decisions to machines introduces the risk of “codified irresponsibility”—where algorithmic opacity serves as a shield against liability. This scenario risks institutionalising irresponsibility and undermines accountability, making it easier for OEMs to evade scrutiny and impede collective learning.
The iRisk and AI rHMI systems are conceived as proactive contributions to this governance discourse. They offer a model for embedding responsibility, transparency, and interpretability into the architecture of automated driving systems. By continuously monitoring the cognitive and emotional states of users, iRisk supports risk prediction and behavioural assessment in real time, allowing both human and system to remain ‘in the loop’ and jointly responsible.
In parallel, AI rHMI provides an interface that communicates risk in a human-centred, interpretable manner, bridging the gap between system complexity and user understanding. This enhances explainability by translating system logic into actionable, context-specific information. Furthermore, the design of iRisk and AI rHMI aligns with human factor requirements engineering and precautionary principles, supporting traceability, accountability, and informed decision-making.
As regulations evolve, such systems can serve as reference models for building responsible AVs—helping developers demonstrate regulatory compliance, facilitate social acceptance, and avoid the pitfalls of opaque or unaccountable automation. They help forge a path toward safer, more ethical and transparent automated driving.

2.3. Certifying Safety and De-Risking Automated Driving Through AI-Powered Countermeasures

Risk evaluation based on historical crash data follows established prediction theories and models that analyse crash variability, infer causal patterns, and estimate future trends and potential outcomes [12]. Currently, crash risk prediction relies on measures such as crash frequency, injury severity, and crash rates derived from statistical and econometric modelling approaches. However, to effectively understand user behaviour—whether deemed risky or safe—in relation to automation, it is essential to incorporate the analysis of long-term behavioural effects. Evaluating these long-term effects provides valuable insight into how human interaction with automated systems evolves over time, and how behavioural adaptation may influence overall system safety.
In this context, the iRisk system enhances traditional crash risk evaluation by integrating real-time behavioural monitoring with historical data patterns, enabling a more dynamic and user-centred approach to understanding and mitigating long-term safety risks in automated driving.
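A simplified sketch of this integration is given below: a historical crash-frequency baseline is fitted with a standard statistical model and then scaled by a real-time behavioural indicator. The toy data, features, and the scaling rule are assumptions for illustration only.

```python
# Hedged sketch: historical crash-frequency baseline (Poisson GLM) combined
# with a real-time behavioural multiplier. All data and rules are invented
# for illustration and do not reflect values from this paper.
import numpy as np
import statsmodels.api as sm

# Toy historical data: [intercept, traffic density] -> crash counts per segment.
X = np.array([[1.0, 0.2], [1.0, 0.5], [1.0, 0.8], [1.0, 1.0]])
y = np.array([0, 1, 2, 3])
baseline = sm.GLM(y, X, family=sm.families.Poisson()).fit()

def dynamic_risk(traffic_density: float, overreliance: float) -> float:
    """Expected crash frequency scaled by an assumed over-reliance indicator."""
    expected = baseline.predict(np.array([[1.0, traffic_density]]))[0]
    return float(expected * (1.0 + overreliance))  # over-trust inflates risk

print(round(dynamic_risk(0.7, overreliance=0.3), 2))
```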
The aim is to examine co-risk factors and track long-term effects (see Figure 4), such as delayed takeovers by users, misuse, and other risk-prone behaviours. Numerous studies have investigated risk encounters with AVs [32,33,34,35,36], as well as user perceptions of safety in automated vehicles [37,38]. The consensus across much of this research is that most frequently occurring risk encounters are directly linked to human factors. NTSB reports consistently reveal that human error and risk-taking behaviours are central to the causes of fatal crashes. Therefore, engineering responsible interactions across all scenarios in which risk might emerge demands significant innovation and precaution.
To support safety, it is necessary to rate potential risk encounters and evaluate the criticality of safety-related situations. As noted by several experts, even the most comprehensive user training cannot fully eliminate the likelihood of misuse or misbehaviour. For example, Refs. [39,40] highlight that
“They [users] learn the speed, they learn at speed—how fast do I have to act, how often does it flash, how fast do I have to look up, and they actually get a little faster as they learn, and then that makes them feel more comfortable, because they do not want it to escalate.”
This illustrates how BA may inadvertently foster complacency or overconfidence, especially in response to repeated exposure to AV systems. While studies focusing on hardware performance—such as sensor reaction time or braking efficiency—are important, they may not significantly enhance the predictive capacity for user-related risks [12]. To better understand risk across varying scenarios, it is necessary to systematically evaluate the effects of automation from a comprehensive, behavioural perspective. This includes identifying knowledge gaps and defining research directions that address both short-term usability and long-term behavioural impact. The increasing integration of AVs introduces complex situational challenges that must be accounted for to fully realise the potential of automation.
Central to this is the exploration of interaction design strategies (IxDS), which consider the holistic relationship between humans and automated systems. The Keep It Simple and Simpler (KISS) principle [41], as illustrated in Figure 5, provides one such design framework aimed at ensuring intuitive and low cognitive–load interactions.
The iRisk and AI rHMI systems are specifically designed to address these challenges. By evaluating co-risk factors in real time—such as indicators of misuse, distraction, or delayed responses—iRisk supports a proactive safety architecture. It enables systematic risk rating and situational criticality assessments based not only on system data but also on human behaviour patterns over time. The AI rHMI then acts on this information to support the user with timely, interpretable feedback, reducing the risk of misbehaviour or unsafe adaptations. Through continuous monitoring and interaction refinement, the system promotes responsible engagement, even in repetitive or high-speed scenarios where users may otherwise become desensitised. This approach embodies a shift from reactive safety systems to anticipatory, user-aware automation.
In addition, it is crucial to account for diverse user groups, including both tech-ignorant and tech-enthusiast individuals. Misuse remains a pressing concern, as human error continues to be a leading cause of traffic accidents [42], as also emphasised by the NHTSA and NTSB. Within this context, interactive ML systems and DL hold significant potential to learn the various human factors that influence user behaviour and subsequently develop models that support more responsible automated driving.
Similar to the domain of human–robot interaction (HRI), these systems must be capable of learning in real-world environments that are observable, dynamic, and continuous [43]. Such adaptive learning environments provide the foundational building blocks for configuring safety-aware systems—prototypes equipped not only to “deprogram” risky behaviours but also to “reprogram” responsible user behaviours over time.
Treating the user as a passive information source often leads to a frustrating user experience (UX) [44,45]. Effective solutions must balance and address the needs of both human users and automated systems. A wide range of issues in automated driving, such as accurate risk recognition, can be tackled through DL techniques. AI therefore offers promising solutions, especially when embedded in interactive systems designed to enhance decision-making, situational awareness, hazard recognition, alertness, and overall safety in AV contexts. The goal is to integrate systems that actively leverage human factors knowledge to identify and mitigate risk-related behaviour in users.
Risk prevention encompasses all pre-emptive strategies designed to avoid or reduce threats, danger, harm, or fatalities. The advancement of neurotechnology, alongside developments in ML, DL, LLMs, generative AI, intelligent transportation systems (ITS), big data analytics, and a variety of modelling tools, has enhanced our capacity to anticipate and prevent risks. For example, safety modelling now frequently incorporates real-time crash prediction using artificial neural networks, DL, or reinforcement learning, enabling a shift from reactive to proactive safety management [12].
In the context of automated driving, interactive AI is grounded in the principle that adaptive systems—such as those powered by ML—should be able to engage users on their own terms and respond dynamically to changing human input. Unlike traditional ML systems, which require periodic user input followed by interpretation of the output [46], interactive ML promotes tighter, more frequent, and incremental user–system interactions. This creates a stronger emphasis on understanding and incorporating user involvement throughout the learning process.
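The contrast with traditional batch training can be illustrated with an incremental learner that is updated each time the user confirms or corrects the system’s judgement; the features and labels below are hypothetical, and the estimator is simply a stand-in for the interactive ML idea.

```python
# Minimal sketch of an interactive, incremental ML loop: the model is updated
# in small steps from user confirmations/corrections rather than retrained in
# large offline batches. Features and labels are assumed for illustration.
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])  # 0 = safe behaviour, 1 = risky behaviour

# Initial batch from historical behavioural data (traditional ML step).
X0 = np.array([[0.2, 0.1], [0.9, 0.8], [0.3, 0.2], [0.8, 0.9]])
y0 = np.array([0, 1, 0, 1])
clf.partial_fit(X0, y0, classes=classes)

def user_correction(features, user_label):
    """Interactive step: a single user correction immediately updates the model."""
    clf.partial_fit(np.array([features]), np.array([user_label]))

user_correction([0.7, 0.3], 0)      # user flags this episode as actually safe
print(clf.predict([[0.7, 0.3]]))    # model now reflects the incremental update
```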
Accordingly, we advocate for both traditional and interactive ML approaches. This dual-pronged design strategy supports knowledge co-creation through learning, unlearning, and re-learning, drawing on both supervised and unsupervised learning practices. Such systems have the ergonomic potential to evolve through experiential learning in interactions with users, as well as through reinforcement-based DL. To responsibly integrate DL into automated driving, it is essential that systems produce input–output relationships that are trustworthy, safe, pleasant, accurate, and free from bias—issues often overlooked in automation discourse. A comprehensive overview of traditional and interactive ML approaches is provided in Ref. [46]. In the field of HCI, numerous applications of user-interactive ML systems have been demonstrated across visual, textual, and multimodal data domains (see Refs. [44,47,48,49,50,51,52,53,54,55]). Examples are summarised in Table 1.
ML has become increasingly prominent in the fields of HCI and informatics. Within the automotive domain, influential work is being conducted to explore how ML can reduce driver workload and enhance safety during automated driving [59]. Ref. [59] proposed an ML-based Augmented Reality Head-Up Display (AR-HUD) application, designed to support drivers by improving situational awareness and reducing cognitive load.
Designing interactive AI systems that proactively promote safety and mitigate risk is therefore highly beneficial. This is closely linked to the concept of Quality-in-Use (QinU), which refers to the extent to which a system meets user needs in real-world contexts. Specifically, QinU describes a software system’s ability to support user effectiveness, safety, efficiency, and satisfaction in achieving specific goals within a defined context of use (ISO 25010:2011) [60]. It comprises five core attributes: freedom from risk, effectiveness, efficiency, satisfaction, and context coverage.
iRisk and AI rHMI systems embody this human-centred, adaptive ML philosophy. By leveraging interactive ML, iRisk learns to detect, evaluate, and respond to user-specific behavioural patterns—such as misuse, inattention, or delayed reactions—across a range of real-world driving scenarios. It adapts over time, tailoring risk assessment and behavioural interventions based on accumulated knowledge of the user’s engagement and interaction style. Meanwhile, AI rHMI translates these adaptive insights into meaningful, context-aware feedback. It facilitates real-time communication between the AV and the user, supporting trust, clarity, and continuous learning. Together, these systems create a collaborative safety loop between human and machine—supporting behavioural calibration, increasing system transparency, and promoting responsible use of automation.
Furthermore, iRisk and AI rHMI systems are inherently aligned with the principles of QinU. By continuously learning from user interactions and real-time behavioural cues, iRisk identifies risk-prone patterns and proactively intervenes to prevent unsafe conditions—supporting freedom from risk, effectiveness, and efficiency. Simultaneously, AI rHMI ensures that users receive timely, interpretable, and context-sensitive feedback, fostering satisfaction and clarity in the interaction. Together, these systems offer a high-QinU experience, particularly in safety-critical environments like automated driving, where human–machine collaboration is essential.

3. The Next Frontier in Human-Centred AI-Powered Solutions for Ergonomically Responsible Automated Driving: The iRisk and AI rHMI

As exemplified in Ref. [39], experts frequently highlighted the physiological and cognitive aspects of human factors as key contributors to risk in automated driving contexts. They recommended the development of intelligent in-vehicle systems capable of monitoring user behaviour to ensure safe system use and identify potential risk scenarios before they escalate. In conceptualising the iRisk and AI rHMIs, we drew upon these insights to inform strategies that provide real-time assistance to users during AV operation.
The experts consulted come from a wide range of industries, all operating under the broader umbrella of AVs. These domains include automated driving, trucking, aviation, and smart farming—each offering unique yet transferable insights into human–machine interaction and safety-critical design. A phased approach was used to develop the concept, grounded in quality engineering and human factors engineering principles. This approach was aimed at realising a robust iRisk system architecture with high-quality standards, designed to accommodate diverse user behaviours and operating environments.
While the conceptual framework for iRisk and AI rHMI is well-defined, it remains subject to further development through heuristic evaluation and real-world implementation. These next steps are essential to validate the system’s usability, effectiveness, and alignment with safety and user experience goals.
To design responsible, human-centred support systems for AVs, a phased development methodology was adopted for the conceptualisation and realisation of the iRisk and AI rHMI systems. This approach integrates principles from quality engineering, human factors engineering, and interactive system design, ensuring that safety, usability, and adaptability are built into the foundation of the system architecture. The development methodology follows key phases, as illustrated in Figure 6.
Experts from various sectors of automation, including automated driving, aviation, logistics, and smart agriculture, were consulted during early development. Despite their differing domains, a shared emphasis emerged around the need to monitor cognitive and physiological states, support BA, and intervene in real time to reduce risk. This highlights the necessity of systems like iRisk and AI rHMI, which can promote responsible automation through continuous learning and interaction with users.

3.1. Phase 1: Exploratory Research, Requirements Gathering, and Conceptualisation

An inductive approach and qualitative methods were employed, including literature reviews and interview extracts from industry experts. First, a literature review was conducted to identify the connection between D-FMEA and P-FMEA, as well as potential approaches for their integration. Subsequently, interview extracts from industry experts within OEMs were analysed to gain insights from a practical perspective. Second, knowledge integration was carried out to derive applicable strategies. The literature review, combined with expert perspectives, highlights safety systems that should be considered to support users. See Table 2 for the 18 extracts from Ref. [39], which outline implementable concepts.

Case Analysis: INEX AI rHMI

Designing de-risking countermeasures for on-road traffic safety in responsible automated driving is a crucial topic, particularly given the prevalence of human-factor challenges such as over-trust, workload, fatigue, distraction, complacency, and skill atrophy, alongside the inherent susceptibility of automation to errors. Long-term/repeated research is essential for knowledge discovery aimed at designing systems that mitigate risk-taking behaviours when vehicle automation is engaged. For instance, implementing a risk management system—such as iRisk and AI rHMI—can serve as a crash countermeasure by facilitating both internal (within the AV) and external (outside the AV) communication and interactions. This system can be utilised to anticipate crash risks involving both in-group and out-group actors, effectively integrating diverse elements to simulate responsible automated driving under real-world traffic conditions. In our forthcoming study, we plan to develop three case study scenarios: the first focusing on in-vehicle risk-induced scenarios; the second on out-group risk-induced scenarios; and the third examining the context of Multi-Agent Social Interaction (MASI) [61].
These scenarios encompass components such as risk exposure contexts—for example, a zebra-crossing scenario where AVs and vulnerable road users (VRUs) are generated continuously at regular intervals. In designing the iRisk and INEX (INternal-EXternal) AI rHMI, we drew on insights from several experts specialising in augmented reality concepts [62]. Our approach emphasises the communication of both risk quantification and qualification to enhance risk awareness.
The aim is to understand the co-risk cues exhibited by INEX cases (as illustrated in Figure 7) in automated driving on-road traffic situations, as well as the reaction and reactivity behaviours observed in a road-crossing scenario from the perspective of INEX interactions. In this context, co-risk cues refer to the risk-taking behaviours induced during MASI as a result of long-term effects. Therefore, the objective is to assess how different individuals’ activity and reactivity towards risk awareness evolve over time.

3.2. Phases 2–4: Conceptual Design, System Modelling, and Contextualisation

As experts have argued, direct training at schools alone may not sufficiently motivate long-term safe user behaviour, as misuse can still occur [39]. However, equipping AVs with user-interactive AI systems capable of detecting and deterring misbehaviours presents a significant opportunity to sustain long-term safety. Such systems can persuade users to behave responsibly by leveraging DL networks designed to nudge behaviour. Persuasion here is defined as “the act of causing people to do or believe something”, and persuasive technology as “technology designed to influence people’s actions or beliefs” [42] (p. 106). This dual capability accelerates collaborative learning between the AI and the user, improving mutual adaptation.
Good interface design has been shown to enhance UX with AVs by providing clear system-state information through effective output channels [63]. Moreover, intelligent systems that monitor ‘user state’—such as fatigue or drowsiness—can help mitigate risk by alerting users during automated driving. As automation capabilities advance, these systems not only serve as transitional aids but also enhance human driving abilities [64].
Exploring social dynamics in AV user interfaces (UIs) or HMIs offers promising avenues for safety induction. For example, integrating speech-based natural language feedback and emotional intelligence (EI) into AI HMIs can foster socially intelligent systems that understand social norms and user states. This anthropomorphizing approach attributes human-like traits—emotional and intellectual—to AV systems, creating a socially engaging experience that bridges the human–automation gap [65]. Such systems should detect unsafe or socially unacceptable behaviours (e.g., distraction, drowsiness) and motivate corrective actions.
Several OEMs, including Nissan and BMW, are pioneering prototypes that combine advanced vehicle control, safety technologies, and AI-enhanced interfaces. Gamified persuasive features are also being considered to motivate sustained and responsible use of automation. Some OEMs are exploring technologies that socially interact with users, especially during distraction episodes. Experts emphasize the need to redesign systems that connect emotionally with users, thereby enhancing the human–machine relationship [39].
Supporting this trend, Ref. [66] highlights the urgent need to develop effective HMIs that communicate system states through auditory, visual, and haptic modalities while enabling user interaction via the same channels. With expanding automation functionalities, robust evaluation methods for these interfaces are critical [66].
Looking forward, AI-powered vehicle chatbots and robotic voice assistants (e.g., Amazon Alexa, Gatebox, ChatGPT, ElliQ) could revolutionize risk management and user support, acting as safety assistants or co-drivers. These assistants can foster safer automation use by providing timely guidance and emotional support. Future explorations may include 3D computer-generated holography (CGH) to enhance interactive systems, potentially bridging the gap between virtual and real-world visuals [59]. Autostereoscopic 3D displays integrated into head-up displays (HUDs) may further improve user immersion and safety by overlaying virtual images onto real driving scenes [59].
We also examined similar ideas, such as the following: a smart watch, a device that tracks the user’s vitals through biomarkers derived from sweat and is connected to GPS so that the user can be located in case of emergency; and a health app on an iPhone that monitors the user’s activity, sleep, and vitals.
Augmented reality (AR) systems and head-mounted displays have seen rapid advancements in automotive contexts, enhancing driver comfort and safety. AR-enabled HUDs provide real-time sensor data and collision avoidance support by merging virtual and real-world visuals, contributing to better situational awareness [59].
For robust safety design, AVs should possess sophisticated self-modifying capabilities that adapt to diverse user types and improve interactions with the real world. Effective AV design must prioritize coordination, cooperation, and collaboration between users and AI across interactive tasks. Nonetheless, personal factors such as personality traits and intrinsic motivation may influence the effectiveness and adoption of persuasive technologies, warranting further consideration.

3.2.1. Conceptual Design and Sketching: The iRisk Brain Networks and AI rHMI Architecture

These perspectives enable us to envision a future where responsible automated driving delivers safe, risk-free experiences. Building on these insights, we can conceptualize innovative solutions that hold promise for enhancing urban road safety.
The AI rHMI with an AI-powered robot (as exemplified in Figure 8) entails output channels that communicate crash risk information to the user (e.g., via visual displays and auditory signals), input channels that receive user input (e.g., via buttons, the steering wheel, and voice), and a dialog logic that specifies the relationships among input, output, and context parameters. Like most prior interactive ML systems, AI rHMI provides a convenient test bed for exploring novel interaction (and learning) with AI. In the prototyping process, we endeavour to improve the system’s dynamics and impact by carefully designing and implementing an interactive ML-AI strategy and testing its effects.
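The relationship between these three building blocks can be sketched as follows; the channel names, rules, and messages are illustrative assumptions rather than the implemented dialog logic.

```python
# Hedged structural sketch of the AI rHMI: output channels, input channels,
# and a dialog logic relating them to context. Names and rules are assumed.
from typing import Optional

OUTPUT_CHANNELS = {"visual": print, "auditory": print}  # stand-ins for displays/speakers

def dialog_logic(risk_level: str, user_input: Optional[str], context: dict) -> None:
    # Output side: choose channel and content from risk level and context.
    if risk_level == "high":
        OUTPUT_CHANNELS["auditory"]("Warning: high crash risk, prepare to take over.")
    elif risk_level == "mid" and not context.get("night", False):
        OUTPUT_CHANNELS["visual"]("Elevated risk ahead.")
    # Input side: respond to voice or button input from the user.
    if user_input == "explain":
        OUTPUT_CHANNELS["visual"]("Risk was raised due to dense traffic and reduced visibility.")

dialog_logic("high", user_input="explain", context={"night": True})
```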
iRisk and AI rHMI purpose: The iRisk focuses on understanding the psychological and emotional needs of drivers across various driving scenarios. At its core is AI rHMI—an in-vehicle social robot with a personalised persona that reflects different moods, expressions, movements, tones, and identities. The objective is to foster and accelerate disruptive innovations in vehicle connectivity and to co-design a connected vehicle service offering diverse in-vehicle content, such as real-time traffic updates, crash risk alerts, human factors risk, and destination-based information—powered by big data, maps, AI, and integrated portal services. This addresses the growing demand for intelligent vehicle experiences.
AI-powered in-vehicle OS: The in-vehicle OS is an open, end-to-end, multi-modal AI solution within the IoV ecosystem. It includes a dashboard interface, smart infotainment system, in-vehicle social robot, etc. The in-vehicle robot acts as a central communication hub, emotionally engaging with users and responding interactively to voice commands. It supports key functions like navigation, risk alerts, and situational awareness. It integrates driver safety features, including fatigue detection and careless driving warnings.
Advanced voice recognition and Internet of Things (IoT) integration: Includes voice recognition using natural language processing (NLP) enhanced by proprietary noise-cancellation technology. This ensures accurate in-vehicle voice command recognition, even in noisy environments. Recognising the central role of the IoT in connected mobility, iRisk explores vehicle-to-everything integration, further expanding the vehicle’s role in daily life.
Ecosystem and predictive analytics: The convergence of ICT and the automotive industry is driving innovation in automated driving, safe navigation, and interaction. iRisk and AI rHMI support this shift through intelligent/responsible driving solutions, big data analysis of naturalistic driving behaviour, and the development of predictive analytics platforms. These technologies aim to support AI-enabled automated driving and deliver a safe, comfortable, and personalised automated driving experience. The iRisk system enacts risk awareness during automated driving and holds significant potential to support users by leveraging ML, particularly DL algorithms, to improve human–automation efficiency. iRisk’s learning capability is based on a combination of algorithmic strategies, including a game tree search procedure (representing a brute-force decision mechanism) and deep neural networks (providing a more intuitive risk modelling approach). A hybrid architecture (ML/DL/LLMs) is a powerful option for iRisk:
  • DL/ML for real-time sensor data analysis, behavioural classification, and risk quantification. ML/DL modules handle behavioural data analysis, risk computation, and prediction.
  • LLM for real-time HMI dialogue, persuasive feedback, user education, and interpretability. LLM modules handle user-facing communication, real-time persuasion, and interpretability.
  • Multimodal integration to combine sensor insights with user-facing conversational interfaces. A central control logic orchestrates when and how each system communicates with the user, adapting to risk, behaviour, and context.
This would allow iRisk to leverage the data-driven precision of ML/DL with the human-centric adaptability of LLMs. In the iRisk system, LLMs do not replace the core ML/DL structure. Instead, they enhance the system’s intelligence, empathy, and usability. This synergy supports
  • More effective real-time user guidance.
  • Stronger behavioural influence through persuasive HMI.
  • Improved understanding of system outputs, making safety and performance gains more transparent and actionable for users.
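To make this division of labour concrete, the following minimal Python sketch illustrates one way a central control logic could orchestrate a DL-based risk estimator and an LLM-based dialogue layer. All names (e.g., estimate_risk, compose_user_message, the 0.5 escalation threshold) are hypothetical placeholders for illustration, not components of an implemented iRisk system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RiskEstimate:
    probability: float    # crash-risk probability from the ML/DL module (0..1)
    dominant_factor: str  # e.g., "drowsiness", "distraction"

def estimate_risk(sensor_frame: dict) -> RiskEstimate:
    """Hypothetical ML/DL module: maps multimodal sensor data to a risk estimate."""
    # A real system would run a trained model here; a stub keeps the sketch self-contained.
    drowsiness = float(sensor_frame.get("drowsiness_score", 0.0))
    return RiskEstimate(probability=min(1.0, drowsiness), dominant_factor="drowsiness")

def compose_user_message(estimate: RiskEstimate, context: dict) -> str:
    """Hypothetical LLM layer: turns numeric risk into interpretable, persuasive feedback."""
    # A deployed system might call a grounded LLM; a template keeps the sketch deterministic.
    return (f"Elevated {estimate.dominant_factor} detected "
            f"(estimated crash risk {estimate.probability:.0%}) in {context['environment']}. "
            "Consider taking a short break.")

def control_logic(sensor_frame: dict, context: dict) -> Optional[str]:
    """Central orchestration: escalate to user-facing dialogue only above a risk threshold."""
    estimate = estimate_risk(sensor_frame)
    if estimate.probability >= 0.5:   # assumed escalation threshold
        return compose_user_message(estimate, context)
    return None                       # stay silent at low risk

if __name__ == "__main__":
    print(control_logic({"drowsiness_score": 0.7}, {"environment": "urban traffic"}))
```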
ML provides computers with the ability to learn without being explicitly programmed and thus focuses on the development of computer programmes that can teach themselves to learn and change when exposed to new information. ML systems search behavioural data for patterns, a form of data mining, and adjust their actions accordingly. iRisk uses DL and neural networks to teach itself the fundamentals of crash risks and user behaviours and to compartmentalise these effects as either safety-taking (responsible) or risk-taking (irresponsible) behaviour. In a sense, just as a photo application can sort photos into albums of different people because it has processed countless face images down to the pixel level, iRisk’s intelligence is based on a large dataset of behavioural patterns, including crash and naturalistic data.
In addition, iRisk’s intelligence relies on different components, for instance, a game tree search procedure and neural networks that simplify the learning process. The tree search procedure can be considered a brute-force approach, whereas the convolutional networks afford a level of intuition about risk-based automated driving behaviour. In a sense, the neural networks are conceptually comparable to the evaluation function in other AIs, except that iRisk’s are learned rather than designed, which helps decipher problematic human factors associated with new risk behaviours. As in AlphaGo (from DeepMind), two main types of neural networks are trained inside iRisk: a value network (which evaluates the current situation in the environment) and a policy network (which describes the decision-making process). Both types of networks take the current user behavioural state as input, grade each possible risk factor through different formulas, and output the probability of crash risk.
  • On one hand, the value network provides an estimate of the risk value (quantified and qualified) based on the current user behavioural state during automated driving: the probability of crash risk, given the current drowsy state, for example. The output of the value network is the probability of a crash risk encounter during automated driving in a distinctive context. It focuses on the long run, analysing the whole situation to reduce possible crash risk encounters.
  • On the other hand, the policy network provides crash risk awareness guidance concerning the action that iRisk must choose, given the current drowsy state (for example) during automated driving. The output is a probability value for crash risk awareness over the set of permissible changes in driving behaviour (the output layer is as large as that set of actions). It focuses on the present and decides the next steps.
Essentially, actions (driving behavioural changes) with higher crash risk probability values correspond to actions that have a higher chance of leading to danger. One of the most important aspects of iRisk is its long-term learning ability based on DL. DL allows iRisk to continually improve its intelligence by researching and collecting a large quantity of behavioural data on the internet and in real time with users during automated driving. In a sense, this trains the policy network to help iRisk predict the resulting actions based on crash risk awareness, which in turn trains the value network to ascertain and evaluate crash causation situations. Accordingly, it assesses possible risks ahead based on possible behavioural changes and permutations, given various risk eventualities, before selecting the highest-rated outcome it deems most likely to transpire. In general, it averages the suggestions from the networks to produce a final decision on what action to take.
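A minimal sketch of this two-network idea is given below, assuming a fixed-length user-state feature vector and a small discrete set of candidate behaviour adjustments. The architecture sizes, feature names, and the random placeholder input are illustrative assumptions, not the trained networks described above.

```python
import torch
import torch.nn as nn

N_FEATURES = 8   # e.g., drowsiness, gaze-off-road time, workload, speed, ...
N_ACTIONS = 4    # e.g., continue, issue alert, suggest break, restrict automation

# Value network: estimates the crash-risk probability for the current user state.
value_net = nn.Sequential(nn.Linear(N_FEATURES, 32), nn.ReLU(),
                          nn.Linear(32, 1), nn.Sigmoid())

# Policy network: scores the permissible behaviour changes for the current user state.
policy_net = nn.Sequential(nn.Linear(N_FEATURES, 32), nn.ReLU(),
                           nn.Linear(32, N_ACTIONS))

def assess(state: torch.Tensor):
    """Combine both networks: a long-run risk estimate plus short-run action preferences."""
    risk = value_net(state).item()                            # probability of a crash risk encounter
    action_probs = torch.softmax(policy_net(state), dim=-1)   # distribution over behaviour changes
    return risk, action_probs

state = torch.rand(N_FEATURES)   # placeholder user-state features
risk, action_probs = assess(state)
print(f"estimated crash risk: {risk:.2f}", [round(p, 2) for p in action_probs.tolist()])
```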
What makes iRisk noteworthy is that it not only follows game-theoretic principles, as many ML, DL, and LLM systems do, but also accumulates human-factors intelligence as it gains knowledge, attaining a level of critical thinking, knowing, and doing. For example, augmented AI can surpass human capabilities in certain situations because it combines several intelligent techniques with the risk-conscious AI characterised for responsible automated driving, using visual recognition (e.g., facial signals) and voice recognition (e.g., speech tone and pitch) related to neurocognitive, neurophysiological, and neuropsychological behavioural effects. In short, the aim is comprehensive intelligence and learning ability. While DL has made considerable progress, it still trails behind when it comes to predicting humans’ unpredictability during automated driving; thus, there is a need for extensive behavioural data organised by user type.
The idea of ergonomically responsible AI systems for automated driving may have a substantial impact on the on-road safety of AVs if designed with risk-cognisant (or risk-conscious) AI in mind. In addition, the user can access their own performance, which can be observed through real-time monitoring on an app (on their smartphone or UI, for example).
Interactive ML systems for AVs that are designed with sensory signals about user states and safety planning in mind are important. They are crucial for eliciting calibrated trust and for de-risking as effectively as possible, including adaptable HMI designs that provide safety and risk insights, with brain networks that assess cognitive factors, physiology, and stress levels, specifically to assess the degree of crash risk during automated driving. This neurotechnology could truly move the needle on preventing crashes caused by human factors (e.g., drowsy drivers).
To train the iRisk system on performance components linked to crash risk, we consider different forms of behavioural measurement. Physiological measurements can be acquired via biosignals using devices such as a smart watch, bracelet, necklace, or ring, sensors for electrocardiography (ECG), and headsets for electroencephalography (EEG). These tools are non-invasive, low-cost, and provide detailed physiological and brain-wave data in real time. Many of these techniques repurpose clinical or medical applications to provide dynamic human performance information; for example, EEG methods developed for epilepsy or emotion detection can be used to monitor driver behaviour, as in the case of ECG-based drowsiness detection. In addition, other biosignal and physiological sensors could be implemented via car seats, the steering wheel, and other surfaces. Eye tracking is a more common physiological signal used by researchers to assess the driver’s attention.
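As an illustration of how such wearable biosignals could feed iRisk, the short Python sketch below computes RMSSD, a standard heart-rate-variability index, from RR intervals assumed to have already been extracted from an ECG. Reduced HRV is one commonly reported (though not definitive) correlate of drowsiness and stress; the interval values here are placeholders.

```python
import numpy as np

def rmssd(rr_intervals_ms: np.ndarray) -> float:
    """Root mean square of successive RR-interval differences (a standard HRV index)."""
    diffs = np.diff(rr_intervals_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

# Placeholder RR intervals in milliseconds, e.g., streamed from a smart watch or chest strap.
rr = np.array([812, 805, 798, 801, 795, 790, 788, 792], dtype=float)
print(f"RMSSD: {rmssd(rr):.1f} ms")  # lower values may indicate reduced vagal activity
```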
Research shows that EEG monitoring can reliably detect driver fatigue by tracking changes in brain activity before any outward signs appear, outpacing current technologies such as eyelid tracking. Brain data fed into safety systems such as iRisk can help nudge sleepy drivers or passengers and, if implemented responsibly within an automated driving context and infrastructure, could help drivers stay alert and engaged behind the wheel.
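A minimal sketch of such an EEG-derived fatigue indicator is given below, assuming a single raw EEG channel sampled at 256 Hz; the theta/beta band-power ratio used here is one frequently reported, but not universally accepted, fatigue marker. A production system would rely on multi-channel, artefact-cleaned data and a validated model.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate in Hz

def band_power(freqs: np.ndarray, psd: np.ndarray, low: float, high: float) -> float:
    """Integrate the power spectral density over a frequency band."""
    mask = (freqs >= low) & (freqs < high)
    return float(np.trapz(psd[mask], freqs[mask]))

def fatigue_index(eeg_window: np.ndarray) -> float:
    """Theta/beta power ratio over one analysis window; higher values may indicate drowsiness."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=FS * 2)
    theta = band_power(freqs, psd, 4, 8)
    beta = band_power(freqs, psd, 13, 30)
    return theta / beta

signal = np.random.randn(FS * 10)  # placeholder: 10 s of synthetic EEG
print(f"theta/beta ratio: {fatigue_index(signal):.2f}")
```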
Additionally, mood research focuses almost exclusively on subjective, phenomenological experience, which could be tracked via facial recognition or voice recognition technology pertaining to mood. In contrast, emotions have classically been viewed as multimodal psychophysiological systems, with at least four differentiable components: (a) the subjective (e.g., feelings of fear and apprehension), (b) the physiological (e.g., activation of the sympathetic nervous system), (c) the expressive (e.g., the facial expression of fear), and (d) the behavioural (e.g., flight away from danger) [67]. In sharp contrast to emotion research, mood measurement essentially involves the assessment of subjective feelings, without any systematic consideration of these other components [68]. Studies have examined and measured user states using closed-question surveys, EEG and ECG, and physiological motion-sensor measurements (e.g., biosignal sensors). Essentially, iRisk uses a multimodal psychophysiological system for crash risks. In this case, it estimates the idiosyncratic experience of user states using a variety of clinical and psychophysiological measuring instruments to estimate emotional, cognitive, and physiological (motion) activity and reactivity states. The fields of neuroscience, psychology, behavioural science, physiology, and kinematics are known to employ various human performance measurement tools, including subjective, emotional, behavioural, and physiological measures, to tap into human-state activity and reactivity, quantitatively and qualitatively.
The theoretical and practical fundamentals of using neurophysiological sensors to capture emotion activity in a systems engineering context [69] underline the complexity of regulating (internal and external) effects on the human body, including highly individual physiological (emotion) responses, and provide a starting point for engineering researchers entering the field. Examples of measures for emotional states (see Ref. [70] and Figure 9) are the PANAS scales and the modified Differential Emotions Scale (mDES; Ref. [71]).
Research on measuring the behavioural component of emotion focuses mostly on
  • The analysis of facial expressions, using the Facial Affect Scoring Technique (FAST; Ekman et al., 1971), the Facial Action Coding System (FACS) [72], and the Specific Affect Coding System (SPAFF) [73];
  • The analysis of amplitude and pitch of voice [74,75]; and
  • The coding of body posture and gesture [76,77].
Indeed, some prominent measures in this area, such as the Multiple Affect Adjective Checklist–Revised (MAACL-R) [68] and the Positive and Negative Affect Schedule–Expanded Form (PANAS-X) [67], contain alternative versions that permit assessing (a) short-term fluctuations in current mood or (b) long-term individual differences in trait affect [68]. With respect to physiological measurements of emotion, a great amount of work has been performed to attempt to decode either physiological responses of distinct emotions (such as fear and anger) or physiological responses that can be sorted along dimensions of arousal and valence [69]. For a decent overview of the relevant experiments conducted in psychology, the measurement tools used, and the reasoning for either approach, we suggest reading reviews on measurements of emotion (see Ref. [78]).
Ref. [79] suggests that there is no “gold standard” measure of emotional responding, and that “experiential, physiological, and behavioural measures are all relevant to understanding emotion and cannot be assumed to be interchangeable.” However, we recognise that humans are complex creatures whose behaviour can be unpredictable, which creates a challenge of depth and nuance for AI. In addition, we believe in training behavioural models on large datasets, for instance, naturalistic driving data.
For the iRisk system to reliably detect behaviour patterns, efficient training and retraining methods are needed for its proficiency (illustrated in Figure 10). As a result, we propose training the system on static and moving images related to risk-taking behaviour as well as feeding it with multimodal psychophysiological sensor data, thus allowing the system to interactively define perceived “user state” concepts for tracking neuro (cognitive, psychological, physiological) behaviour over time, including long-term effects.
The training material (visual-, image-, voice-, word-, and text-based) can be web-based, in the form of real-world situations, as well as based on actual real-time experiences of on-road traffic situations. Human factors and AI designers could train the iRisk system by providing examples of different user-state scenarios with different user-type characteristics. These examples would be used to learn behaviour metrics based on a behavioural model, expressed as a weighted sum of user behaviour metrics (covering risk-taking or safety-taking behaviours). This includes histograms of long-term effects based on human factors, performance, driving behaviour, and knowledge. In addition, the behaviour model involves a process that applies a riskiest-factor classifier to re-rank risk behaviour patterns by their likelihood of membership in the low-negative-value (low risk factor) behaviour class or the high-negative-value (high risk factor) behaviour class.
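The following minimal Python sketch illustrates this idea under strong simplifying assumptions: each behaviour pattern is summarised by a few normalised metrics, the weighted-sum coefficients are arbitrary placeholders, and a logistic-regression classifier stands in for the riskiest-factor classifier used for re-ranking. It is an illustration of the mechanism, not learned iRisk parameters.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

METRICS = ["drowsiness", "gaze_off_road", "late_takeover", "speeding"]
WEIGHTS = np.array([0.4, 0.3, 0.2, 0.1])       # assumed weighted-sum behaviour model

def behaviour_score(metrics: np.ndarray) -> float:
    """Weighted sum of user behaviour metrics; higher means riskier behaviour."""
    return float(WEIGHTS @ metrics)

# Toy training data: rows of metric values labelled 0 = safety-taking, 1 = risk-taking.
X = np.array([[0.1, 0.2, 0.0, 0.1],
              [0.8, 0.7, 0.9, 0.6],
              [0.2, 0.1, 0.1, 0.0],
              [0.9, 0.8, 0.7, 0.9]])
y = np.array([0, 1, 0, 1])
clf = LogisticRegression().fit(X, y)

# Re-rank new behaviour patterns by their estimated probability of being high risk.
patterns = np.array([[0.3, 0.4, 0.2, 0.1],
                     [0.7, 0.9, 0.6, 0.8]])
high_risk_prob = clf.predict_proba(patterns)[:, 1]
for idx in np.argsort(-high_risk_prob):
    print(f"pattern {idx}: score={behaviour_score(patterns[idx]):.2f}, "
          f"P(high risk)={high_risk_prob[idx]:.2f}")
```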
We also hypothesise that a more effective oracle approach permits the system to revise and refine its models in relation to its goals; the aim is thus to compare multiple alternatives. To explore potential models, we consider a historical (or anthropological) visualisation of behaviour patterns to monitor progress and roll back to previous versions, as well as undoing and redoing actions as part of algorithmic learning, thereby unlearning and relearning crash events (a minimal versioning sketch follows the application list below). Conceptually, this extends the traditional feedback loop by allowing the possibility to branch off and explore different alternatives as part of determining the best-desired concept. In this sense, we may consider mechanisms for obtaining better-quality behaviour models based on examining and revising actions. The following are possible areas of application for iRisk:
  • Driver Performance: By integrating AI-based qEEG and HRV scans, we can monitor the cognitive performance and stress levels of AV users. These scans can provide real-time insights into user attention, workload, and stress levels, allowing the AV to adapt and provide personalized assistance to optimize user performance and safety.
  • Health and Wellbeing Monitoring: MCI biomarker scans can be used to detect early signs of cognitive impairment, declines in executive functions (EFs), or ailments such as Alzheimer’s disease, dementia, or heart attacks in users. By analysing biomarkers associated with cognitive decline, we can provide interventions, support, or alerts to ensure safety.
  • Personalised Driving Experience: AI can analyse qEEG, HRV, and MCI biomarker data to create individual user profiles. These profiles can be used to tailor the UX by adjusting AV settings, such as seat positioning, temperature, lighting, and entertainment options, to optimize comfort, reduce stress, and enhance overall satisfaction.
  • User Support and Safety: AI can process qEEG, HRV, and MCI biomarker data to detect fatigue, distraction, or cognitive decline that may impact safety. The AV can then provide adaptive assistance, alerts, or interventions to mitigate risks and enhance safety.
  • Research and Development: AI-based analysis of qEEG, HRV, and MCI biomarker data can provide insights into user behaviour, performance, and habits. This information can inform efforts to improve AV design and to develop advanced systems and safety features.
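Returning to the versioning idea raised before this list, the sketch below shows, under the assumption that behaviour models can be snapshotted as plain objects, how committing, rolling back, and branching alternative behaviour models might look. The class and the example weights are hypothetical illustrations, not part of an implemented toolchain.

```python
class ModelHistory:
    """Minimal version history for behaviour models: commit, roll back, branch."""

    def __init__(self):
        self.versions = []   # chronological snapshots of behaviour models
        self.branches = {}   # named alternatives branched off a given version

    def commit(self, model, note: str = "") -> int:
        self.versions.append({"model": model, "note": note})
        return len(self.versions) - 1          # version index

    def rollback(self, index: int):
        return self.versions[index]["model"]   # restore an earlier behaviour model

    def branch(self, index: int, name: str, model):
        self.branches[name] = {"parent": index, "model": model}

history = ModelHistory()
v0 = history.commit({"weights": [0.4, 0.3, 0.2, 0.1]}, "initial behaviour model")
v1 = history.commit({"weights": [0.5, 0.3, 0.1, 0.1]}, "after retraining on new crash data")
history.branch(v1, "conservative", {"weights": [0.6, 0.2, 0.1, 0.1]})
print(history.rollback(v0))  # undo: return to the first behaviour model
```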
For an interactive training strategy, a comprehensive overview can be presented by selecting a representative set of examples that provide good coverage of the positive (safety-behaving) and negative (risk-behaving) models. This could be formalised by assembling a generous selection of information built upon state-of-the-art multimodal sensor systems. Additionally, a projected overview of examples can illustrate variation along major dimensions of behaviour concepts, formalised by developing a non-linear projection technique, similar to principal component analysis, applied to multimodal cognitive, psychological, and neurophysiological sensor data, so that selected examples cover a single principal dimension while varying as little as possible in all other dimensions.
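As a rough illustration, the sketch below uses ordinary PCA as a stand-in for the projection technique mentioned above: it keeps examples that vary little off the first principal component and then spreads the selection along that component. The synthetic data and selection heuristics are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
examples = rng.normal(size=(200, 6))    # placeholder multimodal sensor feature vectors

pca = PCA(n_components=6).fit(examples)
projected = pca.transform(examples)

# Keep examples that vary little along all but the first principal component ...
off_axis_spread = np.abs(projected[:, 1:]).sum(axis=1)
candidates = np.argsort(off_axis_spread)[:50]
# ... then spread the selection along the first component for good coverage.
order = candidates[np.argsort(projected[candidates, 0])]
selected = order[:: max(1, len(order) // 10)][:10]
print("representative example indices:", selected.tolist())
```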
During the iterative learning process, the system is presented with examples illustrating the current understanding of the desired effect so that it can further improve its intelligence. In training iRisk to recognise different multimodal sensor data, we target aspects of the interactive learning process that enable the estimation of multimodal behaviour models for different ‘user types’, thus diversifying behaviour models across cultural contexts.
The system’s ability to continuously match specific behaviour to risk awareness may be illustrated by presenting sets of the best- and worst-matching scenarios (i.e., low to high risk certainty). This may help evaluate intelligence better than the standard method of presenting an entire set of ranked examples. Nonetheless, high-risk-certainty examples concerning safety criticality and risk fatality are extremely easy to recognise compared with low-risk-certainty examples; thus, it would be ideal to provide additional information to the learning algorithm. As a result, we consider strategies reflecting how humans act with technology when no one is watching and thereafter select what could be considered minor sets of examples, of low value, to feed the learning system. This may provide intuitive overviews of the least positive and negative risk values of a situation, defined based on user type. Carefully chosen examples could guide the selection of learning examples that better steer the system towards human-factors intelligence, making iRisk an expert on what it means to be human, as well as on the unpredictability that comes with it. To provide characteristic crash risk models and the best-performing strategy, the system may learn by generalising from examples of ontological classifications.
When designing an AI rHMI, we might consider the dHMI and internal communication HMIs, which are used to recognise the need for a “multi-actor” communication channel. A novel concept here is the introduction of an interactive HMI strategy for mediating long-term effects and user behaviours related to crash risks. AI rHMI integrates information on user states using the following interfaces: cogHMI (neurocognitive state), psyHMI (neuropsychological state), and phyHMI (neurophysiological state). We also consider other influencing factors, such as the AV system (functional scope), user types (e.g., personality, age, experience, gender, etc.), and the driving environment.
Monitoring risk factors should be the first line of defence for automated driving. Co-risk factors refer to the grouping of low-level to high-level risk factors (e.g., low to high crash-risk levels) in automated driving. The following crash risk rating system applies (see Figure 11).
  • Lower-level crash risk (Green) rating marked as Rated-M, with the capitalised M (for manageable). Perceived crash risk is manageable based on user state and behaviour during automated driving. The driving conditions are normal with low risk.
  • Mid-level crash risk (Amber) rating marked as Rated-C, with the capitalised C (for caution). Perceived crash risk is controllable, but user/driver caution is advised based on user state and behaviour during automated driving. Attention: a moderate risk of collision is detected. Exercise caution and remain attentive.
  • Higher-level crash risk (Red) rating marked as Rated-R, with the capitalised R (for restricted behaviour). The user/driver should restrict risk-taking behaviour, as safety is at critical risk due to user state and behaviour during automated driving. Warning: a high risk of collision is identified.
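A minimal sketch of this rating logic is shown below, mapping a continuous crash-risk probability (as produced, for example, by the value network described earlier) to the three ratings. The numeric thresholds are illustrative assumptions, not calibrated values.

```python
def crash_risk_rating(risk_probability: float) -> tuple:
    """Map a crash-risk probability (0..1) to the proposed Rated-M/C/R scheme."""
    if risk_probability < 0.3:        # assumed threshold
        return "Rated-M", "Green: conditions normal, perceived risk manageable."
    if risk_probability < 0.7:        # assumed threshold
        return "Rated-C", "Amber: moderate collision risk, exercise caution."
    return "Rated-R", "Red: high collision risk, restrict risk-taking behaviour."

print(crash_risk_rating(0.55))  # -> ('Rated-C', 'Amber: moderate collision risk, ...')
```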
It is critical that the AI rHMI communicates these risk factors effectively so that users are supported in their decision-making process and risks are identified and resolved quickly. AI rHMI provides risk estimates and references based on iRisk’s models and knowledge frameworks. It consists of two interacting components: (1) a cloud-based interactive ML-AI engine for risk ontology and behavioural models and (2) an interface to communicate information to users. Thus, it enhances UX via an intuitive, personalised design that encourages responsible behaviour and safety during automated driving. AI rHMI can strike a balance between user and automation by interactively enabling users to be risk savvy and by sanctioning irresponsible behaviour. User-interactive AI plays an important role in managing the overwhelming automation pitfalls within automated driving while supporting users with safety, comfort, usefulness, etc., in mind. Additionally, it provides users with transparent information on what they learn, when they learn, where they learn, and how they learn during automated driving.
When we think of new ways of interacting with AVs, our intrinsic interest is in exploring risk warnings and dynamics within the design of HMIs to support users in all phases of automated driving. A possible approach towards designing a risk-affinity HMI would be through an AI-powered risk assessment that monitors user states and communicates mental states, the environment, and vehicle states to the user. Ref. [63] proposed a framework that categorises HMIs into subsections based on the relationship between humans and AVs (Figure 12), suggesting that the range of HMIs be separated by explicit task-related strategies and arguing for the value of this separation in understanding the underlying elements imposed by each.
The AI rHMI is designed to make daily automated driving safe in the long term (see Table 3). Users can use the HMI/UI to track their mental states and vitals (such as fatigue, workload, attention, stress, etc.) with the help of aids that can be connected to iRisk, such as a smart watch that tracks their biomarkers, a smart ring, a bracelet, a headset, or a DMS. These allow users to know when their vitals or mental state are not normal.

Custom Settings and Data Privacy Overview

User onboarding and data privacy overview: Upon first use, the user is guided through a “First Use Settings” process before accessing the Main Screen, see Figure 13 and Figure 14. From the main menu, users can access Privacy settings, Preferred vitals selection, Personal information (view/edit), Educational content on risk conditions, and improvement strategies.
To use the system, users must create a personal profile. Vitals tracking becomes available only after connecting a sensor, with options to edit both privacy settings and linked sensors at any time.
Security and privacy: The system handles highly sensitive user data, including driving history, mental state, personal information, and GPS location. By default, only the user has access to this data. Each user can configure what types of information the system receives, including the degree of personal data shared.
In emergency situations, access to user data is granted exclusively via a secure QR code, ensuring both protection and control over personal information.
User-defined autonomy levels: Users can select their preferred level of autonomy in the system, ranging from full manual control to full automation. The system adapts its behaviour accordingly:
  • Manual Control
    • The user monitors the situation, generates options, makes decisions, and physically executes actions.
    • The system provides no decision-making assistance.
  • Shared Control
    • Both the user and system generate potential options.
    • The user selects which option to implement, while execution is shared between the system and the user.
    • The system may suggest decision/action alternatives, executing them only with user approval.
    • This mode enables equal-autonomy collaboration.
  • Automated Decision-Making
    • The system generates, selects, and executes an option, with user input influencing the alternatives.
    • In risk-sensitive situations, the system operates based on the current goal and execution context.
    • A veto window is provided—e.g., the user has 30 s to cancel or override a decision.
    • If no user response is received within the time limit, the system proceeds with full automation.
This autonomy framework ensures flexibility, user agency, and safety—adapting to individual preferences and varying driving-related risk scenarios.
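To illustrate the automated decision-making mode, the sketch below implements the veto window as a simple synchronous loop; a vehicle implementation would instead use asynchronous, non-blocking HMI events. The function names and the shortened demo window are assumptions for illustration only.

```python
import time

def execute_with_veto(proposed_action: str, poll_user, window_s: float = 30.0) -> str:
    """Run the veto window: execute automatically unless the user cancels or approves in time."""
    deadline = time.monotonic() + window_s
    while time.monotonic() < deadline:
        response = poll_user()        # e.g., HMI button or voice input; None means no response
        if response == "veto":
            return f"cancelled: {proposed_action}"
        if response == "approve":
            return f"executed early: {proposed_action}"
        time.sleep(0.1)
    return f"executed automatically after timeout: {proposed_action}"

# Demo with a shortened window and a silent user: the system proceeds with full automation.
print(execute_with_veto("reduce speed and increase headway", lambda: None, window_s=1.0))
```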
The main UI comprises the following screens (see Figure 15): the main screen, specific risk-aware diagnosis screen, additional risk indicators screen, specific smart device screen, full driving report screen, and a safety social robot (pop-up screen), etc. The visual design of these screens is informed by a careful review of symbols and colours, and the meanings they convey to different users. In interface design, symbols and colour play a crucial role in shaping how users perceive and interact with a product. Colours act as visual codes that help users understand and navigate the system effectively, see Figure 16. Data visualisation is a powerful tool in interactive design. When selecting colours to complement the design, it is also essential to consider users with colour vision deficiencies. These conditions are relatively common and affect a significant portion of the population. Therefore, the colour scheme should be accessible and inclusive for all users. It is important to acknowledge these considerations early in the design process, particularly when creating visual symbols.
Even though it can sometimes be challenging to cater to everyone’s needs, the aim should be to make the information and design as accessible and transparent as possible.

3.3. Phase 5–7: Application—Prototyping and Simulated Testing, the Heuristic Evaluation and Implementation Protocol, Real-World Deployment and Longitudinal Study

This concept has not yet progressed to phases 5–7, which involve heuristic evaluation and implementation. Thus, no experiments, simulations, or technical demonstrations assessing the system’s effectiveness or feasibility can be offered in this paper. However, we believe it is important to highlight potential solutions through theory before practice. As such, this paper serves as a conceptual foundation (developed through brainstorming and sketching) for future progress. Time is of the essence: when engineers shelve conceptual ideas simply because they have not yet been tangibly developed, we risk falling behind in knowledge discovery. There is a pressing need for efforts aimed at improving the interpretability of DL systems, whose inherent opacity has, until recently, often served as an excuse for irresponsibility [80].
Ref. [81] has stressed the importance of more well-defined, systematic, and design guidelines for appropriate in-vehicle HMIs. In this context, the potential applications of iRisk could prove beneficial and enhance performance over time. As Ref. [82] (p. 3) argues, “the technology that will fully enable autonomous driving is still in the future, which requires developers to envision this future in order to be able to design for it.” Furthermore, “if human factors would only use the mind-set and methods of yesterday to solve the problems of today, it would inadvertently contribute to the complexity of tomorrow and would be in a strong dilemma” [83].
As Ref. [84] states, interaction design always engages with an imagined future, and ergonomic design thinking provides the opportunity to understand the present state. Therefore, “researchers generate new knowledge by understanding the current state and then suggesting an improved future state in the form of a design” [84]. In essence, it involves deep reflection in iteratively understanding the people, problems, and contexts surrounding a situation that researchers believe they can improve [84]. This underscores the importance of embedding different user requirements in the minds of designers—what Ref. [82] describes as “reframing the problem and identifying the ultimate solution from an infinite set of possibilities.”
Alongside ongoing discourse surrounding interpretability, mitigating ‘mode confusion’ remains a key objective in designing safe interactions with AVs. This is because users’ perceptions of a technology can shape how that technology is used and adopted over time. It is essential that we understand how these perceptions or mental models influence user performance. We consider whether risk-based behaviours could be altered simply by making users aware of perceived crash risk encounters and system criticality in real time.
A usability test will be employed in this project, as it is a widely used technique in user-centred interaction design to assess and evaluate a product by testing it on potential users. In these usability tests, we will measure the perceived experience of using iRisk and AI rHMI in terms of usability, ease of use, and clarity. The findings will be used to address expressed concerns and highlight missing links.
As we look ahead, it seems clear that iRisk and AI rHMI will play a pivotal role in automated driving in the long term. iRisk and AI rHMI cater especially to the unconventional worlds of automated driving, working for and in cooperation with their users and thus assisting in on-road safety. The system will engage users, foresee crash risk concerns and report them, and enlighten users about their own wellbeing. The aim is simply to develop a system that allows for quality of life and safety.

4. Limitations

While the proposed iRisk system offers a compelling vision for responsible automated driving through intelligent, human-aware interfaces, several limitations must be acknowledged. Firstly, this research remains conceptual in nature. The current stage of the project is confined to ideation, informed by theoretical insights and domain expertise, and has not yet progressed to heuristic evaluation or empirical validation. As such, the system’s practical feasibility and effectiveness remain untested. Future work should focus on iterative prototyping and user testing to assess performance, usability, and safety implications under real-world and simulated conditions.
Secondly, the system’s reliance on ML and DL techniques introduces limitations tied to data availability and quality. The predictive power of the iRisk system is contingent on access to extensive and diverse datasets encompassing physiological, emotional, cognitive, and behavioural states across a range of driving scenarios. However, acquiring such comprehensive and annotated datasets, particularly naturalistic driving data that captures rare but high-risk behaviours, remains a significant challenge. Without this data, model accuracy, generalisability, and fairness may be compromised. Moreover, while LLMs add significant value, they also come with constraints:
  • LLMs do not natively process real-time sensor data; this remains the strength of DL.
  • Their output can be non-deterministic and occasionally inaccurate if not grounded in real data.
  • They require careful safety and alignment tuning to avoid misleading the user or offering inappropriate advice.
Thus, LLMs are best deployed as a communication and interpretive layer, and not as a substitute for sensor-processing models.
A further limitation arises from the dependence on insights provided by a select group of experts to inform the design direction. While these contributions are valuable, they may not fully capture the perspectives and needs of broader user populations, especially those less familiar with automation technology, such as older adults or individuals with cognitive impairments. Co-design practices involving diverse end-users would provide richer and more inclusive design insights, ensuring that the system serves a wide range of user needs and contexts.
Additionally, the iRisk system, which leverages DL architectures such as neural networks for behavioural prediction, inherits the “black-box” problem inherent to such models. These systems lack transparency in how they generate outputs—such as risk estimations—making it difficult for users or regulators to interpret or verify decisions. This opacity could affect user trust and regulatory acceptance. There is, therefore, a need for future research in explainable AI (XAI) methods to improve interpretability without sacrificing system performance.
Another concern relates to the possibility of over-reliance on persuasive technology. While the iRisk system is designed to encourage responsible user behaviour through persuasive interaction, it may unintentionally promote user complacency or over-trust in the system’s capabilities. If users begin to view the system as infallible, this could diminish their engagement and critical oversight, increasing safety risks in edge cases. Thus, the design must balance persuasion with transparency and ensure calibrated trust.
Privacy and ethical considerations are also central to the implementation of iRisk. The use of biosignals (e.g., EEG, ECG), facial and voice recognition, and other psychophysiological inputs to assess user state raises important questions regarding data protection, consent, and user autonomy. Real-time monitoring of emotional and cognitive states, while beneficial for safety, could be perceived as intrusive or even coercive. Designers must ensure that data is collected, processed, and stored securely, and that users retain control over their data through transparent privacy policies and opt-in mechanisms.
From a technical perspective, integrating multimodal sensing hardware into vehicles presents practical challenges. Physiological sensors may be sensitive to movement artefacts or require specific placements that could interfere with user comfort or vehicle operation. Some technologies, such as EEG headsets or eye trackers, might not yet be reliable or unobtrusive enough for mass-market use in everyday driving environments. Usability testing and ergonomically informed hardware design will be crucial to ensure robust and seamless integration.
Finally, there are cross-cultural and socio-psychological variabilities to consider. Human behaviours, emotional expression, and risk perception differ significantly across individuals and cultural contexts. A risk-assessment system trained predominantly on Western behavioural norms may not perform well in non-Western contexts or among diverse demographic groups. Localisation and culturally adaptive system training and design are necessary to ensure fairness, inclusiveness, and global applicability.
Taken together, these limitations highlight the complexity of designing responsible AI-based systems for automated driving. While the iRisk framework lays an important theoretical foundation, empirical research, stakeholder involvement, and interdisciplinary collaboration will be vital to move from concept to real-world implementation.

5. Conclusions

The future sees a greater use of automation in the delivery of mobility, especially for individuals who try to balance the competing demands of work, family, and social life. In a sense, AVs can be seen as a complement to traditional means of transportation by situating those traditional means in the context of technology, a statement of contemporary social order in which the traditional and the innovative co-exist. Thus, iRisk encompasses an array of possibilities towards risk awareness, usefulness, and resourcefulness. It promises a safe, comfortable, and responsible automated driving experience. The goal is to advance knowledge on how users can safely interact with AVs by exploring advanced solutions.
There are tensions between what people say they do and what they do when no one is watching. Thus, this paper highlights an intuitive in-vehicle risk blocker. The narrow sense of ‘AI’, or the system-level intelligence programmed into AVs’ neural networks for risk recognition, needs to be considered. Granted, there are driver-monitoring systems (DMS) that provide indirect functionality for users by manually deactivating the automation and handing the driving over to users. However, experts have argued that users are still prone to risk-taking behaviour even with these systems installed. At the other extreme, users can be discouraged from using the DMS, as it provides a limited mechanism for satisfaction. Our research efforts aim to advance knowledge and foster a holistic body of ideas.
To conclude, iRisk and AI rHMI may be crucial in de-risking and minimal risk avoidance through risk recognition, awareness, and recovery. Although this paper references AI in a general sense to describe intelligent system behaviour, the technical foundation of iRisk relies on machine learning methodologies. These include deep learning neural networks for pattern recognition, behavioural classification, and crash risk prediction, as well as structured learning techniques such as policy and value networks akin to those used in reinforcement learning. Therefore, references to AI in this context specifically denote ML-driven processes embedded within the system architecture.

Author Contributions

Conceptualisation, N.Y.M.; methodology, N.Y.M.; software, N.Y.M.; validation, N.Y.M.; formal analysis, N.Y.M.; investigation, N.Y.M.; resources, N.Y.M. and K.B.; data curation, N.Y.M.; writing—original draft preparation, N.Y.M.; writing—review and editing, N.Y.M.; visualisation, N.Y.M.; supervision, K.B.; project administration, K.B.; funding acquisition, K.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author(s).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ADS: Automated Driving System
AV: Automated Vehicle
INEX: INternal-EXternal
AI rHMI: Artificial Intelligence-powered risk-aware Human–Machine Interface
iRisk: intelligent Risk assessment system
LLM: Large Language Model
ML: Machine Learning
DL: Deep Learning
D-FMEA: Design Failure Mode and Effects Analysis
P-FMEA: Process Failure Mode and Effects Analysis

References

  1. SAE International. Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles (J3016_202104); SAE International: Warrendale, PA, USA, 2021. [Google Scholar]
  2. Krause, A.; Singh, A.; Guestrin, C. Near-optimal Sensor Placements in Gaussian Processes: Theory, Efficient Algorithms and Empirical Studies. J. Mach. Learn. Res. 2008, 9, 235–284. [Google Scholar]
  3. Yang, Q. The Role of Design in Creating Machine-Learning-Enhanced User Experience. In Association for the Advancement of Artificial Intelligence (AAAI) Spring Symposia; Technical Report SS-17-04; 2017; Available online: https://cdn.aaai.org/ocs/15363/15363-68257-1-PB.pdf (accessed on 10 June 2025).
  4. Karakaya, B.; Bengler, K. Investigation of Driver Behavior during Minimal Risk Maneuvers of Automated Vehicles. In Proceedings of the 21st Congress of the International Ergonomics Association (IEA 2021); Lecture Notes in Networks and Systems, 221, Online, 13–18 June 2021; Springer: Cham, Switzerland, 2021. [Google Scholar]
  5. Li, M.; Holthausen, B.E.; Stuck, R.E.; Walker, B.N. No risk no trust: Investigating perceived risk in highly automated driving. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Utrecht, The Netherlands, 21–25 September 2019; pp. 177–185. [Google Scholar]
  6. Stapel, X.H.I.; Gentner, A.; Happee, R. Perceived safety and trust in SAE Level 2 partially automated cars: Results from an online questionnaire. PLoS ONE 2021, 16, e0260953. [Google Scholar]
  7. Kyriakidis, M.; de Winter, J.C.F.; Stanton, N.; Bellet, T.; van Arem, B.; Brookhuis, K.; Martens, M.H.; Bengler, K.; Andersson, J.; Merat, N.; et al. A human factors perspective on automated driving. Theor. Issues Ergon. Sci. 2019, 20, 223–249. [Google Scholar] [CrossRef]
  8. Carroll, J.M.; Carrithers, C. Training wheels in a user interface. Commun. ACM 1984, 27, 800–806. [Google Scholar] [CrossRef]
  9. Roshandel, S.; Zheng, Z.; Washington, S. Impact of real-time traffic characteristics on freeway crash occurrence: Systematic review and meta-analysis. Accid. Anal. Prev. 2015, 79, 198–211. [Google Scholar] [CrossRef]
  10. Savolainen, P.T.; Mannering, F.L.; Lord, D.; Quddus, M.A. The statistical analysis of highway crash-injury severities: A review and assessment of methodological alternatives. Accid. Anal. Prev. 2011, 43, 1666–1676. [Google Scholar] [CrossRef]
  11. Mannering, F. Temporal instability and the analysis of highway accident data. Anal. Methods Accid. Res. 2018, 17, 1–13. [Google Scholar] [CrossRef]
  12. Xiao, D.; Zhang, B.; Chen, Z.; Xu, X.; Du, B. Connecting tradition with modernity: Safety literature review. Digit. Transp. Saf. 2023, 2, 1–11. [Google Scholar] [CrossRef]
  13. Mbelekani, N.Y.; Bengler, K. Risk and safety-based behavioural adaptation towards automated vehicles: Emerging advances, effects, challenges and techniques. In International Congress on Information and Communication Technology; Springer Nature: Singapore, 2024; pp. 459–482. [Google Scholar]
  14. Chu, Y.; Liu, P. Human Factor Risks in Driving Automation Crashes. In HCI in Mobility, Transport, and Automotive Systems; Krömker, H., Ed.; HCII 2023. Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2023; Volume 14048, pp. 3–12. [Google Scholar] [CrossRef]
  15. Simonet, S.; Wilde, G.J.S. Risk: Perception, acceptance and homeostasis. Appl. Psychol. Int. Rev. 1997, 46, 235–252. [Google Scholar] [CrossRef]
  16. National Transportation Safety Board. Collision Between a Car Operating with Automated Vehicle Control Systems and a Tractor-Semitrailer Truck near Williston, Florida, 7 May 2016; Highway Accident Report NTSB/HAR-17/02; NTSB: Washington, DC, USA, 2017. [Google Scholar]
  17. National Transportation Safety Board. Rear-End Collision Between a Car Operating with Advanced Driver Assistance Systems and a Stationary Fire Truck, Culver City, California, 22 January 2018; Highway Accident Report NTSB/HAB-19/07; NTSB: Washington, DC, USA, 2019. [Google Scholar]
  18. National Transportation Safety Board. Collision Between a Sport Utility Vehicle Operating with Partial Driving Automation and a Crash Attenuator, Mountain View, California, 23 March 2018; Highway Accident Report NTSB/HAR-20/01; NTSB: Washington, DC, USA, 2020. [Google Scholar]
  19. National Transportation Safety Board. Collision Between Car Operating with Partial Driving Automation and Truck-Tractor Semitrailer Delray Beach, Florida, 1 March 2019; Highway Accident Report NTSB/HAR-20/01; NTSB: Washington, DC, USA, 2020. [Google Scholar]
  20. National Transportation Safety Board. Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian, Tempe, Arizona, 18 March 2018; Highway Accident Report NTSB/HAR-19/03; NTSB: Washington, DC, USA, 2019. [Google Scholar]
  21. National Transportation Safety Board. Low-Speed Collision Between Truck-Tractor and Autonomous Shuttle, Las Vegas, Nevada, 8 November 2017; Highway Accident Report NTSB/HAR-19/06; NTSB: Washington, DC, USA, 2019. [Google Scholar]
  22. Rudin-Brown, C.M.; Parker, H.A. Behavioural adaptation to adaptive cruise control (ACC): Implications for preventive strategies. Transp. Res. Part F Traffic Psychol. Behav. 2004, 7, 59–76. [Google Scholar] [CrossRef]
  23. Metz, B.; Wörle, J.; Hanig, M.; Schmitt, M.; Lutz, A.; Neukum, A. Repeated usage of a motorway automated driving function: Automation level and behavioural adaption. Transp. Res. Part F Traffic Psychol. Behav. 2021, 81, 82–100. [Google Scholar] [CrossRef]
  24. Beggiato, M.; Krems, J.F. The evolution of mental model, trust and acceptance of adaptive cruise control in relation to initial information. Transp. Res. Part F Traffic Psychol. Behav. 2013, 18, 47–57. [Google Scholar] [CrossRef]
  25. Homans, H.; Radlmayr, J.; Bengler, K. Levels of Driving Automation from a User’s Perspective: How Are the Levels Represented in the User’s Mental Model. In International Conference on Human Interaction and Emerging Technologies; Springer: Cham, Switzerland, 2019; pp. 21–27. [Google Scholar] [CrossRef]
  26. Wozney, L.; McGrath, P.J.; Newton, A.; Huguet, A.; Franklin, M.; Perri, K.; Leuschen, K.; Toombs, E.; Lingley-Pottie, P. Usability, learnability and performance evaluation of Intelligent Research and Intervention Software: A delivery platform for eHealth interventions. Health Inform. J. 2016, 22, 730–743. [Google Scholar] [CrossRef] [PubMed]
  27. Ojeda, L.; Nathan, F. Studying learning phases of an ACC through verbal reports. Driv. Support Inf. Syst. Exp. Learn. Appropr. Eff. Adapt. 2006, 1, 47–73. [Google Scholar]
  28. Heimgärtner, R. Human Factors of ISO 9241-110 in the Intercultural Context. Adv. Ergon. Des. Usability Spec. Popul. 2014, 3, 18. [Google Scholar]
  29. Körber, M.; Baseler, E.; Bengler, K. Introduction matters: Manipulating trust in automation and reliance in automated driving. Appl. Ergon. 2018, 66, 18–31. [Google Scholar] [CrossRef]
  30. Lee, J.D.; See, K.A. Trust in automation: Designing for appropriate reliance. Hum. Factors J. Hum. Factors Ergon. Soc. 2004, 46, 50–80. [Google Scholar] [CrossRef]
  31. Hecht, T.; Darlagiannis, E.; Bengler, K. Non-driving related activities in automated driving–an online survey investigating user needs. In Proceedings of the Human Systems Engineering and Design II: Proceedings of the 2nd International Conference on Human Systems Engineering and Design (IHSED2019): Future Trends and Applications, Universität der Bundeswehr München, Munich, Germany, 16–18 September 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 182–188. [Google Scholar]
  32. Karakaya, B.; Bengler, K. Minimal risk maneuvers of automated vehicles: Effects of a contact analog head-up display supporting driver decisions and actions in transition phases. Safety 2023, 9, 7. [Google Scholar] [CrossRef]
  33. He, X.; Stapel, J.; Wang, M.; Happee, R. Modelling perceived risk and trust in driving automation reacting to merging and braking vehicles. Traffic Psychol. Behav. 2022, 86, 178–195. [Google Scholar] [CrossRef]
  34. Stapel, J.; Gentner, A.; Happee, R. On-road trust and perceived risk in Level 2 automation. Transp. Res. Part F Traffic Psychol. Behav. 2022, 89, 355–370. [Google Scholar] [CrossRef]
  35. Karakaya, B.; Kalb, L.; Bengler, K. A video survey on minimal risk maneuvers and conditions. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Online, 5–9 October 2020; SAGE Publications: Los Angeles, CA, USA, 2020; Volume 64, pp. 1708–1712. [Google Scholar]
  36. Nordhoff, S.; Kyriakidis, M.; van Arem, B.; Happee, R. A multi-level model on automated vehicle acceptance (MAVA): A review-based study. Theor. Issues Ergon. Sci. 2019, 20, 682–710. [Google Scholar] [CrossRef]
  37. Zhang, T.; Tao, D.; Qu, X.; Zhang, X.; Lin, R.; Zhang, W. The Roles of Initial Trust and Perceived risk in Public’s Acceptance of Automated Vehicles. Transp. Res. Part C Emerg. Technol. 2019, 98, 207–220. [Google Scholar] [CrossRef]
  38. Every, J.L.; Barickman, F.; Martin, J.; Rao, S.; Schnelle, S.; Weng, B. A novel method to evaluate the safety of highly automated vehicles. In International Technical Conference on the Enhanced Safety of Vehicles; NHTSA: Detroit, MI, USA, 2017. [Google Scholar]
  39. Mbelekani, N.Y.; Bengler, K. Learning Design Strategies for Optimizing User Behaviour Towards Automation: Architecting Quality Interactions from Concept to Prototype. In HCI in Mobility, Transport, and Automotive Systems; Krömker, H., Ed.; HCII 2023. Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2023; Volume 14048. [Google Scholar] [CrossRef]
  40. Mbelekani, N.Y.; Bengler, K. Learnability in Automated Driving (LiAD): Concepts for applying learnability engineering (CALE) based on long-term learning effects. Information 2023, 14, 519. [Google Scholar] [CrossRef]
  41. Mbelekani, N.Y.; Bengler, K. Interdisciplinary Industrial Design Strategies for Human-Automation Interaction: Industry Experts’ Perspectives. Interdiscip. Pract. Ind. Des. 2022, 48, 132–141. [Google Scholar] [CrossRef]
  42. Hock, P.; Kraus, J.; Walch, M.; Lang, N.; Baumann, M. Elaborating Feedback Strategies for Maintaining Automation in Highly Automated Driving. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’16), Ann Arbor, MI, USA, 24–26 October 2016. [Google Scholar]
  43. Thomaz, A.L.; Breazeal, C. Teachable robots: Understanding human teaching behavior to build more effective robot learners. Artif. Intell. 2008, 172, 716–737. [Google Scholar] [CrossRef]
  44. Baum, E.B.; Lang, K. Query Learning can work Poorly when a Human Oracle is Used. Neural Netw. 1992, 8, 8. [Google Scholar]
  45. Amershi, S.; Fogarty, J.; Kapoor, A.; Tan, D. Overview Based Example Selection in End User Interactive Concept Learning. In Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology (UIST ’09), Bend, OR, USA, 29 October–2 November 2022; pp. 247–256. [Google Scholar] [CrossRef]
  46. Gurevich, N.; Markovitch, S.; Rivlin, E. Active Learning with Near Misses. American Association for Artificial Intelligence (AAAI) 2006, 362–367. Available online: https://cdn.aaai.org/AAAI/2006/AAAI06-058.pdf (accessed on 10 June 2025).
  47. Amershi, S.; Cakmak, M.; Knox, W.B.; Kulesza, T. Power to the people: The role of humans in interactive machine learning. Ai Mag. 2014, 35, 105–120. [Google Scholar] [CrossRef]
  48. Tong, S.; Chang, E. Support Vector Machine Active Learning for Image Retrieval. In Proceedings of the Ninth ACM International Conference on Multimedia (MULTIMEDIA 2001), Ottawa, ON, Canada, 30 September–5 October 2001; pp. 107–118. [Google Scholar] [CrossRef]
  49. Fails, J.A.; Olsen, D.R., Jr. Interactive Machine Learning. In Proceedings of the 8th International Conference on Intelligent User Interfaces (IUI ’03), Miami, FL, USA, 12–15 January 2003; pp. 39–45. [Google Scholar] [CrossRef]
  50. Dey, A.K.; Hamid, R.; Beckmann, C.; Li, I.; Hsu, D. A CAPpella: Programming by Demonstration of Context-Aware Applications. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’04), Vienna, Austria, 24–29 April 2004; pp. 33–40. [Google Scholar] [CrossRef]
  51. Steinder, M.; Sethi, A.S. A Survey of Fault Localization Techniques in Computer Networks. Sci. Comput. Program. 2004, 53, 165–194. [Google Scholar] [CrossRef]
  52. Fogarty, J.; Tan, D.; Kapoor, A.; Winder, S. CueFlik: Interactive Concept Learning in Image Search. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’08), Florence, Italy, 5–10 April 2008; pp. 29–38. [Google Scholar] [CrossRef]
  53. Jain, P.; Kulis, B.; Dhillon, I.S.; Grauman, K. Online Metric Learning and Fast Similarity Search. NIPS 2008, 761–768. Available online: https://proceedings.neurips.cc/paper_files/paper/2008/file/aa68c75c4a77c87f97fb686b2f068676-Paper.pdf (accessed on 10 June 2025).
  54. Ritter, A.; Basu, S. Learning to Generalize for Complex Selection Tasks. In Proceedings of the 14th International Conference on Intelligent User Interfaces (IUI ’09), Sanibel Island, FL, USA, 8–11 February 2009; pp. 167–176. [Google Scholar] [CrossRef]
  55. Amershi, S.; Fogarty, J.; Kapoor, A.; Tan, D. Examining Multiple Potential Models in End-User Interactive Concept Learning. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’10), Atlanta, GA, USA, 10–15 April 2010; pp. 1357–1360. [Google Scholar] [CrossRef]
  56. Amershi, S.; Lee, B.; Kapoor, A.; Mahajan, R.; Christian, B. CueT: Human-Guided Fast and Accurate Network Alarm Triage. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’11), Vancouver, BC, Canada, 7–12 May 2011; pp. 157–166. [Google Scholar] [CrossRef]
  57. Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; van den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. Mastering the game of Go with deep neural networks and tree search. Nature 2016, 529, 484–489. [Google Scholar] [CrossRef]
  58. Silver, D.; Schrittwieser, J.; Simonyan, K.; Antonoglou, I.; Huang, A.; Guez, A.; Hubert, T.; Baker, L.; Lai, M.; Bolton, A.; et al. Mastering the game of Go without human knowledge. Nature 2017, 550, 354–359. [Google Scholar] [CrossRef] [PubMed]
  59. Murugan, S.; Sampathkumar, A.; Raja, S.K.S.; Ramesh, S.; Manikandan, R.; Gupta, D. Autonomous Vehicle Assisted by Heads up Display (HUD) with Augmented Reality Based on Machine Learning Techniques. In Virtual and Augmented Reality for Automobile Industry: Innovation Vision and Applications; Springer: Cham, Switzerland, 2022; pp. 45–64. [Google Scholar]
  60. Goldberg, K. What is automation? IEEE Trans. Autom. Sci. Eng. 2011, 9, 1–2. [Google Scholar] [CrossRef]
  61. Mbelekani, N.Y.; Bengler, K. Traffic and Transport Ergonomics on Long Term Multi-Agent Social Interactions: A Road User’s Tale; HCII 2022, LNCS 13520; Rauterberg, M., Fui-Hoon Nah, F., Siau, K., Krömker, H., Wei, J., Salvendy, G., Eds.; Springer: Cham, Switzerland, 2022; pp. 499–518. [Google Scholar] [CrossRef]
  62. Tran, T.T.M.; Parker, C.; Wang, Y.; Tomitsch, M. Designing wearable augmented reality concepts to support scalability in autonomous vehicle–pedestrian interaction. Front. Comput. Sci. 2022, 4, 39. [Google Scholar] [CrossRef]
  63. Bengler, K.; Rettenmaier, M.; Fritz, N.; Feierle, A. From HMI to HMIs: Towards an HMI Framework for Automated Driving. Information 2020, 11, 61. [Google Scholar] [CrossRef]
  64. Mirnig, A.G.; Marcano, M.; Trösterer, S.; Sarabia, J.; Diaz, S.; Dönmez Özkan, Y.; Sypniewski, J.; Madigan, R. Workshop on Exploring Interfaces for Enhanced Automation Assistance for Improving Manual Driving Abilities. In Proceedings of the 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ‘21 Adjunct), Leeds, UK, 13–14 September 2021; pp. 178–181. [Google Scholar] [CrossRef]
  65. Haeuslschmid, R.; Buelow, M.; Pfleging, B.; Butz, A. Supporting Trust in Autonomous Driving. In Proceedings of the ACM International Conference on Intelligent User Interfaces (IUI ’17), Limassol, Cyprus, 2017. [Google Scholar]
  66. Forster, Y.; Hergeth, S.; Naujoks, F.; Krems, J.; Keinath, A. User Education in Automated Driving: Owner’s Manual and Interactive Tutorial Support Mental Model Formation and Human-Automation Interaction. Information 2019, 10, 143. [Google Scholar] [CrossRef]
  67. Watson, D.; Clark, L.A. The PANAS-X: Manual for the Positive and Negative Affect Schedule–Expanded Form; Unpublished manuscript; University of Iowa: Iowa City, IA, USA, 1994. [Google Scholar]
  68. Watson, D.; Vaidya, J. Mood measurement: Current status and future directions. Handb. Psychol. Res. Methods Psychol. 2003, 2, 351–375. [Google Scholar]
  69. Balters, S.; Steinert, M. Capturing emotion reactivity through physiology measurement as a foundation for affective engineering in engineering design science and engineering practices. J. Intell. Manuf. 2017, 28, 1585–1607. [Google Scholar] [CrossRef]
  70. Watson, D.; Tellegen, A. Toward a consensual structure of mood. Psychol. Bull. 1985, 98, 219. [Google Scholar] [CrossRef]
  71. Fredrickson, B.L.; Tugade, M.M.; Waugh, C.E.; Larkin, G.R. What good are positive emotions in crisis? A prospective study of resilience and emotions following the terrorist attacks on the United States on September 11th, 2001. J. Personal. Soc. Psychol. 2003, 84, 365–376. [Google Scholar] [CrossRef]
  72. Ekman, P.; Friesen, W.V. Facial Action Coding System (FACS) [Database record]. Environmental Psychology & Nonverbal Behavior. APA PsycTests 1978. [Google Scholar] [CrossRef]
  73. Gottman, J.M.; Krokoff, L.J. Marital interaction and satisfaction: A longitudinal view. J. Consult. Clin. Psychol. 1989, 57, 47. [Google Scholar] [CrossRef]
  74. Bachorowski, J.-A.; Owren, M.J. Vocal expression of emotion: Acoustic properties of speech are associated with emotional intensity and context. Psychol. Sci. 1995, 6, 219–224. [Google Scholar] [CrossRef]
  75. Russell, J.A.; Bachorowski, J.-A.; Fernández-Dols, J.-M. Facial and vocal expressions of emotion. Annu. Rev. Psychol. 2003, 54, 329–349. [Google Scholar] [CrossRef] [PubMed]
  76. Coulson, M. Attributing emotion to static body postures: Recognition accuracy, confusions, and viewpoint dependence. J. Nonverbal Behav. 2004, 28, 117–139. [Google Scholar] [CrossRef]
  77. Dael, N.; Mortillaro, M.; Scherer, K.R. The body action and posture coding system (BAP): Development and reliability. J. Nonverbal Behav. 2012, 36, 97–121. [Google Scholar] [CrossRef]
  78. Levenson, R.W. The autonomic nervous system and emotion. Emot. Rev. 2014, 6, 100–112. [Google Scholar] [CrossRef]
  79. Mauss, I.B.; Robinson, M.D. Measures of emotion: A review. Cogn. Emot. 2009, 23, 209–237. [Google Scholar] [CrossRef]
  80. Stilgoe, J. Machine learning, social learning and the governance of self-driving cars. Soc. Stud. Sci. 2018, 48, 25–56. [Google Scholar] [CrossRef]
  81. Carsten, O.; Martens, M.H. How can humans understand their automated cars? HMI principles, problems and solutions. Cogn. Technol. Work. 2019, 21, 3–20. [Google Scholar] [CrossRef]
  82. Strömberg, H.; Pettersson, I.; Andersson, J.; Rydström, A.; Dey, D.; Klingegård, M.; Forlizzi, J. Designing for social experiences with and within autonomous vehicles—Exploring methodological directions. Des. Sci. 2018, 4, E13. [Google Scholar] [CrossRef]
  83. Flemisch, F.; Kelsch, J.; Löper, C.; Schieben, A.; Schindler, J. Automation spectrum, inner/outer compatibility and other potentially useful human factors concepts for assistance and automation. In Human Factors for Assistance and Automation; de Waard, D., Flemisch, F.O., Lorenz, B., Oberheid, H., Brookhuis, K.A., Eds.; Shaker Publishing: Maastricht, The Netherlands, 2008; pp. 1–16. [Google Scholar]
  84. Zimmerman, J.; Forlizzi, J. Research through Design in HCI. In Ways of Knowing in HCI.; Olson, J., Kellogg, W., Eds.; Springer: New York, NY, USA, 2014. [Google Scholar]
Figure 1. Safety strategy for automated driving crash risks.
Figure 2. Profiling automated driving crash causation.
Figure 3. Scenes and probable cause of automated driving crash eventology [16,17,18,19,20,21].
Figure 4. Examining co-risk factors and long-term effects.
Figure 5. IxDS grounded on the KISS method.
Figure 6. Phased development process for iRisk and AI rHMI systems, incorporating quality engineering and human factors engineering principles across six design stages.
Figure 7. Case analysis: INEX AI rHMI.
Figure 8. iRisk’s in-vehicle infotainment and information communication AI rHMI platform.
Figure 9. The two-dimensional structure of affect. Source: Ref. [70] (p. 221); copyright 1985 by the American Psychological Association.
Figure 10. Creating an ontology of long-term effects for iRisk knowledge training.
Figure 11. AI-based neuroelectronics for crash risk related to automated driving.
Figure 12. Dynamic HMI (dHMI) framework [63], including vehicle HMI (vHMI), infotainment HMI (iHMI), automation HMI (aHMI), and external HMI (eHMI).
Figure 13. Process application.
Figure 14. Dynamic workflow.
Figure 15. First-use settings.
Figure 16. Look and feel: screens.
Table 1. Examples of user-interactive ML systems applications in HCI.
Applications of User-Interactive ML Systems
  • Crayons: Supports interactive training of pixel classifiers for image segmentation in camera-based applications. [49]
  • CAPpella: Programming by demonstration of context-aware applications; enables end-user training of ML for context detection in sensor-equipped environments. [50]
  • CueT: Human-guided fast and accurate network alarm triage. The system uses interactive machine learning to learn from the triaging decisions of operators, combining novel visualisations and interactive machine learning to deal with a highly dynamic environment where the groups of interest are not known a priori and evolve constantly. [56]
  • CueFlik: Interactive concept learning in image search. [52]
  • AlphaGo: A Go-playing computer program created by Google’s DeepMind, the first program to defeat a world champion at the game. Using deep neural networks, AlphaGo’s tree search evaluated positions and selected moves; these networks were trained via (1) supervised learning from human expert moves and (2) reinforcement learning from self-play. [57,58]
Table 2. Extracted experts’ views.
Industry | Concepts
Automated flying (Aircraft Experts)
  • “The system needs to talk to me and make itself transparent, show me what it is doing right now.” (AE2)
  • “The automated system gives the pilot hints, so autopilot flight system has something like a supervisor, which tells the pilots, right now there was a calculation error for example, maybe you must not trust the flight system for the next minute.” (AE3)
  • “Always propose that manufacturers should implement something like a risk block in the cockpit, where pilots can see what the risk for dying is at the moment.” (AE3)
  • “You did not sleep that well, and then it says today it’s 10 times risk to die because of your sleeping experience.” (AE3)
  • “The automation system is performing fine, and everything’s fine” then the system says, “pilot Steve as you can see today it’s pretty safe to fly.” (AE3)
  • “If you switch off the Autopilot, then you are notified that it’s around 100 times higher the risk of dying in the aircraft, or having a huge accident, instead of flying or keeping the Autopilot system on.” … “So, the chance that you are going to die in the aircraft is 100 times higher if you fly manually instead of automatically.” (AE3)
  • “Threat and error management is a major subject, and a major part of training, so, how to recognise threats, to avoid and deal with crew errors, and how to avoid undesired aircraft states.” (AE1)
  • “Two main components of the threat error management are unexperienced pilots or first-time automation users will make errors, the experienced pilot will see them, and he will intervene, he will tell the unexperienced co-pilot, look at this situation you should use this mode.” (AE1)
  • “Would be a coordination of time slots on the ground with an automated system”, for example, “it says I have to adjust the hold light in order to get into the parking stand at the correct time”. (AE3)
Automated driving (Car Experts)
  • “In-vehicle guides where it says to the user, are you interested in the lane keeping aid system, I will tell you how it works; I will talk you through activation, so a robot voice out of the centre stack.” “Okay, now what do I do, and the system says, well put the car in the middle and push this button, and say done when you’re done, so now you should experience this.” (CE1)
  • “If we get more competent systems that gets to know you a bit more or understands how you behave, like you can hopefully adapt it a bit more.” (CE3)
  • “Not sure of products on the market which already have that kind of learning algorithms”. (CE4)
  • “Digital virtual assistants sound like an interesting idea, but we have not seen really good examples.” “Something like virtual assistants, like someone explaining the system to you.” (CE6)
  • “To design the UI so that a person who knew nothing, if that feature got engaged, that they would be safe, then as they use the feature that is where any learning has to happen.” (CE2)
  • “Interfaces where it’s not like the existing interfaces we see in current vehicles, but it’s quite unique, using the wind screen as a form of cockpit and different other elements or objects in the vehicle as some sort of communication channels.” (CE1)
Automated trucking (Experts), automated farming (Experts)
  • “Very simple and straightforward dialogue with the computer assistant.” (CE1)
  • “We do not have that adaptive interface, like the first time you turn on you get it, which is sophisticated and straightforward.” (TE2)
  • “It can be virtual…; it would be nice to see what the consequences are and to experience the consequences of you doing so.” (SFE4)
Table 3. The main goals.
Leading Safe Automated Driving
  • Proper situational awareness
  • Calibrated trust
  • Lack of misuse
  • Absence of harmful NDRT habits
Tracking Risk Indicators
  • Constant monitoring of the user’s states (ECG, sleep pattern tracking, stress, distraction, workload, engagement, fatigue detection, etc.) during automated driving.
  • Constant information transfer from the system to the user.
Emergency Response
  • Communicate relevant information to users.
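As an illustration of how the tracked indicators above could feed a coarse risk classification, the following sketch (not the authors’ implementation; all field names, weights, and thresholds are illustrative assumptions) maps monitored user-state estimates to the low/mid/high perceived crash-risk levels and to a candidate rHMI message.

```python
# Minimal sketch: aggregating monitored user-state indicators into the
# low/mid/high crash-risk classes described for iRisk. Field names, weights,
# and thresholds are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class UserState:
    fatigue: float      # 0.0 (alert) .. 1.0 (severely fatigued), e.g. from eye/ECG features
    stress: float       # 0.0 .. 1.0, e.g. from heart-rate variability
    distraction: float  # 0.0 .. 1.0, e.g. from gaze-off-road ratio
    workload: float     # 0.0 .. 1.0, e.g. from secondary-task engagement


def risk_level(state: UserState) -> str:
    """Aggregate indicators into a coarse risk class for the AI rHMI."""
    score = (0.35 * state.fatigue + 0.25 * state.distraction
             + 0.25 * state.stress + 0.15 * state.workload)
    if score >= 0.66:
        return "high"
    if score >= 0.33:
        return "mid"
    return "low"


def rhmi_message(level: str) -> str:
    """Pick a verbal message the rHMI could communicate for each risk class."""
    return {
        "low": "Everything looks fine. Automation is performing normally.",
        "mid": "I am noticing signs of reduced attention. Please stay available.",
        "high": "High crash risk detected. Please take a break or resume monitoring now.",
    }[level]


if __name__ == "__main__":
    state = UserState(fatigue=0.7, stress=0.4, distraction=0.5, workload=0.3)
    level = risk_level(state)
    print(level, "->", rhmi_message(level))
```

In a deployed system, the hand-tuned weights would presumably be replaced by a learned model, but the interface (user state in, risk class and message out) would remain the same.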
Personalised Interaction Design
Establish emotional connection and iRisk
The greatest value of emotional connection is the sense of immersion it creates, which helps build user trust. Simple interaction is not enough; emotional design fosters a belief in consistent positive experiences. In the short term, it enhances user response to stimuli by amplifying comfort and reducing stress, encouraging deeper engagement with the AV’s OS (iRisk), the AI rHMI, or the in-vehicle AI social robot. iRisk plays two roles:
  • Expressing its own emotions through facial expressions, gestures, tone, and content via the AI-based rHMI robot.
  • Arousing user emotions by responding empathetically to context and user behaviour.
While emotional reactions should be scenario-specific in the short term, the long-term goal is to build an AI-driven emotional intelligence engine. This engine would assess user mood, mental state, and contextual cues to deliver emotionally appropriate responses. For instance, if fatigue is inferred, iRisk might offer persuasive safety advice, likely prompting a helpful emotional reaction.
Over time, this adaptive and emotionally aware system could form genuine emotional bonds with users, enriching the in-car experience and promoting safer, more personalized driving.
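One way such an engine might begin, before a learned model is available, is as a simple state-and-context rule table. The sketch below is only illustrative; the state labels, contexts, and responses are assumptions rather than part of the iRisk specification.

```python
# Illustrative, rule-based stand-in for the "emotional intelligence engine" idea:
# choose an emotionally appropriate iRisk response from an inferred user state
# and driving context. All states, contexts, and responses are assumed examples.

def select_response(inferred_state: str, context: str) -> dict:
    """Return tone and content for the AI rHMI, conditioned on state and context."""
    rules = {
        ("fatigued", "monotonous_highway"): {
            "tone": "calm, persuasive",
            "content": "You seem tired. A rest stop is coming up shortly; shall I plan it?",
        },
        ("stressed", "dense_urban_traffic"): {
            "tone": "reassuring",
            "content": "Traffic is heavy, but the automation is handling it. I will alert you if anything changes.",
        },
        ("relaxed", "monotonous_highway"): {
            "tone": "light, informative",
            "content": "All systems are performing normally. Current crash-risk estimate: low.",
        },
    }
    # Fall back to a neutral, transparent message when no rule matches.
    return rules.get(
        (inferred_state, context),
        {"tone": "neutral", "content": "Automation active. I am monitoring the situation."},
    )


if __name__ == "__main__":
    print(select_response("fatigued", "monotonous_highway"))
```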
Emotional Design in Intelligent Systems, iRisk
The iRisk project explores how to meet drivers’ emotional and psychological needs in real-world scenarios. Emotions are influenced by both immediate context and deeper personal states. For example, a system saying “Don’t worry, I will support you” may comfort one user but feel intrusive to another, depending on their emotional state. Since such reactions cannot be predicted through logic alone, iRisk uses AI and big data to improve emotional accuracy by analysing user behaviour to increase the likelihood of emotionally appropriate responses. Emotional design under iRisk focuses on building context-driven interactions that enhance positive feelings and reduce stress. This is achieved through:
  • Expression forms: facial expressions, gestures, voice tone, and appearance, etc.
  • Content: proactive suggestions, real-time feedback, and context-aware dialogue, etc.
  • Personality: a consistent, caring, polite persona that users can relate to and grow with.
By combining these elements, iRisk helps create emotionally intelligent in-car experiences that feel human, supportive, and adaptive.
Designing the in-vehicle AI robot persona: inspired by expectations, grounded in reality
User expectations for AI robots are often shaped by popular media: wise companion (Knight Rider), strong protector (I, Robot), warm caregiver (Baymax), etc. These characters represent powerful emotional and functional roles. However, in the driving context, a vehicle remains primarily a tool, so the intelligent character must be designed with practicality and emotional relevance in mind.
What kind of character do we want to offer in the car? Traits for the driving context:
  • Simple and dependable: Prioritizes stability and clarity over complex intelligence, reinforcing trust and use.
  • Notably caring and polite: Warm, considerate, and polite behaviour builds emotional comfort.
  • Knowledgeable and relatable: A robust design helps calibrate user expectations, boosts tolerance, and creates a sense of growth mindset.
Preset character profile: A character that is smart, confident, mindful, responsive, decisive, and efficient, always acting from the user’s perspective, adding depth and relatability. This persona design aligns with iRisk’s goals: building emotional connection, enhancing user trust, and humanizing in-vehicle AI through emotionally intelligent interaction and UX.
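To make the preset profile concrete, the following hypothetical configuration sketches how such persona traits might be encoded so the AI rHMI can phrase messages consistently; the trait names, parameters, and default values are illustrative assumptions, not part of the published concept.

```python
# Hypothetical persona preset: encoding the traits listed above as a
# configuration consulted by the AI rHMI when phrasing messages.
from dataclasses import dataclass


@dataclass
class PersonaProfile:
    name: str = "iRisk assistant"   # placeholder name, not from the paper
    traits: tuple = ("simple", "dependable", "caring", "polite", "knowledgeable")
    politeness_level: float = 0.9   # 0..1, governs how strongly warnings are softened
    proactivity: float = 0.6        # 0..1, how often unsolicited advice is offered
    verbosity: str = "concise"      # "concise" or "detailed"


def phrase_warning(persona: PersonaProfile, raw_warning: str) -> str:
    """Wrap a raw system warning in persona-consistent, polite phrasing."""
    if persona.politeness_level >= 0.8:
        return f"Please note: {raw_warning}. I am here to help if you need me."
    return raw_warning


if __name__ == "__main__":
    preset = PersonaProfile()
    print(phrase_warning(preset, "lane-keeping support will be unavailable ahead"))
```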
User-centred design requirements
The system must be adaptable to different user types, with a user interface that is intuitive and accessible. Design considerations include clear visual contrast for colours to ensure easy distinction of elements, smooth and understandable screen transitions to support user orientation, and simplified workflows to reduce cognitive load and operational effort. User analysis: the system applies to different user types, including tech-savvy, tech-illiterate, and tech-literate users, and considers early users, mid-level users, and long-term users.
Users should be able to easily check whether their vital states are within normal ranges, access detailed information on each vital, and view analysis over a selected period. The goal is to deliver a low-effort, high-clarity experience that supports both quick insights and deeper understanding.
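A minimal sketch of this “quick insight” requirement, under assumed vital names and normal ranges, might summarise a selected period and flag out-of-range vitals as follows; the vitals, ranges, and data layout are illustrative assumptions.

```python
# Illustrative summary of monitored vitals over a selected period:
# per-vital mean plus an in/out-of-range flag for at-a-glance checking.
# Requires Python 3.9+ for built-in generic type hints.
from statistics import mean

# Assumed "normal" ranges for the example; a real system would use validated limits.
NORMAL_RANGES = {
    "heart_rate_bpm": (50, 100),
    "breathing_rate_rpm": (12, 20),
    "stress_index": (0.0, 0.6),
}


def vitals_overview(samples: dict[str, list[float]]) -> dict[str, dict]:
    """Return the period mean for each vital and whether it lies within the normal range."""
    overview = {}
    for vital, values in samples.items():
        lo, hi = NORMAL_RANGES[vital]
        avg = mean(values)
        overview[vital] = {"mean": round(avg, 2), "in_range": lo <= avg <= hi}
    return overview


if __name__ == "__main__":
    period = {
        "heart_rate_bpm": [72, 80, 95, 110],
        "breathing_rate_rpm": [14, 16, 15, 18],
        "stress_index": [0.3, 0.5, 0.7, 0.65],
    }
    print(vitals_overview(period))
```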
Design and experience objectives
The system experience should be friendly, polite, intuitive, and easy to use. The graphical interface must clearly convey information about the monitored situation, encouraging engagement through simplicity and usability. It should also instil a sense of security, especially in emergency situations.
The broader goal is to co-design a connected car service that offers a rich range of in-vehicle content, powered by technologies such as ML/DL, LLM, Big Data and Neuro-Ergonomics, Maps, AI, and Portal Services. Additionally, the aim is to advance in-vehicle voice recognition through NLP and to develop intelligent vehicle-to-everything services, enhancing the seamless integration of mobility and lifestyle.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.