Abstract
Driver monitoring systems (DMSs), advanced driver assistance systems (ADASs), and technologies for autonomous driving, along with other upcoming innovations, have been developed as possible solutions to minimize accidents resulting from human error. This paper presents a thorough review of DMSs and user experience (UX). The objective is to investigate, combine, and evaluate the key elements involved in the development and application of DMSs, as well as the UX factors relevant to the current landscape of the field, serving as a reference for future investigations. The review encompasses a bibliographic analysis performed at different stages, offering valuable insights into the evolution of the topic. It examines the processes of development and implementation of driver monitoring systems. Furthermore, this work facilitates future research by consolidating and presenting a valuable collection of identified datasets, both public and private, for various research purposes. From this evaluation, critical components for DMSs can be identified, establishing a foundation for future research by providing a framework for the adoption and integration of these systems.
1. Introduction
The Instituto Nacional de Estadística y Geografía (INEGI) defines a traffic accident as an unexpected event that arises from foreseeable conditions combined with irresponsible human actions, leading to consequences such as loss of life, injuries, physical or psychological harm, or material damage [1].
The National Highway Traffic Safety Administration (NHTSA), in its Traffic Safety Facts report, further highlights the predominant role of human involvement by attributing approximately 94% of traffic accidents to driver-related factors. The remaining percentage is attributed to mechanical failures, adverse environmental conditions, and other unidentified causes [2]. Within driver-related factors, the most common contributors include recognition errors (such as delayed hazard detection), decision errors (incorrect judgment or risk assessment), performance errors (improper vehicle handling), and non-performance errors (such as fatigue or inattention).
Taken together, these findings consistently position the driver as the primary source of traffic accidents. Driving, therefore, must be understood as an inherently demanding and complex activity, requiring continuous vigilance, rapid decision-making, and sustained concentration. The driving environment is dynamic and unpredictable, shaped by a wide range of situational factors such as sudden changes in weather, variations in road infrastructure, and fluctuating traffic conditions. These unexpected challenges not only test a driver’s cognitive and physical abilities but also expose the limitations of human performance under stress and fatigue. As a result, the driver’s capacity to adapt effectively to such conditions is critical for road safety, while also revealing why human error remains the dominant cause of traffic accidents worldwide.
Driving impairments can arise from a wide range of causes, producing diverse effects on the driver, including alterations in neuronal activity, emotional state, cognitive abilities, and physiological or psychological conditions.
Research based on Russell’s Circumplex Model of Affect indicates that an optimal driving state is associated with a moderate level of arousal and a medium-to-high degree of emotional balance [3,4]. As illustrated in Figure 1, this condition implies that drivers should remain alert yet comfortable, avoiding states of drowsiness, fatigue, or extreme emotional fluctuations that can compromise performance.
Figure 1.
Optimal driver condition in Russell’s circumplex model of affect. Adapted from [3].
Within this context, driver monitoring systems (DMSs), advanced driver assistance systems (ADASs), autonomous driving technologies, and other emerging innovations have been developed as potential solutions to mitigate accidents attributable to human factors. These systems aim to enhance safety by monitoring the driver’s state, issuing timely alerts, and, in some cases, executing corrective or preventive actions before the driver can respond. However, significant challenges remain before highly reliable systems can be fully integrated into vehicles and seamlessly adopted by users in the long term. As previously mentioned, driving is a demanding task that requires the driver’s full capabilities to remain unimpaired.
Therefore, the potential impact of these systems on vehicle operation, as well as the risks associated with improper integration, should not be overlooked. Innovations in ADASs and autonomous vehicles can lead to driver over-reliance, increased distraction, or, in the event of system failure, leave the driver unprepared to respond to unexpected situations [5,6].
The existing literature has put considerable effort into developing and validating models for detecting driver behaviors, especially drowsiness and distraction. However, these studies often emphasize model accuracy at the expense of important issues like technological integration, human–machine interaction, real-world deployment, and overall user experience. Furthermore, many real-time DMS methods designed for in-vehicle use are validated only on datasets or in simulation settings. They tend to overlook that driving is a dynamic, evolving task where the driver interacts not just with the vehicle but also with a rapidly growing ecosystem of in-car technologies. Ignoring this complex, interactive environment can create additional distractions and limit the system’s effectiveness in real-world scenarios.
Current trends emphasizing comfort and the integration of advanced technologies, such as infotainment systems or touch-based Human–Machine Interfaces (HMIs) [7], transform vehicles into interactive environments but also serve as direct sources of distraction [8,9].
Therefore, considering the integration, long-term acceptance, and the impact that these innovations have on driver performance is essential for the development of new technologies, leading to more refined systems focused on road safety. This paper presents a systematic review of DMSs and user experience (UX). The aim is to analyze, synthesize, and evaluate the key elements involved in the development and application of DMSs, as well as the considerations regarding UX in the state of the art, serving as a guide for future research.
The main contributions of this work are as follows:
- Dual bibliographic analyses: This study goes beyond other reviews that usually provide just one bibliometric analysis by performing two separate bibliographic analyses at different stages of the document selection process. This dual approach not only enhances the reliability of the review but also provides a deeper insight into the evolution of the DMS field throughout the systematic review process.
- Framework and implementation diagrams: In addition to summarizing the literature, we introduce two original diagrams, one capturing the conceptual framework of DMS and another depicting its implementation process, supported by an engineering design process.
- User Experience perspective: While existing reviews typically emphasize technical components (e.g., sensors, algorithms, detection accuracy), this work highlights the underexplored dimension of UX. By discussing its relevance and implications, we broaden the scope of the field beyond system performance, acknowledging the human-centered factors essential for real-world adoption and long-term acceptance.
- Comprehensive dataset synthesis: To support reproducibility and future research, this review compiles an extensive table of the datasets identified, something that prior works often mention only superficially or in scattered form. This consolidated resource serves as a practical reference point for researchers, enabling more efficient dataset selection and comparative studies.
Together, these contributions offer a rich and integrative perspective of the DMS, advancing both the conceptual understanding and the practical development of driver monitoring.
The remainder of this manuscript is structured as follows: Section 2 outlines the methodology used for selecting the articles for evaluation. Section 3 presents the results, beginning with two bibliographic analyses that describe the topic from a general perspective before moving to a final, in-depth analysis. This section also introduces two diagrams: one for the DMS framework and another for its implementation. It also delves into the topic of user experience in the reviewed literature. Section 4 provides a discussion of the challenges, terminology, and areas of opportunity related to DMSs. Finally, Section 5 presents the conclusions and future work resulting from this review.
2. Methodology
For this literature review, we adopted a systematic approach focusing on both driver monitoring and user experience. This methodology ensures academic rigor and enables the identification of the most relevant insights in the field, highlighting key elements in the literature such as the application of Artificial Intelligence (AI) and the importance of a multimodal framework. Building on this foundation, the subsequent phase of the methodology involves establishing the key guiding elements of the review. This process includes developing the primary research question, detailing specific hypotheses, and defining clear inclusion and exclusion criteria. By accomplishing this, the review maintains alignment between its aims and methodological choices that influence the search strategy, data extraction, and eventual synthesis of the findings. The following sections provide a detailed overview of each component to demonstrate how the systematic review was organized and executed.
2.1. Research Questions
- What are the approaches in the literature on the development and integration of driver monitoring systems?
- How has user experience been addressed in the design, evaluation, and implementation of driver monitoring systems?
- What challenges and areas of opportunity exist in the state of the art regarding driver monitoring and user experience?
2.2. Hypothesis
From this analysis, the key elements for driver monitoring can be identified, serving as a foundation for future research in the field by establishing a process for the development and implementation of driver monitoring systems. This will enable future DMSs to be perceived as more reliable, non-intrusive, and aligned with user needs, thereby facilitating their integration and long-term use.
2.3. Keywords
Driver Monitoring; Driver Behavior; Driver Emotions; Driver State; Driving Simulator; User Experience.
2.4. Research Definition
The search process was conducted using documents published in both Scopus and Web of Science. The search strategies employed to explore the aforementioned databases were as follows:
- “Driver Monitoring” AND “User Experience”
- (“Driver Emotions” OR “Driver Behavior” OR “Driver State”) AND “Driver Monitoring”
- (“Behavioral Research” OR “Driving Monitoring”) AND “User Experience” AND “Driving Simulator”
- (“Driver behavior” OR “Driver emotions” OR “Driver state”) AND “User experience”
- “Driving Simulator” AND “Driver Monitoring”
The main criteria for selecting the documents focused on their relevance to the topic, language, and type of study. This led to the inclusion of documents published between 2019 and 2024, in either English or Spanish, and limited exclusively to journal or conference articles. Conversely, articles that did not primarily focus on driver monitoring or that were review papers or discussion pieces were excluded from the review. This selection process is also reflected in Table A1.
2.5. Search Results
In Figure 2, the flowchart used for this systematic review is presented, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology [10]. The process began with the identification phase, where documents were obtained from the Scopus and WOS databases by applying the search strategies and filtering criteria described in Table A1. The results of each search strategy are detailed in Table A2.
Figure 2.
PRISMA flow diagram.
Initially, 466 documents were identified. After removing duplicates, the set was reduced to 297 for the first screening process. A detailed bibliographic analysis, discussed in Section 3, revealed the significant impact of autonomous vehicles in the research area. Many documents emphasized autonomous vehicle technology rather than DMSs and UX, which were the main focus of this review. To improve relevance, a new screening step was implemented to exclude documents centered on automated vehicles not directly related to DMSs or UX. This resulted in 169 papers for further analysis.
The documents were classified based on the type of evaluation: whether it was at a conceptual level, in a driving simulator, in an equivalent driving environment, or in a real-world driving task. This initial classification was performed using the titles and abstracts of the documents; if the available information proved insufficient, a more thorough review of the documents was undertaken.
In the course of this second screening process, a total of 35 documents were determined to be outside the scope of the analysis. Additionally, 56 documents focused exclusively on datasets or their evaluation. Further, two papers were identified as purely theoretical in nature, and 14 were classified as surveys or review articles. Finally, two additional papers were excluded due to eligibility criteria, as they were not published in English or Spanish. Consequently, this screening process yielded a final set of 60 documents that form the foundation for this systematic review.
3. Results
During the identification stage, a total of 169 documents were retrieved. The countries with the largest number of publications were Germany and the United States, with 22 and 20 articles, respectively, followed by Russia, Italy, and China, each contributing between 11 and 12 articles. From a regional perspective, Europe and Asia stand out as the continents with the highest research activity in driver monitoring, while in the Americas, the United States clearly leads this line of research. These patterns are illustrated in the map presented in Figure 3. The map was created using the SCImago Graphica tool, developed by the SCImago Research Group [11].
Figure 3.
Map of documents by country from the identification stage.
Two bibliographic analyses were conducted at different stages of the screening process, offering valuable insights into the evolution of the topic. These reviews progressed from a comprehensive overview of the state of the art to a focused examination of the key components associated with DMSs. Both analyses feature two landscapes of bibliographic data, generated using the software tool VOSviewer, version 1.6.20 [12].
3.1. Initial Bibliographic Analysis
A scientific landscape analysis was performed on the 297 documents identified in the initial stage. This analysis utilized a co-occurrence methodology, focusing on all keywords, including both author keywords and index keywords, with ‘full counting’ as the counting method. A minimum occurrence threshold of 5 was applied during this process.
From the total of 1907 keywords, 138 reached the minimum threshold of five occurrences, and these were used to generate the landscape shown in Figure 4. The scientific mapping reveals six distinct clusters, each representing a major research direction within the state of the art:
Figure 4.
Landscape of the first bibliographic analysis.
- Red Cluster: This group is centered on driver monitoring and driver monitoring systems, directly associated with terms such as sleepiness, road safety, and driver fatigue. These connections underscore road safety as the primary driver of research in this domain. The cluster also highlights the most widely applied tools and methodologies in the field, including neural networks, computer vision, eye tracking, and facial recognition, which constitute the technological foundation of modern monitoring approaches.
- Green Cluster: This cluster emphasizes driver behavior and driving performance, reflecting the critical role of the human factor in both autonomous vehicle research and monitoring systems. The relationship here is bidirectional: while these technologies aim to improve performance and safety, they simultaneously influence driver behavior and workload. Hence, their integration must be carefully managed to ensure acceptance and avoid unintended negative effects.
- Purple Cluster: This cluster aligns with transportation and road safety, incorporating elements such as road monitoring, sensor technologies, and connections to driver monitoring and behavior. It reflects how advancements in intelligent vehicle systems and smart infrastructure are increasingly being leveraged for accident prevention and enhanced mobility safety.
- Blue Cluster: This group focuses on autonomous vehicles, highlighting the significance of driver monitoring in the context of different levels of autonomy. Research here stresses the continued responsibility of the user to supervise vehicle actions and maintain readiness to intervene, emphasizing the importance of human oversight in semi-automated driving.
- Yellow Cluster: This cluster centers on the use of driving simulators, which, when combined with monitoring technologies, serve diverse purposes, such as analyzing the impact of autonomous vehicles on drivers, identifying risk factors in real-time driving, assessing user experience, and supporting iterative research and system development.
- Light Blue Cluster: This cluster focuses on behavioral research, with a particular emphasis on the connection between driver behavior and monitoring technologies. A key theme is the development of HMIs, where monitoring data informs the design of interaction systems that enhance usability, acceptance, and overall user experience.
The work in driver monitoring is heavily focused on AI tools, with both Machine Learning and Deep Learning playing key roles. The landscape map shows that computer vision and eye-tracking are the main methodologies, as seen in their central positions within the red and purple clusters. These techniques mainly evaluate the driver’s attention and behavior directly. However, the inclusion of physiological signals, connected within the light purple and blue clusters, indicates a growing interest in developing multimodal DMSs. These systems combine multiple data sources for a more thorough assessment of the driver’s condition. This trend points towards more advanced, less invasive systems that can better adapt to real-world driving situations.
The primary use of DMSs is in autonomous and semi-autonomous vehicles, reflecting a major research focus on managing smooth control transitions between automated and manual driving modes. Research also emphasizes detecting critical driver states like drowsiness and distraction, which are prominent, central nodes on the map. In addition to accident prevention, there is increasing attention on mental workload and performance. This broader approach shows that the aim is not only to reduce risks but also to improve the driving experience and overall safety. The strong link between driver monitoring and safety, directly connected to the traffic accidents cluster, highlights that the main goal of these systems is accident prevention and road safety improvement. This core objective drives all technological and practical advances in the field.
3.2. Second Bibliographic Analysis
Following the initial screening process, which involved the removal of documents primarily focused on autonomous vehicles rather than driver monitoring, a second bibliometric analysis was performed using VOSviewer software. This analysis centered on author keywords to effectively capture the themes that researchers emphasize and the specific areas of focus within the literature. A threshold of two occurrences was applied, narrowing the mapping down to 45 keywords. This process resulted in the visualization depicted in Figure 5, created with a clustering resolution of 1.0.
Figure 5.
Landscape of the second bibliographic analysis.
The landscape map is organized into four main clusters. The most prominent cluster, depicted in blue, centers around driver monitoring. The green cluster concentrates on the advancement of DMSs and associated tools, such as adaptive controls, HMIs, computer vision, and driving simulators. This cluster emphasizes the safety of both drivers and passengers as the primary motivation for its focus. The red cluster encompasses the key aspects of the driver’s state, with particular attention to drowsiness and emotional well-being. Lastly, the smaller yellow cluster is dedicated to Deep Learning.
Drowsiness plays a crucial role in the realm of driver monitoring, standing out as a primary concern compared to other keywords. Its strong association with terms such as ADAS, Human–Machine Interface, Adaptive Cruise Control (ACC), and Collision Warning emphasizes its significance in accident prevention. Similarly, distraction also holds a notable position; although its prevalence is not as high as that of drowsiness, it remains an important area of focus and is likely a complementary factor in research regarding driver states.
Artificial intelligence techniques, especially Deep Learning, have emerged as a notable cluster, highlighting their increasing significance in the research and development of classifiers for driver emotions, behaviors, and states. Their close association with Driving Simulator indicates that Deep Learning is still undergoing development and testing in this domain. Consequently, it is likely that the use of Machine Learning and Deep Learning techniques in vehicles equipped with a DMS will grow in the coming years.
In general, the trend in driver monitoring research is centered on developing Artificial Intelligence techniques to detect driver distractions, drowsiness, and risky behaviors. These initiatives aim to create and implement strategies that enhance road safety across vehicles, whether they possess no automation, feature assisted driving systems, or offer varying levels of autonomy.
When it comes to UX, its involvement in this domain remains relatively limited. It is essential to note that part of the initial research strategies focused on identifying ongoing studies that explore how users interact with monitoring systems. This aspect is particularly relevant, as the integration of these technologies with vehicle occupants will significantly influence their acceptance or rejection in the medium and long term. Furthermore, poor integration can adversely affect the driver, ultimately becoming counterproductive and jeopardizing driving performance.
Nonetheless, it is crucial to emphasize that UX demonstrates significant connections to the primary topics within the landscape, maintaining direct links to each cluster. This highlights a promising area for further investigation, with substantial potential for exploration. Specifically, it paves the way for assessing DMSs throughout all development stages, from early design and testing phases to in-vehicle implementation, thereby promoting systems that are both reliable and readily accepted by users.
3.3. Final Analysis
Considering the 60 documents selected for this review, a more in-depth analysis was conducted. This analysis involved the development of a framework for DMSs, outlining a basic architecture that spans from monitoring approaches to classifiers and the target behaviors or states to be identified. Similarly, the process for implementing these systems was examined, taking into account both the intended application of the DMS and its underlying architecture.
The framework for developing a DMS can be followed using the diagram in Figure 6. This diagram outlines the entire process, from the potential technological approaches that can be adopted to the specific driver states and behaviors that can be monitored. Therefore, depending on the intended application, different types of DMSs can be designed based on the various combinations presented in the diagram. It is important to note that multimodal systems can be developed not only by combining different technological approaches but also by integrating various types of signals.
Figure 6.
Framework diagram of driver monitoring systems.
3.3.1. Monitoring Approach
Considering the various approaches to developing a DMS, current research and industrial applications can be broadly classified into three primary strategies: emotional-state assessment, physiological-state monitoring, and behavioral analysis. Each of these categories addresses a different but complementary dimension of the driver’s status, guided by three fundamental questions:
- How does the driver feel emotionally?
- What is the driver’s physical condition?
- What actions is the driver performing?
Emotional state assessment typically involves techniques such as facial expression recognition [13], voice tone analysis, or stress detection models, which aim to capture the driver’s affective state and its potential impact on driving performance [14]. Physiological monitoring, on the other hand, relies on sensors that measure indicators like heart rate variability, eye movement, or electroencephalographic (EEG) activity, offering objective evidence of fatigue, drowsiness, or medical impairments. Finally, behavioral analysis focuses on observable actions, including steering patterns [15], pedal usage [16], lane keeping, or gaze distribution [17], thereby linking driver activity to operational performance.
These approaches, whether applied independently or in combination, provide valuable insights into driver status and can be tailored for different domains. In ADAS, real-time DMS feedback supports immediate interventions, such as issuing warnings, adjusting the level of automation, or engaging emergency braking in critical situations. In fleet management and insurance, aggregated data from multiple drivers enable the generation of individualized driving profiles, which can be used for personalized coaching, predictive risk assessment, or performance benchmarking across large groups. Beyond safety, the integration of emotional, physiological, and behavioral dimensions contributes to improving user trust, enhancing comfort during automated driving, and fostering long-term acceptance of these technologies.
3.3.2. Data Acquisition
Once the monitoring approach has been defined, it is necessary to determine the type of information that will be collected from the driver and the surrounding context. This information can generally be classified into two categories:
Direct Monitoring: Refers to data obtained directly from the driver, usually through sensors or vision-based systems [18]. Examples include eye-tracking cameras, EEG, electrocardiography (ECG) [19], facial expression analysis, or steering wheel sensors [20] that capture physiological and behavioral signals. These methods aim to provide accurate insights into the driver’s state, such as drowsiness, distraction, stress, or cognitive load [21].
Indirect Monitoring: Refers to data gathered from external sources not directly linked to the driver. This includes vehicle telemetry (e.g., steering patterns, lane position, acceleration, and braking behaviors), mobile devices, roadside cameras, information from other vehicles via vehicle-to-vehicle (V2V) communication, or even data from pedestrians and traffic infrastructure. Indirect monitoring offers valuable contextual information that complements the driver’s internal state with environmental and situational factors [22].
It is worth noting that the adoption of multimodal approaches, which integrate both direct and indirect monitoring, has shown growth in recent years. Evidence suggests that relying on a single source of information may be insufficient for advanced behavior analysis, as it could overlook critical nuances of human cognition and environmental interactions. By combining multiple data streams, multimodal systems enhance robustness, improve reliability, and provide a more holistic understanding of driver behavior. Such integration is becoming a cornerstone in the development of new DMSs, enabling more accurate detection of impairments and better adaptation to real-world driving conditions.
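The fusion of direct and indirect streams described above can be sketched in a few lines. The fragment below is a minimal, illustrative pure-Python example (the function name, timestamps, and tolerance value are assumptions for illustration, not taken from the reviewed works); it pairs driver samples with near-in-time vehicle telemetry to form multimodal records:

```python
def align_streams(direct, indirect, tolerance=0.05):
    """Pair direct (driver) and indirect (vehicle/context) samples whose
    timestamps fall within a small tolerance, yielding multimodal records."""
    records = []
    for t_d, drv in direct:
        match = next((ctx for t_i, ctx in indirect
                      if abs(t_i - t_d) <= tolerance), None)
        if match is not None:
            records.append({"t": t_d, "driver": drv, "vehicle": match})
    return records

# Direct stream: (timestamp in s, heart rate in bpm).
ecg = [(0.00, 72), (0.50, 74), (1.00, 76)]
# Indirect stream: (timestamp in s, lane offset in m).
telemetry = [(0.02, 0.1), (0.98, 0.4)]
fused = align_streams(ecg, telemetry)
```

In a real system the alignment would be driven by sensor clocks and resampling rather than a fixed tolerance, but the principle of building time-aligned multimodal records is the same.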
3.3.3. Pre-Processing
The processing of driver information is a critical step for its subsequent application. This process must account for the implementation of filters, the adaptation of signals, and the creation of functional datasets. The specific type of preprocessing will depend on the signals utilized and the particular application of the DMS. Below are examples of preprocessing techniques, organized by signal type:
- Physiological [23,24,25,26,27]:
  - Noise extraction (denoising).
  - Generation of time windows.
  - Application of algorithms for signal adaptation.
- Vision Systems [28,29,30,31]:
  - Artifact filtering.
  - Counting frames and establishing time windows.
  - Image conversion (e.g., from RGB to grayscale).
  - Use of AI for detecting significant elements, poses, or faces.
  - Adjustment for varying lighting conditions.
  - Application of filters to enhance image quality.
- Eye Tracking [28,32,33]:
  - Generation of time windows.
  - Pupil detection.
  - Eye position localization.
  - Calibration of sensors based on the driver’s position.
- Vehicle Information [34,35,36]:
  - Signal filtering.
  - Extraction of relevant signals, such as acceleration, lateral position, or brake usage.
For multimodal DMSs, it is essential to adapt and integrate information, such as combining video with data from in-vehicle sensors. The objective of a multimodal system is to address the limitations associated with a single data source, resulting in a more comprehensive dataset that enhances future classification efforts.
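Two of the recurring pre-processing steps, denoising and the generation of time windows, can be sketched as follows. This is a deliberately simple pure-Python fragment (function names, the filter width, and the window parameters are hypothetical choices, not values from the cited studies):

```python
def moving_average(signal, k=3):
    """Simple denoising: average each sample with its k-1 nearest neighbors."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def time_windows(signal, window_len, step):
    """Split a sampled signal into overlapping fixed-length windows."""
    return [signal[i:i + window_len]
            for i in range(0, len(signal) - window_len + 1, step)]

# Example: denoise, then window, a short heart-rate trace (1 Hz sampling).
hr = [72, 75, 90, 74, 73, 76, 71, 74]
clean = moving_average(hr, k=3)
windows = time_windows(clean, window_len=4, step=2)
```

Production systems would typically use dedicated filters (e.g., band-pass filtering for ECG or EEG) rather than a moving average, but the pipeline shape (filter, then segment into windows for the classifier) is the same.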
3.3.4. Feature Extraction
Once the raw information has been pre-processed and adapted for use, it is essential to perform a feature extraction process aimed at identifying and isolating the most informative elements of each signal. This step is critical, as the quality and relevance of the extracted features directly influence the accuracy, robustness, and generalization capability of the classifier. For example, eye-tracking data can be analyzed to detect pupil dilation patterns, blink frequency, or saccadic movements within specific time windows, all of which serve as strong indicators of cognitive load and attentional focus.
The extracted features can generally be categorized as either inferred data, representing continuous measurements (e.g., HRV values or pupil diameter over time), or inferred events, which capture discrete occurrences such as microsleeps, abrupt steering corrections, or extended off-road glances. Both categories are essential for classifier training, as they enable the model not only to capture instantaneous states but also to identify temporal patterns and event-driven anomalies that may compromise driving safety.
Practical examples of such features include tags like eye on/off road, mouth open, lane keeping error, or head off road [37]. These indicators, whether expressed as continuous data streams or discrete events, constitute the core input that will ultimately be processed by the classifier.
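As a concrete illustration of turning a pre-processed stream into features, the sketch below derives a blink count (an inferred event) and a PERCLOS-style eye-closure fraction (inferred data) from a binary eye-state signal within one time window. The variable names and sample window are invented for illustration:

```python
def blink_count(eye_closed):
    """Count closure onsets (0 -> 1 transitions) in a binary eye-state stream."""
    return sum(1 for prev, cur in zip([0] + eye_closed, eye_closed)
               if cur == 1 and prev == 0)

def closure_fraction(eye_closed):
    """PERCLOS-style feature: fraction of samples with the eye closed."""
    return sum(eye_closed) / len(eye_closed)

# 1 = eye closed, 0 = eye open, sampled within a single time window.
window = [0, 0, 1, 1, 0, 0, 0, 1, 0, 0]
features = {"blinks": blink_count(window),
            "perclos": closure_fraction(window)}
```

Features like these, computed per window, become one row of the input matrix that the classifier in the next stage consumes.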
3.3.5. Classification
Once the most pertinent information has been gathered in the previous step, it should be submitted to a classifier capable of identifying the type of behavior or state from a predefined set of options established at the beginning of DMS development. These classifiers may utilize mathematical models, rule-based systems, or AI-driven approaches such as Machine Learning and Deep Learning.
Mathematical Models: Early DMS research often relied on theoretical models of human alertness and fatigue. Examples include the Two-Process Model and the Three-Process Model of Alertness, which describe sleep–wake regulation based on circadian rhythms and homeostatic sleep pressure. These models offer the advantage of being interpretable and computationally efficient, but they are typically constrained by their dependence on generalized physiological assumptions and limited adaptability to individual differences in drivers [38].
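The two-process idea can be sketched numerically. The fragment below assumes the common textbook form of an exponentially saturating homeostatic sleep pressure (Process S) and a sinusoidal circadian drive (Process C); all constants are illustrative placeholders, not calibrated values from the cited models:

```python
import math

def homeostatic_pressure(hours_awake, tau=18.2, upper=1.0):
    """Process S: sleep pressure rising exponentially toward an asymptote."""
    return upper * (1 - math.exp(-hours_awake / tau))

def circadian_drive(hour_of_day, amplitude=0.3, peak_hour=17):
    """Process C: sinusoidal alerting signal peaking in the late afternoon."""
    return amplitude * math.cos(2 * math.pi * (hour_of_day - peak_hour) / 24)

def predicted_alertness(hours_awake, hour_of_day):
    """Alertness falls as S accumulates and rises with the circadian drive."""
    return circadian_drive(hour_of_day) - homeostatic_pressure(hours_awake)

# A driver awake 16 h late at night scores lower than one awake 2 h at 9 a.m.
morning = predicted_alertness(hours_awake=2, hour_of_day=9)
late_night = predicted_alertness(hours_awake=16, hour_of_day=23)
```

The appeal of such models is exactly what the text notes: the output is interpretable (every term has a physiological meaning) and cheap to compute, but the fixed constants cannot adapt to individual drivers.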
Rule-Based Models: Rule-based approaches, such as those employing fuzzy logic, translate domain expertise into explicit decision rules. For instance, thresholds on blink frequency or steering variability may trigger alerts when exceeded. These methods are straightforward to implement and interpret, but their performance is often limited by the rigidity of predefined rules, making them less effective in dynamic driving environments or when dealing with subtle variations across individuals [35].
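A hypothetical rule-based monitor in the spirit described above might look like the following. The thresholds are invented for illustration, not validated values.

```python
# Hypothetical rule-based monitor: fixed thresholds on blink frequency
# and steering variability trigger an alert. Threshold values are
# illustrative, not validated.

BLINK_RATE_MAX = 25.0   # blinks/min above which drowsiness is suspected
STEERING_STD_MAX = 4.0  # degrees; high variability suggests impaired control

def rule_based_alert(blink_rate_per_min, steering_angles_deg):
    """Return the list of triggered rules (empty list means no alert)."""
    mean = sum(steering_angles_deg) / len(steering_angles_deg)
    var = sum((a - mean) ** 2 for a in steering_angles_deg) / len(steering_angles_deg)
    steering_std = var ** 0.5
    reasons = []
    if blink_rate_per_min > BLINK_RATE_MAX:
        reasons.append("high blink rate")
    if steering_std > STEERING_STD_MAX:
        reasons.append("erratic steering")
    return reasons
```

The rigidity criticized above is visible directly in the code: the two constants apply to every driver and every driving context alike.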
Machine Learning (ML): Supervised ML algorithms, including Support Vector Machines (SVM) and Gradient Boosted Decision Trees (GBDT), introduced greater flexibility by learning from labeled datasets rather than relying solely on expert-defined rules. These methods can effectively capture non-linear relationships in multimodal signals, offering higher accuracy and adaptability. However, their performance depends heavily on careful feature engineering, and they may struggle to generalize when exposed to unseen conditions or noisy data [39,40].
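The supervised pipeline described above can be sketched with a toy stand-in. Real systems would use SVMs or GBDTs on far richer feature sets; here a minimal perceptron learns from hand-crafted features, and both the features and the labeled data are invented for illustration.

```python
# Toy stand-in for the supervised ML pipeline: hand-crafted features
# (normalized blink rate, lane-keeping error) with labels (1 = drowsy),
# fed to a minimal perceptron. Data and features are invented.

def train_perceptron(X, y, epochs=50, lr=0.1):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0
            err = yi - pred
            w = [wj + lr * err * xj for wj, xj in zip(w, xi)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0

# Features per sample: [blink_rate / 30, lane_keeping_error_m]
X = [[0.3, 0.1], [0.4, 0.2], [0.9, 0.6], [1.0, 0.5]]
y = [0, 0, 1, 1]
w, b = train_perceptron(X, y)
```

The dependence on feature engineering noted above is explicit here: the model only sees what the designer chose to extract, which is what DL approaches later sought to automate.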
Deep Learning (DL): More recently, DL approaches—particularly Convolutional Neural Networks (CNNs) and Artificial Neural Networks (ANNs)—have become the dominant trend in DMS research [41,42,43]. CNNs, in particular, excel at automatically extracting hierarchical features from high-dimensional data such as images and videos, eliminating much of the manual feature engineering required in traditional ML pipelines. In numerous studies, CNN-based classifiers have achieved recognition accuracies exceeding 90% in tasks such as drowsiness detection or gaze estimation. Furthermore, the availability of large annotated datasets and transfer learning techniques has made DL approaches increasingly versatile, enabling the integration of multimodal signals (e.g., eye-tracking, physiological data, and vehicle dynamics) into a unified model. Nonetheless, DL methods come with challenges, including high computational cost, the need for extensive training data, and reduced interpretability compared to simpler models.
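The automatic feature extraction attributed to CNNs above rests on stacked convolution and pooling stages. The sketch below implements a single such stage on a tiny synthetic "image"; a trained CNN would learn many kernels and stack several stages, whereas here a hand-picked edge kernel stands in to make the mechanism visible.

```python
import numpy as np

# One convolution + ReLU + max-pooling stage, the building block of CNN
# classifiers. A vertical-edge kernel is applied to a tiny synthetic
# image; a trained CNN would learn such kernels instead of hand-picking them.

def conv2d(image, kernel):
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool2x2(x):
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

image = np.zeros((6, 6))
image[:, 3:] = 1.0                      # bright right half: one vertical edge
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
feature_map = max_pool2x2(relu(conv2d(image, sobel_x)))
```

The resulting feature map responds only where the edge lies, illustrating how early CNN layers localize low-level structure (edges, contours) that deeper layers compose into face- or gaze-level features.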
In this context, a distinct trend is emerging toward AI-based classifiers that combine ML and DL techniques. The extensive availability of training data has rendered DMSs remarkably versatile, enabling an array of combinations of approaches, signal types, and classifiers to accurately identify specific driver states or behaviors.
3.3.6. Driver State or Behavior Inferred
At the conclusion of the process, the DMS must provide the final assessment of the driver’s state or behavior, which constitutes the primary output to be used by the corresponding application. This output represents either the physical or emotional condition of the driver, as determined after signal acquisition, pre-processing, feature extraction, and classification.
Some of the driver states or behaviors a DMS may be designed to detect include the following:
- Emotional State: Emotions such as fear, anger, disgust, happiness, sadness, and surprise, along with a neutral state, can be inferred through facial expression recognition, voice tone analysis, and physiological cues. Understanding emotional states is relevant not only for safety but also for user experience and comfort in human–machine interaction [41,42,44,45,46].
- Driving Distraction: A lack of attention caused primarily by external factors, such as interacting with the control panel, which slows the driver’s response to traffic events [17,29,47,48].
- Drowsiness: An impaired driving state due to lack of sleep or related causes, such as excessive fatigue, monotonous driving tasks, or circadian rhythm effects [49,50,51,52].
- Cognitive load: A multidimensional concept describing the mental effort required to perform a task. A high cognitive load can reduce a person’s ability to react to unexpected events, whereas a very low load can cause them to become disengaged and lose focus [27,53,54].
- Attentiveness: A mental state where the driver is able to perceive and process information from the driving task and respond appropriately to what is presented [55,56,57].
3.4. DMS Implementation Process
DMSs can have a wide range of applications. While their primary focus lies in enhancing safety as an additional ADAS that provides information to reduce human error, DMSs can also be integrated with complementary tools to further improve overall safety. For example, they can be combined with other ADAS technologies, connected to in-vehicle speakers to influence the driver’s emotional state through music, integrated with alarms to provide timely warnings, or used to generate driver profiles that help identify individuals at higher risk. In such cases, targeted educational strategies and personalized feedback can be delivered through mobile applications to promote safer driving habits [58].
A proper implementation of a DMS ensures accurate data collection, minimizes the likelihood of user rejection, and improves the reliability of detecting the specific driver state or behavior under study.
In Figure 7, the diagram for the implementation of a DMS proposed in this work is presented. This diagram serves as a complement to the DMS framework introduced in Figure 6. In this case, the authors suggest that the implementation process should be carried out in conjunction with an established engineering design methodology, such as those proposed by Ulrich [59] or Dieter [60].
Figure 7.
DMS Implementation process.
The rationale for integrating the implementation process with an engineering design methodology lies in the ability to identify, from the earliest stages, the key elements that will enable both the integration and long-term acceptance of monitoring technologies by users. This approach emphasizes the importance of considering user interaction with both passive and active monitoring systems, ensuring that improvements can be made from the perspective of UX.
The proposal presented here draws inspiration from the mechatronic product design methodology outlined by Salazar-Calderón [61], which stresses the relevance of considering users and their socio-cultural context from the very first stages of design. Building on this foundation, the implementation of a DMS is structured into five stages, ranging from an initial information-gathering phase, focused on defining the intended application, through to the final stage, in which driver-related data is processed and made ready for practical use.
The first stage corresponds to the planning phase, which plays a critical role in defining the foundations of the DMS. At this stage, the intended application of the DMS must be clearly established, supported by a list of requirements and parameters derived from previous studies in the market or in the literature, as well as from existing local legislation, regulatory frameworks, limitations, usage conditions, and systematic strategies for identifying user and system needs. This ensures that the design process begins with a strong alignment between technical objectives, legal compliance, and user expectations. For example, the system may be designed only for data collection and behavioral analysis, serving primarily research and diagnostic functions, or it may serve a more advanced function, providing real-time feedback to other ADASs.
This phase offers an important opportunity to identify potential challenges, including ethical considerations, user acceptance, privacy concerns, and others. By addressing these elements early in the development of the DMS, the design team can effectively mitigate risks and enhance the system’s overall functionality.
Once the application of the DMS has been defined, the next crucial step is to establish its system architecture, which is the conceptual design phase of the engineering design. For this purpose, the framework presented in Figure 6 can serve as a guiding reference, supporting key decision-making processes during the conceptual design phase. The first and most critical decision is to determine the monitoring approach, which involves specifying which aspects of the driver will be analyzed. These may include the driver’s emotional state, physiological condition (e.g., fatigue, stress), or driving behavior (e.g., distraction, drowsiness). This initial choice is fundamental, as it defines the type of data required and strongly influences the subsequent design of the system.
Following this, it is necessary to select the monitoring signals to be employed, which may involve direct measurements from the driver or indirect information derived from external sources. In this context, adopting a multimodal approach has gained relevance, as it enables the integration of multiple data streams to provide a more comprehensive understanding of the driver. However, this approach also introduces challenges, such as the increased density of information, the computational cost of preprocessing and feature extraction, and the real-time performance required to run classification algorithms. An additional consideration is whether the analysis will be performed in real-time, to provide immediate feedback, or whether data will be stored in a dataset for offline analysis, which may be more suitable for research and model refinement.
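One practical consequence of the multimodal approach described above is that sensors report at different rates, so their streams must be aligned to a common timeline before feature extraction and classification. The sketch below (a simplified illustration; stream names, rates, and values are invented) fuses each reference tick with the most recent sample from every stream.

```python
from bisect import bisect_left

# Aligning multimodal streams to a common timeline: each stream is a
# time-ordered list of (timestamp_s, value) pairs; for every reference
# tick we take the most recent sample at or before that tick.

def latest_before(stream, t):
    """Most recent value at or before time t, else None."""
    times = [ts for ts, _ in stream]
    i = bisect_left(times, t)
    if i < len(times) and times[i] == t:
        return stream[i][1]
    return stream[i - 1][1] if i > 0 else None

def align(streams, ticks):
    """streams: dict name -> [(t, value)]; returns one fused row per tick."""
    return [{"t": t, **{name: latest_before(s, t) for name, s in streams.items()}}
            for t in ticks]

eye = [(0.0, 0.1), (0.5, 0.2), (1.0, 0.15)]   # 2 Hz eye-closure ratio
hr = [(0.0, 72), (1.0, 74)]                   # 1 Hz heart rate
rows = align({"eye": eye, "hr": hr}, ticks=[0.0, 0.5, 1.0])
```

For real-time operation the same logic would run incrementally per tick; for offline dataset construction it can be applied in batch, which reflects the real-time versus offline choice discussed above.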
Beyond the core monitoring components, it is essential to account for the complementary tools and subsystems that will support the DMS. These may include communication protocols for data transfer, dedicated computing units for signal processing, cloud-based solutions for large-scale data storage and analysis, or even mobile applications that act as user interfaces and extend the system’s monitoring capabilities.
To ensure both consistency and feasibility, the architecture design process should follow a structured methodology for concept generation, selection, and validation [59]. The outcome of this stage is expected to be a baseline architecture for the DMS, ideally represented in a visual format, such as diagrams or schematic models, that clearly illustrates the system’s components and their interactions within the vehicle environment.
The next stage corresponds to the application of the architecture, also known as the embodiment design phase within the design process. This stage comprises three fundamental steps. The first step involves the detailed development of all hardware and software components that will constitute the DMS and any necessary additional tools. This means defining both commercially available components and those that need to be custom-developed to meet the requirements established in previous stages. Likewise, appropriate software tools must be selected and the development of the classifier must be planned, including aspects such as the neural network architecture and the databases needed for its training. Once the components are defined, the second step focuses on the integration of hardware and software for the implementation of the DMS. In this phase, the system is installed in the vehicle or simulator according to the application, ensuring correct communication among the different modules. Furthermore, the classifier is developed and evaluated, which may include training a CNN using the selected database, as well as verifying the functionality of the algorithms under controlled conditions.
The third and final step consists of conducting validation tests to ensure the correct functioning of the DMS and its application in the intended context. Initially, these tests can be performed in simulators or on specific datasets to evaluate the classifier’s performance. The goal of this stage is that, upon its conclusion, the DMS is fully implemented in the vehicle, ready for evaluation under real-world conditions. It is recommended to complement this phase with the creation of detailed diagrams of the software and hardware functionality, bills of materials, and documentary records that provide evidence of the system’s evaluation and its performance in the final application.
The fourth stage is the launch of the DMS. This refers to conducting tests, either in a simulator or in controlled driving environments, where the DMS must be operational, gathering information from the person and preparing it before its evaluation in the classifier.
This leads to the final stage, where the results of the testing process are obtained. At this point, the driver information, previously pre-processed, is fed into the classifier in order to determine the driver’s state or behavior, which can then be used within the defined application. The development and implementation of the DMS can be considered successful when the classifier produces consistent and reliable results, and when the intended application operates effectively. Given the diversity of DMS solutions and their applications, it is the responsibility of the design and development team to establish the experimental parameters and define the acceptance criteria. The authors propose that a DMS, together with its application, can only be considered properly implemented if it fulfills the original design requirements, addresses the customer needs, and complies with applicable local regulations and legislation.
DMS Implementation Example
The following section presents a hypothetical use case to illustrate the process of implementing a DMS, using the diagram depicted in Figure 8. In this case, the system is designed to detect road rage, generate a driver profile, and share it with the appropriate authorities, who will then determine whether the driver should attend anger management driving classes.
Figure 8.
Example of the DMS Implementation process.
Customer Needs: The driver should not perceive or feel direct interaction with the DMS. Authorities must receive the minimum amount of personal information necessary. The DMS must be able to operate at any time of day.
Assumptions: No local legislation or regulatory issues affect the use of a DMS.
Requirements and Operating Conditions: The emotion of anger must last for at least 2 s to be registered. The system must function both during the day and at night, under varying lighting conditions. Data must be shared directly with the authorities without intermediaries. The driver’s personal information must remain secure.
Based on the intended application, the architecture for the DMS was defined. Given the emotional nature of the use case, the design emphasizes emotion recognition.
As the primary input signal, an infrared (IR) camera mounted on the dashboard is proposed to capture a general view of the driver’s face. Facial expressions will be analyzed and processed by a CNN classifier. This CNN will be responsible for detecting, in real time, when the driver exhibits emotional states associated with anger.
To ensure both privacy and the generation of compact data packages, the DMS will register only the timestamps and duration of the detected anger episodes. This information will be transmitted directly to authorities using IoT and V2X technologies. Once the authorities receive the data, a driver profile will be generated, and it will be determined whether the individual must attend anger management driving lessons.
Additionally, video recordings of the driver will be stored in a black box within the vehicle for a period of one to two months. During this time, authorities may use the footage to verify the driver’s profile if necessary. After this period, the video data must be permanently deleted, leaving only the time-based anger reports as official records.
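The episode logic of this hypothetical use case can be sketched as follows: from a stream of per-frame emotion labels, only anger episodes lasting at least 2 s are recorded, and only their start time and duration are kept (no video, no identity), consistent with the privacy requirement above. The frame rate and label names are assumptions of this example.

```python
# Hypothetical episode logic for the road-rage example: from per-frame
# emotion labels, record only anger episodes lasting at least 2 s,
# keeping just start time and duration (no video, no identity).

MIN_ANGER_S = 2.0

def anger_report(labels, fs):
    """labels: per-frame emotion strings at sampling rate fs (Hz)."""
    report, start = [], None
    for i, lab in enumerate(labels + ["<end>"]):   # sentinel flushes last run
        if lab == "anger" and start is None:
            start = i
        elif lab != "anger" and start is not None:
            dur = (i - start) / fs
            if dur >= MIN_ANGER_S:
                report.append({"start_s": start / fs, "duration_s": dur})
            start = None
    return report
```

Only the resulting list of timestamped episodes would be transmitted via IoT/V2X, keeping the data packages compact as specified in the example.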
3.5. User Experience
One of the most critical aspects in the management and deployment of a DMS is its impact on UX. Ensuring seamless integration, user acceptance, trust, and effectiveness is essential for widespread adoption among drivers. Previous research has demonstrated that the introduction of certain technologies, such as auditory or visual alarms, can cause distractions and adverse reactions in driver performance [62]. These include annoyance, alert fatigue, ignoring the warnings, or simply turning off the entire system [63]. Therefore, alternative technologies that can mitigate this impact, such as the use of haptic alerts, should be considered.
In recent years, there has been a clear shift from technology-centered design to human-centered design, particularly in HMIs within vehicles [7]. This aligns with a broader industry trend that positions vehicles as interactive environments, emphasizing comfort, autonomous driving, advanced infotainment systems, in-vehicle streaming services, and even real-time AI-powered conversational assistants.
While UX is not a major focus in the current literature on DMSs, most studies that do address it rely on questionnaires or surveys. However, these methods are susceptible to subjective bias because they depend on the personal perceptions of drivers and passengers. Despite this limitation, the existing findings on UX in DMSs offer valuable insights into the challenges and opportunities for implementing these systems.
In most cases, UX integration with DMSs relies primarily on self-reporting instruments, such as the Van der Laan acceptance scale or the System Usability Scale. These tools, while widely used, capture only subjective impressions and do not directly assess how DMSs influence driver interaction in real-world scenarios. On the other hand, most research that employs objective measures to assess driver state does not explicitly consider the user experience dimension in DMS design and evaluation. This gap highlights a need for more holistic approaches that combine both subjective and objective indicators.
Challenges in DMSs from a UX Perspective:
- Distractions [64]: The integration of DMS poses challenges such as potential information overload and alert fatigue. Feedback delivered through HMIs or alert-based ADASs may inadvertently distract drivers, especially if it requires them to shift their gaze away from the road.
- Reliability and Trust [47,65]: Drivers may perceive DMS interventions as excessive or unjustified, which can undermine confidence in the system.
- Intrusiveness and Privacy [20,39]: Certain monitoring tools, such as EEG devices, can be perceived as highly intrusive. Moreover, camera-based systems that capture the driver’s face and actions raise additional concerns about data privacy and information management.
- Over-Reliance and Fatigue [64,66,67]: There is a risk that drivers may become complacent and ignore warnings, potentially leading to riskier behavior due to an overdependence on assistance systems. Similarly, the constant use of alarms may fatigue the driver, leading to a state of agitation or prompting them to deactivate the system. Along the same lines, it is important to consider how the driver can regain control when interacting with advanced ADAS systems or vehicles featuring some level of autonomy.
A noteworthy application lies in the use of DMSs as design aids for HMIs. In such cases, driver monitoring serves as an input for identifying which interface elements are less distracting, thereby supporting the development of safer and more user-friendly vehicle interactions. This perspective repositions DMSs not only as monitoring tools but also as enablers of human-centered innovation in automotive design [7,28,62].
There are different ways UX can have a positive impact on various stages of driver monitoring.
On a preventive level, the development and implementation of any DMS must consider the impact and unique characteristics of the individuals who will be using it. This review suggests that this process should be integrated into the framework and implementation diagrams previously developed in Figure 6 and Figure 7. Considerations such as concerns about data management, local driving behaviors, and infrastructure should be part of the design requirements and features for DMSs. Therefore, UX holds a significant place from the earliest stages of DMS design and development.
During driving, DMSs can be employed not only as tools for detecting risk states but also as instruments for enhancing UX. Depending on the data collected from the driver, various systems can be adjusted; for instance, modifications to HMIs, as previously discussed, or even the generation of driving profiles that allow the vehicle’s behavior to be adapted (e.g., acceleration, HMIs, steering assistance, alarm settings, and more).
After driving, it is possible to implement strategies already established in the state of the art, such as surveys or interviews, to capture the user’s subjective perception. While these methods may involve subjectivity, they can be complemented by the insights provided by the DMS itself, enabling the creation of profiles that support multimodal feedback regarding the use, implementation, and application of DMSs. This, in turn, can serve as both a feedback mechanism and a design tool for research and development.
More directly, certain monitoring tools can be linked to specific aspects of UX. Sensory experience, for example, can be analyzed through eye-tracking or facial recognition, using measures such as fixation duration, Areas of Interest (AOIs), the number of saccades within an AOI, AOI regression counts, occurrences of surprise, or smiling frequency. These same measures can also detect certain states or behaviors with reduced intrusiveness, such as estimating mental workload with camera-based eye trackers [28,68].
Interaction experience can be assessed through indicators such as task completion time, number of errors, number of steps required to execute a task, or finger movement detection during task execution, including finger speed. These measures can be obtained either through video analysis or finger-tracking on touchscreens, serving as strategies to distinguish the complexity involved in task performance.
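Two of the gaze-based measures mentioned above can be computed directly from raw samples, as in this simplified sketch (the rectangular AOI, sampling rate, and gaze values are assumptions of the example): total dwell time inside an AOI and the number of entries into it, where re-entries beyond the first correspond to regressions to the AOI.

```python
# Illustrative gaze-based UX measures: total dwell time inside a
# rectangular Area of Interest (AOI) and the number of entries into it.
# Gaze samples are (x, y) points captured at a fixed rate fs (Hz).

def aoi_metrics(gaze, fs, aoi):
    """aoi = (x_min, y_min, x_max, y_max); returns (dwell_s, entries)."""
    x0, y0, x1, y1 = aoi
    inside = [x0 <= x <= x1 and y0 <= y <= y1 for x, y in gaze]
    dwell_s = sum(inside) / fs                    # samples inside / rate
    entries = sum(1 for i, ins in enumerate(inside)
                  if ins and (i == 0 or not inside[i - 1]))
    return dwell_s, entries
```

Running the same computation over each interface element's AOI makes it possible to compare which elements draw or hold the driver's gaze, supporting the HMI design use of DMSs discussed earlier.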
Finally, emotional experience can be identified through emotion recognition techniques or the use of questionnaires that provide more subjective feedback.
Together, these strategies illustrate that monitoring tools and UX assessment methods can operate synergistically. Given that driver monitoring systems (DMS) are highly integrated into the driver’s environment, UX should be consistently considered in their design and evaluation.
4. Discussion
Significant advancements have been achieved in the realm of DMSs, which have become widely used tools in the contemporary automotive industry for identifying unresponsive drivers. However, considerable challenges remain in terms of further development, potential applications, and areas needing improvement.
This section will discuss several challenges that DMSs face, explore potential opportunities for enhancing these systems, and present details regarding the datasets collected during this review. These discussions will be informed by the insights and results identified in the previous sections.
4.1. Challenges of DMS
There are several issues that must be addressed to ensure that DMSs are both reliable and accepted by users. On the technical side, it is essential that signal processing, classifiers, and communication protocols operate properly. At the same time, the perception and trust that drivers place in these systems play a crucial role in their adoption.
4.1.1. Unexpected Road Conditions
One of the main challenges faced by DMSs is the occurrence of unexpected situations during driving. These may be environmental in nature, such as rain or sudden changes in lighting, or they may result from external factors such as accidents, shifts in traffic dynamics, the presence of pedestrians, or animals crossing the road [69]. It is important to note that a DMS requires a minimum amount of processing time to assess the driver’s state; however, in the case of sudden events, this delay can be critical. Moreover, such variations in the environment may trigger unintended system reactions, leading to unnecessary alerts that can distract the driver. Additionally, many of the models used in DMS classifiers are trained in relatively controlled environments, either through simulators or curated datasets. As a result, these models are not always equipped to handle highly variable real-world conditions, which can lead to misdiagnosis, classification errors, and potentially flawed decision-making in critical situations.
4.1.2. Vehicles Without Innovations
While there is a growing trend toward integrating ADASs and other new technologies in vehicles, many cars on the road still lack these advancements. This situation creates a challenge: as we enter a new era of high-tech automobiles that utilize technologies like V2X communication, the Internet of Things (IoT), and cloud-based analytics, we also have to deal with an older fleet that cannot support these advanced capabilities.
This issue is made even more complex by the fact that our current road infrastructure and driver education programs often fail to keep up with these rapid changes. As a result, we face a range of new problems that require not just a change in driving habits but also adjustments for older vehicles that are not equipped with modern features. Moreover, we need to develop new laws and regulations to manage these innovations effectively, along with a careful strategy to address the economic impacts associated with this major technological shift.
4.1.3. Limitations of the Sensors
Sensors used in the current state of the art often present implementation challenges that hinder their widespread use. For instance, while EEG sensors offer high accuracy in detecting brain activity, they are often impractical for daily use due to their high cost, complexity, and the discomfort they cause the driver [70]. Other alternatives like biometric wristbands or smartphone-integrated sensors heavily depend on the user, requiring them to be worn correctly and not forgotten, which limits their reliability in safety-critical applications. Adding to these limitations are external factors [71]. Sudden driving maneuvers caused by environmental conditions, such as unexpected turns or emergency braking, can alter the recorded signals, degrading the data quality. Likewise, variations in lighting, including glare, intense shadows, or rapid transitions between bright and dark environments, directly impact the performance of vision-based systems. These limitations not only reduce the accuracy of data interpretation but can also lead to false positives or erroneous diagnoses in monitoring systems, posing a significant risk to the safety and reliability of DMSs. Depending on the application, the development and integration of multimodal DMSs can be a strategic approach to compensate for potential sensor failures. By incorporating multiple signals, a redundant system can be created. With the help of an appropriate classifier, the system can detect these problems and correct them using the other signals, ensuring greater robustness and reliability.
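The redundancy idea above can be sketched as a simple fusion rule (modality names, weights, and the score scale are invented for illustration): each modality produces a drowsiness score, or no score when its sensor is degraded, and valid scores are fused with renormalized weights so that a single failure does not disable the DMS.

```python
# Sketch of weighted multimodal fusion with sensor-failure fallback.
# Each modality yields a score in [0, 1] or None when its sensor is
# degraded (e.g., camera blinded by glare); valid scores are fused with
# renormalized weights. Weights and modality names are illustrative.

WEIGHTS = {"camera": 0.5, "steering": 0.3, "wearable": 0.2}

def fused_score(scores):
    """scores: dict modality -> score or None; returns fused score or None."""
    valid = {m: s for m, s in scores.items() if s is not None}
    if not valid:
        return None                       # all sensors down: no estimate
    total_w = sum(WEIGHTS[m] for m in valid)
    return sum(WEIGHTS[m] * s for m, s in valid.items()) / total_w
```

A production system would also need to detect that a sensor is degraded in the first place (e.g., via signal-quality checks), which is where an appropriate classifier, as noted above, comes into play.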
4.1.4. Highly Complex Vehicle
The constant pursuit of innovation in the automotive industry has led to the integration of a growing number of devices and systems that operate simultaneously in vehicles. These include both passive and active ADASs, infotainment platforms, cameras, GPS, augmented reality tools, and gamification elements, among others. While this technological convergence offers new functionalities and benefits, it also significantly increases the complexity of the driving experience [72].
The coexistence of multiple systems presents several challenges. For one, it raises maintenance and repair costs due to the sophistication and interconnectedness of components. For another, drivers must quickly adapt to potential failures in critical systems, such as Adaptive Cruise Control (ACC), where a delay in regaining manual control of the vehicle could have serious consequences. We also need to consider problems related to the stability of communication between autonomous vehicles, failures in communicating with smart infrastructure, or even errors in on-board AI algorithms.
In this context, DMSs are an additional element within this already complex ecosystem. Although their purpose is to enhance safety and mitigate risks, their integration introduces new demands in terms of compatibility, data processing, and user acceptance. This scenario reinforces the need to design vehicle architectures that are more robust and centered on reliability.
4.2. Terminology
Considering the possible states or behaviors in Section 3, a significant issue exists regarding the establishment of precise terminology, for example, with the terms “distraction” and “inattention”. Some researchers use these terms interchangeably, while others consider them distinct concepts. Although this may seem like a minor discrepancy, the inconsistent use of terminology can lead to significant variations in how driver behaviors and states are classified and evaluated by DMSs [73].
In this context, distraction refers to a loss of mental concentration caused by external factors, such as interacting with an HMI. In contrast, inattention pertains to a lack of mental focus stemming from the driver’s own internal thoughts. This distinction is crucial, as it directly influences the definition of system architecture and the classification methods used to identify and respond to different driver states.
Considering this same example, distraction detection can be monitored through vision-based systems that track the driver, the cabin, and the road. However, the detection of inattention often requires more intrusive methods, such as EEG, which can provide insights into whether the driver is maintaining concentration on the driving task or if their attention is diverted elsewhere.
Following this premise, some authors, such as Engström and Regan [74,75], further refine the concept of attention during driving by categorizing it into distinct types: Driver Restricted Attention, Driver Misprioritized Attention, Driver Neglected Attention, Driver Cursory Attention, and Driver Diverted Attention.
In particular, the last category, Driver Diverted Attention, refers to situations in which the driver’s attention is shifted away from critical safety-related tasks. This category is further divided into driving-related and non-driving-related tasks. The latter, also referred to as secondary tasks, has attracted significant interest within the DMS research community due to its strong impact on driving safety and system performance.
On the other hand, the SAE J2944_202302 standard [76] provides a set of measures and guidelines regarding driver performance, focusing on states of interest such as drowsiness, inattention, and visual or cognitive distraction. These specifications can serve as useful references, as they offer structured data related to driver–vehicle interaction. However, their scope is primarily limited to the vehicle and its interaction with the road environment, which means they fall short of providing a comprehensive framework for defining and categorizing driver states and risky behaviors.
In contrast, the European New Car Assessment Programme (NCAP) offers more detailed descriptions of driver-related conditions, including impaired driving, distraction, fatigue, and unresponsive driver, along with a series of guidelines for incorporating DMS based on gaze tracking and its different types [77]. Within this framework, Euro NCAP also attempts to differentiate between microsleep, sleep, and unresponsive driver, although the distinction relies solely on the duration of eye closure. Although this represents an improvement compared to what is provided by SAE, it still falls short of accommodating more complex DMSs, such as those based on multimodal approaches or more complex terminology.
This comparison underscores the critical importance of establishing a robust and standardized terminology, as well as measurable and operational definitions of driver states. Without clear and universally accepted definitions, it becomes challenging to ensure interoperability across studies, comparability between DMS solutions, and regulatory compliance at an international level. Moreover, the lack of precise definitions risks oversimplifying complex cognitive and behavioral phenomena, potentially leading to false positives or false negatives in driver state detection. Therefore, advancing DMS research and deployment requires the joint efforts of standardization bodies, automotive consortia, and the scientific community to converge on a shared taxonomy of driver states and behaviors that is both technically rigorous and practically applicable.
In light of this, the authors propose a line of research in the development of a robust taxonomy that could serve as a standardized framework for defining concepts related to driver behavior and states in the context of DMSs. Such a taxonomy would not only enhance clarity and comparability across studies and systems but also facilitate the alignment of technological advances with regulatory requirements and industrial practices. Ultimately, this would contribute to the deployment of more reliable, interoperable, and effective driver monitoring solutions.
Following the work of Engström [74], which establishes a framework and taxonomy for classifying driver inattention, the authors propose a modification to that framework. The aim of this change is to better define the various states and behaviors that can negatively affect driving performance.
For this purpose, Figure 9 presents a diagram of the different resources a person utilizes for driving: cognitive, perceptual, and motor resources. Through sensory inputs, such as the eyes, skin, ears, or nose, the driver obtains information from the environment and performs driving tasks through action outputs, including the arms, legs, mouth, or neck.
Figure 9.
Proposed taxonomy framework for driver monitoring.
During the driving task, the driver must use these three main resources at varying levels, resulting in cognitive, perceptual, or motor load. Ideally, the user should correctly register and process environmental information to execute the necessary actions for proper driving performance. However, various sources of disruption can occur, represented by the red arrows in the diagram. These disruptions, whether internal or external to the driver, will affect the three aforementioned resources. This, in turn, impacts the sensory inputs and motor outputs, ultimately leading to a driving performance that is inferior to what would be achieved under ideal conditions.
Thus, the process of defining concepts related to driver monitoring must consider both the nature of the sources of impairment and their level of impact on the user. For instance, a distraction from a billboard would be classified as an external source. Conversely, a distraction caused by the driver’s own worries would be an internal source, leading to a different concept within the taxonomy being developed. Additionally, changes in the usage levels of the three primary resources, whether the demand on them increases or decreases, must be evaluated. Finally, the type of sensory and motor functions affected by the impairment must also be taken into account.
Returning to the concept of 'driving distraction', we can construct it using the proposed framework as an example. First, an external distraction occurs, such as a street-side advertisement. This event increases the demand on cognitive and perceptual resources, but not on motor resources; in fact, the latter might even be diminished. In this case, the driver's vision is affected by focusing on the ad, as is their head movement, leading to a decreased response to potential changes on the road. This aligns with the definition mentioned at the beginning of this section: that driver distraction is a loss of mental concentration caused by external factors. Its application in DMSs can be further guided by regulations, which stipulate that short-duration distractions should not exceed 3 s.
However, it must be emphasized that this framework is an initial proposal. Therefore, more extensive work is required before it can be considered a standard approach.
4.3. Opportunity Areas
Beyond the challenges that DMSs currently face, there exists a wide range of potential areas for improvement and innovation. These opportunities encompass both technological development, such as advances in sensing, data processing, and system integration, and novel applications that can enhance the functionality, adaptability, and user acceptance of DMSs. In this section, the authors outline and discuss several key areas of opportunity that could guide future research and foster the evolution of more intelligent, reliable, user-centered monitoring systems, as well as the possibility of introducing DMSs as a tool for different applications.
4.3.1. UX and Intrusiveness
There are numerous opportunities to be leveraged through the implementation of DMSs. One significant aspect highlighted in this review is UX, which, despite its critical importance, is often overlooked in much of the research and development related to DMSs. However, UX plays a vital role in facilitating effective integration and fostering user acceptance.
A comprehensive study focusing on the integration of DMSs in commercial vehicles could emerge as a distinct research avenue, addressing the long-term acceptance or rejection of specific technologies, as well as the management and utilization of user data. This entails investigating the extent to which certain information can be shared and exploring potential legislation to safeguard sensitive data, such as biometric signatures, facial recordings, and personal information.
The intrusiveness and impact of these technologies are significant concerns regarding UX. The incorporation of DMSs in commercial vehicles must not disrupt the driving task itself, avoiding scenarios in which alarm usage causes distraction or where interaction with other ADAS features leads to overreliance and unsafe driving behaviors. In this light, DMSs in commercial applications should be designed as minimally intrusive tools, ensuring that drivers do not perceive their influence during driving activities.
4.3.2. Real-World Testing
The lack of experimentation in real-world environments presents a significant concern that should not be underestimated. While considerable effort has been dedicated to the development of AI-based classifiers, their efficacy remains constrained by the limited extent of testing conducted within actual vehicles and under real-time driving conditions. Some of the existing research relies heavily on datasets or controlled simulations, which, although useful for algorithmic benchmarking, cannot fully capture the complexity and unpredictability of naturalistic driving.
Conducting experimentation in real-world settings could provide invaluable insights by uncovering challenges that are otherwise invisible in artificial environments. These include the impact of vehicle-specific vibrations that may distort sensor readings, the influence of changing environmental conditions such as lighting transitions, weather variability, or road surface irregularities, and the effects of unexpected mechanical malfunctions or sudden roadway events (e.g., abrupt braking of surrounding vehicles, erratic pedestrian behavior, or construction zones). Such factors directly affect the reliability of data acquisition, the robustness of classifiers, and the overall accuracy of DMSs.
Moreover, real-world testing is essential for evaluating the long-term adaptability and generalization capacity of classifiers. For instance, a model trained on a dataset collected under specific cultural or geographical conditions may fail when deployed in a different context, where driver behaviors, distraction sources, or even road signage differ significantly. Field experimentation also makes it possible to assess user acceptance and comfort, two factors that cannot be measured through offline validation but are critical for the successful adoption of DMSs in consumer vehicles.
Currently, DMSs in commercial vehicles have primarily focused on real-time detection of whether the driver is asleep or has been inattentive for extended periods, particularly in vehicles with some level of autonomy. However, with the growing trend toward highly autonomous vehicles and more advanced ADAS technologies, there is a need for real-time DMS that can adapt to dynamic road conditions.
Studies such as the one by Chang [42], which sought to modify the driver’s emotional state using music, conducted tests in real-world driving environments. This required the use of a multimodal DMS, employing strategies to monitor both the driver directly and the surrounding environment. The system utilized a camera for detecting facial expressions, Galvanic Skin Response (GSR), and heart rate; a second camera for environmental detection; and a microphone for ambient sounds. The goal was to generate feedback and use music to alter the driver’s mood based on their emotional state and the driving environment. After the tests, researchers administered questionnaires to the subjects, resulting in their technology receiving an 8 out of 10 satisfaction rating. However, the authors noted that the recognition was somewhat slow in certain environments and that the variation in emotional perception could be significant depending on the individual. They highlighted that the emotion classifier used requires more extensive training before it can be applied more broadly.
Similarly, a study by Li [28] aimed at evaluating the UX of in-vehicle HMIs also required a multimodal approach. It used eye tracking, finger tracking, a camera for expression detection, and post-test questionnaires to gauge driver perceptions. Another real-world application by Kashevnik [48], which sought to identify at-risk states based on heart rate using a smartphone, was able to demonstrate a general heart rate status (normal or high pulse) but found it impossible to detect the driver’s exact pulse.
These examples demonstrate that deploying a DMS in a real-world driving environment requires careful consideration of various factors. Multimodal systems are necessary for more detailed detection, and data from inside the vehicle, the external environment, and the driver themselves are all critical elements. Furthermore, bringing DMSs to a commercial level introduces additional challenges. Issues such as production costs, legislation on minimum DMS requirements, management of personal driver data, local driving styles, and regional idiosyncrasies are challenges that both industry and academia must work to address.
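As a minimal sketch of the multimodal fusion these examples call for, the function below combines three normalized risk indicators into a single score. The weights, signal ranges, and heart-rate normalization are illustrative assumptions for the sketch; they are not taken from Chang's system or any other reviewed work.

```python
def fuse_driver_state(expression_risk: float,
                      gsr_arousal: float,
                      heart_rate_bpm: float,
                      weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted fusion of three driver-state indicators into one risk score.

    expression_risk and gsr_arousal are assumed already normalized to [0, 1];
    heart rate is normalized here. All weights are illustrative.
    """
    # Normalize heart rate to [0, 1]: ~60 bpm resting -> 0, >=120 bpm -> 1.
    hr_risk = min(max((heart_rate_bpm - 60.0) / 60.0, 0.0), 1.0)
    w_expr, w_gsr, w_hr = weights
    return w_expr * expression_risk + w_gsr * gsr_arousal + w_hr * hr_risk
```

Even this trivial weighted sum makes the integration cost visible: each modality needs its own calibration and normalization before fusion, which is one reason real-world multimodal deployments remain challenging.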
4.3.3. Other Applications
The primary objective of DMSs is to enhance road safety by enabling timely interventions in response to risky driving behaviors. However, the authors argue that the potential applications of DMSs extend far beyond mere safety enhancements.
One notable application is the integration of driver monitoring methods to support adaptive HMI design [78]. By closely analyzing driver behavior, the size, color, and positioning of interface elements can be dynamically adjusted to improve usability and responsiveness. For instance, if a driver shows signs of distraction, the system could shift critical information to more prominent positions, utilize brighter colors, or modify the complexity of the information displayed. This adaptive approach to HMI can be scaled up to influence broader vehicle design decisions, allowing manufacturers to create vehicles that automatically adjust based on individual driving profiles. This capability could manifest in various ways, such as modifying interface layouts, optimizing head-up displays for visibility, adjusting accelerator resistance for enhanced control, and providing tailored steering assistance based on the driver’s habits and comfort levels.
On a larger scale, DMSs can be integrated into transportation fleets or public transit systems, where they can optimize schedule management, enhance performance evaluation, and improve passenger safety. Such systems could employ DMSs to detect driving anomalies, such as erratic speed changes or sudden braking, and alert relevant authorities to potential safety concerns in real time. For example, if a bus driver is consistently engaging in risky maneuvers, the system could trigger a protocol to evaluate driver performance or even deploy remedial training.
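A fleet-level anomaly check of the kind described above can be sketched as a simple deceleration filter over sampled vehicle speeds. The 3 m/s² threshold is a common illustrative value for harsh braking, not a regulatory figure, and a production system would fuse this with driver-facing signals before alerting an operator.

```python
def detect_harsh_events(speeds_kmh, dt_s=1.0, decel_threshold=3.0):
    """Flag sample indices where deceleration exceeds a threshold (m/s^2).

    speeds_kmh: sequence of vehicle speeds sampled every dt_s seconds.
    Returns the indices at which a harsh-braking event is detected.
    """
    mps = [v / 3.6 for v in speeds_kmh]       # km/h -> m/s
    events = []
    for i in range(1, len(mps)):
        decel = (mps[i - 1] - mps[i]) / dt_s  # positive when slowing down
        if decel > decel_threshold:
            events.append(i)
    return events
```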
Furthermore, the authors propose a focused line of research aimed at leveraging DMSs for evaluating vehicle and roadway design elements. In this scenario, insights derived from driver behavior could uncover critical shortcomings in proposed designs for future vehicles. For instance, if drivers frequently express frustration when navigating newly designed features, such as touch-sensitive controls or complex infotainment systems, these areas can be re-evaluated. Additionally, real-time data from DMSs can help identify potential road hazards, including driver reactions to physical road features such as speed bumps, unexpected accidents, dangerous curves, or variable lane scheduling. This analysis could be further enriched when integrated with V2X technologies, offering a holistic approach to road safety and design that proactively addresses both driver and infrastructure dynamics.
4.4. Datasets
The significance of datasets in the context of model validation and evaluation is increasingly paramount, particularly for assessing models designed to classify driver states and behaviors. This relevance becomes even more pronounced when conducting experiments in simulators or real vehicles is impractical or logistically challenging. In such scenarios, having access to a comprehensive dataset not only serves as a crucial resource for model validation but also offers researchers a foundation upon which to build and refine their models.
While the necessity of testing in real-world environments has been previously underscored, the prevailing trend in the academic literature continues to prioritize model development and validation procedures. As a noteworthy contribution to this body of literature, a detailed table has been compiled that enumerates the various datasets identified throughout the review process. Tables A3 and A4 in Appendix B include essential information such as the dataset name, its accessibility status (categorized as public, private, or restricted to research purposes only), the institution or organization responsible for its creation or ownership, and the specific applications for which the dataset was utilized in the reviewed studies. Furthermore, when applicable, the tables provide details regarding the source from which each dataset can be accessed.
The review also uncovered several supplementary datasets, presented in Table A5, which, while not explicitly cited in the primary sources, have been included to enhance the comprehensiveness of the compilation. These additional entries feature nuanced variations in the specified attributes, emphasizing the unique intended purpose of each dataset. By presenting this wealth of information, the tables aim to serve as a valuable reference for researchers in the field, facilitating easier access to essential data that can inform future studies and advancements in driver state and behavior classification models.
5. Conclusions and Future Work
Driver monitoring has become an essential tool for enhancing road safety, with recent research emphasizing the role of Artificial Intelligence in developing classifiers through Machine Learning and Deep Learning techniques. These technologies are primarily applied in autonomous vehicles, particularly for tasks such as detecting driver drowsiness and distraction.
However, considerable challenges must be addressed before DMSs can be considered highly reliable in the dynamic environment of driving. These challenges extend beyond achieving robust technical performance; they also involve ensuring correct deployment, long-term integration, and user acceptance, all while avoiding any unintended negative impacts on drivers.
The findings of this review highlight the necessity of a dual commitment to advancing DMSs: enhancing technical capabilities and facilitating seamless integration within a framework that prioritizes ethical design and user-centered principles. To maximize their potential as foundational elements of next-generation intelligent vehicles, it is crucial to align considerations of safety, usability, and trust.
Finally, this review emphasizes the development of a framework and implementation process rooted in engineering design methodology that places the user at its core. This user-centric approach directly addresses customer needs, integrates the technical aspects of DMSs and their applications, and underscores the importance of considering user experience from the earliest stages of system design.
This literature review will serve as a foundation for the development of a methodology aimed at evaluating in-vehicle technologies through the use of driver monitoring tools. Building on the insights gathered, the next step will involve the design and implementation of a proprietary DMS tailored specifically for this purpose. This system will support the creation of a baseline experimental framework to assess the extent to which different in-vehicle technologies generate driver distraction, as well as to analyze driver behavior both during interaction with such technologies and in their absence.

The proposed methodology is envisioned as a preventive approach to improving road safety by identifying potential risks associated with technology use at the design stage, rather than solely relying on post-deployment evaluations. By integrating DMSs into the assessment process, it will be possible to capture a multidimensional view of driver states, encompassing cognitive, behavioral, and emotional responses, which are critical for understanding how interaction with new technologies may impact performance and safety.

Furthermore, the outcomes of this development are expected to provide valuable contributions in three main areas: (i) informing the design of in-vehicle systems that are safer and more user-friendly; (ii) enhancing the user experience by ensuring that new technologies complement, rather than compromise, the driver’s attentional resources; and (iii) supporting a data-driven analysis of the broader implications of emerging technologies on driver performance and overall traffic safety. Ultimately, this approach aims to establish a systematic, user-centered, and safety-oriented methodology for evaluating and integrating future in-vehicle technologies.
Author Contributions
Conceptualization, L.A.S.-C.; methodology, L.A.S.-C.; validation, S.A.N.-T. and J.I.-R.; investigation, L.A.S.-C.; resources, S.A.N.-T.; writing—original draft preparation, L.A.S.-C.; writing—review and editing, S.A.N.-T. and J.I.-R.; supervision, S.A.N.-T.; project administration, S.A.N.-T. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by the Instituto Tecnológico y de Estudios Superiores de Monterrey, Escuela de Ingeniería, and by the Secretaría de Ciencia, Humanidades, Tecnología e Innovación (SECIHTI) through scholarship CVU 1239223.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| ACC | Adaptive Cruise Control |
| ADAS | Advanced Driver Assistance Systems |
| AI | Artificial Intelligence |
| ANN | Artificial Neural Network |
| AOI | Areas of Interest |
| CNN | Convolutional Neural Network |
| DL | Deep Learning |
| DMS | Driver Monitoring Systems |
| ECG | Electrocardiography |
| EEG | Electroencephalography |
| GSR | Galvanic Skin Response |
| GBDT | Gradient Boosted Decision Trees |
| HMI | Human–Machine Interface |
| IR | Infrared |
| INEGI | Instituto Nacional de Estadística y Geografía (National Institute of Statistics and Geography) |
| ML | Machine Learning |
| NCAP | New Car Assessment Programme |
| NHTSA | National Highway Traffic Safety Administration |
| PRISMA | Preferred Reporting Items for Systematic Reviews and Meta-Analyses |
| SVM | Support Vector Machines |
| UX | User Experience |
| V2X | Vehicle-to-Everything |
| V2V | Vehicle-to-Vehicle |
| WOS | Web of Science |
Appendix A. Methodology Extra Details
Table A1.
Inclusion and Exclusion Criteria.
| Inclusion | Exclusion |
|---|---|
| Articles from 2019 to 2024 | Articles not mainly focused on driver monitoring |
| Articles in English or Spanish | Reviews |
| Articles and Conference Proceedings | Discussion |
Table A2.
Articles found for any Research strategy in Scopus and WOS.
| Research Strategy | Scopus | WOS |
|---|---|---|
| “Driver Monitoring” AND “User Experience” | 12 | 9 |
| (“Driver Emotions” OR “Driver Behavior” OR “Driver State”) AND “Driver Monitoring” | 175 | 78 |
| (“Behavioral Research” OR “Driving Monitoring”) AND “User Experience” AND “Driving Simulator” | 16 | 1 |
| (“driver behavior” OR “driver emotions” OR “driver state”) AND “user experience” | 47 | 15 |
| (“driver behavior” OR “driver emotions” OR “driver state”) AND “user experience” | 65 | 48 |
Appendix B. Datasets
Table A3.
Datasets found in the review.
| No. | Name | Application in the Review | Type | Institution | Access | Ref. |
|---|---|---|---|---|---|---|
| 1 | 3MDAD | Driver actions and Spatial Attention | Public | Laboratory of Advanced Technology and Intelligent Systems (LATIS) | Web Page | [79] |
| 2 | Driver drowsiness using keras | Drowsiness | Public | Noakhali Science And Technology University | Kaggle | [80] |
| 3 | RAD dataset | Aggressive driver behavior | Private | Vellore Institute of Technology | NA | [81] |
| 4 | GIDAS (PCM subset) | Drive behavior and state | Private | Federal Highway Research Institute (BASt) and the Research Association for Automotive Technology (FAT) | Web Page | [82] |
| 5 | Folksam data | Drive behavior and state | Private | Folksam Research, Euro NCAP and US-NCAP | NA | [83] |
| 6 | Naturalistic Driver Behavior Dataset (NDBD) | Driver posture and behavior | Public | ICMC/USP-Mobile Robots Lab and University of São Paulo | YouTube | [84] |
| 7 | n.s. | Drowsiness | Private | VITAL Laboratory, Universiti Teknologi MARA | NA | [85] |
| 8 | Chengdu Taxi GPS Data | Driving Profile | Private | Data Castle | Web Page | [86] |
| 9 | n.s. | Intention-aware lane | n.s. | Chalmers University of Technology and Zenseact | n.s. | [87] |
| 10 | n.s. | Primary and secondary driving task | Private | Silesian University of Technology | NA | [88] |
| 11 | n.s. | Drowsiness | Private | Mercedes-Benz Technology Center | NA | [89] |
| 12 | NTHU-DDD | Fatigue | Public | National Tsing Hua University, Computer Vision Lab | Kaggle | [90] |
| 13 | n.s. | Drowsiness and distraction | Private | HealthyRoad Biometric Systems | n.s. | [91] |
| 14 | n.s. | Driver anomalies behavior | Private | Department of Computer Science, Technical University of Ostrava | NA | [92] |
| 15 | State Farm Distracted Driver Detection | Loss of attention | Public | State Farm | Kaggle | [93] |
| 16 | n.s. | Drowsiness | Private | Institute of Industrial Science, The University of Tokyo and Nissan Motor Co. | NA | [94] |
| 17 | n.s. | Drowsiness | Private | Mercedes-Benz AG | NA | [95] |
| 18 | n.s. | Drowsiness | Private | n.s. | n.s. | [96] |
| 19 | n.s. | Head Position | Private | Széchenyi István University | NA | [97] |
| 20 | 300W-LP | Driver State | Public | Center for Biometrics and Security Research & National Laboratory of Pattern Recognition | Repository | [98] |
| 21 | AFLW2000 | Driver State | Public | Center for Biometrics and Security Research & National Laboratory of Pattern Recognition | Repository | [98] |
| 22 | ETH Face Pose Range Image Dataset | Driver State | Public for Research | BIWI, ETH Zurich | Web Page | [99] |
| 23 | MPIIGaze | Drive State | Public | Max Planck Institute for Informatics | Web Page | [100] |
| 24 | Multi-view Gaze Dataset | Drive State | Public | The University of Tokyo | Web Page | [101] |
| 25 | n.s. | Cognitive impairment | Public for Research | Florida Atlantic University | By request to the author | [102] |
| 26 | n.s. | Fatigue and Distraction | Private | Computer Science Department BITS Pilani | NA | [103] |
| 27 | ImageNet | Drowsiness | Public for Research | Princeton University and Stanford University | Web Page | [104] |
| 28 | n.s. | Driver Behavior | Private | ITMO University | NA | [105,106] |
| 29 | n.s. | Driver Behavior | Private | Sharif University of Technology | NA | [107] |
| 30 | IR-Camera Datasets | Inattention and Drowsiness | Public | Electronics and Telecommunications Research Institute | GitHub | [108] |
| 31 | n.s. | Driver Behavior | Private | ITMO University | NA | [109] |
| 32 | YawDD dataset | Drowsiness | Public for Research | DISCOVER Lab, University of Ottawa | IEEE Dataport | [110] |
| 33 | n.s. | Emotions | Private | Fotonation Romania SRL | NA | [111] |
| 34 | DriverSVT | Driver State and actions | Public | ITMO University | Zenodo | [112] |
| 35 | Real & simulated driving | Driver distinction | Public for Research | Silesian University of Technology | IEEE Dataport | [113] |
| 36 | hcilab Driving Dataset | Driver physical condition | Public | hciLab Group, University of Stuttgart | Web Page | [114] |
| 37 | n.s. | Driver physical condition | Public for Research | Direção Geral de Estatísticas da Educação e da Ciência (DGEEC) | By request to the author | [115] |
| 38 | Warwick-JLR Driver Monitoring Dataset (DMD) | Driver physical condition | Public for Research | University of Warwick | Request Form | [116] |
| 39 | Long-term ST database | Driver physical condition | Public | Laboratory of Biomedical Computer Systems and Imaging, University of Ljubljana | PhysioNet | [117] |
| 40 | Simulated Driving Database (SDB)/Alarm Test Driving Database/Real Driving Database (RDB) | Driver physical condition | Private | n.s. | NA | [118] |
| 41 | PTB-XL | Driver physical condition | Public | Physikalisch Technische Bundesanstalt (PTB) | PhysioNet | [119] |
| 42 | Drive&Act dataset | Driver Behavior | Public for Research | Fraunhofer IOSB | Web Page | [120] |
| 43 | Open-Drive&Act | Driver Behavior | Public | Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology | GitHub | [121] |
Table A4.
Datasets found in the review Part 2.
| No. | Name | Application in the Review | Type | Institution | Access | Ref. |
|---|---|---|---|---|---|---|
| 44 | Facial Expression Recognition 2013 (FER2013) | Emotions | Public | Montreal Institute for Learning Algorithms (MILA) | Kaggle | [122] |
| 45 | Keimyung University Facial Expression of Drivers (KMU-FED) dataset | Emotions | Public | Keimyung University | Web Page | [123] |
| 46 | n.s. | Eye detection | Private | Continental Automotive Romania | Access through Continental | [124] |
| 47 | n.s. | Driver Actions | Private | South Carolina State University | Request to the author | [125] |
| 48 | n.s. | Drowsiness | Private | University of Iowa Driving Safety Research Institute | NA | [126] |
| 49 | n.s. | Driving Behavior | Private | Kyungpook National University | NA | [127] |
| 50 | n.s. | Drowsiness and Distraction | Private | Research Centre for Territory, Transports and Environment, University of Porto | NA | [128] |
Table A5.
Extra List of Datasets found in the review.
| No. | Name | Data Purpose | Type | Institution | Access | Ref. |
|---|---|---|---|---|---|---|
| 51 | MIT-BIH Noise Stress Test Database | Stress by noise | Public | Massachusetts Institute of Technology (MIT) | PhysioNet | [129] |
| 52 | MIT-BIH Arrhythmia Database | Arrhythmia | Public | MIT | PhysioNet | [130] |
| 53 | European ST-T Database | ST and T changes in the ECG | Public | Clinical Physiology of the National Research Council (CNR) | PhysioNet | [131] |
| 54 | NTU RGB+D | Actions | Public | Nanyang Technological University | Web Page | [132] |
| 55 | Turms Dataset | Hand Detection | Public | University of Modena and Reggio Emilia | Web Page | [133] |
| 56 | Distracted Driver Dataset | Driver Distraction | Public | Machine Intelligence group at the American University in Cairo (MI-AUC) | Web Page | [134] |
| 57 | EEE BUET Distracted Driving (EBDD) | Driver Distraction | Public | Bangladesh University of Engineering and Technology (BUET) | Web Page | [135] |
| 58 | UnoViS | Vital Signs | Public | MedIT | Web Page | [136] |
| 59 | The original EEG data for driver fatigue detection | Driver Fatigue | Public | Beihang University | Repository | [137] |
| 60 | SEED Dataset | EEG and eye movement | Public for Research | Shanghai Jiao Tong University & Brain-Like Computing and Machine Intelligence (BCMI) | Web Page | [138] |
| 61 | EEG Alpha Waves dataset | EEG Alpha Waves | Public | GIPSA-lab | Zenodo | [139] |
| 62 | Multi-channel EEG recordings during a sustained-attention driving task | Attention on Driving Task | Public | University of Technology Sydney (UTS) | Repository | [140] |
| 63 | Driver Behavior Dataset | Driving Events | Public | Universidade Federal do Paraná (UFPR) | GitHub | [141] |
References
- Instituto Nacional de Estadística y Geografía (INEGI). Síntesis Metodológica de la Estadística de Accidentes de Tránsito Terrestre en Zonas Urbanas y Suburbanas 2016; INEGI: Mexico, 2016; ISBN 978-607-739-996-4. Available online: https://en.www.inegi.org.mx/contenidos/productos/prod_serv/contenidos/espanol/bvinegi/productos/nueva_estruc/702825087999.pdf (accessed on 6 October 2025).
- Singh, S. Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey; Report No. DOT HS 812 506; NHTSA: Washington, DC, USA, 2018. Available online: https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812506 (accessed on 6 October 2025).
- Davoli, L.; Martalò, M.; Cilfone, A.; Belli, L.; Ferrari, G.; Presta, R.; Montanari, R.; Mengoni, M.; Giraldi, L.; Amparore, E.G.; et al. On Driver Behavior Recognition for Increased Safety: A Roadmap. Safety 2020, 6, 55. [Google Scholar] [CrossRef]
- Cai, H.; Lin, Y. Modelling of operators’ emotion and task performance in a virtual driving environment. Int. J. Hum.-Comput. Stud. 2011, 69, 571–586. [Google Scholar] [CrossRef]
- DeGuzman, C.A.; Donmez, B. Training benefits driver behaviour while using automation with an attention monitoring system. Transp. Res. Part C Emerg. Technol. 2024, 165, 104752. [Google Scholar] [CrossRef]
- National Highway Traffic Safety Administration. Summary Report: Standing General Order on Crash Reporting for Level 2 Advanced Driver Assistance Systems. DOT HS 813 325; Washington, DC, USA, June 2022. Available online: https://www.nhtsa.gov/sites/nhtsa.gov/files/2022-06/ADAS-L2-SGO-Report-June-2022.pdf (accessed on 6 October 2025).
- Perrier, M.J.R.; Louw, T.L.; Carsten, O.M.J. Usability testing of three visual HMIs for assisted driving: How design impacts driver distraction and mental models. Ergonomics 2023, 66, 1142–1163. [Google Scholar] [CrossRef]
- Yager, C.; Dinakar, S.; Sanagaram, M.; Ferris, T.K. Emergency Vehicle Operator On-Board Device Distractions; Technical Report; Texas A&M University: College Station, TX, USA, 2015. [Google Scholar] [CrossRef]
- Schneider, E.M.; D’Ambrosio, L.A. Impacts of Advanced Vehicle Technologies and Risk Attitudes on Distracted Driving Behaviors. Transp. Res. Rec. J. Transp. Res. Board 2024, 2678, 622–634. [Google Scholar] [CrossRef]
- Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. J. Clin. Epidemiol. 2021, 134, 178–189. [Google Scholar] [CrossRef]
- Hassan-Montero, Y.; De-Moya-Anegón, F.; Guerrero-Bote, V.P. SCImago Graphica: A new tool for exploring and visually communicating data. Prof. Inf. 2022, 31, e310502. [Google Scholar] [CrossRef]
- van Eck, N.J.; Waltman, L. VOSviewer, Version 1.6.20, Software Tool for Constructing and Visualizing Bibliometric Networks; Centre for Science and Technology Studies (CWTS), Leiden University: Leiden, The Netherlands, 2023; Available online: https://www.vosviewer.com (accessed on 6 October 2025).
- Braun, M.; Chadowitz, R.; Alt, F. User Experience of Driver State Visualizations: A Look at Demographics and Personalities. In Human-Computer Interaction—INTERACT 2019 (Lecture Notes in Computer Science, Vol. 11749), Proceedings of the INTERACT 2019, Paphos, Cyprus, 2–6 September 2019; Springer: Cham, Switzerland, 2019; pp. 158–176. [Google Scholar] [CrossRef]
- Krüger, S.; Bosch, E.; Ihme, K.; Oehl, M. In-Vehicle Frustration Mitigation via Voice-User Interfaces—A Simulator Study. In HCI International 2021—Posters (Communications in Computer and Information Science, Volume 1421); Springer: Cham, Switzerland, 2021; pp. 241–248. [Google Scholar] [CrossRef]
- Tran, D.; Du, J.; Sheng, W.; Osipychev, D.; Sun, Y.; Bai, H. A Human-Vehicle Collaborative Driving Framework for Driver Assistance. IEEE Trans. Intell. Transp. Syst. 2019, 20, 3470–3485. [Google Scholar] [CrossRef]
- Frémont, V.; Phan, M.-T.; Thouvenin, I. Adaptive Visual Assistance System for Enhancing the Driver Awareness of Pedestrians. Int. J. Hum.-Comput. Interact. 2019, 36, 856–869. [Google Scholar] [CrossRef]
- Mulhall, M.; Wilson, K.; Yang, S.; Kuo, J.; Sletten, T.; Anderson, C.; Lenné, M.G.; Rajaratnam, S.; Magee, M.; Collins, A.; et al. European NCAP Driver State Monitoring Protocols: Prevalence of Distraction in Naturalistic Driving. Hum. Fact. 2024, 66, 2205–2217. [Google Scholar] [CrossRef] [PubMed]
- Quiles-Cucarella, E.; Cano-Bernet, J.; Santos-Fernández, L.; Roldán-Blay, C.; Roldán-Porta, C. Multi-Index Driver Drowsiness Detection Method Based on Driver’s Facial Recognition Using Haar Features and Histograms of Oriented Gradients. Sensors 2024, 24, 5683. [Google Scholar] [CrossRef] [PubMed]
- Linschmann, O.; Uguz, D.U.; Romanski, B.; Baarlink, I.; Gunaratne, P.; Leonhardt, S.; Walter, M.; Lueken, M. A Portable Multi-Modal Cushion for Continuous Monitoring of a Driver’s Vital Signs. Sensors 2023, 23, 4002. [Google Scholar] [CrossRef]
- Amidei, A.; Rapa, P.M.; Tagliavini, G.; Rabbeni, R.; Pavan, P.; Benatti, S. ANGELS—Smart Steering Wheel for Driver Safety. In Proceedings of the 2023 9th International Workshop on Advances in Sensors and Interfaces (IWASI), Monopoli (Bari), Italy, 8–9 June 2023; pp. 15–20. [Google Scholar] [CrossRef]
- Bhagat, A.; Kale, J.G.; Pachhapurkar, N.; Karle, M.; Karle, U. Development & Testing of a Camera-Based Driver Monitoring System; SAE Technical Paper 2024-26-0028; SAE International: Warrendale, PA, USA, 2024. [Google Scholar] [CrossRef]
- Kontaxi, A.; Ziakopoulos, A.; Yannis, G. Trip characteristics impact on the frequency of harsh events recorded via smartphone sensors. IATSS Res. 2021, 45, 574–583. [Google Scholar] [CrossRef]
- González-Ortega, D.; Díaz-Pernas, F.J.; Martínez-Zarzuela, M.; Antón-Rodríguez, M. A Physiological Sensor-Based Android Application Synchronized with a Driving Simulator for Driver Monitoring. Sensors 2019, 19, 399. [Google Scholar] [CrossRef]
- González-Ortega, D.; Díaz-Pernas, F.J.; Martínez-Zarzuela, M.; Antón-Rodríguez, M. Comparative Analysis of Kinect-Based and Oculus-Based Gaze Region Estimation Methods in a Driving Simulator. Sensors 2020, 21, 26. [Google Scholar] [CrossRef] [PubMed]
- Kundinger, T.; Yalavarthi, P.K.; Riener, A.; Wintersberger, P.; Schartmüller, C. Feasibility of smart wearables for driver drowsiness detection and its potential among different age groups. Int. J. Pervasive Comput. Commun. 2020, 16, 1–23. [Google Scholar] [CrossRef]
- Rezaee, Q.; Delrobaei, M.; Giveki, A.; Dayarian, N.; Haghighi, S.J. Driver Drowsiness Detection with Commercial EEG Headsets. In Proceedings of the 2022 10th RSI International Conference on Robotics and Mechatronics (ICRoM), Tehran, Iran, 22–24 November 2022; pp. 546–550. [Google Scholar] [CrossRef]
- Nilsson, E.J.; Bärgman, J.; Aust, M.L.; Matthews, G.; Svanberg, B. Let complexity bring clarity: A multidimensional assessment of cognitive load using physiological measures. Front. Neuroergonomics 2022, 3, 787295. [Google Scholar] [CrossRef]
- Li, W.; Wu, Y.; Zeng, G.; Ren, F.; Tang, M.; Xiao, H.; Liu, Y.; Guo, G. Multi-modal user experience evaluation on in-vehicle HMI systems using eye-tracking, facial expression, and finger-tracking for the smart cockpit. Int. J. Veh. Perform. 2022, 8, 429–449. [Google Scholar] [CrossRef]
- Bassani, M.; Catani, L.; Hazoor, A.; Hoxha, A.; Lioi, A.; Portera, A.; Tefa, L. Do driver monitoring technologies improve the driving behaviour of distracted drivers? A simulation study to assess the impact of an auditory driver distraction warning device on driving performance. Transp. Res. Part F Traff. Psychol. Behav. 2023, 95, 239–250. [Google Scholar] [CrossRef]
- Weiss, C.; Kirmas, A.; Lemcke, S.; Böshagen, S.; Walter, M.; Eckstein, L.; Leonhardt, S. Head Tracking in Automotive Environments for Driver Monitoring Using a Low Resolution Thermal Camera. Vehicles 2022, 4, 219–233. [Google Scholar] [CrossRef]
- Tada, M.; Nishida, M. Real-Time Safety Driving Advisory System Utilizing a Vision-Based Driving Monitoring Sensor. IEICE Trans. Inf. Syst. 2024, E107.D, 901–907. [Google Scholar] [CrossRef]
- Koniakowsky, I.M.; Forster, Y.; Naujoks, F.; Krems, J.F.; Keinath, A. How Do Automation Modes Influence the Frequency of Advanced Driver Distraction Warnings? A Simulator Study. In Proceedings of the 15th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Ingolstadt, Germany, 18–22 September 2023; pp. 1–11. [Google Scholar] [CrossRef]
- Kuo, J.; Lenné, M.G.; Mulhall, M.D.; Sletten, T.L.; Anderson, C.; Howard, M.E.; Rajaratnam, S.M.; Magee, M.; Collins, A. Continuous monitoring of visual distraction and drowsiness in shift-workers during naturalistic driving. Saf. Sci. 2019, 119, 112–116. [Google Scholar] [CrossRef]
- Assunção, A.N.; Aquino, A.L.L.; Câmara de M. Santos, R.C.; Guimarães, R.L.M.; Oliveira, R.A.R. Vehicle driver monitoring through the statistical process control. Sensors 2019, 19, 3059. [Google Scholar] [CrossRef]
- Wang, Y.; Wang, Z.; Han, K.; Tiwari, P.; Work, D.B. Gaussian Process-Based Personalized Adaptive Cruise Control. IEEE Trans. Intell. Transp. Syst. 2022, 23, 21178–21189. [Google Scholar] [CrossRef]
- Zhang, X.; Wang, X.; Bao, Y.; Zhu, X. Safety assessment of trucks based on GPS and in-vehicle monitoring data. Accid. Anal. Prev. 2022, 168, 106619. [Google Scholar] [CrossRef] [PubMed]
- Yoshihara, Y.; Tanaka, T.; Osuga, S.; Fujikake, K.; Karatas, N.; Kanamori, H. Identifying High-Risk Older Drivers by Head-Movement Monitoring Using a Commercial Driver Monitoring Camera. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 1021–1028. [Google Scholar] [CrossRef]
- Hooda, R.; Joshi, V.; Shah, M. A comprehensive review of approaches to detect fatigue using machine learning techniques. Chronic Dis. Transl. Med. 2021, 8, 26–35. [Google Scholar] [CrossRef]
- Poli, A.; Amidei, A.; Benatti, S.; Iadarola, G.; Tramarin, F.; Rovati, L.; Pavan, P.; Spinsante, S. Exploiting Blood Volume Pulse and Skin Conductance for Driver Drowsiness Detection. In IoT Technologies for HealthCare: 9th EAI International Conference, HealthyIoT 2022, Braga, Portugal, 16–18 November 2022, Proceedings, LNICST; Springer: Cham, Switzerland, 2023; Volume 456, pp. 50–61. [Google Scholar] [CrossRef]
- Ziryawulawo, A.; Kirabo, M.; Mwikirize, C.; Serugunda, J.; Mugume, E.; Miyingo, S.P. Machine learning based driver monitoring system: A case study for the Kayoola EVS. SAIEE Afr. Res. J. 2023, 114, 40–48. [Google Scholar] [CrossRef]
- Ceccacci, S.; Mengoni, M.; Generosi, A.; Giraldi, L.; Presta, R.; Carbonara, G.; Castellano, A.; Montanari, R. Designing in-car emotion-aware automation. Eur. Transp. 2021, 84, 1–15. [Google Scholar] [CrossRef]
- Chang, K.-J.; Cho, G.; Song, W.; Kim, M.-J.; Ahn, C.W.; Song, M. Personalized EV Driving Sound Design Based on the Driver’s Total Emotion Recognition. SAE Int. J. Adv. Curr. Pract. Mobility 2023, 5, 921–929. [Google Scholar] [CrossRef]
- Zero, E. Towards real-time monitoring of fear in driving sessions. IFAC-PapersOnLine 2019, 52, 299–304. [Google Scholar] [CrossRef]
- Ceccacci, S.; Mengoni, M.; Generosi, A.; Giraldi, L.; Carbonara, G.; Castellano, A.; Montanari, R. A preliminary investigation towards the application of facial expression analysis to enable an emotion-aware car interface. In Universal Access in Human-Computer Interaction—Applications and Practice: 14th International Conference, UAHCI 2020, Part II (Lecture Notes in Computer Science, Volume 12213); Antona, M., Stephanidis, C., Eds.; Springer: Cham, Switzerland, 2020; pp. 504–517. [Google Scholar] [CrossRef]
- Generosi, A.; Bruschi, V.; Cecchi, S.; Dourou, N.A.; Montanari, R.; Mengoni, M. An Innovative System for Driver Monitoring and Vehicle Sound Interaction. In Proceedings of the 2024 IEEE International Workshop on Metrology for Automotive (MetroAutomotive), Bologna, Italy, 26–28 June 2024; pp. 159–164. [Google Scholar] [CrossRef]
- Spencer, C.; Koç, İ.A.; Suga, C.; Lee, A.; Dhareshwar, A.M.; Franzén, E.; Iozzo, M.; Morrison, G.; McKeown, G. Assessing the Use of Physiological Signals and Facial Behaviour to Gauge Drivers’ Emotions as a UX Metric in Automotive User Studies. In Proceedings of the 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’20 Adjunct), Virtual Event, 21–22 September 2020; pp. 78–81. [Google Scholar] [CrossRef]
- Forster, Y.; Schoemig, N.; Kremer, C.; Wiedemann, K.; Gary, S.; Naujoks, F.; Keinath, A.; Neukum, A. Attentional warnings caused by driver monitoring systems: How often do they appear and how well are they understood? Accid. Anal. Prev. 2024, 205, 107684. [Google Scholar] [CrossRef] [PubMed]
- Kashevnik, A.; Lashkov, I.; Gurtov, A. Methodology and Mobile Application for Driver Behavior Analysis and Accident Prevention. IEEE Trans. Intell. Transp. Syst. 2020, 21, 2427–2436. [Google Scholar] [CrossRef]
- Baccour, M.H.; Driewer, F.; Schäck, T.; Kasneci, E. Comparative Analysis of Vehicle-Based and Driver-Based Features for Driver Drowsiness Monitoring by Support Vector Machines. IEEE Trans. Intell. Transp. Syst. 2022, 23, 23164–23178. [Google Scholar] [CrossRef]
- Ebrahimian, S.; Nahvi, A.; Tashakori, M.; Triki, H.K. Evaluation of driver drowsiness using respiration analysis by thermal imaging on a driving simulator. Multimed. Tools Appl. 2020, 79, 17793–17815. [Google Scholar] [CrossRef]
- Schwarz, C.; Gaspar, J.; Miller, T.; Yousefian, R. The detection of drowsiness using a driver monitoring system. Traff. Inj. Prev. 2019, 20 (Suppl. S1), S157–S161. [Google Scholar] [CrossRef]
- Schwarz, C.; Gaspar, J.; Yousefian, R. Multi-sensor driver monitoring for drowsiness prediction. Traff. Inj. Prev. 2023, 24 (Suppl. S1), S100–S104. [Google Scholar] [CrossRef]
- Rogister, F.; Mwange, M.-A.P.; Rukonić, L.; Delbeke, O.; Virlouvet, R. Fast Detection and Classification of Drivers’ Responses to Stressful Events and Cognitive Workload. In HCI International 2022 Posters: 24th International Conference on Human-Computer Interaction, HCII 2022, Virtual Event, 26 June–1 July 2022, Proceedings, Part II, Communications in Computer and Information Science; Stephanidis, C., Antona, M., Ntoa, S., Eds.; Springer: Cham, Switzerland, 2022; Volume 1581, pp. 210–217. [Google Scholar] [CrossRef]
- Nilsson, E.J.; Victor, T.; Aust, M.L.; Svanberg, B.; Lindén, P.; Gustavsson, P. On-to-off-path gaze shift cancellations lead to gaze concentration in cognitively loaded car drivers: A simulator study exploring gaze patterns in relation to a cognitive task and the traffic environment. Transp. Res. Part F Traff. Psychol. Behav. 2020, 75, 1–15. [Google Scholar] [CrossRef]
- Benusa, M.; Min, C.-H. Wearable Driver Monitoring System. In Proceedings of the 2024 IEEE 67th International Midwest Symposium on Circuits and Systems (MWSCAS), Springfield, MA, USA, 11–14 August 2024; pp. 605–608. [Google Scholar] [CrossRef]
- Riyahi, P. A Brain Wave-Verified Driver Alert System for Vehicle Collision Avoidance. SAE Int. J. Trans. Saf. 2021, 9, 105–122. [Google Scholar] [CrossRef]
- Schwarz, C.; Gaspar, J.; Carney, C.; Gunaratne, P. Silent failure detection in partial automation as a function of visual attentiveness. Traff. Inj. Prev. 2023, 24 (Suppl. S1), S88–S93. [Google Scholar] [CrossRef] [PubMed]
- Camden, M.C.; Soccolich, S.A.; Hickman, J.S.; Hanowski, R.J. Reducing risky driving: Assessing the impacts of an automatically assigned, targeted web-based instruction program. J. Saf. Res. 2019, 70, 105–115. [Google Scholar] [CrossRef]
- Ulrich, K.; Eppinger, S. Product Design and Development; McGraw-Hill Education: New York, NY, USA, 2015. [Google Scholar]
- Dieter, G.E.; Schmidt, L.C. Engineering Design, 6th ed.; McGraw-Hill Higher Education: New York, NY, USA, 2020. [Google Scholar]
- Salazar-Calderón, L.A.; Izquierdo-Reyes, J.; Tejera, J.A.d.l. Proposal of a Methodology for Mechatronic Design From Ideation to Embodiment Design: Application in a Masonry Robot Case Study Design. IEEE Access 2025, 13, 140667–140684. [Google Scholar] [CrossRef]
- Presta, R.; Simone, F.D.; Tancredi, C.; Chiesa, S. Nudging the safe zone: Design and assessment of HMI strategies based on intelligent driver state monitoring systems. In HCI in Mobility, Transport, and Automotive Systems: 5th International Conference, MobiTAS 2023, Held as Part of the 25th HCI International Conference, HCII 2023, Copenhagen, Denmark, 23–28 July 2023, Proceedings, Part I; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2023; Volume 14048, pp. 166–185. [Google Scholar] [CrossRef]
- Bidabadi, N.S.; Song, Y.; Wang, K.; Jackson, E. Evaluation of Driver Reaction to Disengagement of Advanced Driver Assistance System with Different Warning Systems While Driving Under Various Distractions. Transp. Res. Rec. J. Transp. Res. Board 2024, 2678, 1614–1628. [Google Scholar] [CrossRef]
- Lachance-Tremblay, J.; Tkiouat, Z.; Léger, P.-M.; Cameron, A.-F.; Titah, R.; Coursaris, C.K.; Sénécal, S. A gaze-based driver distraction countermeasure: Comparing effects of multimodal alerts on driver’s behavior and visual attention. Int. J. Hum.-Comput. Stud. 2024, 193, 103366. [Google Scholar] [CrossRef]
- Reinmueller, K.; Kiesel, A.; Steinhauser, M. Adverse Behavioral Adaptation to Adaptive Forward Collision Warning Systems: An Investigation of Primary and Secondary Task Performance. Accid. Anal. Prev. 2020, 146, 105718. [Google Scholar] [CrossRef] [PubMed]
- Okazaki, S.; Haramaki, T.; Nishino, H. A safe driving support method using olfactory stimuli. In Complex, Intelligent, and Software Intensive Systems. CISIS 2018. Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2018; pp. 958–967. [Google Scholar] [CrossRef]
- Saito, Y.; Itoh, M.; Inagaki, T. Bringing a Vehicle to a Controlled Stop: Effectiveness of a Dual-Control Scheme for Identifying Driver Drowsiness and Executing Safety Control under Hands-off Partial Driving Automation. IFAC-PapersOnLine 2023, 56, 8339–8344. [Google Scholar] [CrossRef]
- Chihara, T.; Sakamoto, J. Effect of time length of eye movement data analysis on the accuracy of mental workload estimation during automobile driving. In Proceedings of the 21st Congress of the International Ergonomics Association (IEA 2021), Lecture Notes in Networks and Systems; Springer: Cham, Switzerland, 2021; Volume 221, pp. 593–599. [Google Scholar] [CrossRef]
- Nair, V.V.; Rehmann, M.; de la Rosa, S.; Curio, C. Investigating Drivers’ Awareness of Pedestrians Using Virtual Reality towards Modeling the Impact of External Factors. In Proceedings of the 2024 IEEE Intelligent Vehicles Symposium (IV), Jeju Island, Republic of Korea, 2–5 June 2024; pp. 3001–3008. [Google Scholar] [CrossRef]
- Cvetković, M.M.; Soares, D.; Baptista, J.S. Assessing post-driving discomfort and its influence on gait patterns. Sensors 2021, 21, 8492. [Google Scholar] [CrossRef]
- Lashkov, I.; Kashevnik, A.; Shilov, N. Dangerous State Detection in Vehicle Cabin Based on Audiovisual Analysis with Smartphone Sensors. In Intelligent Systems and Applications—Proceedings of SAI Intelligent Systems Conference (IntelliSys 2020), Advances in Intelligent Systems and Computing; Arai, K., Kapoor, S., Bhatia, R., Eds.; Springer: Cham, Switzerland, 2021; Volume 1250, pp. 789–799. [Google Scholar] [CrossRef]
- Romano, R.; Maggi, D.; Hirose, T.; Broadhead, Z.; Carsten, O. Impact of Lane Keeping Assist System Camera Misalignment on Driver Behavior. J. Intell. Transp. Syst. 2021, 25, 157–169. [Google Scholar] [CrossRef]
- Krstačić, R.; Žužić, A.; Orehovački, T. Safety Aspects of In-Vehicle Infotainment Systems: A Systematic Literature Review from 2012 to 2023. Electronics 2024, 13, 2563. [Google Scholar] [CrossRef]
- Engström, J.; Monk, C.A.; Hanowski, R.J.; Horrey, W.J.; Lee, J.D.; McGehee, D.V.; Regan, M.; Stevens, A.; Traube, E.; Tuukkanen, M.; et al. A Conceptual Framework and Taxonomy for Understanding and Categorizing Driver Inattention. Project Report; Driver Distraction & Human-Machine Interaction Working Group, US-EU Bilateral Intelligent Transportation Systems Technical Task Force; 2013; Available online: https://iro.uiowa.edu/esploro/outputs/report/A-conceptual-framework-and-taxonomy-for/9984186943902771 (accessed on 6 October 2025).
- Regan, M.A.; Hallett, C.; Gordon, C.P. Driver distraction and driver inattention: Definition, relationship and taxonomy. Accid. Anal. Prev. 2011, 43, 1771–1781. [Google Scholar] [CrossRef]
- SAE International. Operational Definitions of Driving Performance Measures and Statistics, SAE Standard J2944_202302 (Recommended Practice). Available online: https://saemobilus.sae.org/standards/j2944_202302-operational-definitions-driving-performance-measures-statistics (accessed on 6 October 2025). [CrossRef]
- European New Car Assessment Programme (Euro NCAP). Assessment Protocol—Safety Assist: Safe Driving, Version 10.1; Implementation 2023. July 2022. Available online: https://cdn.euroncap.com/media/70315/euro-ncap-assessment-protocol-sa-safe-driving-v101.pdf (accessed on 6 October 2025).
- Marafie, Z.; Lin, K.-J.; Wang, D.; Lyu, H.; Liu, Y.; Meng, Y.; Ma, J. AutoCoach: An Intelligent Driver Behavior Feedback Agent with Personality-Based Driver Models. Electronics 2021, 10, 1361. [Google Scholar] [CrossRef]
- Jegham, I.; Khalifa, A.B.; Alouani, I.; Mahjoub, M.A. 3MDAD: A novel public dataset for multimodal multiview and multispectral driver distraction analysis. Signal Process. Image Commun. 2020, 88, 115966. [Google Scholar] [CrossRef]
- Driver Drowsiness Using Keras. Kaggle. Available online: https://www.kaggle.com/code/adinishad/driver-drowsiness-using-keras/comments (accessed on 15 December 2024).
- Arumugam, S.; Bhargavi, R. Road Rage and Aggressive Driving Behaviour Detection in Usage-Based Insurance Using Machine Learning. Int. J. Softw. Innov. 2023, 11, 1–29. [Google Scholar] [CrossRef]
- GIDAS–German In-Depth Accident Study; Verkehrsunfallforschung an der TU Dresden GmbH (VUFO): Hannover, Germany; Available online: https://www.gidas.org/start-en.html (accessed on 23 July 2025).
- Ydenius, A.; Stigson, H.; Kullgren, A.; Sunnevång, C. Accuracy of Folksam Electronic Crash Recorder (ECR) in Frontal and Side Impact Crashes. In Proceedings of the 23rd International Technical Conference on the Enhanced Safety of Vehicles (ESV), Seoul, Republic of Korea, 27–30 May 2013. [Google Scholar]
- Berri, R.; Osório, F. A 3D vision system for detecting use of mobile phones while driving. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 2018; pp. 1–8. [Google Scholar] [CrossRef]
- Bin Rozaimi, I.D.; Zaman, F.H.K.; Abdullah, S.A.C.; Abidin, H.Z.; Mazalan, L. Driver Drowsiness Detection Using Deep Learning Models Based On Different Camera Positions. In Proceedings of the 2023 11th International Conference on Information and Communication Technology (ICoICT), Melaka, Malaysia, 2023; pp. 611–616. [Google Scholar] [CrossRef]
- Data Castle. Chengdu Taxi GPS Data. Pkbigdata. Available online: https://www.pkbigdata.com/common/zhzgbCmptDataDetails.html#down (accessed on 23 July 2025).
- Dahl, J.; Campos, G.R.D.; Fredriksson, J. Intention-Aware lane keeping assist using driver gaze information. In Proceedings of the 2023 IEEE Intelligent Vehicles Symposium (IV), Anchorage, AK, USA, 4–7 June 2023. [Google Scholar] [CrossRef]
- Doniec, R.; Konior, J.; Sieciński, S.; Piet, A.; Irshad, M.T.; Piaseczna, N.; Hasan, M.A.; Li, F.; Nisar, M.A.; Grzegorzek, M. Sensor-Based classification of primary and secondary car driver activities using convolutional neural networks. Sensors 2023, 23, 5551. [Google Scholar] [CrossRef] [PubMed]
- Dreißig, M.; Baccour, M.H.; Schäck, T.; Kasneci, E. Driver Drowsiness Classification Based on Eye Blink and Head Movement Features Using the k-NN Algorithm. In Proceedings of the 2020 IEEE Symposium Series on Computational Intelligence (SSCI), Canberra, Australia, 1–4 December 2020; pp. 889–896. [Google Scholar] [CrossRef]
- Computer Vision Lab, National Tsing Hua University. Driver Drowsiness Detection Dataset. 2016. Available online: https://www.kaggle.com/datasets/ismailnasri20/driver-drowsiness-dataset-ddd (accessed on 23 July 2025).
- Ferreira, S.; Kokkinogenis, Z.; Couto, A. Using real-life alert-based data to analyse drowsiness and distraction of commercial drivers. Transp. Res. Part F Traff. Psychol. Behav. 2018, 60, 25–36. [Google Scholar] [CrossRef]
- Fusek, R.; Sojka, E.; Gaura, J.; Halman, J. Driver State Detection from In-Car Camera Images. In Advances in Visual Computing. ISVC 2022. Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2022; pp. 307–319. [Google Scholar] [CrossRef]
- State Farm. State Farm Distracted Driver Detection. Kaggle. Available online: https://www.kaggle.com/c/state-farm-distracted-driver-detection/overview (accessed on 23 July 2025).
- Gwak, J.; Hirao, A.; Shino, M. An investigation of early detection of driver drowsiness using ensemble machine learning based on hybrid sensing. Appl. Sci. 2020, 10, 2890. [Google Scholar] [CrossRef]
- Hedi Baccour, M.; Driewer, F.; Schäck, T.; Kasneci, E. Camera-based Driver Drowsiness State Classification Using Logistic Regression Models. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, Canada, 11–14 October 2020; pp. 1–8. [Google Scholar] [CrossRef]
- Herbers, E.; Miller, M.; Neurauter, L.; Walters, J.; Glaser, D. Exploratory development of algorithms for determining driver attention status. Hum. Fact. 2023, 66, 2191–2204. [Google Scholar] [CrossRef]
- Hollósi, J.; Ballagi, Á.; Kovács, G.; Fischer, S.; Nagy, V. Bus Driver Head Position Detection Using Capsule Networks under Dynamic Driving Conditions. Computers 2024, 13, 66. [Google Scholar] [CrossRef]
- Zhu, X.; Lei, Z.; Liu, X.; Shi, H.; Li, S.Z. Face Alignment Across Large Poses: A 3D Solution. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 146–165. [Google Scholar]
- ETH Zurich Computer Vision Lab. Head Pose Image Database. Available online: https://data.vision.ee.ethz.ch/cvl/vision2/datasets/headposeCVPR08 (accessed on 23 July 2025).
- Zhang, X.; Sugano, Y.; Fritz, M.; Bulling, A. MPIIGaze: Real-World Dataset for Appearance-Based Gaze Estimation. Max Planck Institute for Informatics. Available online: https://www.mpi-inf.mpg.de/departments/computer-vision-and-machine-learning/research/gaze-based-human-computer-interaction/appearance-based-gaze-estimation-in-the-wild (accessed on 23 July 2025).
- Sugano, Y.; Matsushita, Y.; Sato, Y. Multi-View Gaze Dataset. UT Vision. 2014. Available online: https://www.ut-vision.org/resources/#datasets (accessed on 23 July 2025).
- Jan, M.T.; Furht, B.; Moshfeghi, S.; Jang, J.; Ghoreishi, S.G.A.; Boateng, C.; Yang, K.; Conniff, J.; Rosselli, M.; Newman, D.; et al. Enhancing road safety: In-vehicle sensor analysis of cognitive impairment in older drivers. Multimed. Tools Appl. 2024, 84, 18711–18732. [Google Scholar] [CrossRef]
- Janveja, I.; Nambi, A.; Bannur, S.; Gupta, S.; Padmanabhan, V. InSight. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2020, 4, 1–29. [Google Scholar] [CrossRef]
- Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
- Kashevnik, A.; Teslya, N.; Ponomarev, A.; Lashkov, I.; Mayatin, A.; Parfenov, V. Driver monitoring cloud organisation based on smartphone camera and sensor data. In 17th International Conference on Information Technology–New Generations (ITNG 2020). Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2020; pp. 593–600. [Google Scholar] [CrossRef]
- Kashevnik, A.; Ponomarev, A.; Shilov, N.; Chechulin, A. Threats Detection during Human-Computer Interaction in Driver Monitoring Systems. Sensors 2022, 22, 2380. [Google Scholar] [CrossRef]
- Khosravi, E.; Hemmatyar, A.M.A.; Siavoshani, M.J.; Moshiri, B. Safe Deep Driving Behavior Detection (S3D). IEEE Access 2022, 10, 113827–113838. [Google Scholar] [CrossRef]
- Kim, D. IR-Camera Datasets. GitHub Repository. Available online: https://github.com/kdh6126/IR-Carmera-Datasets (accessed on 6 October 2025).
- Lindow, F.; Kaiser, C.; Kashevnik, A.; Stocker, A. AI-Based Driving Data Analysis for Behavior Recognition in Vehicle Cabin. In Proceedings of the 2020 27th Conference of Open Innovations Association (FRUCT), Trento, Italy, 7–9 September 2020; pp. 116–125. [Google Scholar] [CrossRef]
- Abtahi, S.; Omidyeganeh, M.; Shirmohammadi, S.; Hariri, B. YawDD: Yawning Detection Dataset. Available online: https://ieee-dataport.org/open-access/yawdd-yawning-detection-dataset (accessed on 6 October 2025). [CrossRef]
- Mălăescu, A.; Duţu, L.C.; Sultana, A.; Filip, D.; Ciuc, M. Improving in-car emotion classification by NIR database augmentation. In Proceedings of the 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019), Lille, France, 14–18 May 2019; pp. 1–5. [Google Scholar] [CrossRef]
- Othman, W.; Kashevnik, A.; Hamoud, B.; Shilov, N. DriverSVT: Smartphone-Measured Vehicle Telemetry Data for Driver State Identification. Data 2022, 7, 181. [Google Scholar] [CrossRef]
- Sabry, M.; Morales-Alvarez, W.; Olaverri-Monreal, C. Automated Vehicle Driver Monitoring Dataset from Real-World Scenarios. In Proceedings of the 2024 IEEE 27th International Conference on Intelligent Transportation Systems (ITSC), Edmonton, AB, Canada, 24–27 September 2024. [Google Scholar] [CrossRef]
- Schneegass, S.; Pfleging, B.; Broy, N.; Schmidt, A.; Heinrich, F. A data set of real world driving to assess driver workload. In Proceedings of the 5th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’13), Eindhoven, The Netherlands, 28–30 October 2013; pp. 150–157. [Google Scholar] [CrossRef]
- Nunes, C.; Beatriz-Afonso, A.; Cruz-Jesus, F.; Oliveira, T.; Castelli, M. Mathematics and Mother Tongue Academic Achievement: A Machine Learning approach. Emerg. Sci. J. 2022, 6, 137–149. [Google Scholar] [CrossRef]
- Taylor, P.; Griffiths, N.; Bhalerao, A.; Xu, Z.; Gelencser, A.; Popham, T. Investigating the feasibility of vehicle telemetry data as a means of predicting driver workload. Int. J. Mob. Hum. Comput. Interact. 2017, 9, 54–72. [Google Scholar] [CrossRef]
- Jager, F.; Taddei, A.; Moody, G.B.; Emdin, M.; Antolič, G.; Dorn, R.; Smrdel, A.; Marchesi, C.; Mark, R.G. Long-term ST database: A reference for the development and evaluation of automated ischaemia detectors and for the study of the dynamics of myocardial ischaemia. Med. Biol. Eng. Comput. 2003, 41, 172–182. [Google Scholar] [CrossRef]
- Vicente, J.; Laguna, P.; Bartra, A.; Bailón, R. Drowsiness detection using heart rate variability. Med. Biol. Eng. Comput. 2016, 54, 927–937. [Google Scholar] [CrossRef]
- Wagner, P.; Strodthoff, N.; Bousseljot, R.-D.; Kreiseler, D.; Lunze, F.I.; Samek, W.; Schaeffter, T. PTB-XL: A Large Publicly Available Electrocardiography Dataset (Version 1.0.3); PhysioNet: 2020. Available online: https://physionet.org/content/ptb-xl/1.0.3/ (accessed on 6 October 2025). [CrossRef]
- Martin, M.; Roitberg, A.; Haurilet, M.; Voit, M.; Stiefelhagen, R. Drive&Act: A multi-modal dataset for fine-grained driver behavior recognition in autonomous vehicles. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019. [Google Scholar]
- Roitberg, A.; Ma, C.; Haurilet, M.; Stiefelhagen, R. Open Set Driver Activity Recognition. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 1048–1053. [Google Scholar] [CrossRef]
- Goodfellow, I.J.; Cukierski, W.; Bengio, Y. FER-2013: Facial Expression Recognition 2013 Dataset. In Challenges in Representation Learning: A Report on Three Machine Learning Contests. Available online: https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/overview (accessed on 6 October 2025).
- Jeong, M.; Ko, B.C. KMU-FED: Keimyung University Facial Expression of Drivers Dataset; Keimyung University CVPR Laboratory. Available online: https://cvpr.kmu.ac.kr/KMU-FED.htm (accessed on 6 October 2025).
- Valcan, S.; Gaianu, M. Ground Truth Data Generator for eye location on infrared driver recordings. J. Imaging 2021, 7, 162. [Google Scholar] [CrossRef]
- Chengula, T.J.; Mwakalonge, J.L.; Comert, G.; Siuhi, S. Improving Road Safety with Ensemble Learning: Detecting Driver Anomalies Using Vehicle In-built Cameras. Mach. Learn. Appl. 2023, 14, 100510. [Google Scholar] [CrossRef]
- Schwarz, C.; Gaspar, J.; Yousefian, R. Sequence analysis of monitored drowsy driving. Transp. Res. Rec. 2023. [Google Scholar] [CrossRef]
- Seong, J.; Lee, C.; Han, D.S. Neural Architecture Search for Real-Time Driver Behavior Recognition. In Proceedings of the 2022 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Jeju Island, Republic of Korea, 21–24 February 2022; pp. 104–108. [Google Scholar] [CrossRef]
- Soares, S.; Kokkinogenis, Z.; Ferreira, S.; Couto, A. Profiles of professional drivers based on drowsiness and distraction alerts. In Human Interaction, Emerging Technologies and Future Applications II. IHIET 2020. Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2020; pp. 272–278. [Google Scholar] [CrossRef]
- Moody, G.B.; Muldrow, W.E.; Mark, R.G. MIT-BIH Noise Stress Test Database (NSTDB). PhysioNet. 1992. Available online: https://physionet.org/content/nstdb/1.0.0/ (accessed on 6 October 2025). [CrossRef]
- Moody, G.B.; Mark, R.G. MIT-BIH Arrhythmia Database, version 1.0.0; PhysioNet, Dataset. Published 24 February 2005. Available online: https://physionet.org/content/mitdb/1.0.0/ (accessed on 6 October 2025). [CrossRef]
- Taddei, A.; Distante, G.; Emdin, M.; Pisani, P.; Moody, G.B.; Zeelenberg, C.; Marchesi, C. The European ST-T Database: Standard for evaluating systems for the analysis of ST-T changes in ambulatory electrocardiography. Eur. Heart J. 1992, 13, 1164–1172. [Google Scholar] [CrossRef] [PubMed]
- Shahroudy, A.; Liu, J.; Ng, T.-T.; Wang, G. NTU RGB+D: A Large Scale Dataset for 3D Human Activity Analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1010–1019. [Google Scholar]
- Borghi, G.; Frigieri, E.; Vezzani, R.; Cucchiara, R. Hands on the wheel: A Dataset for Driver Hand Detection and Tracking. In Proceedings of the 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Xi’an, China, 15–19 May 2018; pp. 564–570. [Google Scholar] [CrossRef]
- Eraqi, H.M.; Abouelnaga, Y.; Saad, M.H.; Moustafa, M.N. Driver Distraction Identification with an Ensemble of Convolutional Neural Networks. J. Adv. Transp. 2019, 2019, 4125865. [Google Scholar] [CrossRef]
- Billah, T.; Rahman, S.M.M.; Ahmad, M.O.; Swamy, M.N.S. Recognizing Distractions for Assistive Driving by Tracking Body Parts. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 1048–1062. [Google Scholar] [CrossRef]
- Wartzek, T.; Czaplik, M.; Antink, C.H.; Eilebrecht, B.; Walocha, R.; Leonhardt, S. UnoViS: The MedIT public unobtrusive vital signs database. Health Inf. Sci. Syst. 2015, 3, 2. [Google Scholar] [CrossRef] [PubMed]
- Min, J.; Wang, P.; Hu, J. The Original EEG Data for Driver Fatigue Detection. Dataset, Figshare. 2017. Available online: https://figshare.com/articles/dataset/The_original_EEG_data_for_driver_fatigue_detection/5202739 (accessed on 6 October 2025). [CrossRef]
- Li, Z.; Tao, L.-Y.; Ma, R.-X.; Zheng, W.-L.; Lu, B.-L. Investigating the effects of sleep conditions on emotion responses with EEG signals and eye movements. IEEE Trans. Affect. Comput. 2025. [Google Scholar] [CrossRef]
- Cattan, G.; Rodrigues, P.L.C.; Congedo, M. EEG Alpha Waves Dataset; GIPSA-lab: Grenoble, France, 17 December 2018. [CrossRef]
- Cao, Z.; Chuang, C.-H.; King, J.-K.; Lin, C.-T. Multi-Channel EEG Recordings During a Sustained-Attention Driving Task. Dataset, Figshare. 2019. Available online: https://figshare.com/articles/dataset/Multi-channel_EEG_recordings_during_a_sustained-attention_driving_task/6427334/5 (accessed on 6 October 2025). [CrossRef]
- Ferreira Júnior, J.; Carvalho, E.; Ferreira, B.V.; de Souza, C.; Suhara, Y.; Pentland, A.; Pessin, G. Driver behavior profiling: An investigation with different smartphone sensors and machine learning. PLoS ONE 2017, 12, e0174959. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).