Systematic Review

Safety Aspects of In-Vehicle Infotainment Systems: A Systematic Literature Review from 2012 to 2023

by Rafael Krstačić, Alesandro Žužić and Tihomir Orehovački *
Faculty of Informatics, Juraj Dobrila University of Pula, Zagrebačka 30, 52100 Pula, Croatia
* Author to whom correspondence should be addressed.
Electronics 2024, 13(13), 2563; https://doi.org/10.3390/electronics13132563
Submission received: 20 April 2024 / Revised: 25 June 2024 / Accepted: 28 June 2024 / Published: 29 June 2024
(This article belongs to the Special Issue Advanced Research in Technology and Information Systems)

Abstract:
This systematic literature review investigates the safety aspects of in-vehicle infotainment systems (IVISs) from 2012 to 2023, analyzing 96 studies. IVISs have significantly evolved, incorporating technologies such as navigation systems, parking assistance, and video games. However, these innovations introduce safety concerns like driver distraction and cognitive overload. This review identifies six primary safety issues: driving distraction, situational awareness, cognitive load, driving performance, interaction success, and emotional state. Head-down displays and touchscreens often have negative safety implications, while speech-based interfaces and Bluetooth-integrated systems are generally considered safer. Suggested improvements include enhancing interface design for touchscreens and exploring gesture-based alternatives. Despite these developments, significant gaps remain in real-world evaluations and studies in diverse driving conditions, highlighting the need for standardized manufacturing norms. Addressing these issues is essential for creating future IVISs that are both reliable and safe. This review serves as a foundation for future research, safety regulations, and design principles aimed at improving IVIS safety. Overcoming these challenges will require a multi-faceted approach that considers user-friendly design, adaptive technologies, and predictive analytics. The goal is to balance technological advancements with road safety, ensuring that IVISs contribute to a safer driving experience without compromising convenience and functionality.

1. Introduction

Throughout history, people have looked for ways to make transport faster and easier to use, from riding horses and using wheels on carts to semi-automated electric vehicles with countless features to assist the driver in driving-related activities. In recent years, an estimated 1.35 million road deaths and 20 to 50 million serious crash-related injuries have occurred worldwide each year [1], making road accidents the 10th most common cause of death [2]. These numbers indicate that driving is not a trivial or safe activity [3,4]. Driver accidents can occur for a variety of reasons, including speeding, driving under the influence of alcohol and other psychoactive substances, and distracted driving [5,6,7]. Accidents can be caused either by the driver or by the environment (e.g., other drivers, natural causes, etc.) [8,9]. To help reduce the number of accidents caused by the driver, car manufacturers started integrating in-vehicle infotainment systems (IVISs) [10].
The use of IVISs has increased dramatically, and today they provide drivers with a variety of features such as direct connection to a smartphone [11]. While these systems offer new safety features to drivers, they also come with many safety risks such as driving distractions [12,13,14], inattention [15,16], and cognitive overload [12], which affect driving performance and have dangerous consequences for driver safety [4]. Driving distractions occur when the driver’s vision shifts from the road, often caused by looking at the central console to change the fan speed, adjust the volume, or change the radio station [9,13]. Not only do drivers divert their gaze [17] when trying to reach touchscreen buttons, but they also move their hands away from the steering wheel [18,19]. Cognitive overload occurs when interaction with an IVIS fills a person’s working memory with unnecessary information, which can lead to the driver losing track of important aspects of driving such as the traffic situation, speed limit, or road conditions [12].
Some current modalities of IVISs are analog interfaces consisting of buttons and switches [20], head-down displays (HDDs) physically located in the lower part of the driver’s field of view [18], voice user interfaces (VUIs) allowing drivers to interact with a system through voice [21], and touchscreens [22] and dashboards [19] offering interactions such as typing, browsing music, watching videos, connecting to Wi-Fi, connecting via Bluetooth, and navigating using in-vehicle navigation systems. With the advent of more sophisticated technologies such as head-up displays (HUDs) [23,24,25], augmented reality (AR) [26,27], hand gestures [28,29,30], on-wheel finger gestures [13,31], voice user interfaces [32,33,34], attentive user interfaces [35], and predictive in-vehicle touchscreens with mid-air selection [36,37], many new types of IVISs have been developed and prototyped to address the above-mentioned safety risks. Studies on these types of IVISs are growing in number, with the intention of improving their user experience, usability, and, most importantly, driver safety. To overcome the lack of engagement [38,39,40] and reduced mental activity [35], which often result in boredom, especially among young drivers [41], gamification elements are implemented in IVISs.
The objective of this paper is to report findings of a systematic review of recent literature dealing with or related to the safety aspects of in-vehicle infotainment systems. To the best of our knowledge, no literature review in that respect has been conducted so far. Although the importance of the IVIS has been emphasized in many reports and articles, academic research on this topic has not been carried out sufficiently [42].
The extant body of knowledge offers two systematic literature reviews that touch upon the design of infotainment system interfaces. The aim of the literature review performed by Agudelo et al. [43] was to determine the positive and negative aspects of the design of the interfaces of automotive infotainment systems and how researchers have tried to mitigate the negatives. Although their review had some discussion on the safety elements, it was not their primary focus. The results of their study revealed that none of the current studies employed value-sensitive design (VSD) to improve infotainment system interfaces, which is surprising given how popular this method is [43]. Similarly, a systematic literature review carried out in 2021 by the same group of authors [44] found many gaps in the design of infotainment system interfaces. The result is unsurprising, given the lack of a design standard for infotainment systems, making it impossible to set suitable design criteria for these interfaces. As a follow-up, Agudelo et al. [44] suggested that the shortcomings in the interface design of infotainment systems should be remedied in the future by employing new methodologies, evaluating their effectiveness, and developing a design guide for these interfaces. While Rhiu et al. [42] reviewed studies on general smart car technologies and human–vehicle interaction (HVI) that are exploring issues in smart vehicles for elderly drivers, Park and Park [25] employed a literature review to investigate the developer, researcher, and user perspectives on the functional requirements of automotive HUDs.
Although the aforementioned literature reviews touched upon some of the aspects relevant to our study, such as shortcomings in the current research and state-of-the-art in the context of IVIS, they were either mainly related to autonomous vehicles or their primary focus was not to examine the safety aspects of IVISs. Therefore, this paper presents novelty in comparison to existing systematic literature review studies on IVISs.
The contributions of our systematic literature review on the safety aspects of IVISs are multifaceted and provide significant advancements in understanding the complex interplay between technology and driver safety:
  • Comprehensive Analysis of Safety Concerns: The review meticulously analyzes a variety of safety concerns associated with IVISs, such as driving distraction, cognitive load, and situational awareness. By synthesizing data from numerous studies, it highlights how different IVIS technologies affect driving performance and driver emotional state, offering a broad perspective on the multifarious impacts of these systems.
  • Evaluation of Technological Advancements: The paper evaluates both the benefits and shortcomings of current IVIS technologies, including touchscreens and head-down displays which have been found to be problematic in terms of safety. Conversely, it praises speech-based interfaces and Bluetooth-integrated systems for their potential to enhance driving safety. This balanced appraisal provides a clear direction for future technological enhancements.
  • Proposals for Future System Design: The review contributes to the field by proposing specific improvements to IVIS designs. For instance, it suggests enhancing touchscreens with better interface designs and explores the potential of gesture-based interfaces as safer alternatives. Such recommendations are crucial for the development of next-generation IVISs that prioritize user safety.
  • Highlighting the Need for Real-World Testing: One of the key contributions of this review is the emphasis on the necessity for real-world evaluations of IVIS. The identified lack of real-world testing underscores the importance of conducting further studies that simulate actual driving conditions to better understand the practical implications of IVISs on driver safety.
  • Foundation for Future Safety Regulations: The findings from this review serve as a foundational base for establishing future safety design principles and regulatory standards for IVISs. By identifying the current gaps and needs, the review sets the stage for more informed policymaking and standard-setting in the field of automotive safety technologies.
These contributions are critical not only for the academic community and industry stakeholders but also for policymakers and vehicle manufacturers, as they collectively strive to balance the dual objectives of technological innovation and road safety.
The remainder of this paper is structured as follows. Section 2 outlines the methodology of the systematic literature review, explaining the search process and the criteria for including studies, which sets a clear framework for the analysis. In Section 3, findings from the literature are presented, discussing the impact of IVISs on driver safety and performance. Section 4 provides a discussion of these findings together with a comparative analysis focusing on key aspects such as efficiency, response times, robustness of embedded systems, signal integrity, and ethical considerations. Implications for various stakeholders such as researchers, designers, and policymakers are explored in Section 5, suggesting ways to leverage this information for enhancing road safety and technology design. Section 6 acknowledges the limitations of the study, highlighting areas such as the scope of the literature review and the study designs included. Future research directions are proposed in Section 7, pointing towards emerging technologies and methodologies that could address current gaps. Section 8 draws conclusions, summarizing the major insights of the review and their significance for the development of safer IVISs.

2. Methodology

Our study has been undertaken as a systematic literature review (SLR), which includes a thorough, transparent, and replicable method for literature search and analysis. During the review, we followed the guidelines proposed by Keele [45]. The research questions being raised require substantial analysis of the literature on IVIS safety, thus making the choice of the approach appropriate.

2.1. Research Questions

Based on the objectives described in the introduction, the following research questions, which represent the foundation for the SLR, have been derived:
  • RQ1. What makes in-vehicle infotainment systems safe?
  • RQ2. How safe are current in-vehicle infotainment systems?
  • RQ3. How have safety aspects of in-vehicle infotainment systems advanced throughout history?
  • RQ4. How can the safety of in-vehicle infotainment systems be enhanced with emerging technology?
  • RQ5. What are the shortcomings in the research conducted so far regarding the safety aspects of in-vehicle infotainment systems?

2.2. Search Process

The search process for the review was directed towards finding papers published in journals and conference proceedings through extensively established literature search engines and databases, including Google Scholar, IEEE Xplore, ACM Digital Library, Elsevier ScienceDirect, Emerald Insight, Hindawi, Inderscience, MDPI, SAGE Journals, SpringerLink, and Taylor & Francis Online.
The search strings we used to explore current relevant literature in the aforementioned databases are composed of keywords that fit the scope of our study:
  • (“HCI” OR “human–computer interaction”) AND (“in-car entertainment” OR “in-vehicle infotainment” OR “in-vehicle information systems”) AND (“safety” OR “safe”)
  • HCI in-car entertainment safety
The search of academic databases was conducted in October 2023.
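Assuming a database that accepts full Boolean syntax, the first search string above can be assembled programmatically from its keyword groups. The following short sketch is our own illustration, not part of the review protocol:

```python
# Illustrative only: build the review's Boolean search string by OR-joining
# the synonyms within each keyword group and AND-joining the groups.
groups = [
    ['"HCI"', '"human–computer interaction"'],
    ['"in-car entertainment"', '"in-vehicle infotainment"',
     '"in-vehicle information systems"'],
    ['"safety"', '"safe"'],
]

query = " AND ".join("(" + " OR ".join(group) + ")" for group in groups)
print(query)
```

Note that some search engines (e.g., Google Scholar) accept only simplified queries, which is why the plain keyword string "HCI in-car entertainment safety" was used as a second variant.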

2.3. Inclusion and Exclusion Criteria

The inclusion and exclusion criteria address the following aspects: topic, recency, language, type of study, and transparency. Accordingly, we defined the following inclusion criteria:
  • Papers must be directly related to at least one of the research questions defined above (RQ1–RQ5);
  • Literature should have been published between January 2012 and September 2023;
  • Articles should be written in English;
  • Papers should be published in journals or conference proceedings;
  • The research methodology used in the literature must be stated clearly and in detail (for instance, measuring instrument employed, sample size, etc.).
Articles that failed to meet one of the above-stated inclusion criteria were excluded from the further procedure.

2.4. Search Results

A comprehensive search of various databases resulted in the identification of 1830 potential studies. After removing 88 duplicates, the remaining 1742 studies were subjected to the screening phase. During this step, 1362 studies were excluded based on their title, as they were not relevant to the research question. Out of the remaining 380 studies, 211 of them could not be retrieved, leaving a total of 169 studies for eligibility assessment. Of these, 72 studies were deemed ineligible as they did not address the research questions, and one study was excluded due to language limitations, leaving a final sample of 96 studies for inclusion in the review. Table 1 displays the composition of the initial and final collections of studies from various academic databases, while Figure 1 presents a standardized report of the search and selection procedure in the form of a PRISMA flow diagram [46].
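The screening arithmetic reported above can be verified step by step; the following few lines are a sanity check we add for illustration, with all counts taken directly from the text:

```python
# Reproduce the PRISMA flow counts step by step.
identified = 1830                     # records found across all databases
screened = identified - 88            # after removing duplicates
sought = screened - 1362              # after title-based exclusion
assessed = sought - 211               # reports that could be retrieved
included = assessed - 72 - 1          # minus ineligible and non-English studies
print(screened, sought, assessed, included)  # 1742 380 169 96
```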
The final set of literature selected for review was predominantly (60.42%) composed of articles from conference proceedings, while the remaining 39.58% were papers published in peer-reviewed journals. As evident from Figure 2, the largest number of publications relevant to our research was published in the years 2020 (15.63%), 2019 (13.54%), and 2017 (12.5%), while the fewest (3.13% each) were published in 2012, 2022, and 2023.
Of the 38 papers published in journals, the highest number (13.16%) was published in Applied Ergonomics, followed by the Journal on Multimodal User Interfaces with 7.90% of the papers. Human Factors: The Journal of the Human Factors and Ergonomics Society, IEEE Vehicular Technology Magazine, International Journal of Human Factors and Ergonomics, International Journal of Industrial Ergonomics, Multimedia Tools and Applications, and Sensors each accounted for 5.26% of the papers published. Each of the remaining 19 journals published 2.63% of the papers. The distribution of paper representation across journals is shown in Table 2.
Of the total 58 papers included in conference proceedings, the majority (22.41%) were presented at the International Conference on Automotive User Interfaces and Interactive Vehicular Applications. This was followed by the International Conference on Human–Computer Interaction, where 17.24% of the relevant papers for our research were presented. The International Conference on Applied Human Factors and Ergonomics featured 5.17% of the papers, while the CHI Conference on Human Factors in Computing Systems, the IEEE Conference on Intelligent Transportation Systems, and the IEEE International Conference on Consumer Electronics each accounted for 3.45% of the papers. Proceedings from each of the remaining 26 conferences contained 1.72% of the papers selected for analysis in this systematic literature review. The distribution of paper representation across conference proceedings is shown in Table 3.

3. Findings

In this section, we will discuss the theoretical knowledge and provide answers to our research questions.

3.1. Theoretical Knowledge

An in-vehicle infotainment system (IVIS) can be defined as any form of interaction that transfers information to the driver in a safe and accessible manner [18]. There are different information types conveyed to the driver, such as vehicle state information, safety information, navigation information, infotainment information, and outside environment information [91]. The combined provision of information with entertainment functionalities is the reason it is called an infotainment system [10]. One of the most important IVIS components is a dashboard, where crucial vehicle state information is shown to the driver, usually located behind the steering wheel. The second most important IVIS component is a central console, which is conventionally located in the middle front of the vehicle and usually contains navigation information, infotainment information, and outside environment information, along with other secondary controls for adjusting the volume of a radio or controlling the air conditioner to adjust the temperature, among others [17].
Many current studies are focused on improving certain aspects of driver safety when developing IVISs. Engagement with IVISs has been defined as the quantity and quality of task-oriented mental resources [40]. Although inattention represents one of the synonyms for distraction, these two terms differ in the context of this paper. The key difference is that distraction is the “diversion of the mind, attention, etc., from a particular object or course; the fact of having one’s attention or concentration disturbed by something,” while inattention represents the “concentration of the mind upon an object; maximal integration of the higher mental processes” [57]. Inattention is when the driver is preoccupied with internalized thought, such as writing a text message or choosing a song, whereas distraction is the diversion of attention away from driving activity. Workload is another related term frequently used when the IVIS is considered. Hart and Staveland [105] describe workload as “the perceived relationship between the amount of mental processing capability or resources and the amount required by the task.” In simpler terms, it is the amount of mental effort expected from a driver [77]. Another safety risk that often affects drivers is cognitive load or cognitive overload. It differs from cognitive distraction, which is related to spontaneously occurring processes like daydreaming or becoming lost in thought; such distraction can increase cognitive overload, defined as any workload imposed on a driver’s cognitive processes [12]. There is also cognitive underload, the repetition of monotonous tasks, such as driving, which makes it difficult for the driver to concentrate [94]. Similar to cognitive load, the visual load is defined as performing a visually demanding secondary task concurrently with driving, resulting in the time-sharing of visual resources between the two tasks [65,79].
Driver distraction is defined as a diversion of attention away from activities critical for safe driving toward a competing secondary task [12]. Working memory (WM) is a limited-capacity cognitive system that can momentarily keep information and is consequently responsible for the temporary storage, processing, and manipulation of information. Another concept regarding safety is an interruptible moment, which represents a fraction of the time during which the driver can receive a piece of information without having to trade off safety [85]. Tactile feedback relies on electro-tactile stimulation, which can elicit different sensations and convey information through electrical currents, generating an electrical field inside the skin to stimulate the nerves [98]. Similarly, haptic feedback is the use of perceived touch to communicate with users, like vibrations, without touching anything physical [16,66]. On the other hand, gestural interaction is when the user interacts with the computing device through a set of well-defined gestures, which can be any form of bodily motion or state but are usually just hand gestures [28,54]. Speech interaction or voice user interface (VUI) allows drivers to interact with a system through voice or speech commands [25,30,33]. Another interface is a head-up display (HUD), an electronic display of information shown in the driver’s upper field of view so that a driver can see it without looking away from the road [18,27]. Additionally, augmented reality (AR) HUD gives the exterior view of the traffic conditions in front of the vehicle with virtual information (augmentations) for the driver [16,19]. The interplay of these interfaces and interactions results in multimodal interfaces, which process two or more combined user input modes in a coordinated manner [22,78].
The last concept affecting safety is gamification, the application of game elements and digital game design techniques to non-game problems, such as driving [38,39,40,58]. Table 4 provides a detailed summary of theoretical knowledge on IVISs.

3.2. Answers to Research Questions

3.2.1. What Makes In-Vehicle Infotainment Systems Safe?

Six factors affect the safety of the in-vehicle infotainment system (IVIS): driving distraction, situational awareness, cognitive load, driving performance, interaction success, and emotional state.
Driver distraction and inattention are top threats to road safety [4] since a distracted driver has a low response to traffic events [3]. Current studies [4,12,23,41] have confirmed that distraction is one of the main causes of traffic accident occurrence. Similarly, the transfer of the driver’s sight and the lack of attention usually result in a traffic accident [5,6,7,90,93]. Distracted driving is a serious and growing threat to road safety, and the increased use of IVISs is a critical concern because it induces visual and cognitive distraction, which has challenged automotive and electronics manufacturers to provide an appropriate solution in that respect [12,18]. The driver’s reaction time is of immense importance for driver safety (i.e., a faster reaction time promotes safer driving) [18,60]. However, an extensive portion of devices and applications located on the dashboard is not compatible with the driving process since they require the driver’s immediate attention and response, thus endangering safety. Imposed cognitive load is of major influence as well. IVIS-induced bad driving posture and driver fatigue, alongside high mental workload, driver distraction, and slow reaction times, all negatively contribute to driver safety [102]. Some distractions affecting the driver’s concentration are inevitable (e.g., mirror adjustments, turning the lights on, etc.), while others are dispensable (e.g., turning on/off the music, making phone calls, etc.) [8]. The results of a study on the emotional states of drivers conducted by McStay and Urquhart [64] indicate that fatigue, drowsiness, intoxication, and stress harm the safety of driving. Fatigue, drowsiness, and distraction have been found by Oh et al. [7] to influence driving safety negatively as well. Zuki and Sulaiman [17] revealed that fatigue, inattention, visual overload, lack of tactile feedback, auditory attention, visual attention, and slow driver response have negative effects on safe driving. According to Kim et al. [102], drivers’ attention, cognition, drowsiness, fatigue, emotion, and situational awareness are all determinants of safety that can be induced by IVISs. There are a multitude of other studies [30,84,86,93,103] that further emphasize the importance of undistracted driving.
The increase in the driver’s workload enhances the degree of his/her distraction [84]. Thus, keeping an appropriate extent of the driver’s workload is of critical importance [87]. Pauzie and Orfila [61] uncovered that decreased workload and increased situation awareness improve driver safety. Jiang et al. [60] and Riener [28] argue that reduced visual workload can enhance the safety of driving. Zeng et al. [84] discovered that display clutter has a great effect on attention allocation in driving. The findings of a study carried out by the same authors revealed the importance of IVIS interface layout and manipulation (e.g., searching) efficiency while driving. They also uncovered that the conventional location of a display (center console) induces lower mental stress, not negatively impacting safety. Li et al. [87] stated that minimizing the distracting effect of non-driving related tasks (NDRT) is significant to today’s road safety. The findings of the study conducted by the same authors suggest that longer reaction times have negative effects on driving safety. Results of three additional studies [2,25,42] support the claim that increased workload and distraction have a negative influence on driving safety. Visual distraction [12,51,72], manual distraction [12,13], cognitive situational awareness distraction [12], distraction in general [14,15,16,17], inattention [15,16], visual clutter [15,16,19], and distracting interfaces [19] are all found to harm road safety. Four additional studies [34,69,90,100] confirmed the negative influence of distraction on safety in the context of IVIS.
As drivers do not need to know every piece of vehicle-related information during the entire ride, because it increases cognitive workload [89], they are advised to avoid overload with information that is not immediately relevant [25,82]. When compared to the use of the switches placed on the steering wheel, interaction with the switches located on the center console attracts more glances and is more cognitively demanding, which negatively impacts driver safety [20]. Cohen-Lazry and Borowsky [59] discovered that reduced gaze at the screen shortens task duration and enables drivers to detect dangers faster.
Drivers often suffer from visual overload while driving. This is because they are using vision not only to focus on driving but also for multitasking, thus dividing their gaze attention [17]. Since kinetic scrolling leads to an increase in visual load and workload, it is recommended to use a swipe instead [25,59,92]. Increased workload reduces situational awareness while using speech interfaces [82] and in-vehicle navigation [81]. Speech interfaces can degrade driving safety, particularly when a driver is cognitively overloaded [76,89]. This is because various obstacles, such as natural language processing (NLP) errors [30,96], often occur during the interaction. Hence, clear communication is essential since NLP errors commonly result in repeated task-switching [75], thus increasing the driver’s anxiety, sadness, and frustration, and causing road rage and aggressive driving [6]. Using any IVIS for visual searches [50], regardless of its position [25,56], enhances visual clutter [25], and causes increased off-road glances [50,59] and glance duration [62], thus making driving less safe. Given that a decrease in visual clutter reduces the need for repeated task-switching [75], making touch buttons bigger is of great importance because it diminishes visual demands on the driver [62], thus improving safety [48].
Many studies [18,19,27,32,49,70,87,89,91,95] indicate that keeping eyes on the road and hands on the steering wheel most contribute to safe driving. The aforementioned studies also suggested that IVISs should reduce the number of glances off-road and proposed various metrics (e.g., task completion times and rates) for examining and improving driver performance. Drivers should not need to look at visual displays and controls for longer than 20 s in total during a single glance, and IVISs should not require drivers to do so [32]. The time spent on continuously executing visual secondary tasks should not exceed 15 s because otherwise, the risk of a car crash increases [68]. Furthermore, a lack of engagement while driving can result in a feeling of boredom in young drivers. This feeling may trigger the seeking of sensations or distractions, which in turn can lead to car accidents [41].
The mundane nature of low engagement [40] and under-challenged driving [58] can increase driver fatigue [49], and result in drowsiness [55], which negatively affects the driver’s reaction time [31], increases lapses in attention [83], and impairs judgment and situational awareness [25]. The use of haptics for in-vehicle information displays can improve a driver’s situational awareness and emergency response times [31].
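The two glance-time limits cited above ([32,68]) can be expressed as a simple guideline check. The function below is our own hedged sketch; its name and interface are assumptions for illustration, not taken from the cited studies:

```python
GLANCE_TOTAL_LIMIT_S = 20.0      # total glance time per task, per [32]
CONTINUOUS_TASK_LIMIT_S = 15.0   # continuous visual secondary task, per [68]

def task_within_guidelines(off_road_glances_s, continuous_task_s):
    """Check logged off-road glance durations (in seconds) for one IVIS
    task against both thresholds."""
    return (sum(off_road_glances_s) <= GLANCE_TOTAL_LIMIT_S
            and continuous_task_s <= CONTINUOUS_TASK_LIMIT_S)

print(task_within_guidelines([1.8, 2.1, 1.5], continuous_task_s=9.0))   # True
print(task_within_guidelines([6.0, 8.0, 7.5], continuous_task_s=16.0))  # False
```

A real IVIS evaluation would obtain the glance durations from eye-tracking logs rather than hand-entered lists.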
To enhance the safety of IVISs, a multi-faceted approach is required that focuses on user-friendly design, adaptive technologies, and predictive analytics.
The ergonomics of IVISs are crucial in ensuring driver safety. A user-friendly design should feature intuitive interface layouts that are easy to navigate, thereby minimizing the time drivers spend looking away from the road. Controls and displays should be positioned within easy reach and sight of the driver to reduce physical strain and the need for excessive eye movement. Moreover, using contrasting colors and large fonts can help drivers quickly identify important information without significant focus, which keeps their attention primarily on the driving task [84].
Incorporating adaptive technologies into IVIS can significantly enhance safety. For example, systems that adjust their behavior based on the driving context can reduce cognitive demands. During high-demand driving situations, such as navigating busy intersections or adverse weather conditions, the IVIS could limit the availability of certain functions that are non-essential to driving. This context-aware adaptation ensures that the driver’s attention remains on the road when it is most needed [61].
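The context-aware adaptation described above can be sketched as a simple feature gate. The contexts, function names, and rules below are illustrative assumptions of ours, not taken from [61]:

```python
# Gate non-essential IVIS functions when driving demand is high.
HIGH_DEMAND_CONTEXTS = {"busy_intersection", "adverse_weather", "heavy_traffic"}
NON_ESSENTIAL_FUNCTIONS = {"video_playback", "text_entry", "settings_menu"}

def is_function_available(function, driving_context):
    """Disable non-essential functions in high-demand driving contexts."""
    if driving_context in HIGH_DEMAND_CONTEXTS and function in NON_ESSENTIAL_FUNCTIONS:
        return False
    return True

print(is_function_available("text_entry", "busy_intersection"))  # False
print(is_function_available("navigation", "busy_intersection"))  # True
```

In a production system, the driving context would come from vehicle sensors and map data rather than a hand-set label, but the gating logic would follow the same shape.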
Leveraging data analytics can improve IVIS safety by anticipating the driver’s needs and minimizing potential distractions before they occur. Predictive analytics can use historical data on the driver’s behavior and preferences to streamline interactions with the IVIS. For instance, if the system recognizes that a driver typically listens to a particular type of music during certain times of the day, it could automatically play that music upon detecting similar conditions, thus reducing the need for manual input [87].
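The predictive idea above can be illustrated with a minimal frequency-based recommender that picks the genre a driver most often chose at the same hour of day. The data, function name, and hour-bucket scheme are hypothetical examples of ours, not taken from [87]:

```python
from collections import Counter

# Hypothetical listening history: (hour_of_day, genre) pairs from past trips.
history = [
    (8, "news"), (8, "news"), (8, "jazz"),
    (18, "rock"), (18, "rock"), (18, "pop"),
]

def suggest_genre(hour, history):
    """Return the genre most frequently chosen at this hour, or None."""
    counts = Counter(genre for h, genre in history if h == hour)
    return counts.most_common(1)[0][0] if counts else None

print(suggest_genre(8, history))   # news
print(suggest_genre(18, history))  # rock
```

A deployed system would of course use richer features (day of week, route, passenger presence) and guard against wrong guesses, but even this simple scheme removes one manual interaction from the driving task.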
Additionally, integrating voice-controlled technology and natural language understanding enhances IVIS safety by allowing drivers to perform necessary operations without removing their hands from the wheel or their eyes from the road. Advanced speech recognition should be capable of understanding commands accurately in various accents and noise conditions to ensure reliable operation [59]. Table 5 summarizes the key factors affecting the safety of IVISs.

3.2.2. How Safe Are Current In-Vehicle Infotainment Systems?

One of the commonly used IVISs is a dashboard, a head-down display (HDD) physically located in the lower part of the driver’s field of view (FOV) [18]. Dashboard displays can be cluttered with visual messages such as texts, numbers, pictograms, and counters, which can increase the driver’s cognitive load and decrease driving safety [19]. When drivers want to obtain information from a dashboard, they are forced to avert their eyes from the road for a few seconds. The longer drivers keep their gaze off the road while driving, the higher the likelihood of a traffic accident [27]. Another function of the dashboard is to provide safety information such as visual warnings. These warnings could represent a distraction if used inappropriately or go unnoticed by the driver, which is undesirable, especially when they convey critical information. Therefore, visual warnings should be supplemented with auditory warnings, which can take different forms, such as tones or the manipulation of feedback from the radio [17].
Another often implemented type of IVIS is the center console, which uses analog interfaces consisting of buttons and switches. These require physical manipulation and must either be located visually or have their selection visually confirmed, which often forces drivers to take their gaze off the road, although practice can reduce this need. In more recent vehicles, buttons and switches can be found on the steering wheel, where they attract fewer glances than interaction with the buttons and switches on the center console [20]. Although button control systems are being replaced by touchscreens [22], it is still dangerous for people to operate them while driving [93].
Touchscreens and dashboards, both HDDs, are quite similar regarding their safety issues. Of the two scrolling methods, kinetic scrolling on touchscreen displays is not recommended since it leads to increased visual load compared with swipe scrolling [65]. Touch interfaces offer interactions such as typing, browsing music, watching videos, connecting to Wi-Fi, connecting via Bluetooth, and navigating using an in-vehicle navigation system. All these activities are sources of distraction and inattention, affecting driving performance and having dangerous consequences for road safety [90]. Although in-vehicle navigation systems are useful, they can be the main cause of many incidents. In one case, a driver followed the GPS instructions completely and drove to the edge of a cliff, only realizing he was on the wrong road when he hit the fence. Moreover, using in-vehicle navigation causes the driver to become disengaged from the external environment and unaware of environmental objects [81]. Additionally, searching for music while driving increases averted gaze time and reduces the ability to maintain a constant lane position [49]. Listening to music can also adversely affect the driver’s concentration [8].
Speech interfaces are widely used in IVISs because the driver needs to allocate most of their visual and manual attention to driving the vehicle [21]. These interfaces can reduce off-road glances and minimize peripheral impairment. While speech-based interfaces are a promising solution to the distraction caused by secondary task engagement, they are not distraction-free [87]. During the ride, various obstacles can occur, such as NLP errors or the inability of the IVIS to respond to the driver’s command, which occupies the driver’s attention and cognitive resources and reduces driver safety [96]. Since the speech-based interface cannot distinguish human voices from environmental noise, it is prone to NLP errors [30], forcing the driver to switch tasks repeatedly. This results in the primary task being completed less accurately and taking longer, in addition to increased anxiety and perceived task difficulty [75]. While NLP errors can originate in the speech-based interface, the error can also be caused by a driver who does not recognize, perceive, or react to the information provided by the interface. Therefore, most speech-based interfaces use higher voice pitches, usually associated with the female voice, which increases efficiency in the perception of urgency [33].
Amongst the many devices located on the center console, some are especially distracting as they require the driver’s attention and response, mobile phones being one of them [18]. As many as 35–50% of drivers admit to using mobile phones while driving, whereas 90% of them fear those who do so [8]. People nowadays spend a significant amount of time on their mobile phones, and driving is no exception in this regard. To address this issue, car manufacturers integrated the feature of making phone calls via Bluetooth technology into their cars, which makes using mobile phones while driving easier and safer [103]. However, driver performance is worse during an auditory task than during a visual task. This is especially dangerous as the risk of collision quadruples during auditory distractions such as phone conversations [32]. One solution currently used is to place mobile phones on the car windshield at eye level, which is safer since it reduces off-road glances and gaze time [53].
To provide a comprehensive understanding of the safety of current IVIS, it is essential to examine the latest advancements and potential safety improvements that are being implemented or explored. This includes the integration of advanced technologies, the development of smarter user interfaces, and the incorporation of machine learning algorithms to predict and mitigate safety risks.
Modern IVISs are increasingly integrated with advanced driver assistance systems (ADAS) that offer a range of features designed to enhance safety [5,61]. These systems can include lane-keeping assistance, adaptive cruise control, collision warning systems, and automatic emergency braking. By providing real-time feedback and assistance, ADAS helps reduce the cognitive load on drivers and allows them to focus more on driving safely. However, the increased complexity and additional visual messages from these systems can add to cognitive load and distract drivers, similar to the issues with dashboard displays [19].
Unlike traditional head-down displays, augmented reality head-up displays (AR HUDs) project critical information directly onto the windshield, within the driver’s natural line of sight. This technology minimizes the need for drivers to look away from the road, thereby reducing the risk of accidents. AR HUDs can display navigation cues, speed limits, and hazard warnings, which are superimposed onto the road view, making it easier for drivers to process information quickly and maintain situational awareness [27].
Adaptive user interfaces in IVISs adjust the displayed information based on the driving context and driver behavior. These systems can prioritize essential information during high-demand driving situations and simplify or hide non-essential details. By using sensors and real-time data analysis, adaptive interfaces can tailor the user experience to ensure that drivers receive the right information at the right time, thus enhancing safety [19].
Advances in natural language processing (NLP) have led to more accurate and reliable voice-activated controls in IVISs. These systems are designed to understand and respond to a wide range of voice commands, even in noisy environments. Enhanced NLP reduces the need for manual interaction with the IVIS, allowing drivers to keep their hands on the wheel and eyes on the road. Additionally, ongoing improvements in machine learning algorithms are helping to minimize NLP errors, which further enhance driver safety by reducing cognitive load and task-switching [30,96].
AI and machine learning are being leveraged to predict and mitigate potential safety risks associated with IVIS. By analyzing patterns in driver behavior and environmental conditions, AI systems can proactively adjust the IVIS interface and provide timely alerts. For example, if the AI detects that the driver is becoming drowsy or distracted, it can prompt a warning or adjust the display to reduce non-essential information. This predictive capability helps prevent accidents before they occur [17].
To address the limitations of relying solely on visual or auditory warnings, current IVISs are incorporating multimodal feedback systems that use a combination of visual, auditory, and haptic feedback. For instance, vibrating seats or steering wheels can provide tactile alerts that are immediately perceivable without requiring visual confirmation. This multimodal approach ensures that critical information is communicated effectively, regardless of the driving conditions or the driver’s state of attention [17].
Modern IVISs are designed to integrate seamlessly with other vehicle systems and external devices. Enhanced connectivity allows for real-time data exchange between the vehicle and external sources, such as traffic management systems or emergency services. This integration enables features like real-time traffic updates, hazard notifications, and remote diagnostics, all of which contribute to a safer driving experience [103]. Table 6 summarizes the key factors affecting the safety of current IVISs.

3.2.3. How Have Safety Aspects of In-Vehicle Infotainment Systems Advanced throughout History?

Over the past few years, IVISs have gone through significant improvements considering safety aspects [10,63]. In American automobiles, passive-safety technology was introduced at the end of the 1940s when the instrument panel was coated with sponge rubber [63]. Also, around the 1950s and 1960s, several organizations were established focusing on safety and human factors, such as TNO Human Factors (Soesterberg, The Netherlands), ONSER (road safety organization, currently IFSTTAR, Champs-sur-Marne, France), and the automotive ergonomics study group in JSAE (Tokyo, Japan). As a follow-up, findings from studies focused on driving safety, in which methods from psychology, medicine, and anthropology were applied, led to the production of vehicles geometrically adapted to human characteristics [63].
In the initial stages, human–machine interaction (HMI) in vehicles focused on providing drivers with information related to primary driving parameters like speed, throttle, and engine revolutions. As driving durations increased, manufacturers started implementing radios in vehicles, which in turn increased the complexity of HMI. Combining information with entertainment functionalities gave rise to in-vehicle infotainment systems [10]. For example, the Ford S-Max featured entertainment, navigation, and phone options in the center console, and vehicle state information, safety information, and safety functionalities in the display behind the steering wheel on the dashboard, which is still the most dominant layout used today. The integration of electrically operated motorized mirrors provided more flexibility than mechanical mirrors in terms of adjusting the field of view and reducing blind spots [10]. IVIS CD players improved the usability of music players, as they made changing tracks easier than cassette players, thus decreasing driver distraction [10].
Traffic information was first broadcast via radio during the 1970s, prompting decades of studies on a range of sensory modalities for providing warning signals, including visual warnings such as icons on the dashboard (1999, 2014), auditory warnings in different forms such as tones and the manipulation of feedback from the radio (2002 until 2009), and tactile warning feedback (2005 until 2013) [17]. There were also studies focused on collision warnings, such as blind spot warning systems and warning systems for distracted drivers [17]. Throughout history, it has been confirmed that negative emotions caused by interacting with IVISs have a significant impact on driver safety. Over the years, it has also been well documented that, when engaging in visual–manual non-driving-related tasks (NDRTs), drivers tend to have longer reaction times, reduced longitudinal and lateral control, and higher scanning randomness [87].
Lars Magnus Ericsson was the first to put a telephone in a vehicle, in 1910; it could be electrically connected to telephone poles installed along the street. As mobile phones became affordable and popular, the Bluetooth Special Interest Group introduced a hands-free vehicle kit for mobile phones with a speech recognition system in 2001, which motivated car manufacturers to integrate the functionality of making phone calls via Bluetooth technology into their vehicles.
Over the past decade, IVISs have evolved from basic entertainment hubs to sophisticated interfaces that enhance driver safety and integrate advanced technologies. This shift includes the transition from physical controls to touchscreens with high-resolution, customizable interfaces that provide easy access to navigation, entertainment, and vehicle information [93].
The adoption of natural language processing [30,96] and enhanced voice recognition [103] has minimized the need for manual inputs, allowing drivers to maintain focus on the road. Connectivity features like Apple CarPlay, Android Auto, and Bluetooth ensure seamless integration with personal devices, maintaining digital lifestyles without compromising safety.
The role of IVISs has expanded with autonomous driving technologies, transforming vehicles into hubs for entertainment and productivity. Advanced Driver Assistance Systems (ADAS), including lane departure warnings and collision avoidance, are now standard, significantly enhancing safety [5,61].
Artificial intelligence in IVIS predicts driver needs and automates responses, reducing distractions. Augmented reality (AR) in head-up displays (HUDs) projects vital information directly onto the windshield, improving situational awareness without diverting the driver’s gaze [16,19].
Cloud-based services provide real-time updates for navigation, maintenance, and infotainment, ensuring that IVISs adapt dynamically to driver needs. With ongoing advancements such as AR, AI, and 5G integration, IVIS is set to redefine in-vehicle connectivity and entertainment. This era highlights a pivotal phase in IVIS development, focusing on safety, convenience, and technological integration, promising to shape the future landscape of IVIS. Table 7 summarizes the key advancements in the safety aspects of IVISs throughout history.

3.2.4. How Can the Safety of In-Vehicle Infotainment Systems Be Enhanced with Emerging Technology?

Touchscreen Interfaces

A safe touchscreen interface should have the following three design principles applied: (i) the touchscreen buttons should be at least as big as an adult human fingertip, thereby facilitating visibility, preventing accidental activation of multiple buttons, and reducing distraction [48]; (ii) the menu should be list-style, which results in a smaller variance in glance durations compared with a grid-layout menu [14]; (iii) the number of menu items should not be large, since it significantly increases the number of off-road glances [62]. Good interface design can significantly decrease driver distraction and make driving safer [93]. Adding a live stream display above the touchscreen could aid drivers and prevent collisions [70]. Compared with swiping, kinetic scrolling leads to decreased visual sampling efficiency and increased visual load [65]. To use any command of the multi-touch interface (MTI) [59], drivers place three fingers anywhere on the screen, which triggers the system to recognize their location and adapt to them. The use of MTI reduces drivers’ need to spatially search the screen for specific touch buttons and minimizes their workload [59]. Ahmad et al. [36] developed an in-vehicle touchscreen with mid-air selection and demonstrated that a predictive display can significantly reduce the workload, effort, and duration of completing onscreen selection tasks in vehicles. Virtual touch systems can activate a touchscreen without physically touching it, thus increasing interaction speed and reducing off-road glances. One such system is a wearable virtual touch system, a pointing device for operating IVISs inside a vehicle through virtual touch using an eye gaze tracker, which minimizes cognitive load and reduces eye gaze switching [15]. Eye-gaze systems offer enormous potential and could enable the IVIS to be aware of the user’s current focus and eye activity [47].
Additionally, the positioning of the IVIS was evaluated by eye movement data, where results showed that the drivers’ focus of the visual field varies with the position of the in-vehicle display, and the conventional location of a display (center console) appeared to have the lowest level of galvanic skin response (GSR), which means the lowest mental stress level [84].
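To make the three touchscreen design principles concrete, the following sketch checks a candidate layout against them. The numeric thresholds (minimum target size and maximum menu length) are illustrative assumptions chosen for this sketch, not values reported in the cited studies:

```python
# Illustrative checker for the three touchscreen design principles.
# FINGERTIP_MM and MAX_MENU_ITEMS are hypothetical thresholds, not cited values.

FINGERTIP_MM = 10.0   # assumed minimum touch-target size (adult fingertip width)
MAX_MENU_ITEMS = 8    # assumed upper bound on menu length

def check_layout(button_sizes_mm, menu_style, n_menu_items):
    """Return a list of design-principle violations for a candidate layout."""
    issues = []
    if any(size < FINGERTIP_MM for size in button_sizes_mm):
        issues.append("button smaller than a fingertip")
    if menu_style != "list":
        issues.append("menu should be list-style, not grid")
    if n_menu_items > MAX_MENU_ITEMS:
        issues.append("too many menu items increases off-road glances")
    return issues
```

A layout that returns an empty list satisfies all three principles; each reported issue maps directly to one of the glance-related findings above.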

Gestural Interaction

Several studies examined the negative sides of using touchscreen devices in vehicles, including driver distraction [30,60], diversion of the driver’s gaze and lack of attention [93], increased visual inattention [29], increased demands on the driver’s attention, and input inaccuracy [28]. To address these problems, all of these studies proposed gestural interaction as an alternative, suggesting it is a promising modality for providing input to the IVIS. Ting et al. [93] designed a prototype for gesture interaction and proposed a gesture design. Considering the current gestural interaction constraints (physical constraints, environmental noise, and delay sensitivity [60]), Jiang et al. [60] developed an optimized gesture control system for mobile devices that warns the user to grab the steering wheel when road complications are recognized. Zhao et al. [30] discovered that although there is no significant difference in completion time between the touchscreen and gestural interaction modalities, gesture interaction shows clear advantages in task completion rate under complex road conditions, which means that it can help reduce drivers’ distraction. Sterkenburg et al. [29] built and evaluated the first auditory-supported air-gesture-operated grid menu. The evaluation revealed that a two-by-two grid gesture-operated menu with auditory feedback had the most safety benefits compared to a four-by-four grid menu, a touchscreen-operated menu, and a menu without auditory feedback [29]. Such safety benefits include a reduced frequency of off-road glances and lower driver workload.
While existing technology can recognize a driver’s gestures of varying complexity, direct control via hand gestures provides limited choices, and both training and customization are necessary to reduce errors [28]. Compared to touch-based contact, Graichen et al. [50] found that gestures are a better option for in-vehicle interaction because their impact on driver distraction is less substantial. Apart from the advantages, they also highlight two downsides of gestures: (i) the initial cognitive workload of gesture interaction may be higher for beginners, since drivers must learn which gesture corresponds to which function; (ii) complex hand and finger movements require more physical effort. Čegovnik et al. [54] found that gesture interaction is less safe and results in much worse vehicle control than a touchpad or an ordinary button interface. They speculate this could be the result of an interaction design that requires drivers to perform different mid-air gestures, demanding not only that a hand leave the wheel but also movements of the entire torso. To address this problem, Köpüklü et al. [95] designed an HCI system based on dynamic recognition of drivers’ micro hand gestures that are natural, do not distract drivers, and do not require them to take their hands off the wheel, thus ensuring driving safety. Lee et al. [13] evaluated Braille hand gestures in which both hands were used and the on-wheel gestures distinguished between the left and the right hand. The Braille dots above each switch represented the number of fingers that needed to be spread. Such a system would minimize visual and biomechanical distractions compared with off-wheel hand gestures and traditional interfaces like touchscreens.
In addition, combining ultrasonic mid-air haptic sensations with hand gestures has the potential to eliminate the need for vision while interacting with the IVIS. Haptics for in-vehicle information displays aim to increase safety, awareness, and response times to emergencies for drivers [31]. Its main disadvantage is the loss of tactile feedback, a key ingredient in the sense of agency [66]. Furthermore, haptic signals may be overused since they can startle and annoy drivers with unexpected vibrations [31].

Tactile Feedback

Since the skin is the largest organ in the human body, it offers many possibilities for tactile feedback from various locations in the vehicle, which can improve driving performance and safety. For instance, tactile feedback can reduce the duration of glances off the road, thus minimizing visual distraction [20]. When tactile feedback is considered, the steering wheel has proven to be a better location for interaction with the in-vehicle information system than the center console, and input should prevail over output location when designing interaction techniques for drivers [20]. Alotaibi [98] examined the design of electro-tactile feedback to enable the creation of effective cues that could be used in a steering wheel, determining the relative importance of the different parameters and understanding where those cues can be used to notify the driver with urgency. Since vibrotactile feedback is indicative and can convey instructional information, Zhu et al. [104] developed a vibrotactile feedback device placed on the wrist, which can be used to provide drivers with navigation information and is especially suitable for people with visual or hearing disabilities.

Speech-Based Interfaces

Braun et al. [32] predicted that the rise of digital assistants like Siri and Alexa will enable interaction with vehicles in the form of a natural spoken conversation, which could cause a trade-off between how well drivers can follow the conversation and safely maneuver the vehicle. As a solution, they suggested visualizing the conversation to support speech interaction, thus allowing the driver to follow up on the current conversation [32]. The same authors proposed a design space for conversational IVISs composed of six dimensions: user input (speech, gaze behavior, gestures, and body language), system output (ambient lights, tactile feedback, smells, visual output, and audio feedback), adaptability (the IVIS adapts its behavior, such as timing and visual or auditory output, to the driver’s cognitive load, traffic situations, and the resulting emotions), conversation start (system-initiated, wake-up word, push-to-talk, look-to-talk, and gesture-to-talk), available content (vehicle status, safety features, navigation, multimedia, and communication), and position (on the central information display, on head-up displays, on a digital instrument cluster, on the windshield, in the mirrors, or on top of the dashboard). Their findings suggest that a text-based summary of the visualization in the form of keywords improves recognition and reduces cognitive burden while having no negative impact on driving ability [32]. However, such visual feedback might compete with the visual focus on driving the vehicle.
To address inherent usability issues of speech-based interfaces, such as turn-taking problems, short-term memory workload, inefficient controls, and difficulty correcting errors, Jung et al. [34] supplemented the voice user interface (VUI) with tactile interaction. Compared with the voice-only interface, the combination of voice and tactile interaction shortens task completion time and enhances user experience without causing additional distraction while driving. Regarding the gender of the voice, Ji et al. [33] found that in the context of autonomous vehicles, the female voice is more trustworthy, acceptable, and pleasurable, and is easier to hear in noisy conditions than the male voice. Reading the news while driving is more distracting than listening to news podcasts. In that respect, Kurt and Gören [103] developed an Android-based mobile application that enables users to access updates from various newspapers in one place and listen to the news while driving without distraction. Yan et al. [96] proposed an obstacle judgment model that employs eye-tracking technology to predict whether, and what type of, obstacles are encountered in interaction with in-vehicle VUI systems. The results of their study indicated that long response times have a worse impact on drivers than the NLP errors commonly considered in the current literature. Dicke et al. [78] stated that combining auditory and tactile information is effective and advantageous for both blind and visually impaired individuals. Gable et al. [71] investigated the effects of auditory cues when performing a song search on a mobile phone. With the advanced auditory cue, drivers maintained much longer visual fixation on the driving scene while searching for songs in the list than when performing the same task without sound, thus reducing distraction [71]. Instead of Bluetooth-supplemented HUD systems, Li et al. [87] suggest designing Bluetooth-supplemented VUI systems because they can help drivers maintain an acceptable level of workload and situation awareness during the ride. Wang et al. [4] designed a driver-friendly interface composed of a Bluetooth module mounted on the steering wheel and a HUD unit on the windshield, which allows drivers to activate speech interaction and receive visual aid remotely while driving. The findings of a study carried out by the same authors indicate that the introduced interface reduces driver distraction and has better usability than the hand-held mode of mobile devices.

Context-Aware Interfaces

The necessity for driver-passengers to continuously monitor the behavior of the vehicle disappears in conditionally automated vehicles, but they must be able to regain control at any time and on short notice in response to a take-over request (TOR) [94]. Although this requirement demands a certain level of situation awareness and motor readiness from drivers, it also allows them longer engagement in non-driving-related tasks (NDRTs) [94]. The results of a study in which Schartmuller et al. [94] mounted a touchscreen keyboard and a haptic keyboard on an adaptive prototype interface and compared them with a notebook-on-the-lap baseline showed significant improvements in gaze reaction, typing performance, subjective ratings, and take-over reaction in favor of the haptic keyboard. Fasanya et al. [86] identified image obscuration, high display brightness in dark driving conditions, poor visibility when raining, distraction, cognitive workload, time for decision-making, reliability, and risk as the most common safety hazards related to electronic side view mirror systems (ESMS) in recent cars. To ensure driving safety and reduce the accident rate of elderly drivers, Tanaka et al. [100] proposed a driver agent system that provides support in the form of advice during the ride and encourages changes in driving behavior based on driving evaluations. According to the results of their study, the agent did not distract elderly drivers and was well accepted by them [100].

Head-Up Displays

An ever-increasing body of literature shows that head-up displays (HUDs) are the current state-of-the-art solution intended to reduce driver errors originating from distractive interfaces, such as onboard entertainment displays or navigation systems. HUDs have been demonstrated to have faster response times to unexpected road events than HDDs, to have fewer navigational errors, and to produce fewer variations in lateral acceleration and steering wheel angle [78]. The results of several studies [23,24,25] indicate that the use of HUD interfaces results in better driving safety, performance, and experience than the traditional systems currently utilized in vehicular interiors. Furthermore, driver focus is kept on the road with a transparent windshield display with an understandable display layout and familiar icon style for the in-vehicle user interface [24]. Moreover, displaying hazard warnings, traffic sign/signal notifications, and driving instructions aligns with the widely held belief that the advantage of automotive HUD rests in improving driving performance and safety by aiding in both primary and secondary driving tasks [25]. The advantages of displaying information in the driver’s field of view using HUDs are demonstrated in previously mentioned studies.
On the other hand, other studies reported safety risks concerning the use of HUDs, such as visual clutter [16,19], redundant information [15], cognitive capture [18], and lower lateral stability [47]. To avoid these safety risks, a good interface design should enable the selection of necessary information inside a vehicle HUD system without exceeding the driver’s cognitive load [23]. Additionally, some studies focused on developing multimodal HUD interfaces by combining the HUD with other interfaces, such as hand gestures, eye gaze systems, or voice user interfaces. By keeping their hands on the steering wheel and their eyes on the road at all times, users of a gesture recognition interface designed by Lagoo et al. [88] for the HUD demonstrated a notable increase in collision avoidance. This confirms that gesture-based interaction (GBI) can improve driving performance and boost driver safety because there is no longer a need to visually search for haptic interaction elements of the user interface [50]. Prabhakar and Biswas [10] pointed out that an eye-gaze-controlled HUD system could be used to improve operational efficiency and reduce eyes-off-road duration, hence increasing road safety. Jakus et al. [47] analyzed an audio-visual HUD, which proved to be more efficient for performing secondary tasks than the audio-only display because it allowed drivers to adapt to highly dynamic situations while driving.

Augmented Reality

In the future, an increase in large augmented reality (AR) HUD fields of view is expected, where information could be placed anywhere, from windshield-fixed locations to conformal graphics that are perceptually connected to real-world referents, thus potentially creating dangerous and distracting interfaces [26]. On the other hand, when compared with the in-vehicle screen display, HUD technology combined with AR looks promising for overcoming existing bottlenecks such as the increasing amount of visual information displayed to drivers [27]. Driving safety can be increased by using an AR HUD that provides drivers with visual feedback, easy access to basic vehicle and ambient information, and assistance in maintaining eye contact with the road [19].
Gabbard et al. [26] introduced a driving simulator that they employed to examine two AR HUD navigation interfaces: a conformal display where blended-in arrows appear to lie on the road and a screen-fixed display that shows arrows floating in the air, indicating where to go. According to the results of their study, screen-fixed displays may be more useful in some situations than conformal displays, suggesting that conformal graphics may not be the best option for all AR interfaces [26]. As good as the AR HUD sounds on paper, it could potentially direct the driver’s attention away from critical road elements that are not augmented by the display [16].

Warning Systems

When it comes to warning systems designed for alerting fatigued drivers, Zuki and Sulaiman [17] stated that visual sensory modality is not recommended as the primary source of warnings in time-critical situations since this kind of information should be relayed urgently. Their definition of an ideal in-vehicle warning system is one where all the sensory modalities are used to balance out the drivers’ mental workload. This approach would keep drivers more alert rather than occupied with focusing on only one sense while driving.
The recent growth in the use of smart devices has facilitated the measurement and tracking of drivers’ heart rates and general health status. While fitness trackers can provide information on drivers’ activity during the journey based on their heart rate, smartwatches can employ biometric data as an indicator of drivers’ drowsiness [10]. Advances in the field of machine learning have led to the development of an algorithm for determining the level of drivers’ drowsiness based on their eye blinks and head movements [97]. By applying vision-based technologies (e.g., capturing eye gaze, eye blinking, head orientation, etc.), vehicle monitoring systems (onboard diagnosis devices), and wearable sensing technologies, systems that are beneficial for detecting interruptible moments have been developed [85].
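To illustrate how such vision-based measures might be combined, the following sketch computes a coarse drowsiness estimate from eyelid-closure frames (a PERCLOS-style metric widely used in the drowsiness literature) and head-nod counts. The weights, thresholds, and function names are illustrative assumptions, not values taken from the cited study [97].

```python
def drowsiness_score(closed_frames, total_frames, head_nods, window_s=60):
    """Illustrative drowsiness estimate from eye closure and head nods.

    PERCLOS (percentage of eyelid closure over a time window) is a common
    drowsiness proxy; the weights and cut-offs below are arbitrary
    placeholders for demonstration, not validated parameters.
    """
    perclos = closed_frames / total_frames   # fraction of frames with eyes closed
    nod_rate = head_nods / window_s          # head nods per second

    # Normalize each cue against an assumed "fully drowsy" level, then blend.
    score = 0.7 * min(perclos / 0.15, 1.0) + 0.3 * min(nod_rate / 0.1, 1.0)

    if score >= 0.8:
        return "drowsy"
    if score >= 0.4:
        return "borderline"
    return "alert"
```

In practice, a production system would derive such thresholds from labeled driving data rather than fixed constants.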
Oh et al. [7] introduced a driver warning algorithm based on the fuzzy model, which employs integrated human–vehicle interface data (face and bio-signal) and in-vehicle data (e.g., turn signal) to detect drowsiness and distracted driving, nonfulfillment of safety distance, and lane departure warnings. Zeng et al. [84] proposed a vehicle human–machine interaction interface evaluation method that takes input from eye movement tracking, finger tracking, questionnaires, and interviews to identify problems in the design of IVIS. According to their findings, the layout of IVIS interface functionalities can improve drivers’ efficiency in performing tasks, thus reducing the eyes-off-road time.
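A fuzzy driver warning model of the kind proposed by Oh et al. [7] can be sketched, in greatly simplified form, as membership functions over individual signals combined by a fuzzy AND rule. The membership shapes, input ranges, and the single rule below are illustrative assumptions, not the authors' actual model.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def warning_level(eye_closure_ratio, heart_rate_drop_bpm):
    """Toy fuzzy inference step: degree to which a drowsiness warning fires.

    Assumed (hypothetical) memberships: eyes look drowsy around a closure
    ratio of 0.3; a resting heart-rate drop of ~15 bpm suggests drowsiness.
    """
    drowsy_eyes = triangular(eye_closure_ratio, 0.1, 0.3, 0.5)
    drowsy_hr = triangular(heart_rate_drop_bpm, 5, 15, 25)
    # Rule: warn if eyes look drowsy AND heart rate has dropped (min = fuzzy AND).
    return min(drowsy_eyes, drowsy_hr)
```

A full Mamdani-style system would aggregate many such rules and defuzzify the result into a warning intensity.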

Distraction Detection Systems

Other studies touched upon driver distraction evaluation by reviewing vision-based distraction detection systems [12], applying a proposed test protocol that combines driving-related measures with the visual detection response task to compare smart glasses with a conventional head-down display [52], and testing criterion validity (real-world prediction of results) of two low-cost formative design and evaluation methods (Lane Change Test and Occlusion) [99]. When designing a distraction detection system, the simulated environment must be an exact representation of the real environment [101].
Goh et al. [80] created a prototype of a life-saving system that, in the event of an accident, employs crash and pressure sensors to trigger an auto-dialer that calls a pre-programmed list of helplines. Global System for Mobile Communications (GSM) and Global Positioning System (GPS) communication modules are used for sending the vehicle registration number and a GPS location to all interested parties, such as the authorities and the insurance agency [80]. Drivers perform many tasks during a ride, including keeping their distance from other vehicles, switching lanes, and manipulating the environment inside the vehicle using the IVIS. Since these tasks require working memory, Miura et al. [73] proposed a method for the quantitative measurement of drivers’ visuospatial workload under the dual tasks of interacting with a target interface and the pattern span test, which could be used to improve driving and IVIS manipulation.
On the other hand, manipulating IVISs while driving can cause motion sickness, which detracts from the driving experience [51]. Given that auditory displays provide passengers with intuitive and helpful audio cues, passengers’ motion sickness is significantly lower than in the condition without sound because they can anticipate upcoming maneuvers more easily [51]. When it comes to auditory-verbal interfaces, one major concern is driver interruptibility [67]. To prevent and proactively limit user-initiated interactions when drivers are deemed uninterruptible, Kim et al. [89] proposed an interruptibility framework for proactive speech tasks that considers three dimensions of driver interruptibility: driving safety, auditory-verbal performance, and overall perceived difficulty.
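In the spirit of the three interruptibility dimensions considered by Kim et al. [89], a minimal gating function might decide when a proactive speech task is allowed. The score ranges, threshold values, and function name below are illustrative assumptions, not values from the paper.

```python
def is_interruptible(driving_safety, verbal_load, perceived_difficulty,
                     thresholds=(0.7, 0.5, 0.5)):
    """Toy interruptibility gate over three dimensions, each scored in [0, 1].

    A proactive speech task is permitted only when the current driving
    situation is safe enough AND both the auditory-verbal load and the
    perceived difficulty are low enough. Thresholds are placeholders.
    """
    min_safety, max_verbal, max_difficulty = thresholds
    return (driving_safety >= min_safety
            and verbal_load <= max_verbal
            and perceived_difficulty <= max_difficulty)
```

A deployed framework would estimate these scores continuously from driving context and defer queued speech interactions until the gate opens.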

Artificial Intelligence

With the advent of artificial intelligence (AI), the number of studies dealing with vehicle driver-monitoring systems that measure human emotions, fatigue, drowsiness, intoxication, stress, and attentiveness has notably increased [64]. However, it should be noted that emotions cannot be read accurately; even psychologists, who are skilled in this field, often reach incorrect conclusions about emotions [28]. Ahmad et al. [37] developed intent-aware interactive displays that can determine, early in the pointing gesture, the item a user intends to select. The findings of a study carried out by Schömig et al. [83] revealed that encountering difficulties in solving IVIS tasks can result in drivers’ frustration and anger and eventually lead to non-acceptance of new technologies, which is why IVISs need to be well designed. Recognizing these negative emotions using facial recognition technology could aid drivers in situations when they are feeling emotional discomfort [83]. Wintersberger et al. [35] found that the implementation of attentive user interfaces in automated vehicles decreases stress and increases both driving performance and performance in non-driving-related tasks when the vehicle takes control of the driving. Pretto et al. [82] stated that the transition from autonomous driving to manual driving and vice versa should induce as little cognitive load as possible. In that respect, they conceptualized a fluid interface where the system monitors drivers’ psychophysical states and adapts accordingly, thus ensuring a seamless transfer of control over the vehicle while providing a comfortable and safe ride. The key to the safe deployment of automated driving features is effectively conveying system limits to drivers and preparing them to take over manual vehicle control [74].

Gamification

A lack of engagement while driving leads young drivers to feel bored [41]. Insufficient stimulation can cause passive task-related fatigue due to lower mental activity, making drivers less alert at first and eventually fatigued, which can lead to car accidents [58]. Gamification of IVISs aims to solve these problems. However, it is crucial to properly examine every concept from a road safety psychology standpoint rather than just implementing gamification [41]. The stimulation provided by gamification lowers boredom and enhances driving performance in a way that raises the driver’s attention and excitement throughout the journey and contributes to a significant reduction in speed, hence encouraging safe driving [58].
Schroeter et al. [38] suggested four reward categories that could be gamified along with the supplementation of augmented reality technology, including the reward of glory (leaderboard), the reward of sustenance (potions, health packs), the reward of access (keys, passwords), and the reward of the facility (improved abilities). Steinberger et al. [39] provided a few examples of blending gamification and augmented reality in the car to turn mundane driving into a fun scenario, including a bird-throwing-inspired game concept that visualizes the driver’s performance concerning smoothness of braking (y-axis) and steering (x-axis), and a zombies-on-the-road theme that encourages safe driving practices by keeping the distance from other vehicles.
As a follow-up, Steinberger et al. [40] implemented two prototypes in the form of smartphone applications where BrakeMaster encourages drivers to slow down slightly when approaching a red traffic light rather than braking suddenly, and CoastMaster motivates drivers to coast down to new speed limits without needing to use the pedal excessively.
Table 8 summarizes how emerging technologies can enhance the safety of IVIS.

3.2.5. What Are the Shortcomings in the Research Conducted So Far Regarding the Safety Aspects of In-Vehicle Infotainment Systems?

A thorough examination of the research indicates a lack of evaluation in real environments, as evaluations have mostly been executed in simulated environments. When IVISs are considered, simulated and real environments differ vastly. In that respect, Aksjonov et al. [101] suggested that the simulated environment must be a perfect replica of the real environment when developing an IVIS. Moreover, it is impossible to simulate various road situations and traffic participants from different countries while evaluating such systems in laboratory-based simulation trials [10]. Most studies evaluated their proposals at a single driving speed. Additionally, some of the current studies did not mention whether drivers’ disabilities were considered during prototyping and evaluation.
Only Goh et al. [80] considered the scenario where a road accident has already occurred, trying to reduce the damage caused. On a related note, most prototypes did not mention any of the fallback features and emergency systems in case of failure. The majority of studies conducted research on cars only, without taking into consideration vans, trucks, and other types of vehicles. Although driving varies significantly around the world in terms of vehicle maintenance, road conditions, traffic rules, driving practices, and anthropometric characteristics of drivers, there is still a deficit of HCI studies concerning uncommon standards in a driving context [3].
Göktürk et al. [81] concluded that in-vehicle navigation systems are successful in terms of the functionalities they provide but lack support for safe driving. Depending on how the perceptual forms of graphical components exhibited on the HUD are designed, augmented reality driver interfaces can be either informative or distractive [16]. Agudelo et al. [43] emphasized that there is no standard for developing IVISs, which in turn does not provide sufficient safety for the driver and makes performing tasks while driving troublesome. The current literature suggests that speech-based interactions are a promising way to perform non-driving-related tasks (NDRTs), though their effects on driving performance and visual search behavior are not yet well understood [87]. Similarly, Zuki and Sulaiman [17] pointed out that there is a scarcity of studies on warning systems for fatigued drivers; hence, the best sensory modality or combination of modalities to be applied in this type of warning system remains unknown.
Regarding emotions, it is very hard to determine how happy, angry, or sad a driver is. Even when it comes to music, we cannot deduce whether an upbeat song will make the driver happy [6]. Therefore, IVISs should not rely on emotion detection [64].
Table 9 summarizes the shortcomings in the research conducted so far regarding the safety aspects of IVIS.

4. Discussion

This section critically evaluates and expands upon the findings from the systematic literature review concerning IVISs, adding new perspectives and posing questions that aim to deepen academic discourse about their integration into the driving experience and their broader implications for vehicle safety and driver behavior. The analysis is focused on key aspects such as efficiency, response times, robustness of embedded systems, signal integrity, and ethical considerations, enriched by insights from related studies and previous findings.
Our systematic literature review on the safety aspects of IVISs from 2012 to 2023 offers a detailed examination that both complements and diverges from the findings of earlier literature reviews. For example, the review by Agudelo et al. [43] primarily focused on the design of automotive infotainment system interfaces, briefly touching upon safety elements but not exploring them extensively. In contrast, our review specifically targets safety issues such as driving distraction, cognitive load, and interaction success, providing a deeper understanding of how interface design directly impacts driver safety.
Similarly, the systematic literature review conducted in 2021 by the same group [44] identified numerous gaps in IVIS interface design, particularly the absence of standardized design criteria. Our findings build on this by specifying which interface features increase risk, such as touchscreens and head-down displays, which necessitate that drivers divert their gaze from the road.
Moreover, Rhiu et al. [42] reviewed general smart car technologies with a broad focus on human–vehicle interaction, without focusing exclusively on IVISs. Our review narrows this by concentrating specifically on IVIS technologies, thus offering a focused critique of how these systems interact with driver cognitive processes and situational awareness.
Additionally, Park and Park [25] used a literature review to investigate the functional requirements of automotive head-up displays (HUDs). While their review provided insights into expectations around HUDs from developers, researchers, and users, it offered limited exploration of the direct safety implications of these systems. Our study addresses this oversight by analyzing how HUDs and other advanced display technologies affect driving performance and safety, offering critical insights into their real-world impacts.
Earlier literature reviews have often either generalized about smart car technologies or focused narrowly on specific aspects of interface design without integrating the safety implications of these technologies in everyday driving scenarios. Our review uniquely highlights safety as a primary concern, providing evidence from recent studies that emphasize the need for more comprehensive evaluations and real-world testing to ensure that advancements in IVIS technologies do not compromise road safety.
Efficiency in IVISs is essential for safe vehicle operation. Augmented reality (AR) systems, for example, are highlighted for their exceptional operational efficiency. Zhao et al. [30] demonstrated that AR interfaces significantly enhance interaction efficiency by integrating intuitive navigation cues within the driver’s visual field, thus minimizing distractions and cognitive load. This improvement is crucial for reducing the likelihood of driver errors by maintaining attention on the road. In contrast, touch-based systems demand greater driver interaction, reducing overall efficiency. Ahmad et al. [37] point out that touchscreens require substantial visual and manual engagement, thus increasing cognitive load and potentially compromising safety. The need for frequent physical interaction with touch interfaces can lead to increased reaction times and reduce the driver’s ability to respond swiftly to road changes or hazards, underscoring the need for interface improvements that can retain functionality while minimizing distraction.
Response time is critical for the safety enhancements offered by IVISs. Speech-based interfaces, analyzed by Lee et al. [13], provide rapid responses that help maintain driver focus on the roadway. However, their efficacy decreases in noisy environments where voice recognition reliability diminishes. Kurt and Gören [103] note that ambient noise can result in voice command errors, leading to operational delays and compromised safety. Future developments in IVIS should include adaptive noise cancellation technologies that enhance voice recognition accuracy in diverse auditory environments. This would ensure that the benefits of speech-based interfaces are not diminished under less-than-ideal conditions, thus maintaining their effectiveness in enhancing driving safety.
The robustness of embedded systems within IVISs is vital for consistent functionality, particularly in applications where safety is paramount. Köpüklü et al. [95] explore the significance of fault-tolerant designs in gesture recognition systems, ensuring system reliability even when some components fail. Enhancing robustness involves not only improving hardware reliability but also integrating software solutions that can predict and mitigate failures before they affect system performance. This dual approach ensures that IVIS can maintain high standards of operational integrity, even under stressful conditions or when components experience unexpected failures.
Signal integrity issues like drops or losses in IVISs are significant, particularly for systems that rely on real-time data such as GPS and emergency notifications. Wang et al. [4] show how Bluetooth signal loss can interrupt crucial communications. To combat this, IVISs must be equipped with advanced redundancy protocols and error correction methods that ensure continuous operation. Future systems could benefit from incorporating emerging technologies such as 5G, which offers lower latency and higher reliability than current communication standards, potentially reducing the risks associated with signal loss.
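The redundancy and error-handling strategy described above can be sketched as a retry wrapper that appends a checksum to each outgoing message and re-transmits on signal loss or corruption. The framing scheme, retry policy, and the `transmit` callable are hypothetical illustrations for this review, not part of any cited system or Bluetooth profile.

```python
import hashlib
import time

def send_with_retry(payload: bytes, transmit, max_attempts=3, backoff_s=0.05):
    """Illustrative redundancy wrapper for an unreliable in-vehicle link.

    `transmit` is a hypothetical callable that sends a frame and returns the
    bytes the receiver acknowledged, raising ConnectionError on signal loss.
    A short SHA-256 digest serves as an error-detection code.
    """
    digest = hashlib.sha256(payload).digest()[:4]
    frame = payload + digest
    for attempt in range(1, max_attempts + 1):
        try:
            echoed = transmit(frame)
            # Verify the acknowledged frame arrived intact.
            if echoed[-4:] == hashlib.sha256(echoed[:-4]).digest()[:4]:
                return attempt  # number of attempts the delivery took
        except ConnectionError:
            pass  # signal dropped; fall through to backoff and retry
        time.sleep(backoff_s * attempt)  # linear backoff before retrying
    raise ConnectionError("message not delivered after retries")
```

Real systems would layer this on the transport's own error correction (and, as noted above, benefit from lower-latency links such as 5G), but the pattern of framing, verification, and bounded retry is the same.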
The insights from mobile telecommunications and computer networking offer valuable lessons for IVISs, particularly regarding data integrity and system reliability. Adopting robust error correction and redundancy strategies from these fields can significantly enhance the reliability of IVISs. Additionally, as IVISs increasingly interface with broader digital ecosystems, security becomes a paramount concern. Techniques from cybersecurity can be adapted to safeguard IVISs against potential breaches that could compromise vehicle safety, providing a holistic approach to reliability and security.
Advancements in IVIS technology, designed to reduce driver cognitive load, do not always consider the long-term effects of IVIS interaction. Studies like those by Hart and Staveland [105] have discussed workload as a critical factor, yet traditional metrics such as task completion time do not capture the cumulative cognitive effects. Future research should consider longitudinal study designs that evaluate cognitive load and fatigue over diverse driving scenarios and extended periods. Incorporating physiological measures like EEG and ECG, similar to approaches discussed in studies reviewed in [12], could provide deeper insights into how IVIS designs affect driver safety in the long term.
Enhancements in IVIS technology have improved safety and usability, yet they also risk causing an overreliance that could atrophy driving skills. The phenomenon of ‘automation complacency’ has been well-documented across various domains, including automotive systems, suggesting a need for a more critical examination of dependency on automation. Future studies should explore how drivers adapt to and integrate these systems over time, particularly focusing on maintaining engagement and manual driving competencies, as suggested by [7].
The integration and impacts of IVISs can vary significantly across different socioeconomic and cultural contexts. The majority of the data reviewed reflects findings from technologically advanced regions, potentially skewing the universality of the findings. Future research should aim to include a broader demographic to assess the consistency of benefits and limitations observed across diverse populations. This inclusive approach could guide the customization of IVIS interfaces to better serve various user needs, thereby enhancing both usability and safety, as indicated in studies from regions with less representation in the current literature [17].
The expansion of IVIS capabilities has raised complex ethical issues around privacy and data security, yet these dimensions receive limited attention in current research. As IVISs incorporate more sophisticated data collection and processing functionalities, ethical considerations should be integrated from the outset, not as an afterthought. Developing ethical frameworks that accompany technological advancements is crucial for building trust and transparency, promoting ethical use, and ensuring privacy, as discussed in [64].
The literature primarily examines IVISs as standalone systems without sufficiently addressing their interactions with emerging automotive technologies such as autonomous vehicles (AVs) and electric vehicles (EVs). The convergence of these technologies introduces new challenges and opportunities for enhancing vehicle safety and driver interaction. A systemic approach to studying these interactions, as highlighted in studies like [4], is crucial for understanding how IVISs can complement the autonomous features of AVs, potentially serving as a critical interface for driver oversight and control. Table 10 provides a summary of the study findings on IVISs.

5. Implications

This systematic literature review on IVISs reveals critical insights and forward-looking recommendations, urging stakeholders to go beyond traditional practices and embrace innovative methods to boost road safety and interaction between drivers and systems. It particularly emphasizes the role of researchers in advancing how IVISs integrate with driving experiences, suggesting the development of predictive behavioral models that utilize large datasets to foresee distractions and adapt system behaviors proactively for enhanced safety.
The review also advises designers and developers to implement multi-modal interfaces that extend beyond typical visual and manual inputs, incorporating advanced auditory and tactile feedback mechanisms to reduce drivers’ mental load. Techniques like spatial audio and haptic feedback can improve spatial awareness and convey important navigation and road condition information without requiring visual focus, thereby maintaining driver concentration on the road.
For policymakers, the integration of cutting-edge technologies such as augmented reality and artificial intelligence into IVISs presents both challenges and opportunities. Establishing comprehensive regulatory frameworks is crucial; these should cover safety, efficacy, privacy, and ethical aspects of technology use while promoting innovation and ensuring transparent, secure implementation respecting user privacy.
Furthermore, inclusive design must guide all stages of IVIS development to ensure accessibility for all drivers, including those with disabilities and older adults. This involves creating adaptable interfaces that cater to various physical and sensory capabilities, ensuring clear visibility and interaction under different conditions.
The review underlines the necessity of long-term safety and usability studies to understand the sustained impact of IVISs on driving behavior over time and under various conditions. It suggests that policymakers support this research through funding and partnerships, which could shape future IVIS technology iterations to meet evolving road safety standards and driver needs.
By adopting these recommendations, stakeholders can significantly contribute to the development of IVIS technologies that not only innovate but also align with the highest standards of safety, accessibility, and driver support. Such technologies would set new benchmarks for the functionality and safety of automotive infotainment systems, ultimately improving the driving experience and reducing road-related risks.

6. Limitations

The literature review on IVISs has several limitations that impact the interpretation of its results and future research directions. It only includes studies published from January 2012 to September 2023, potentially overlooking foundational and recent advancements in IVIS technologies. This limited time frame may not capture the evolving interactions and long-term effects of newer technologies on safety.
Focused primarily on studies in English, the review could suffer from geographical and linguistic biases, limiting the generalizability of its findings. Important research from non-English sources or diverse driving conditions and vehicle types might be neglected, which could enhance the robustness and applicability of the conclusions globally.
The exclusive reliance on published academic studies introduces a publication bias favoring studies with positive results, likely leading to an overrepresentation of the benefits of IVIS and underreporting of associated risks. The selection of specific databases and journals for this literature may also result in selection bias, excluding relevant grey literature and unpublished studies.
Variability in study design, safety metrics, and assessment methods across the included research complicates data synthesis, posing challenges in drawing definitive conclusions about IVIS safety impacts. The review process might be further limited by reviewer bias and the use of narrow analytical frameworks that do not fully account for the nuances of individual studies.
Moreover, many studies in the review use simulations or controlled environments, which fail to capture the full complexity of real-world driving scenarios. This raises concerns about the practical applicability of the findings to actual driving conditions.
The research focuses solely on non-autonomous IVISs, excluding how these systems function within autonomous vehicles, which feature significantly different dynamics due to advanced autonomous technologies.
Lastly, the review confines its observations to the internal environment of vehicles, ignoring external factors that could also influence driver safety. This narrow focus omits the complex interplay between internal controls and external conditions that can affect safety outcomes.

7. Future Work Directions

Drawing on the insights gained from the systematic literature review, several recommendations can guide future research to address identified gaps and enhance the understanding and development of IVISs.
Future studies should widen the temporal scope beyond the 2012–2023 window of the current review. This expansion would allow researchers to include earlier foundational research as well as the latest technological advancements in IVISs. Including studies from non-English sources and different geographical locations would enrich the findings, making them more universally applicable and reflective of a broader range of driving behaviors and regulatory environments.
The review highlights a significant gap in real-world testing of IVISs. Future research should prioritize naturalistic driving studies that evaluate the impact of IVISs under real traffic conditions. Such studies are essential for assessing the actual effects of these systems on driver behavior and safety, providing a more accurate depiction of their implications in everyday driving scenarios.
To overcome methodological inconsistencies across studies, there is a pressing need for standardized methodologies in evaluating IVISs. This effort should focus on harmonizing definitions of key variables like distraction and cognitive load and employing consistent and reliable measurement techniques. Standardization would ensure comparability and reproducibility of results, enhancing the reliability of conclusions drawn from this research.
Investigating how multimodal interfaces, which integrate visual, auditory, and haptic feedback, affect IVIS usability and safety could reduce the cognitive demands on drivers. Furthermore, exploring advanced technologies such as augmented reality and gesture control could offer innovative ways to interact with IVISs, potentially reducing distraction. Research in this area should assess both the benefits and challenges of these technologies to optimize IVIS design for safety.
The complexity of IVIS systems and their broad impact on driver behavior necessitate an interdisciplinary approach. Collaboration across fields such as human–computer interaction, psychology, automotive engineering, and traffic safety is essential. Such collaborative efforts can lead to more holistic approaches to IVIS evaluation and design, ultimately improving the integration of these systems into vehicles.
There is a need for focused research on developing comprehensive safety and regulatory standards for IVISs. Studies should aim to establish clear guidelines that dictate acceptable levels of distraction, usability requirements, and safety benchmarks that must be met by IVISs. Establishing these standards is critical for guiding the design and integration of safer IVIS technologies in vehicles.
Future research should expand the scope to include both autonomous and non-autonomous IVISs. This expanded focus should examine how IVISs designed for autonomous vehicles (AVs) interact with both the driver and the external driving environment. By broadening the scope to assess these external factors alongside internal dynamics, future studies can provide a more holistic understanding of IVIS impacts on overall driver safety in both autonomous and traditional driving contexts. Such research is essential for developing IVISs that effectively enhance safety across the full spectrum of modern vehicular technologies.
By addressing these recommendations, future research can significantly advance our understanding of IVISs and contribute to the development of systems that enhance, rather than impair, driver performance and road safety. This approach will help ensure that IVIS technologies are both effective and safe for use in diverse driving environments.

8. Conclusions

The principal outcomes of this review underline the widespread issue of distractions associated with IVISs. Technologies such as head-down displays and touchscreens, despite their innovative nature, are frequently critiqued for their role in promoting distractions by requiring drivers to divert their gaze from the road. Conversely, technologies that employ speech-based interfaces and Bluetooth integration are recognized for their potential to diminish manual interactions, thereby potentially improving driving safety by enabling drivers to maintain their focus on the road and their hands on the steering wheel.
Despite these technological advancements, the review points out a significant deficiency in empirical evaluations conducted in real-world settings. Most research relies on simulated conditions, which fail to accurately reflect the complexities inherent in actual driving environments. This gap underscores the necessity for enhanced research methodologies capable of assessing the efficacy and safety of IVISs in diverse and unpredictable driving situations.
Moreover, the review addresses the need for superior interface designs that prioritize safety and user-friendliness. It proposes the development of augmented reality head-up displays that project essential information within the driver’s field of vision, thereby reducing the necessity to look away from the road. Such displays could markedly decrease driver errors by integrating vital data smoothly with the external driving conditions.
A further critical issue identified in the review is the lack of standardized manufacturing protocols for IVISs. The absence of such standards results in significant variations in the safety features and performance of these systems across different manufacturers and vehicle models. Establishing stringent manufacturing and safety standards could ensure that all IVISs meet essential safety requirements, thus mitigating the risk of accidents associated with these systems.
The review also highlights the critical role of educational initiatives concerning the use of IVISs. Drivers must be thoroughly instructed in the safe and effective operation of these systems to minimize safety risks. Educational programs and comprehensive user manuals should be devised to this end, emphasizing the risks of over-reliance and the potential for distraction.
While IVISs hold the promise of substantially enhancing the driving experience and vehicle control, their impact on safety remains mixed. The ongoing challenge for manufacturers, researchers, and policymakers lies in advancing the technology while concurrently enforcing stringent safety protocols. Future research should concentrate on extensive real-world testing, the creation of safer user interfaces, and the formulation of comprehensive safety standards. Through such endeavors, the evolution of IVISs can continue with a foremost focus on improving road safety and curbing the incidence of technology-induced driving distractions.

Author Contributions

Conceptualization, R.K., A.Ž. and T.O.; methodology, R.K., A.Ž. and T.O.; software, R.K., A.Ž. and T.O.; validation, R.K., A.Ž. and T.O.; formal analysis, T.O.; investigation, R.K. and A.Ž.; resources, R.K., A.Ž. and T.O.; data curation, R.K., A.Ž. and T.O.; writing—original draft preparation, R.K. and A.Ž.; writing—review and editing, T.O.; visualization, R.K., A.Ž. and T.O.; supervision, T.O.; project administration, T.O.; funding acquisition, T.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. World Health Organization. Road Traffic Injuries. Available online: https://www.who.int/news-room/fact-sheets/detail/road-traffic-injuries (accessed on 25 April 2023).
  2. The World Counts. Top 10 Causes of Death. Available online: https://www.theworldcounts.com/populations/world/deaths (accessed on 27 December 2022).
  3. Verma, I.; Nath, S.; Karmakar, S. Research in Driver–Vehicle Interaction: Indian Scenario. In Ergonomics in Caring for People; Ray, G., Iqbal, R., Ganguli, A., Khanzode, V., Eds.; Springer: Singapore, 2015; pp. 353–361. [Google Scholar] [CrossRef]
  4. Wang, Y.; He, S.; Mohedan, Z.; Zhu, Y.; Jiang, L.; Li, Z. Design and Evaluation of a Steering Wheel-Mount Speech Interface for Drivers’ Mobile Use in Car. In Proceedings of the 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), Qingdao, China, 8–11 October 2014; pp. 673–678. [Google Scholar] [CrossRef]
  5. Fan, B.; Ma, J.; Jiang, N.; Dogan, H.; Ali, R. A Rule Based Reasoning System for Initiating Passive ADAS Warnings without Driving Distraction Through an Ontological Approach. In Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics, Miyazaki, Japan, 7–10 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 3511–3517. [Google Scholar] [CrossRef]
  6. Jeon, M. A systematic approach to using music for mitigating affective effects on driving performance and safety. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing, Pittsburgh, PA, USA, 5–8 September 2012; ACM: New York, NY, USA, 2012; pp. 1127–1132. [Google Scholar] [CrossRef]
  7. Oh, Y.D.; Ryu, D.W.; Park, S.H. The development of driver warning algorithm for safety driving support. In Proceedings of the 2016 IEEE International Conference on Consumer Electronics-Asia, Seoul, Republic of Korea, 26–28 October 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–4. [Google Scholar] [CrossRef]
  8. Gaffar, A.; Kouchak, S.M. Quantitative driving safety assessment using interaction design benchmarking. In Proceedings of the 2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation Conference, San Francisco, CA, USA, 4–8 August 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–8. [Google Scholar] [CrossRef]
  9. Orehovački, T.; Oreški, G.; Šajina, R. Driving Habits and the Need for Fatigue and Attention Monitoring Devices: Insights from Croatian Drivers. In Proceedings of the 47th ICT and Electronics Convention, Opatija, Croatia, 23 May 2024; pp. 1682–1687. [Google Scholar]
  10. Prabhakar, G.; Biswas, P. A Brief Survey on Interactive Automotive UI. Transp. Eng. 2021, 6, 100089. [Google Scholar] [CrossRef]
  11. Starkey, N.J.; Charlton, S.G. Drivers Use of In-Vehicle Information Systems and Perceptions of Their Effects on Driving. Front. Sustain. Cities 2020, 2, 39. [Google Scholar] [CrossRef]
  12. Fernández, A.; Usamentiaga, R.; Carús, J.; Casado, R. Driver Distraction Using Visual-Based Sensors and Algorithms. Sensors 2016, 16, 1805. [Google Scholar] [CrossRef] [PubMed]
  13. Lee, S.H.; Yoon, S.O.; Shin, J.H. On-wheel finger gesture control for in-vehicle systems on central consoles. In Adjunct Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Nottingham, UK, 1–3 September 2015; ACM: New York, NY, USA, 2015; pp. 94–99. [Google Scholar] [CrossRef]
  14. Yiwei, D. The effect of different interaction architecture on Driver Distraction. In Proceedings of the 2019 2nd International Conference on Information Systems and Computer Aided Education, Dalian, China, 28–30 September 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 359–361. [Google Scholar] [CrossRef]
  15. Han, X.; Patterson, P. The effect of information availability in a user interface (UI) on in-vehicle task performance: A pilot study. Int. J. Ind. Ergon. 2017, 61, 131–141. [Google Scholar] [CrossRef]
  16. Kim, H.; Gabbard, J.L. Assessing Distraction Potential of Augmented Reality Head-Up Displays for Vehicle Drivers. Hum. Factors 2022, 64, 852–865. [Google Scholar] [CrossRef] [PubMed]
  17. Zuki, F.S.M.; Sulaiman, S. Investigating sensory modalities in fatigue driver warning systems. In Proceedings of the 2016 4th International Conference on User Science and Engineering, Melaka, Malaysia, 23–25 August 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 128–132. [Google Scholar] [CrossRef]
  18. Charissis, V.; Falah, J.; Lagoo, R.; Alfalah, S.F.M.; Khan, S.; Wang, S.; Altarteer, S.; Larbi, K.B.; Drikakis, D. Employing Emerging Technologies to Develop and Evaluate In-Vehicle Intelligent Systems for Driver Support: Infotainment AR HUD Case Study. Appl. Sci. 2021, 11, 1397. [Google Scholar] [CrossRef]
  19. Yontem, A.O.; Li, K.; Chu, D.; Meijering, V.; Skrypchuk, L. Prospective Immersive Human-Machine Interface for Future Vehicles: Multiple Zones Turn the Full Windscreen Into a Head-Up Display. IEEE Veh. Technol. Mag. 2021, 16, 83–92. [Google Scholar] [CrossRef]
  20. Vo, D.B.; Brewster, S. Investigating the effect of tactile input and output locations for drivers’ hands on in-car tasks performance. In Proceedings of the 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Virtual Event, DC, USA, 21–22 September 2020; ACM: New York, NY, USA, 2020; pp. 1–8. [Google Scholar] [CrossRef]
  21. Truschin, S.; Schermann, M.; Goswami, S.; Krcmar, H. Designing interfaces for multiple-goal environments. ACM Trans. Comput.-Hum. Interact. 2014, 21, 7. [Google Scholar] [CrossRef]
  22. Prabhakar, G.; Rajkhowa, P.; Harsha, D.; Biswas, P. A wearable virtual touch system for IVIS in cars. J. Multimodal User Interfaces 2022, 16, 87–106. [Google Scholar] [CrossRef]
  23. Charissis, V.; Larbi, K.B.; Lagoo, R.; Wang, S.; Khan, S. Design Principles and User Experience of Automotive Head-Up Display Development. In Proceedings of the International Display Workshops, Virtual, 1–3 December 2021; Volume 28, pp. 662–665. [Google Scholar] [CrossRef]
  24. Chen, B.H.; Huang, S.C.; Tsai, W.H. Eliminating Driving Distractions: Human-Computer Interaction with Built-In Applications. IEEE Veh. Technol. Mag. 2017, 12, 20–29. [Google Scholar] [CrossRef]
  25. Park, J.; Park, W. Functional requirements of automotive head-up displays: A systematic review of literature from 1994 to present. Appl. Ergon. 2019, 76, 130–146. [Google Scholar] [CrossRef]
  26. Gabbard, J.L.; Smith, M.; Tanous, K.; Kim, H.; Jonas, B. AR DriveSim: An Immersive Driving Simulator for Augmented Reality Head-Up Display Research. Front. Robot. AI 2019, 6, 98. [Google Scholar] [CrossRef] [PubMed]
  27. Pauzie, A. Head Up Display in Automotive: A New Reality for the Driver. In Design, User Experience, and Usability: Interactive Experience Design; Marcus, A., Ed.; Lecture Notes in Computer Science, 9188; Springer: Cham, Switzerland, 2015; pp. 505–516. [Google Scholar] [CrossRef]
  28. Riener, A. Gestural Interaction in Vehicular Applications. Computer 2012, 45, 42–47. [Google Scholar] [CrossRef]
  29. Sterkenburg, J.; Landry, S.; Jeon, M. Design and evaluation of auditory-supported air gesture controls in vehicles. J. Multimodal User Interfaces 2019, 13, 55–70. [Google Scholar] [CrossRef]
  30. Zhao, D.; Wang, C.; Liu, Y.; Liu, T. Implementation and Evaluation of Touch and Gesture Interaction Modalities for In-vehicle Infotainment Systems. In Image and Graphics; Zhao, Y., Barnes, N., Chen, B., Westermann, R., Kong, X., Lin, C., Eds.; Lecture Notes in Computer Science, 11903; Springer: Cham, Switzerland, 2019; pp. 384–394. [Google Scholar] [CrossRef]
  31. Karam, M.; Wilde, R.; Langdon, P. Somatosensory Interactions: Exploring Complex Tactile-Audio Messages for Drivers. In Advances in Human Factors and System Interactions; Nunes, I., Ed.; Advances in Intelligent Systems and Computing, 497; Springer: Cham, Switzerland, 2017; pp. 117–128. [Google Scholar] [CrossRef]
  32. Braun, M.; Broy, N.; Pfleging, B.; Alt, F. Visualizing natural language interaction for conversational in-vehicle information systems to minimize driver distraction. J. Multimodal User Interfaces 2019, 13, 71–88. [Google Scholar] [CrossRef]
  33. Ji, W.; Liu, R.; Lee, S. Do Drivers Prefer Female Voice for Guidance? An Interaction Design About Information Type and Speaker Gender for Autonomous Driving Car. In HCI in Mobility, Transport, and Automotive Systems; Krömker, H., Ed.; Lecture Notes in Computer Science, 11596; Springer: Cham, Switzerland, 2019; pp. 208–224. [Google Scholar] [CrossRef]
  34. Jung, J.; Lee, S.; Hong, J.; Youn, E.; Lee, G. Voice + Tactile: Augmenting In-vehicle Voice User Interface with Tactile Touchpad Interaction. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; ACM: New York, NY, USA, 2020; pp. 1–12. [Google Scholar] [CrossRef]
  35. Wintersberger, P.; Riener, A.; Schartmüller, C.; Frison, A.K.; Weigl, K. Let Me Finish before I Take Over. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Toronto, ON, Canada, 23–25 September 2018; ACM: New York, NY, USA, 2018; pp. 53–65. [Google Scholar] [CrossRef]
  36. Ahmad, B.I.; Langdon, P.M.; Godsill, S.J.; Donkor, R.; Wilde, R.; Skrypchuk, L. You Do Not Have to Touch to Select: A study on Predictive In-car Touchscreen with Mid-air Selection. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Ann Arbor, MI, USA, 24–26 October 2016; ACM: New York, NY, USA, 2016; pp. 113–120. [Google Scholar] [CrossRef]
  37. Ahmad, B.I.; Murphy, J.K.; Langdon, P.M.; Godsill, S.J.; Hardy, R.; Skrypchuk, L. Intent Inference for Hand Pointing Gesture-Based Interactions in Vehicles. IEEE Trans. Cybern. 2016, 46, 878–889. [Google Scholar] [CrossRef] [PubMed]
  38. Schroeter, R.; Oxtoby, J.; Johnson, D. AR and Gamification Concepts to Reduce Driver Boredom and Risk Taking Behaviours. In Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Seattle, WA, USA, 17–19 September 2014; ACM: New York, NY, USA, 2014; pp. 1–8. [Google Scholar] [CrossRef]
  39. Steinberger, F.; Schroeter, R.; Lindner, V.; Fitz-Walter, Z.; Hall, J.; Johnson, D. Zombies on the road. In Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Nottingham, UK, 1–3 September 2015; ACM: New York, NY, USA, 2015; pp. 320–327. [Google Scholar] [CrossRef]
  40. Steinberger, F.; Schroeter, R.; Foth, M.; Johnson, D. Designing Gamified Applications that Make Safe Driving More Engaging. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; ACM: New York, NY, USA, 2017; pp. 2826–2839. [Google Scholar] [CrossRef]
  41. Steinberger, F.; Schroeter, R.; Lindner, V. From gearstick to joystick—Challenges in designing new interventions for the safety-critical driving context. In Proceedings of the OzCHI 2015 Workshop, Melbourne, Australia, 7 December 2015; Available online: https://eprints.qut.edu.au/89990/1/Steinberger%202015%20OzCHI%20Ethical%20Encounters%20Workshop.pdf (accessed on 8 May 2022).
  42. Rhiu, I.; Kwon, S.; Bahn, S.; Yun, M.H.; Yu, W. Research Issues in Smart Vehicles and Elderly Drivers: A Literature Review. Int. J. Hum.-Comput. Interact. 2015, 31, 635–666. [Google Scholar] [CrossRef]
  43. Agudelo, A.F.; Bambague, D.F.; Collazos, C.A.; Luna-García, H.L.; Fardoun, H. Design Guide for Interfaces of Automotive Infotainment Systems Based on Value Sensitive Design: A Systematic Review of the Literature. In Proceedings of the VI Iberoamerican Conference on Human Computer Interaction, Arequipa, Perú, 16–18 September 2020; Rosas-Paredes, K., Villalba-Condori, K.O., Eds.; Universidad Católica de Santa María: Arequipa, Perú, 2020; pp. 140–150. Available online: https://ceur-ws.org/Vol-2747/paper13.pdf (accessed on 8 May 2022).
  44. Agudelo, A.F.; Bambague, D.F.; Collazos, C.A.; Luna-García, H.; Fardoun, H. Identification of problems in the design of infotainment system interfaces. In Proceedings of the VII Iberoamerican Conference on Human Computer Interaction, São Paulo, Brazil, 8–10 September 2021; Satoshi Kawamoto, A.L., García-Holgado, A., Zaina, L., Ruiz, P.H., Farinazzo Martins, V., Agredo Delgado, V., Eds.; Universidade Presbiteriana Mackenzie: São Paulo, Brazil, 2021; pp. 141–146. Available online: http://ceur-ws.org/Vol-3070/short02.pdf (accessed on 8 May 2022).
  45. Keele, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; EBSE Technical Report, Ver. 2.3; EBSE: Durham, UK, 2007. [Google Scholar]
  46. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef] [PubMed]
  47. Jakus, G.; Dicke, C.; Sodnik, J. A user study of auditory, head-up and multi-modal displays in vehicles. Appl. Ergon. 2015, 46, 184–192. [Google Scholar] [CrossRef]
  48. Kim, H.; Kwon, S.; Heo, J.; Lee, H.; Chung, M.K. The effect of touch-key size on the usability of In-Vehicle Information Systems and driving safety during simulated driving. Appl. Ergon. 2014, 45, 379–388. [Google Scholar] [CrossRef]
  49. Kim, H.; Song, H. Evaluation of the safety and usability of touch gestures in operating in-vehicle information systems with visual occlusion. Appl. Ergon. 2014, 45, 789–798. [Google Scholar] [CrossRef] [PubMed]
  50. Graichen, L.; Graichen, M.; Krems, J.F. Evaluation of Gesture-Based In-Vehicle Interaction: User Experience and the Potential to Reduce Driver Distraction. Hum. Factors 2019, 61, 774–792. [Google Scholar] [CrossRef]
  51. Maculewicz, J.; Larsson, P.; Fagerlönn, J. Intuitive and subtle motion-anticipatory auditory cues reduce motion sickness in self-driving cars. Int. J. Hum. Factors Ergon. 2021, 8, 370–392. [Google Scholar] [CrossRef]
  52. Wiedemann, K.; Schömig, N.; Naujoks, F.; Neukum, A.; Keinath, A. Method for the Evaluation of Distraction Effects of Head-Up Displays in Vehicles Using the Example of Smart Glasses. Int. J. Hum. Factors Ergon. 2023, 10, 235–264. [Google Scholar] [CrossRef]
  53. Jing, C.; Bryan-Kinns, N.; Yang, S.; Zhi, J.; Zhang, J. The influence of mobile phone location and screen orientation on driving safety and the usability of car-sharing software in-car use. Int. J. Ind. Ergon. 2021, 84, 103168. [Google Scholar] [CrossRef]
  54. Čegovnik, T.; Stojmenova, K.; Tartalja, I.; Sodnik, J. Evaluation of different interface designs for human-machine interaction in vehicles. Multimed. Tools Appl. 2020, 79, 21361–21388. [Google Scholar] [CrossRef]
  55. Grogna, D.; Stojmenova, K.; Jakus, G.; Barreda-Ángeles, M.; Verly, J.G.; Sodnik, J. The impact of drowsiness on in-vehicle human-machine interaction with head-up and head-down displays. Multimed. Tools Appl. 2018, 77, 27807–27827. [Google Scholar] [CrossRef]
  56. Kang, B.; Lee, Y. High-Resolution Neural Network for Driver Visual Attention Prediction. Sensors 2020, 20, 2030. [Google Scholar] [CrossRef]
  57. Regan, M.A.; Strayer, D.L. Towards an Understanding of Driver Inattention: Taxonomy and Theory. Ann. Adv. Automot. Med. 2014, 58, 5–14. [Google Scholar]
  58. Bier, L.; Emele, M.; Gut, K.; Kulenovic, J.; Rzany, D.; Peter, M.; Abendroth, B. Preventing the risks of monotony related fatigue while driving through gamification. Eur. Transp. Res. Rev. 2019, 11, 44. [Google Scholar] [CrossRef]
  59. Cohen-Lazry, G.; Borowsky, A. Improving Drivers’ Hazard Perception and Performance Using a Less Visually-Demanding Interface. Front. Psychol. 2020, 11, 2216. [Google Scholar] [CrossRef] [PubMed]
  60. Jiang, L.; Xia, M.; Liu, X.; Bai, F. Givs: Fine-Grained Gesture Control for Mobile Devices in Driving Environments. IEEE Access 2020, 8, 49229–49243. [Google Scholar] [CrossRef]
  61. Pauzie, A.; Orfila, O. Methodologies to assess usability and safety of ADAS and automated vehicle. IFAC-PapersOnLine 2016, 49, 72–77. [Google Scholar] [CrossRef]
  62. Feng, F.; Liu, Y.; Chen, Y. Effects of Quantity and Size of Buttons of In-Vehicle Touch Screen on Drivers’ Eye Glance Behavior. Int. J. Hum.-Comput. Interact. 2018, 34, 1105–1118. [Google Scholar] [CrossRef]
  63. Akamatsu, M.; Green, P.; Bengler, K. Automotive Technology and Human Factors Research: Past, Present, and Future. Int. J. Veh. Technol. 2013, 526180, 1–27. [Google Scholar] [CrossRef]
  64. McStay, A.; Urquhart, L. In cars (are we really safest of all?): Interior sensing and emotional opacity. Int. Rev. Law. Comput. Technol. 2022, 36, 470–493. [Google Scholar] [CrossRef]
  65. Kujala, T. Browsing the information highway while driving: Three in-vehicle touch screen scrolling methods and driver distraction. Pers. Ubiquitous Comput. 2013, 17, 815–823. [Google Scholar] [CrossRef]
  66. Young, G.; Milne, H.; Griffiths, D.; Padfield, E.; Blenkinsopp, R.; Georgiou, O. Designing Mid-Air Haptic Gesture Controlled User Interfaces for Cars. In Proceedings of the ACM on Human-Computer Interaction; ACM: New York, NY, USA, 2020; Volume 4, pp. 1–23. [Google Scholar] [CrossRef]
  67. Kim, A.; Choi, W.; Park, J.; Kim, K.; Lee, U. Interrupting Drivers for Interactions. In Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies; ACM: New York, NY, USA, 2018; Volume 2, pp. 1–28. [Google Scholar] [CrossRef]
  68. Niu, J.; Zhou, Y.; Wang, D.; Liu, X. Influences of Gesture-Based Mobile Phone Use While Driving. Transp. Res. Rec. 2021, 2675, 1324–1335. [Google Scholar] [CrossRef]
  69. Xiao, Y.; He, R. The intuitive grasp interface: Design and evaluation of micro-gestures on the steering wheel for driving scenario. Univers. Access Inf. Soc. 2020, 19, 433–450. [Google Scholar] [CrossRef]
  70. Buchhop, K.; Edel, L.; Kenaan, S.; Raab, U.; Böhm, P.; Isemann, D. In-Vehicle Touchscreen Interaction: Can a Head-Down Display Give a Heads-Up on Obstacles on the Road? In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Oldenburg, Germany, 24–27 September 2017; ACM: New York, NY, USA, 2017; pp. 21–30. [Google Scholar] [CrossRef]
  71. Gable, T.M.; Walker, B.N.; Moses, H.R.; Chitloor, R.D. Advanced auditory cues on mobile phones help keep drivers’ eyes on the road. In Proceedings of the 5th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Eindhoven, The Netherlands, 28–30 October 2013; ACM: New York, NY, USA, 2013; pp. 66–73. [Google Scholar] [CrossRef]
  72. Kujala, T.; Grahn, H. Visual Distraction Effects of In-Car Text Entry Methods. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Oldenburg, Germany, 24–27 September 2017; ACM: New York, NY, USA, 2017; pp. 1–10. [Google Scholar] [CrossRef]
  73. Miura, T.; Yabu, K.I.; Tanaka, K.; Ozawa, H.; Furukawa, M.; Michiyoshi, S.; Yamamoto, T.; Ueda, K.; Ifukube, T. Visuospatial Workload Measurement of an Interface Based on a Dual Task of Visual Working Memory Test. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Ann Arbor, MI, USA, 24–26 October 2016; ACM: New York, NY, USA, 2016; pp. 9–17. [Google Scholar] [CrossRef]
  74. Naujoks, F.; Wiedemann, K.; Schömig, N. The Importance of Interruption Management for Usefulness and Acceptance of Automated Driving. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Oldenburg, Germany, 24–27 September 2017; ACM: New York, NY, USA, 2017; pp. 254–263. [Google Scholar] [CrossRef]
  75. Rajan, R.; Selker, T.; Lane, I. Effects of Mediating Notifications Based on Task Load. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Ann Arbor, MI, USA, 24–26 October 2016; ACM: New York, NY, USA, 2016; pp. 145–152. [Google Scholar] [CrossRef]
  76. Seppelt, B.; Seaman, S.; Angell, L.; Mehler, B.; Reimer, B. Differentiating Cognitive Load Using a Modified Version of AttenD. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Oldenburg, Germany, 24–27 September 2017; ACM: New York, NY, USA, 2017; pp. 114–122. [Google Scholar] [CrossRef]
  77. Wei, W.; Xue, Q.; Yang, X.; Du, H.; Wang, Y.; Tang, Q. Assessing the Cognitive Load Arising from In-Vehicle Infotainment Systems Using Pupil Diameter. In Proceedings of the Cross-Cultural Design, Copenhagen, Denmark, 23–28 July 2023; Rau, P.-L.P., Ed.; Springer Nature: Cham, Switzerland, 2023; pp. 440–450. [Google Scholar] [CrossRef]
  78. Dicke, C.; Jakus, G.; Sodnik, J. Auditory and Head-Up Displays in Vehicles. In Human-Computer Interaction. Applications and Services; Kurosu, M., Ed.; Lecture Notes in Computer Science, 8005; Springer: Berlin/Heidelberg, Germany, 2013; pp. 551–560. [Google Scholar] [CrossRef]
  79. Huang, G.; Chen, Y. Effects of Visual and Cognitive Load on User Interface of Electric Vehicle—Using Eye Tracking Predictive Technology. In Proceedings of the HCI in Mobility, Transport, and Automotive Systems, Copenhagen, Denmark, 23–28 July 2023; Krömker, H., Ed.; Springer Nature: Cham, Switzerland, 2023; pp. 375–384. [Google Scholar] [CrossRef]
  80. Goh, K.N.; Chen, Y.Y.; Arumugam, D. Road Accident Auto-dialer via Pressure Sensor. In HCI International 2013—Posters’ Extended Abstracts; Stephanidis, C., Ed.; Communications in Computer and Information Science, 374; Springer: Berlin/Heidelberg, Germany, 2013; pp. 308–312. [Google Scholar] [CrossRef]
  81. Göktürk, M.; Pakkan, A. Effects of In-Car Navigation Systems on User Perception of the Spatial Environment. In Design, User Experience, and Usability. User Experience in Novel Technological Environments; Marcus, A., Ed.; Lecture Notes in Computer Science, 8014; Springer: Berlin/Heidelberg, Germany, 2013; pp. 57–64. [Google Scholar] [CrossRef]
  82. Pretto, P.; Mörtl, P.; Neuhuber, N. Fluid Interface Concept for Automated Driving. In HCI in Mobility, Transport, and Automotive Systems. Automated Driving and In-Vehicle Experience Design; Krömker, H., Ed.; Lecture Notes in Computer Science, 12212; Springer: Cham, Switzerland, 2020; pp. 114–130. [Google Scholar] [CrossRef]
  83. Schömig, N.; Naujoks, F.; Hammer, T.; Tomzig, M.; Hinterleitner, B.; Mayer, S. Experimental Induction and Measurement of Negative Affect Induced by Interacting with In-vehicle Information Systems. In Human-Computer Interaction. Theories, Methods, and Human Issues; Kurosu, M., Ed.; Lecture Notes in Computer Science, 10901; Springer: Cham, Switzerland, 2018; pp. 441–452. [Google Scholar] [CrossRef]
  84. Zeng, M.; Guo, G.; Tang, Q. Vehicle Human-Machine Interaction Interface Evaluation Method Based on Eye Movement and Finger Tracking Technology. In HCI International 2019—Late Breaking Paper; Stephanidis, C., Ed.; Lecture Notes in Computer Science, 11786; Springer: Cham, Switzerland, 2019; pp. 101–115. [Google Scholar] [CrossRef]
  85. Chun, J.; Kim, S.; Dey, A.K. Exploring the Value of Information Delivered to Drivers. In Advances in Human Aspects of Transportation; Stanton, N., Landry, S., Di Bucchianico, G., Vallicelli, A., Eds.; Advances in Intelligent Systems and Computing, 484; Springer: Cham, Switzerland, 2017; pp. 963–977. [Google Scholar] [CrossRef]
  86. Fasanya, B.K.; Anand, S.; Kallepalli, G.S. Researchers and Public Views on Electronic Sideview Mirror System (ESMS) in the 21st Century Cars. In Advances in Human Aspects of Transportation; Stanton, N., Ed.; Advances in Intelligent Systems and Computing, 1212; Springer: Cham, Switzerland, 2020; pp. 99–106. [Google Scholar] [CrossRef]
  87. Li, G.; Zhu, F.; Zhang, T.; Wang, Y.; He, S.; Qu, X. Evaluation of Three In-Vehicle Interactions from Drivers’ Driving Performance and Eye Movement behavior. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems, Maui, HI, USA, 4–7 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 2086–2091. [Google Scholar] [CrossRef]
  88. Lagoo, R.; Charissis, V.; Chan, W.; Khan, S.; Harrison, D. Prototype gesture recognition interface for vehicular head-up display system. In Proceedings of the 2018 IEEE International Conference on Consumer Electronics, Las Vegas, NV, USA, 12–14 January 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–6. [Google Scholar] [CrossRef]
  89. Kim, A.; Choi, W.; Park, J.; Kim, K.; Lee, U. Predicting opportune moments for in-vehicle proactive speech services. In Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers, London, United Kingdom, 9–13 September 2019; ACM: New York, NY, USA, 2019; pp. 101–104. [Google Scholar] [CrossRef]
  90. Broccia, G. Model-based analysis of driver distraction by infotainment systems in automotive domain. In Proceedings of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems, Lisbon, Portugal, 26–29 June 2017; ACM: New York, NY, USA, 2017; pp. 133–136. [Google Scholar] [CrossRef]
  91. Angelini, L.; Sonderegger, A.; Baumgartner, J.; Carrino, F.; Carrino, S.; Caon, M.; Khaled, O.A.; Sauer, J.; Lalanne, D.; Mugellini, E. A comparison of three interaction modalities in the car: Gestures, voice and touch. In Actes de la 28ième Conference Francophone sur L’Interaction Homme-Machine, Fribourg, Switzerland, 25–28 October 2016; ACM: New York, NY, USA, 2016; pp. 188–196. [Google Scholar] [CrossRef]
  92. Xie, C.; Zhu, T.; Guo, C.; Zhang, Y. Measuring IVIS Impact to Driver by On-road Test and Simulator Experiment. Procedia Soc. Behav. Sci. 2013, 96, 1566–1577. [Google Scholar] [CrossRef]
  93. Ting, H.-C.; Chen, S.-S.; Labille, K.; Tsai, Y.W.; Chen, Y.H.; Ruan, S.-J. Intelligent applications design in automotive infotainment systems. In Proceedings of the 2012 IEEE Asia Pacific Conference on Circuits and Systems, Kaohsiung, Taiwan, 2–5 December 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 376–379. [Google Scholar] [CrossRef]
  94. Schartmuller, C.; Wintersberger, P.; Frison, A.K.; Riener, A. Type-o-Steer: Reimagining the Steering Wheel for Productive Non-Driving Related Tasks in Conditionally Automated Vehicles. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium, Paris, France, 9–12 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1699–1706. [Google Scholar] [CrossRef]
  95. Köpüklü, O.; Ledwon, T.; Rong, Y.; Kose, N.; Rigoll, G. DriverMHG: A Multi-Modal Dataset for Dynamic Recognition of Driver Micro Hand Gestures and a Real-Time Recognition Framework. In Proceedings of the 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition, Buenos Aires, Argentina, 16–20 November 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 77–84. [Google Scholar] [CrossRef]
  96. Yan, X.; Hou, W.; Xu, X. Obstacle Judgment Model of In-vehicle Voice Interaction System Based on Eye-tracking. In Proceedings of the 2021 IEEE 24th International Conference on Computer Supported Cooperative Work in Design, Dalian, China, 5–7 May 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 569–574. [Google Scholar] [CrossRef]
  97. Dreißig, M.; Baccour, M.H.; Schäck, T.; Kasneci, E. Driver Drowsiness Classification Based on Eye Blink and Head Movement Features Using the k-NN Algorithm. In Proceedings of the 2020 IEEE Symposium Series on Computational Intelligence, Canberra, ACT, Australia, 1–4 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 889–896. [Google Scholar] [CrossRef]
  98. Alotaibi, Y. The use of Electrotactile Feedback in Cars. In Proceedings of the 33rd International BCS Human Computer Interaction Conference, Keele University, UK, 6 July 2020; BCS Learning & Development, Keele University: Keele, UK, 2020; pp. 51–54. [Google Scholar] [CrossRef]
  99. Burnett, G.; Neila, N.; Crundall, E.; Large, D.; Lawson, G.; Skrypchuk, L.; Thompson, S. How do you assess the distraction of in-vehicle information systems? A comparison of occlusion, lane change task and medium-fidelity driving simulator methods. In Proceedings of the 3rd International Conference on Driver Distraction and Inattention, Gothenburg, Sweden, 4–6 September 2013; Available online: https://www.researchgate.net/profile/David-Large-2/publication/306099996_How_do_you_assess_the_distraction_of_in-vehicle_information_systems_A_comparison_of_occlusion_lane_change_task_and_medium-fidelity_driving_simulator_methods/links/57b5840408ae19a365fbfe5f/How-do-you-assess-the-distraction-of-in-vehicle-information-systems-A-comparison-of-occlusion-lane-change-task-and-medium-fidelity-driving-simulator-methods.pdf (accessed on 8 May 2022).
100. Tanaka, T.; Fujikake, K.; Yoshihara, Y.; Karatas, N.; Aoki, H.; Kanamori, H. Study on Acceptability of and Distraction by Driving Support Agent in Actual Car Environment. In Proceedings of the 7th International Conference on Human-Agent Interaction, Kyoto, Japan, 6–10 October 2019; ACM: New York, NY, USA, 2019; pp. 202–204.
101. Aksjonov, A.; Nedoma, P.; Vodovozov, V.; Petlenkov, E. An Enhancement of the Driver Distraction Detection and Evaluation Method Based on Computational Intelligence Algorithms. In Proceedings of the 16th International Conference on Industrial Informatics, Porto, Portugal, 18–20 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 201–206.
102. Kim, K.; Kim, D.; Choi, H.; Jang, B. Requirements for older drivers in the driver-adaptive vehicle interaction system. In Proceedings of the 2017 International Conference on Information and Communication Technology Convergence, Jeju, Republic of Korea, 18–20 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1021–1023.
103. Kurt, B.; Gören, S. Development of a Mobile News Reader Application Compatible with In-Vehicle Infotainment. In Mobile Web and Intelligent Information Systems; Younas, M., Awan, I., Ghinea, G., Catalan Cid, M., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; Volume 10995, pp. 18–29.
104. Zhu, Y.; Liu, W.; Zhu, D. Design Research on Vibration Tactile Feedback in Vehicle Navigation Information Application. In Proceedings of the Eighth International Workshop of Chinese CHI, Honolulu, HI, USA, 26 April 2020; ACM: New York, NY, USA, 2020; pp. 47–56.
105. Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. In Human Mental Workload; Hancock, P.A., Meshkati, N., Eds.; Advances in Psychology; North-Holland: Amsterdam, The Netherlands, 1988; Volume 52, pp. 139–183.
Figure 1. PRISMA flow diagram.
Figure 2. Distribution of studies by year of publication.
Table 1. The initial and final set of studies found in relevant databases.

Database | Initial Set of Studies | Final Set of Studies
ACM Digital Library | 125 | 24
Elsevier ScienceDirect | 35 | 10
Emerald Insight | 6 | 0
Google Scholar | 124 | 27
Hindawi | 2 | 1
IEEE Xplore | 28 | 20
Inderscience Publishers | 8 | 2
MDPI | 5 | 3
SAGE Journals | 8 | 3
SpringerLink | 202 | 24
Taylor & Francis Online | 81 | 2
Table 2. The distribution of selected studies across journals.

Name of the Journal | Number of Studies | References
Applied Ergonomics | 5 | [25,47,48,49]
Journal on Multimodal User Interfaces | 3 | [22,29,32]
Human Factors: The Journal of the Human Factors and Ergonomics Society | 2 | [16,50]
IEEE Vehicular Technology Magazine | 2 | [19,24]
International Journal of Human Factors and Ergonomics | 2 | [51,52]
International Journal of Industrial Ergonomics | 2 | [15,53]
Multimedia Tools and Applications | 2 | [54,55]
Sensors | 2 | [12,56]
ACM Transactions on Computer-Human Interaction | 1 | [21]
Annals of Advances in Automotive Medicine | 1 | [57]
Applied Sciences | 1 | [18]
Computer | 1 | [28]
European Transport Research Review | 1 | [58]
Frontiers in Psychology | 1 | [59]
Frontiers in Robotics and AI | 1 | [26]
IEEE Access | 1 | [60]
IEEE Transactions on Cybernetics | 1 | [37]
IFAC-PapersOnLine | 1 | [61]
International Journal of Human–Computer Interaction | 1 | [62]
International Journal of Vehicular Technology | 1 | [63]
International Review of Law, Computers & Technology | 1 | [64]
Personal and Ubiquitous Computing | 1 | [65]
Proceedings of the ACM on Human–Computer Interaction | 1 | [66]
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies | 1 | [67]
Transportation Engineering | 1 | [10]
Transportation Research Record: Journal of the Transportation Research Board | 1 | [68]
Universal Access in the Information Society | 1 | [69]
Table 3. The distribution of selected studies across conference proceedings.

Name of the Conference | Number of Studies | References
International Conference on Automotive User Interfaces and Interactive Vehicular Applications | 13 | [13,20,35,36,38,39,70,71,72,73,74,75,76]
International Conference on Human–Computer Interaction | 10 | [27,33,77,78,79,80,81,82,83,84]
International Conference on Applied Human Factors and Ergonomics | 3 | [31,85,86]
CHI Conference on Human Factors in Computing Systems | 2 | [34,40]
IEEE Conference on Intelligent Transportation Systems | 2 | [4,87]
IEEE International Conference on Consumer Electronics | 2 | [7,88]
ACM Conference on Ubiquitous Computing | 1 | [6]
ACM International Joint Conference on Pervasive and Ubiquitous Computing and ACM International Symposium on Wearable Computers | 1 | [89]
ACM SIGCHI Symposium on Engineering Interactive Computing Systems | 1 | [90]
Conférence Francophone sur l’Interaction Homme-Machine | 1 | [91]
COTA International Conference of Transportation Professionals | 1 | [92]
Iberoamerican Conference on Human Computer Interaction | 1 | [43]
IEEE Asia Pacific Conference on Circuits and Systems | 1 | [93]
IEEE Intelligent Vehicles Symposium | 1 | [94]
IEEE International Conference on Automatic Face and Gesture Recognition | 1 | [95]
IEEE International Conference on Computer Supported Cooperative Work in Design | 1 | [96]
IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation Conference | 1 | [8]
IEEE Symposium Series on Computational Intelligence | 1 | [97]
International BCS Human Computer Interaction Conference | 1 | [98]
International Conference on Driver Distraction and Inattention | 1 | [99]
International Conference on Human–Agent Interaction | 1 | [100]
International Conference on Humanizing Work and Work Environment | 1 | [3]
International Conference on Image and Graphics | 1 | [30]
International Conference on Industrial Informatics | 1 | [101]
International Conference on Information and Communication Technology Convergence | 1 | [102]
International Conference on Information Systems and Computer Aided Education | 1 | [14]
International Conference on Mobile Web and Intelligent Information Systems | 1 | [103]
International Conference on Systems, Man, and Cybernetics | 1 | [5]
International Conference on User Science and Engineering | 1 | [17]
International Display Workshops | 1 | [23]
International Workshop of Chinese CHI | 1 | [104]
OzCHI Workshop | 1 | [41]
Table 4. Theoretical knowledge on IVISs.

Concept | Definition | References
IVIS Definition | Any form of interaction that transfers information to the driver in a safe and accessible manner. This includes hardware and software components designed to enhance the driving experience without compromising safety. | [18]
Information Types | Various types of information conveyed to the driver, including vehicle state information (e.g., speed, fuel level), safety information (e.g., warning alerts), navigation information (e.g., GPS directions), infotainment information (e.g., music, videos), and outside environment information (e.g., weather, traffic conditions). | [91]
Dashboard | A primary IVIS component that displays crucial vehicle state information directly in the driver’s line of sight behind the steering wheel. It typically includes the speedometer, fuel gauge, and warning lights. | [10,17]
Central Console | A secondary IVIS component usually located in the middle front of the vehicle. It provides navigation, infotainment, and outside environment information, as well as controls for secondary functions like adjusting the radio volume or air conditioner. | [10,17]
Engagement with IVIS | The quantity and quality of task-oriented mental resources dedicated to interacting with the IVIS. High engagement means the driver is heavily focused on the tasks provided by the IVIS. | [40]
Distraction vs. Inattention | Distraction refers to the diversion of attention away from driving due to external factors (e.g., interacting with IVISs). Inattention refers to preoccupation with internal thoughts (e.g., daydreaming) that reduces focus on driving. | [57]
Workload | The perceived relationship between the mental processing capability of the driver and the demands of the driving task. It measures the amount of mental effort required to complete driving tasks effectively. | [77,105]
Cognitive Load/Overload | Cognitive load refers to the total amount of mental effort being used in working memory. Cognitive overload occurs when the driver’s cognitive capacity is exceeded, often due to spontaneous processes like daydreaming or excessive information from the IVIS. | [12]
Cognitive Underload | A state in which the driver experiences a lack of mental stimulation, often due to repetitive and monotonous tasks, leading to decreased alertness and potential safety risks. | [94]
Visual Load | The demand placed on the driver’s visual system when performing a visually demanding secondary task simultaneously with driving, leading to divided attention between tasks. | [65,79]
Driver Distraction | The diversion of attention from activities critical for safe driving (e.g., monitoring the road) to non-driving-related tasks (e.g., using a phone), which can increase the risk of accidents. | [12]
Working Memory (WM) | A cognitive system with a limited capacity that temporarily holds information available for processing. It is crucial for reasoning, decision-making, and behavior. | [85]
Interruptible Moment | A specific time interval during which a driver can receive and process new information without significantly compromising driving safety. | [85]
Tactile Feedback | The use of electro-tactile feedback to convey information through electrical stimulation of the skin, providing sensory feedback that can inform the driver of various conditions or alerts. | [98]
Haptic Feedback | The use of vibrations or other touch-based sensations to communicate information to the driver without requiring visual or auditory attention. This feedback can enhance the driver’s situational awareness and response times. | [16,66]
Gestural Interaction | Interaction with the IVIS through predefined physical movements, typically hand gestures, allowing the driver to control the system without physical contact, thereby reducing the need to look away from the road. | [28,54]
Speech Interaction (VUI) | Interaction with the IVIS through voice commands, enabling drivers to control the system hands-free and thus maintain focus on driving tasks. | [25,30,33]
Head-Up Display (HUD) | An electronic display that projects information onto the windshield or a transparent screen in the driver’s upper field of view, allowing the driver to access information without looking away from the road. | [18,27]
Augmented Reality (AR) HUD | Combines the real-world view with virtual information, projecting augmented data (e.g., navigation arrows, safety alerts) onto the windshield to enhance situational awareness and driving safety. | [16,19]
Multimodal Interfaces | Interfaces that combine two or more user input modes (e.g., touch, voice, gesture) in a coordinated manner, allowing more flexible and efficient interaction with the IVIS. | [22,78]
Gamification | The application of game elements and design techniques in non-game contexts, such as driving, to enhance driver engagement, motivation, and safety. Examples include reward systems for safe driving practices. | [38,39,40,58]
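The workload concept in Table 4 is commonly quantified with the NASA-TLX instrument [77,105]. As a rough illustration (not tooling from any reviewed study), the sketch below computes both the raw and the weighted TLX score, assuming six subscale ratings on a 0–100 scale and weights obtained from the standard 15 pairwise comparisons; all numbers are invented.

```python
# Illustrative NASA-TLX scoring; subscale names follow Hart and Staveland [105].
SUBSCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def raw_tlx(ratings):
    """Raw TLX: unweighted mean of the six 0-100 subscale ratings."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

def weighted_tlx(ratings, weights):
    """Weighted TLX: weights come from 15 pairwise comparisons and sum to 15."""
    if sum(weights[s] for s in SUBSCALES) != 15:
        raise ValueError("pairwise-comparison weights must sum to 15")
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15

# Invented ratings for a driver performing an IVIS task while driving.
ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 50}
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}

print(raw_tlx(ratings))                          # 50.0
print(round(weighted_tlx(ratings, weights), 1))  # 60.3
```

The weighted score exceeds the raw score here simply because the heavily weighted subscales (mental demand, effort) were also the highly rated ones.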
Table 5. Factors affecting the safety of IVIS.

Factor | Key Points | References
Driving Distraction | Low response to traffic events; main cause of traffic accidents; visual and cognitive distraction from IVISs; inattention leads to accidents | [3,4,5,6,7,12,18]
Situational Awareness | Decreased workload and increased situational awareness improve safety; visual overload and multitasking divide gaze attention | [17,60,61,82,84]
Cognitive Load | High mental workload and poor driving posture reduce safety; minimizing non-driving-related tasks is critical; visual and manual distractions are harmful | [12,20,25,48,59,62,72]
Driving Performance | Reaction time is critical for safety; excessive dashboard devices distract; touch buttons should be large to reduce visual demands | [18,31,48,62,68,87]
Interaction Success | Reducing off-road glances and task completion times improves performance; interface layout and manipulation efficiency are important | [19,25,32,87,89]
Emotional State | Fatigue, drowsiness, intoxication, and stress harm safety; anxiety, sadness, and frustration arise from NLP errors; boredom affects young drivers | [6,7,40,49,55,64]
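The glance-related factors above are often checked against the NHTSA visual-manual distraction guidelines, which, roughly stated, ask that no single off-road glance exceed 2 s and that total eyes-off-road time for a task stay under 12 s. A minimal sketch of such a check, with the thresholds treated as configurable assumptions rather than values from the reviewed studies:

```python
def glance_check(glances_s, max_single=2.0, max_total=12.0):
    """Evaluate one IVIS task from its off-road glance durations (seconds).

    Returns (passes, longest_glance, total_eyes_off_road)."""
    longest = max(glances_s, default=0.0)
    total = sum(glances_s)
    return longest <= max_single and total <= max_total, longest, total

# Invented glance log for a destination-entry task: the 2.4 s glance fails it.
ok, longest, total = glance_check([1.2, 0.8, 2.4, 1.1, 0.9])
print(ok, longest)  # False 2.4
```

In practice such a check would be one component of a distraction-detection pipeline fed by eye-tracking data, alongside the cognitive measures listed in the table.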
Table 6. Safety evaluation of current IVIS.

Factor | Key Points | References
Dashboard Displays | Located in the lower part of the driver’s field of view; visual messages increase cognitive load and decrease safety; visual warnings can be distracting | [18,19,27]
Center Console | Uses buttons and switches requiring physical touch; newer vehicles use touchscreens, which can be distracting as they require drivers to take their gaze off the road | [20,22,93]
Touchscreens | Similar safety issues to dashboards; kinetic scrolling increases visual load; touch interfaces lead to distraction and inattention | [65,90]
In-Vehicle Navigation Systems | Useful but can cause incidents; drivers may become disengaged from the environment; increase averted-gaze time and reduce lane position maintenance | [49,81]
Speech Interfaces | Reduce off-road glances and peripheral impairment; NLP errors can cause distraction and increase task-switching; higher voice pitches enhance urgency perception | [21,30,33,75,87,96]
Mobile Phones | High usage among drivers; Bluetooth integration helps, but auditory tasks can still be distracting; windshield placement at eye level is safer | [8,18,32,53,103]
Advanced Driver Assistance Systems (ADAS) | Features like lane-keeping assistance and collision warnings help reduce cognitive load but can add visual messages and distract drivers | [5,19,61]
Augmented Reality Head-Up Displays (AR HUDs) | Project information onto the windshield, reducing the need to look away from the road; enhance situational awareness | [27]
Adaptive User Interfaces | Adjust displayed information based on driving context; prioritize essential information during high-demand situations | [19]
Enhanced Natural Language Processing (NLP) | More accurate voice commands reduce manual interaction; improvements in machine learning minimize errors | [30,96]
AI and Machine Learning | Predict and mitigate safety risks by analyzing driver behavior; provide proactive alerts to prevent accidents | [17]
Multimodal Feedback Systems | Use visual, auditory, and haptic feedback; ensure critical information is communicated effectively | [17]
Enhanced Connectivity | Real-time data exchange with external sources; features like traffic updates and hazard notifications improve safety | [103]
Table 7. Historical advancements in the safety of IVIS.

Period | Key Developments | References
1940s–1960s | Introduction of passive-safety technology (e.g., sponge rubber instrument panels); establishment of safety organizations | [63]
1970s–1980s | First broadcast of traffic information via radio; initial studies on sensory modalities for warning signals | [17]
1990s–2010s | Introduction of visual, auditory, and tactile warning feedback; development of collision warning systems | [17]
2000s | Popularization of mobile phones; introduction of Bluetooth hands-free vehicle kits; integration of phone calls via Bluetooth | [10,103]
2010s–Present | Evolution to touchscreens with customizable interfaces; enhanced voice recognition and natural language processing | [30,93,96,103]
Recent Advancements | Integration with ADAS (e.g., lane departure warnings, collision avoidance); AI for predicting driver needs; AR HUDs | [5,16,19,61]
Current Trends | Cloud-based services for real-time updates; ongoing advancements with AR, AI, and 5G integration | [16,19]
Table 8. Enhancements in the safety of IVIS with emerging technology.

Technology | Enhancements and Safety Improvements | References
Touchscreen Interfaces | Large buttons to prevent accidental activation; list-style menus for consistent glance duration; minimal menu items to reduce off-road glances; live stream display to aid drivers; mid-air selection and virtual touch systems to minimize cognitive load and visual attention; eye-gaze systems for focus | [14,15,36,47,48,59,62,65,70,84,93]
Gestural Interaction | Reduces driver distraction; optimized gesture control systems; auditory-supported air-gesture-operated menus; dynamic recognition of micro hand gestures for natural interaction; combined with ultrasonic mid-air haptic sensations for enhanced safety | [13,28,29,30,50,54,60,93,95]
Tactile Feedback | Reduces the duration of off-road glances; steering wheel interaction preferred; electro-tactile feedback on the steering wheel; vibrotactile feedback for navigation; suitable for visually or hearing-impaired drivers | [20,31,98,104]
Speech-Based Interfaces | Natural spoken-conversation interaction; visual feedback to support conversation; combined voice and tactile interaction; female voice for trustworthiness and clarity; obstacle judgment using eye-tracking; auditory cues for reduced distraction | [4,32,33,34,71,78,87,96,103]
Context-Aware Interfaces | Adapt to driver behavior and road conditions; haptic keyboard for improved typing performance and take-over reaction; driver agent system for elderly driver support | [86,94,100]
Head-Up Displays (HUDs) | Faster response times; fewer navigational errors; transparent windshield display for hazard warnings and driving instructions; gesture-based and eye-gaze control for improved interaction and reduced distraction | [10,16,23,24,25,47,78,88]
Augmented Reality (AR) | AR HUDs for enhanced situational awareness; screen-fixed displays for navigation; potential for dangerous distraction if not designed properly; visual feedback for road safety | [19,26,27]
Warning Systems | Multimodal warning systems to balance mental workload; fitness trackers and smartwatches for drowsiness detection; machine learning algorithms for predicting driver drowsiness and distraction | [7,10,17,80,84,85,97]
Distraction Detection Systems | Vision-based distraction detection; auto-dialer system for emergencies; quantifying visuospatial workload; auditory displays to reduce motion sickness; interruptibility framework for proactive speech tasks | [12,51,52,67,73,80,89,99,101]
Artificial Intelligence (AI) | Driver-monitoring systems for emotion, fatigue, and attention detection; intent-aware interactive displays; Fluid interface for seamless control transition; attentive user interfaces to reduce stress and increase performance | [28,35,37,64,74,82,83]
Gamification | Reduces boredom and enhances driving performance; various gamified rewards (leaderboard, health packs, keys); gamified applications for safe driving practices | [38,39,40,41,58]
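Several of the drowsiness-warning approaches above rely on eyelid-closure measures such as PERCLOS, the proportion of time over a window in which the eyes are nearly closed. A toy sketch follows; the 0.2 eye-openness cut-off and the 0.15 alert threshold are illustrative choices, not values taken from the reviewed studies.

```python
def perclos(eye_openness, closed_below=0.2):
    """Fraction of frames whose eye-openness estimate (0 = shut, 1 = wide open)
    falls below `closed_below`, i.e. eyes at least ~80% closed."""
    if not eye_openness:
        raise ValueError("need at least one frame")
    return sum(o < closed_below for o in eye_openness) / len(eye_openness)

# Invented per-frame openness estimates from a driver-facing camera.
window = [0.9, 0.85, 0.1, 0.05, 0.8, 0.15, 0.9, 0.88, 0.07, 0.92]
score = perclos(window)
print(score)         # 0.4
print(score > 0.15)  # True -> a warning system might alert the driver here
```

A production system would compute this over a sliding time window from a vision-based eye tracker and typically fuse it with other signals (steering behavior, wearable data) before issuing a warning.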
Table 9. Shortcomings in research on the safety aspects of IVISs.

Shortcoming | Description | References
Lack of Real Environment Evaluation | Current evaluations are mostly executed in simulated environments, which differ significantly from real environments | [10,101]
Inability to Simulate Diverse Road Situations | Laboratory-based simulations cannot reproduce the variety of road situations and traffic participants found in different countries | [10]
Single Driving Speed Evaluation | Most studies evaluated their proposals at a single driving speed | [10]
Consideration of Driver Disabilities | Some studies did not mention whether drivers’ disabilities were considered during prototyping and evaluation | [10]
Lack of Fallback Features | Most prototypes did not mention fallback features and emergency systems in case of failure | [80]
Limited Vehicle Types | Research conducted mostly on cars, without considering vans, trucks, and other types of vehicles | [3]
Diverse Driving Contexts | Deficit of HCI studies concerning uncommon standards in driving contexts globally | [3]
Support for Safe Driving | In-vehicle navigation systems lack support for safe driving | [81]
Standardization Issues | No standard exists for developing IVISs, making task performance while driving troublesome | [43]
Effects of Speech-Based Interactions | Effects on driving performance and visual search behavior are not well understood | [87]
Studies on Warning Systems | Scarcity of studies on warning systems for fatigued drivers; the best sensory modalities remain unknown | [17]
Emotion Detection Challenges | Difficulty in determining driver emotions accurately; unreliability in emotion-based IVISs | [6,64]
Table 10. Key study findings on IVIS.

Key Aspect | Findings | References
Efficiency | AR interfaces enhance interaction efficiency, reducing distractions and cognitive load. Touch-based systems demand more interaction, increasing cognitive load. | [30,37]
Response Times | Speech-based interfaces provide rapid responses but are less effective in noisy environments. Adaptive noise cancellation is suggested to improve accuracy. | [13,103]
Robustness of Embedded Systems | Fault-tolerant designs are important in gesture recognition systems to ensure reliability. Integrated hardware and software solutions enable predictive failure mitigation. | [95]
Signal Integrity | Issues like Bluetooth signal loss can interrupt crucial communications. Advanced redundancy protocols and 5G technology are recommended to ensure continuous operation. | [4]
Cognitive Load | Traditional metrics do not capture cumulative cognitive effects. Future research should include longitudinal studies and physiological measures to understand long-term impacts. | [12,105]
Automation Complacency | Overreliance on automation could atrophy driving skills. Future studies should explore driver adaptation and IVIS integration to maintain manual driving competencies. | [7]
Socio-Economic and Cultural Contexts | The impact of IVISs varies across contexts. Research should include diverse demographics to guide the customization of interfaces for enhanced usability and safety. | [17]
Ethical Considerations | Privacy and data security concerns need more attention. Ethical frameworks should be integrated into IVIS development to ensure trust and transparency. | [64]
Integration with Emerging Technologies | IVIS interactions with autonomous and electric vehicles introduce new challenges and opportunities. A systemic approach is necessary to understand how IVISs can complement autonomous features. | [4]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.


Krstačić, R.; Žužić, A.; Orehovački, T. Safety Aspects of In-Vehicle Infotainment Systems: A Systematic Literature Review from 2012 to 2023. Electronics 2024, 13, 2563. https://doi.org/10.3390/electronics13132563
