Supporting Drivers of Partially Automated Cars Through an Adaptive Digital In-Car Tutor

Drivers struggle to understand how, and when, to safely use their cars' complex automated functions. Training is necessary but costly and time-consuming. We propose a Digital In-Car Tutor (DIT) to support drivers in learning about, and trying out, their car automation during regular drives. In this driving simulator study, we investigated the effects of a DIT prototype on appropriate automation use and take-over quality. The study comprised three sessions, each containing multiple driving scenarios. Participants needed to use the automation when they thought it was safe, and turn it off when it was not. The control group read an information brochure before driving, while the experimental group received the DIT during the first driving session. DIT users showed more correct automation use and better take-over quality during the first driving session. The DIT especially reduced inappropriate reliance behaviour throughout all sessions, although DIT users did show some under-trust during the last driving session. Overall, the concept of a DIT shows potential as a low-cost and time-saving solution for safe, guided learning in partially automated cars.


Introduction
Although commercial cars are increasingly equipped with combinations of automated functions such as Adaptive Cruise Control (ACC) and Lane Keeping Systems (LK), drivers appear to have a hard time getting used to them. Many drivers do not know which Advanced Driver Assistance Systems (ADAS) their car has, what they do, or how to use them safely [1,2]. Several aspects appear to contribute to drivers' confusion about car automation. First, different car brands introduce automated systems with similar names but different functions, or with different system names for similar functions [3,4]. Second, research showed that at least a quarter of all drivers do not receive any information about ADAS from the salesperson when they buy a car equipped with such a system [5,6]. Furthermore, only a small proportion of drivers gets to actually drive with the automated functions at the point of sale. This is worrisome, as drivers need multiple interactions with an automated system to properly understand it [7,8]. Third, current driver-car interfaces often fail to follow widely accepted human factors and human-machine interaction guidelines [4], leading to misinterpretations of a system's capabilities. Co-driving (alternatively referred to as cooperative or shared control; see, for example, [9][10][11]) has been suggested to reduce the need for frequent and complete control switches. Although it may take many forms, co-driving entails shared control of the vehicle: some responsibilities are allocated to the driver, while others are allocated to the car. Still, even in co-driving, a driver needs to know how this shared control works, what the car's capabilities and limitations are, and when they are responsible for which particular driving task. All in all, a lack of understanding about ADAS may reduce traffic safety [12][13][14][15] and limit any prospected benefits of automated driving [16][17][18][19][20].
Drivers need to be supported in learning when it is (not) safe to use the automation in their car [21].
Several solutions have been proposed to support drivers in understanding, and safely using, the automation in their car. The first is to stimulate the use of owners' manuals. However, not only are these usually long and complicated, studies also suggest that practice is required to fully support safe automation use [22][23][24]. A second option is hands-on training, for example with a driving instructor or in a driving simulator. Driving simulators in particular allow drivers to practise rare but critical driving situations [25][26][27]. The main downside to these options is that additional training at a driving school or at a facility with a simulator requires high investments, both financially and time-wise.

Digital In-car Tutor (DIT)
In the present study, we explore the potential of a Digital In-car Tutor (DIT) to support drivers in using in-vehicle automation. A DIT guides drivers through the different automated systems in their own cars, during regular drives. While a DIT may take various forms, we specifically studied a DIT prototype using audio and an Augmented Reality (AR) overlay on the windscreen (see 2.3.2). The DIT is designed to be used in real cars during regular drives. The following three steps illustrate the core functionalities of our DIT prototype. First, the DIT introduces one of the automated car systems while the driver is driving manually. New systems are only introduced when the driver is in a low-complexity situation [28], such as an empty straight road on a clear day. Such an introduction covers the system's functionalities, handling, capabilities and limitations, and equipment. Second, the driver can try out the functionality while the DIT provides immediate feedback. Third, the DIT reminds drivers of specific system capabilities and limitations when a related situation is encountered. Furthermore, rare situations are addressed when driving in similar, but more frequent, situations to keep the driver's mental model up to date [7]. A new system is introduced once the driver has safely driven with the current one for a certain number of kilometres (for example 500 km), and the cycle repeats itself. A DIT could have many benefits over regular driving lessons, simulator training, and the use of owners' manuals. First, it is less time-consuming and costly, as it is active in the driver's own car during regular drives. Second, a DIT allows for continuous and situated support over a longer period of time. Last, a DIT can be brand- and model-specific, and can be adjusted when automated functions are changed by software updates.
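The three-step cycle described above can be sketched as a simple controller. All names and thresholds below (the class, the `on_tick` interface, the 500 km interval, and the binary complexity flag) are our own illustrative assumptions, not the actual DIT implementation.

```python
# Illustrative sketch of the DIT tutoring cycle: introduce a system,
# monitor safe use with feedback, and move on after enough safe kilometres.
# All names and thresholds are hypothetical.

KM_BEFORE_NEXT = 500  # assumed km of safe driving before the next system

class DigitalInCarTutor:
    def __init__(self, systems):
        self.systems = list(systems)   # e.g. ["ACC", "LK", "OD", "TS", "RM"]
        self.current = 0               # index of the system being tutored
        self.safe_km = 0.0             # safe kilometres with the current system

    def on_tick(self, km_driven, low_complexity, safe_use):
        """Advance the tutoring cycle for one stretch of road.
        Returns the tutor action for this moment, if any."""
        if not safe_use:
            # unsafe use triggers reflective feedback on the current system
            return f"feedback: reflect on use of {self.systems[self.current]}"
        self.safe_km += km_driven
        if self.safe_km >= KM_BEFORE_NEXT and self.current + 1 < len(self.systems):
            self.current += 1
            self.safe_km = 0.0
            if low_complexity:  # introduce new systems only in calm situations
                return f"introduce: {self.systems[self.current]}"
        return None

tutor = DigitalInCarTutor(["ACC", "LK", "OD", "TS", "RM"])
# four calm, safe stretches of 250 km each
actions = [tutor.on_tick(250, low_complexity=True, safe_use=True) for _ in range(4)]
```

With these assumed parameters, a new system is introduced after every 500 safe kilometres, so `actions` alternates between `None` and an introduction.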

Adaptive Communication
To facilitate learning and avoid excessive cognitive demand, a DIT should be adaptive in various ways. First, instructions by the DIT should concern the current driving situation, so that the driver is able to immediately process and apply them. Furthermore, the modality, timing, and duration of the communication need to be adjusted to the demands of the driving situation to avoid overload. Studies on the cognitive demands of feedback suggest that tutoring in highly complex driving situations should be condensed and action-based, while elaborate theory and reflection can be presented during low-complexity situations [29][30][31]. Last, the feedback needs to adapt to the driver's performance in order to update his or her mental model. This includes both direct but short feedback, and elaborate reflection after the situation. For example, drivers may need to be informed if they turn on the automation outside of its Operational Design Domain (ODD) [32]. These tutor strategies were implemented in our DIT prototype. Earlier, Simon [33] studied an auditory digital tutoring system for Adaptive Cruise Control (ACC). The tutor content was adapted to the traffic situation in general and to the driver's preferred maximum deceleration. However, the timing and duration did not adapt, nor was the information adjusted to the complexity of the traffic situation. These characteristics may, however, be required in a tutor system, as they may help prevent driver overload. Simon [33] did find benefits of the tutor in terms of driving safety and a more efficient use of the ACC. However, with the introduction of a variety of automated systems, such research needs to be extended towards cars with multiple systems, as these drastically increase the learning difficulty for drivers.

Present Study
In the current driving simulator study, we compared the effects of a DIT prototype (DIT group) with those of an information brochure (IB group) on the use of complex car automation during three driving sessions. In all driving scenarios, participants were required to decide whether they could rely on the automation or not. In the specific scenarios that required drivers to turn off the automation, the take-over quality was analysed. During the first driving session, the DIT group was supported by the DIT prototype in learning about the various automated car systems. In contrast, the IB group familiarized itself with the automation by reading an information brochure before driving in the simulator. Two more driving sessions followed, one directly after the first and one after two weeks. During these sessions, the DIT was no longer active for the DIT group. The additional sessions were introduced to investigate how any effects of the DIT lasted over time. Last, multiple acceptance elements (e.g., ease of use) of the DIT were assessed through a questionnaire.
Overall, we expected the DIT to provide drivers with a better understanding, and safer use, of the automation. Our first hypothesis was that using the DIT would result in more correct automation use; that is, drivers would only rely on the automation if it could deal with the situation safely, and take back control if it could not. Our second hypothesis was that DIT users would show a better take-over performance in critical situations, defined as: taking over earlier, braking less intensely, and showing more stable vehicle control.
In conclusion, we examined whether a DIT was more beneficial for supporting drivers in safely using car automation than an information brochure. DITs may provide a more time- and cost-efficient solution to driver training for partially automated cars than training in driving simulators or on the road with driving instructors. Furthermore, a DIT allows for situated and repeated learning. Lastly, any over-the-air updates of the automation can be directly integrated into the DIT, allowing for tailored instructions about the latest version of the automation. The results of this study allow us to gain insight into whether or not a DIT is an appropriate method to increase appropriate car automation use.

Participants
Thirty-eight participants (23 female, 15 male) took part in the driving simulator study; 19 were part of the control condition (IB group) and 19 of the experimental condition (DIT group). All participants were students or employees of the University of Twente. Their average age was 27.5 years (SD = 13.1, range: 18-65), and on average they had possessed their driver's license for 9.2 years (SD = 10.81, range: 1-47). Eight participants drove almost every day, 15 drove multiple times a week, eight drove once a week, and seven drove less than once per week. Most had experience with Cruise Control (N = 29); seven participants had experience with Adaptive Cruise Control, and two with Lane Assist. The Affinity for Technology Interaction (ATI) scale [34,35] was used to determine participants' general affinity with technology. On this scale from 1 (low affinity with technology) to 6 (high affinity with technology), participants scored an average of 3.9 (SD = 0.77). The groups did not significantly differ on any of these characteristics. As the experiment was conducted in English, participants had to speak and understand English fluently. All subjects gave their informed consent for inclusion before they participated in the study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the University of Twente BMS Ethics Committee (nr. 191220).

Driving Simulator & Simulated Automated Car
The experiment took place in the driving simulator of the University of Twente (Figure 1). This simulator includes a car mock-up with a steering wheel and pedals. Three projectors display the simulation on a 7.8 m by 1.95 m screen with a viewing angle of approximately 180 degrees. Rear- and side mirrors were projected on the screen. A tablet displayed the speedometer, tachometer, and an icon that showed whether the automation was on. The simulated car was equipped with level 2 automation, which included 1) Adaptive Cruise Control (ACC), 2) Lane Keeping (LK), 3) Obstacle Detection (OD), 4) Traffic Light and Priority Sign Detection (TS), and 5) Priority Road Markings Detection (RM). These systems were designed specifically for this experiment and did not resemble a particular car model, to prevent transfer from existing cars; participants were informed about this. The steering wheel included a blue button to turn all automation on and off. Participants could not turn the automation off by braking or steering.

Control Condition: Information Brochure Training (IB Group)
At the start of the first driving session, participants in the IB group received a paper brochure on the five automated systems, which they read for 10 min before driving. This brochure covered the functions, handling, equipment, capabilities, and limitations of each system. It contained the same system information that the DIT group received from the DIT. However, as the information was given prior to the practice scenarios, it did not include any situation- and driver-adaptive feedback.

Experimental Condition: Digital In-car Tutor (DIT Group)
The DIT prototype introduced the five automated systems (ACC, LK, OD, TS, and RM) to the participants through auditory and visual information. All visual information was projected as an overlay on the windscreen (Figure 2). This reduced the need for drivers to look away from the road and allowed the information to be directly related to the driving situation. All visual information was accompanied by verbal explanations. The standard digital Google Assistant voice, female with a British accent, was used for the pre-recorded verbal communication.
Procedure. The DIT followed these steps during Session 1 of the experiment. It first introduced a specific automated system (e.g., Adaptive Cruise Control) at the start of the scenarios, always on a straight road without traffic. The DIT verbally explained the functions, handling, equipment, capabilities, and limitations of this system (Figure 2a,b). The verbal explanations were supported by illustrations projected on the windscreen. The DIT then told participants to use the automation if they thought that it was safe. As participants approached the situation in which they needed to either turn off the automation or leave it on, the DIT reminded them of the system capabilities and limitations that applied to that specific situation (Figure 2c).
Adaptivity. The information from the DIT was expected to put some cognitive demand on drivers [36,37]. To avoid driver overload, the length and type of DIT messages were adapted to the complexity of the driving situation. This could be considered a 'safety filter' for our DIT, as described by Van Gent et al. [29]. The communication was longer and more detailed in low-complexity situations, and condensed during highly complex situations. Furthermore, discussing theory and reflecting upon situations only occurred during low-complexity situations. This included the system introductions on the simple straight road at the start of each scenario [28], and reflection after each critical situation. As an example, the ACC introduction was: "ACC keeps the car at a set speed, and automatically speeds up, and slows down the car, to keep a set distance to the car ahead. The car has several cameras which are used to detect a car ahead of you." If the driver correctly left the automation on in this scenario (ACC1), the reflection was: "Great job. The ACC detected the cars in front of you and slowed down to keep the set speed". These strategies were based upon studies that investigated tutoring strategies of driving instructors [38,39]. In a similar way that studies have used human processing and decision-making strategies as a basis for robotics or intelligent vehicles with artificial processing and decision-making skills [40], we implemented the observed feedback strategies of human tutors in a digital tutor.
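The complexity-based 'safety filter' described above can be illustrated with a small selection rule. The message texts are abridged from the ACC example in the text, and the binary complexity flag is our own simplification of how the prototype distinguished situations.

```python
# Hedged sketch of complexity-adaptive message selection ("safety filter"):
# condensed, action-based messages in demanding situations; elaborate theory
# and reflection only in calm situations. Message texts are abridged examples.

MESSAGES = {
    "ACC": {
        # full introduction, only delivered in low-complexity situations
        "introduction": ("ACC keeps the car at a set speed, and automatically "
                         "speeds up and slows down to keep a set distance to "
                         "the car ahead."),
        # condensed, action-based reminder for highly complex situations
        "condensed": "ACC uses cameras; watch the car ahead.",
        # reflection, again reserved for low-complexity moments
        "reflection": "Great job. The ACC detected the cars in front of you.",
    }
}

def select_message(system, phase, high_complexity):
    """Pick a tutor message for a system and tutoring phase
    ('introduction' or 'reflection'), given situation complexity."""
    if high_complexity:
        return MESSAGES[system]["condensed"]
    return MESSAGES[system][phase]
```

For instance, `select_message("ACC", "introduction", high_complexity=True)` falls back to the condensed reminder, whereas the same call with `high_complexity=False` returns the full introduction.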
The DIT also adapted to the driving situation by reminding drivers of the system capabilities and limitations specific to the current situation. In combination with the overlay visuals, this meant that drivers could directly perceive and process the information in its specific context; they did not have to interpret information in an artificial context (e.g., a screen with a simplified visualisation of the situation) and then apply it to the current driving situation. For example, when the weather changed for the worse in a scenario, the DIT reminded the driver that the car cannot function reliably in heavy fog and rain (Figure 2c). It is important to note that the DIT never explicitly told the driver that it was safe to leave the automation on, or that the automation needed to be turned off. This was decided because doing so would be unrealistic in a real-world driving scenario (driving a level 2 vehicle), for both safety and reliability reasons. Similarly, the DIT is not intended to be used as a warning system. Rather, the DIT identifies certain situations in order to provide situated tutoring and learning.
Last, the DIT adapted its feedback to the current performance of the driver. If the automation was used outside its ODD, the DIT reflected afterwards on why this was not safe. If the automation was unnecessarily turned off, the DIT would also reflect on this, adding that the driver's judgement was the most important, and that the automation should only be used if the driver thought that it could safely cope. The feedback was manually activated by the researcher.

Set-up and Procedure
The experiment was a between-subjects design with an experimental condition (DIT group) and a control condition (IB group). Both groups drove in three sessions (Table 1), each containing multiple scenarios. All participants were given the following task for each scenario: "You can start the scenario by driving manually. Turn on the automation whenever you think that the car can safely cope, and turn (or leave) it off if it cannot. The car can't cope with a situation if: traffic regulations have to be violated or the car will damage something or harm someone".
Participants were informed at the start of each session that they remained responsible for their safety and that of their fellow road users while using the automation. They also needed to adhere to the general traffic rules and speed limits. If the participant hit something or someone, a crash sound was played and the scenario ended. After each scenario, participants were asked by the researcher whether they thought that the car could safely cope with the previous situation and why.
At the start of Session 1, all participants received a written overview of the experiment procedure and filled out an informed consent form and a demographics questionnaire. Participants could get used to the simulator in a 10-min demo scenario. Overall, Session 1 consisted of 10 scenarios and lasted 1 h. The DIT provided information and feedback during all scenarios in Session 1 (see 2.3.3), while the IB group read a brochure about the automation for 10 min before driving. Participants were reminded of their task (mentioned above) before each scenario. Session 2 started after a 10-min break. This session contained 8 scenarios and lasted 30 min. Again, participants were reminded of their task before each scenario, and the DIT was disengaged for all participants. All participants were asked to take part in Session 3, which took place after two weeks. However, as not all participants were able to come back due to work or school commitments, each group contained 11 participants during Session 3. The set-up of Session 3 was identical to that of Session 2. This last session was included to investigate how any potential effects of the DIT evolved after repeated interaction with the automation.
The order of the scenarios was randomized in Sessions 2 and 3. The scenarios in Session 1 were not randomized and followed the order depicted in Table 1. This way, the DIT could introduce the different automated systems in a realistic and logical order to the DIT group. The same order of scenarios was adhered to for the IB group to avoid that different orders between groups might influence the results.

Table 1. Overview of the experiment set-up for the Digital In-Car Tutor (DIT) group and the Information Brochure (IB) group. Descriptions of all abbreviated driving scenarios are available in Tables 2 and 3. [Table layout not recoverable from the text extraction: both groups drove the same scenarios in each session (the system-specific scenarios, ending in TS1, TS2, RM1, RM2, in Session 1, and scenarios T1-T8 in Sessions 2 and 3); only the DIT group received tutor guidance, and only during the Session 1 scenarios.]

Scenarios

All scenarios started with a straight road without traffic so drivers could calmly start driving manually and turn on the automation if they thought it was safe to do so. Furthermore, during Session 1, the DIT introduced a new system to the DIT group on this road while they were still driving manually. After the straight road, the specific driving scenario started. All scenarios contained an event area during which the automation should be on or off. Session 1 contained 10 driving scenarios (Table 2) of 3 to 4 min each. Each of the five automated systems described in 2.2.1 had two dedicated scenarios that addressed a particular capability or limitation of that system: one scenario in which the automation could cope, and one in which it could not. During the first system-specific scenario, the DIT explained the basic functionalities, capabilities, and limitations of the particular system. During the second scenario, the DIT further elaborated on the limitations of the system. Sessions 2 and 3 both contained eight scenarios of 2 to 3 min each (Table 3).
In each session, four scenarios required a take-over, and four did not. The scenarios in Session 3 were the same as those in Session 2, but with considerable changes to the environment; this made them look different to the participants while still allowing for a comparison with Session 2. If a participant did not take back control in situations that the automation could not cope with, the car would crash and the scenario would end.

Variables
This study contained two independent variables: Training Method (DIT versus information brochure), and Session (Sessions 1, 2, and 3). Three dependent variables were measured during the experiment: acceptance, appropriate automation use, and take-over quality.
Acceptance. Participants indicated their acceptance of their training method in a questionnaire at the end of the first session. This questionnaire was a slight adaptation of the Technology Acceptance Questionnaire [41] and addressed six core aspects of technology acceptance: perceived ease of use, perceived usefulness, attitude, intention to use, self-efficacy, and social norm [42][43][44][45][46] (Appendix A).
Appropriate automation use. Each scenario contained an 'event area' during which the automation should be on or off. For events that required the automation to be off, the event area started at the latest moment the participant could turn off the automation and brake to avoid a crash. For example, when the participant was driving 100 km/h, the event area started 76 m before the point where the car would crash into something or someone (members.home.nl/johngrimbergen/remwegformule.htm). For scenarios in which the automation could be (left) on, the event area started directly after the straight road at the start of the specific scenario. Whether a scenario required the automation to be off was determined before the experiment, based on the system information used in the driver training. Four subcategories were used to specify the type of automation use during the event areas: 1) Correct take-over, the automation is off when necessary, 2) Correct reliance, the automation is on while it is safe, 3) Incorrect take-over, the automation is off while this is not necessary, 4) Incorrect reliance, the automation is on when this is not safe. It was decided not to include a knowledge test to determine the participants' explicit knowledge about the automated systems. In our previous studies [22], we found that a good score on the initial knowledge test did not predict actual use of the automation in the driving simulator study.
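The start of a take-over event area can be reproduced with a standard stopping-distance calculation. The reaction time (1 s) and braking deceleration (8 m/s²) below are our assumptions, not values stated in the paper; with these values, the formula yields the 76 m quoted for 100 km/h.

```python
# Stopping-distance sketch for locating the start of an event area.
# Assumed parameters (hypothetical): 1 s reaction time, 8 m/s^2 deceleration.

REACTION_TIME_S = 1.0
DECELERATION_MS2 = 8.0

def event_area_start(speed_kmh):
    """Metres before the collision point at which the event area starts:
    reaction distance plus braking distance."""
    v = speed_kmh / 3.6                      # convert km/h to m/s
    reaction = v * REACTION_TIME_S           # distance covered before braking
    braking = v ** 2 / (2 * DECELERATION_MS2)
    return reaction + braking

print(round(event_area_start(100)))  # → 76
```

At 100 km/h (27.8 m/s), this gives roughly 27.8 m of reaction distance plus 48.2 m of braking distance, matching the 76 m event-area onset described above.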
Take-over quality. In scenarios that required the automation to be (turned) off, the following three take-over quality variables were measured from the moment the driver turned off the automation until the location of a possible collision: Time To Collision (TTC) (s), deceleration rate (m/s²), and lateral acceleration (m/s²) [47,48].
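These three measures can be computed from simulator logs roughly as follows. The log format assumed here (speed, distance-to-collision, and lateral-speed samples) is a hypothetical simplification, not the simulator's actual data format.

```python
# Hedged sketch: take-over quality metrics from the moment the automation
# is switched off until the possible collision location.

def time_to_collision(distance_m, speed_ms):
    """TTC (s) at take-over: remaining distance over current speed."""
    return distance_m / speed_ms

def mean_deceleration(v_start_ms, v_end_ms, duration_s):
    """Average deceleration rate (m/s^2) over the braking interval."""
    return (v_start_ms - v_end_ms) / duration_s

def mean_lateral_acceleration(lateral_speeds_ms, dt_s):
    """Average magnitude of lateral acceleration (m/s^2) from successive
    lateral-speed samples taken dt_s apart."""
    diffs = [abs(b - a) / dt_s
             for a, b in zip(lateral_speeds_ms, lateral_speeds_ms[1:])]
    return sum(diffs) / len(diffs)
```

For example, turning the automation off 100 m before the collision point at 25 m/s gives a TTC of 4 s, and braking from 25 m/s to 5 m/s over those 4 s gives a mean deceleration of 5 m/s².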
Appropriate automation use and take-over quality were already used as performance measures during Session 1. As the DIT is intended to be used by drivers in real cars during regular trips, Session 1 represented drivers' first on-road experience with the automation. For the DIT condition this would be when the DIT provides situated training to the driver while he or she is driving with the automation for the first time. For the IB group, this would be when the driver is driving with the automation for the first time after reading the information brochure. Careful assessment of the automation use was therefore already necessary during the first session as drivers need to be able to safely use the automation as soon as they start driving.

Analysis
The frequency data on 'appropriate automation use' were first analysed using a Chi-Square test. Next, we investigated how appropriate automation use evolved over time for each of the training methods. This was achieved through a mixed-model approach, specifically a Generalized Estimating Equations (GEE) model. A GEE model was chosen because our study was a 2 (group) × 3 (session) repeated measures design, the dependent variable was binary, and we wanted to control for variations between scenarios [49,50]. To evaluate the specific types of (in)correct automation use more closely, a multinomial logistic regression model was created [51,52], as it allows categorical response variables with more than two options. The response variable was 'automation use type' (correct take-over, correct reliance, incorrect take-over, and incorrect reliance).
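As a reference for the first analysis step, the Pearson Chi-Square statistic for a group-by-correctness frequency table can be computed as below. The counts in the usage example are made up for illustration and do not reproduce the paper's tables.

```python
# Pearson Chi-Square statistic for a 2 x 2 frequency table
# (e.g., incorrect vs correct automation use, per group). Pure-Python sketch.

def chi_square(table):
    """table: list of rows of observed counts, e.g. [[a, b], [c, d]].
    Returns the Chi-Square statistic (no continuity correction)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows are groups, columns are [incorrect, correct]
stat = chi_square([[10, 20], [20, 10]])
```

The statistic is then compared against the Chi-Square distribution with (rows − 1) × (columns − 1) = 1 degree of freedom to obtain a p-value.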
The average lateral acceleration and deceleration rates were determined for the scenarios that required a take-over, starting directly after the participant turned off the automation until the end of the scenario. Any group differences in 'vehicle control' were then analysed with independent t-tests. All research data are freely available in the Supplementary Materials and in the following data repository: https://osf.io/xebrw/?view_only=eb59ffbbddc04bdf8f18d811f74d65ab.
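The fractional degrees of freedom reported in the results (e.g., t(20.59)) suggest that Welch's unequal-variance t-test was used; under that assumption, a minimal stdlib version of the statistic is sketched below.

```python
import math

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two independent samples with possibly unequal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb                        # squared standard error
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

Unlike Student's t-test, this version does not pool the variances, which is why the resulting degrees of freedom are generally non-integer.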

Correct Take-Over and Reliance Behaviour
During the first session, the IB group used the automation incorrectly (either incorrect reliance or incorrect take-over) more often than the DIT group (N IB = 65, N DIT = 46) (Table 4). This difference was significant overall (χ²(1, N = 379) = 4.285, p = 0.025), as well as for the specific scenarios OD2 (χ²(1, N = 38) = 8.992, p = 0.003) and RM2 (χ²(1, N = 38) = 7.795, p = 0.006). In scenario OD2, a pedestrian crossed the street from behind a large bus that blocked the view of the car's cameras.
In RM2, the lane markings were missing just before a sharp curve. No significant differences were found in Session 2 (N IB = 32, N DIT = 26; χ²(1, N = 301) = 0.720, p = 0.240) or Session 3 (N IB = 13, N DIT = 17; χ²(1, N = 176) = 0.643, p = 0.274). The observed power was sufficient for the Chi-Square tests per session (1 − β > 0.8, d = 0.3, α = 0.05), but insufficient for between-group comparisons in specific scenarios (1 − β < 0.6, d = 0.3, α = 0.05). Consequently, if we control for the number of scenarios through a rather conservative Bonferroni correction (α adjusted = 0.05/26 = 0.002), the differences found in individual scenarios are no longer significant (all p > 0.002). Some specific scenarios showed notably more incorrect automation use than the others: ACC1 and T6. ACC1 (N = 34) was the very first scenario that any of the participants encountered during this study. T6 contained a signalized intersection with intersecting traffic (N session2 = 20, N session3 = 10). The car would stop for the crossing traffic based on the traffic signs and continue after all traffic had passed. Multiple participants indicated that they thought the buildings were too close to the intersection and might block the view of the cameras.
Next, a Generalized Estimating Equation procedure followed (2.3.7). The dependent variable was correct automation use, the random effects were participant and scenario, and the fixed effects were group and session (Table 5). The chosen working correlation matrix type was 'exchangeable', as this resulted in the lowest Quasi-Likelihood under the Independence Model Criterion (QIC = 917.230) [50]. The binary logit model showed a significant effect of session (χ²(1, N = 856) = 17.158, p < 0.001), but no overall effect of group (χ²(1, N = 856) = 0.249, p = 0.618), nor an overall interaction effect (χ²(2, N = 856) = 4.186, p = 0.123). However, there were marginally significant effects of group in Session 1 (χ²(1, N = 379) = 3.835, p = 0.050) and Session 2 (χ²(1, N = 301) = 3.688, p = 0.055). Looking at the specific types of incorrect automation use (incorrect take-over or incorrect reliance), the IB group made more incorrect reliance decisions in Session 1 (N IB = 27, N DIT = 13), Session 2 (N IB = 16, N DIT = 12), and Session 3 (N IB = 6, N DIT = 2) (Figure 3). A Chi-Square analysis confirmed a difference between groups in incorrect reliance decisions, but only for Session 1 (χ²(1, N = 190) = 6.20, p = 0.020). The DIT group had more incorrect take-overs in Session 3 (N IB = 7, N DIT = 15; χ²(1, N = 88) = 3.879, p = 0.049). That is, they failed to rely on the car when it was safe to do so more often than the IB group. The observed power for these Chi-Square tests was sufficient at > 0.8 (d = 0.3, α = 0.05). A multinomial logistic regression model was created next (Table 6). As in the GEE analysis, the fixed effects were group and session, and the random effects were participant and scenario. The analysis confirmed an effect of both session and group on the specific types of automation use.
Participants in the IB group were more likely to show incorrect reliance behaviour (p = 0.030). Furthermore, participants were more likely to show incorrect reliance (p = 0.014) and incorrect take-overs (p = 0.044) during Session 1. No interaction effects of group and session were found (all p > 0.05).

Figure 3. Overview of the different types of (in)correct automation use. Incorrect take-over means that the driver unnecessarily turned off the automation. Incorrect reliance indicates that the automation was on when it was not safe.

Table 6. Multinomial logistic regression model in which the response variable was 'automation use type', the fixed effects were 'group' and 'session', and the random effects were 'participant' and 'scenario'. Note. The automation use type 'correct reliance', the DIT group, and Session 3 are not included, as these were the baseline. * = significant effect on a 0.05 level. The interaction effects were all non-significant (all p > 0.05) and were excluded from this table for readability purposes.

Summary.
Overall, the DIT group appeared to use the automation correctly more often than the IB group during Sessions 1 and 2, although a significant difference was only confirmed for Session 1. Considering the specific types of automation use, the DIT group consistently showed fewer incorrect reliance decisions than the IB group throughout all sessions; this difference was confirmed through the multinomial regression. Surprisingly, however, the DIT group unnecessarily took back control (incorrect take-over) more often than the IB group in Session 3.

Take-over Quality and Vehicle Control
During the first driving session, the DIT group showed larger Times To Collision (TTC) at take-over in three (ACC2, OD2, and RM2) out of the five scenarios that required a take-over (Figure 4). For scenario ACC2, the DIT group took back control significantly earlier (M_DIT = 11.30, SD_DIT = 7.54) than the IB group (M_IB = 3.48, SD_IB = 3.57) (t(20.59) = 3.80, p = 0.001). The DIT group also took back control significantly earlier in scenario OD2 (t(27) = 2.45, p = 0.025), with a mean TTC of 6.19 s for the DIT group (SD = 2.55) and 3.67 s for the IB group (SD = 2.92). Similarly, the DIT group took back control significantly earlier in scenario RM2 (t(21.63) = 2.27, p = 0.034). In this scenario, the mean take-over distance was even negative for the IB group (M_IB = −0.03, SD_IB = 2.12; M_DIT = 1.24, SD_DIT = 0.93), indicating that the take-over occurred after the collision location had already been passed. In Sessions 2 and 3, the IB group still appeared to take back control later in most scenarios that required a take-over; however, these results were not significant. In Sessions 1 and 2, none of the scenarios showed a significant difference between groups in the average lateral acceleration after take-over. In Session 3, only one scenario (Test 9) showed a significant difference between groups in the average lateral acceleration after take-over (t(19)).

Summary.
Overall, the DIT group showed significantly larger TTCs and smaller deceleration rates during the first session, indicating earlier, and consequently gentler, take-overs. While this still appeared to be the case in Sessions 2 and 3, the differences were no longer significant. Only one scenario across all sessions showed a difference between groups in lateral acceleration; in this case, the DIT group showed the larger lateral acceleration.
The possibility of Type II errors needs to be taken into account for the take-over quality and vehicle control variables, as the power was < 0.8 for these tests (d = 0.5, α = 0.05) [53].
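For reference, the TTC values above follow the usual car-following definition: the remaining gap divided by the closing speed. A minimal sketch (this is the textbook formula, not the simulator's internal implementation):

```python
import math

def time_to_collision(gap_m: float, ego_speed: float, lead_speed: float) -> float:
    """Time To Collision in seconds for a following vehicle.

    gap_m: distance to the obstacle in metres.
    ego_speed, lead_speed: speeds in m/s.
    Returns infinity when the gap is constant or growing (no projected collision).
    """
    closing_speed = ego_speed - lead_speed
    if closing_speed <= 0:
        return math.inf
    return gap_m / closing_speed

# Example: taking over 50 m behind a stationary obstacle at 25 m/s leaves a 2 s TTC.
print(time_to_collision(50.0, 25.0, 0.0))  # 2.0
```

A negative take-over distance, as reported for the IB group in scenario RM2, means the collision point was reached before control was taken back, which is why TTC at take-over is the more informative safety margin.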

Acceptance
At the end of the first session, participants rated their agreement with several statements about their training on a scale of 1 (Strongly disagree) to 7 (Strongly agree) (Figure 6). Overall, the participants of the DIT group agreed that the DIT was easy to use (M = 5.79, SD = 0.93, 95% CI = 5.34-6.24) and useful (M = 5.72, SD = 1.18, 95% CI = 5.15-6.29). Participants were positive towards the DIT (M = 5.74, SD = 1.11, 95% CI = 5.20-6.27), and disagreed that it was annoying or frustrating (M = 2.63, SD = 1.28, 95% CI = 2.02-3.25). Furthermore, participants showed the intent to use the DIT if it were in their partially automated car (M = 5.05, SD = 1.65, 95% CI = 4.26-5.85), and felt that they were capable of using it (M = 5.87, SD = 0.47, 95% CI = 5.64-6.09). Participants disagreed that people who are important to them think that they should use the DIT (M = 3.79, SD = 2.12, 95% CI = 2.77-4.81). This seems logical, as their friends and family most likely do not know about the system. The acceptance ratings could not be compared between groups, as each group only experienced one training method.
Figure 6. Overview of the acceptance ratings. For the IB group, the words 'training system' were replaced by 'training'. Two 'ease of use' questions did not apply to the IB group. The error bars indicate the 95% Confidence Intervals.
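The 95% confidence intervals reported for these ratings are consistent with the standard t-based interval, M ± t(0.975, n−1) · SD/√n. The sketch below reproduces the 'easy to use' interval under an assumed group size of n = 19; this n is an illustrative assumption, not a figure restated in this section.

```python
import math
from scipy import stats

def t_ci(mean: float, sd: float, n: int, level: float = 0.95):
    """Two-sided t-based confidence interval for a sample mean."""
    t_crit = stats.t.ppf(0.5 + level / 2, df=n - 1)
    half_width = t_crit * sd / math.sqrt(n)
    return mean - half_width, mean + half_width

# 'Easy to use': M = 5.79, SD = 0.93; n = 19 is an assumed group size.
low, high = t_ci(5.79, 0.93, 19)
print(round(low, 2), round(high, 2))  # 5.34 6.24
```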

Discussion
A Digital In-Car Tutor (DIT) is proposed as a situated, low-cost, and time-efficient method for drivers to learn about their partially automated car during regular driving trips. In this study, we evaluated a DIT prototype for a complex (simulated) partially automated car. It was hypothesized that the DIT prototype would support drivers in deciding when it is safe to use the automation, and consequently lead to better vehicle control when taking back control. To study this, we compared appropriate automation use and take-over quality between two groups over three driving sessions. The control group received information about the car automation through a brochure (IB group), while the experimental group received the information from the DIT prototype during the first driving session (DIT group). The DIT provided situated information about the systems' capabilities and limitations. Drivers were instructed to turn on the automation whenever they thought that the car could safely cope with the situation, and to turn (or leave) it off if they thought that it could not. Each scenario contained an event in which it was either safe or unsafe to use the automation. This way, automation use could be classified as follows: (1) correct take-over, the automation is off when necessary; (2) correct reliance, the automation is on while it is safe; (3) incorrect take-over, the automation is off while this is not necessary; and (4) incorrect reliance, the automation is on when this is not safe. It is important to note that the DIT is not a warning system that prompts all upcoming events. Rather, it identifies certain scenarios to support situated learning. Furthermore, the DIT never stated that it was safe to leave the automation on, or that it was necessary to take back control. For technical, safety, and liability reasons, this would be unrealistic to expect if the DIT were to be implemented in commercial cars.
Correct automation use. During the first driving session, the DIT group overall showed more correct automation use (combined correct take-overs and correct reliance) than the IB group. During the second session, in which the DIT was no longer active, this still appeared to be the case, but the difference was no longer significant. During the third session, the two groups showed a similar level of correct automation use. Although a significant difference could only be confirmed for the first session, this still has implications for traffic safety. As the DIT is meant to be used in real cars during normal trips, drivers need to be able to use the automation appropriately and safely from the start, without any possible confusion. In simulator training, one could require drivers to go through multiple driving sessions to reach a desired performance level (although we still saw more inappropriate reliance behaviour in the control group after three driving sessions, which we discuss below). However, as drivers use the DIT during regular driving in their own car, initial appropriate automation use is critical for traffic safety. Still, although most learning is believed to occur during the initial interaction [7,8,54], it may be necessary to increase the duration of the DIT to obtain a higher final performance level, especially since multiple studies, such as those by Beggiato [7,54] and Forster [8], have shown that the learning curve stabilizes after approximately five interactions (or 3.5 h). Extended DIT support may also be necessary because situations that have not been experienced for a long time can fade from the driver's mental model [7]. Longer (but not necessarily continuous) DIT support provides the option to highlight rare situations in similar, frequently occurring situations. This needs further investigation in a more longitudinal study.
Incorrect reliance. The DIT group already showed less incorrect reliance during the first session than the IB group. By the third session, the number of incorrect reliance decisions in the DIT group had further decreased to around two and a half percent of all interactions. While the IB group also showed a decrease in incorrect reliance over time, both its initial and final numbers appeared to be higher than those of the DIT group; during the third session, the brochure group still showed incorrect reliance in around seven percent of all interactions. Further analysis confirmed that the IB group was more likely to show incorrect reliance behaviour. These results follow our expectations based on both established and more recent models that describe the interaction between automation feedback and automation use, including, amongst others, those of Lee and See [55], Seppelt [56,57], and Revell [58]. All these interaction models suggest that (external) information about the automation, repeated interactions, and automation feedback all affect automation use (and reliance). The results suggest that, by combining all these elements, the DIT was effective in specifically decreasing inappropriate reliance behaviour. This is an important implication of the prototype, as inappropriate reliance can lead to severe safety issues.
Incorrect take-over. Both groups had a similar number of unnecessary (incorrect) take-overs during the first driving session. While the number of unnecessary take-overs decreased over time for the IB group, this was not the case for the DIT group. It seems that the DIT group remained more cautious about relying on the automation throughout the driving sessions. These results are unexpected, as they are not in line with the statement that repeated interactions, feedback, and background information lead to improved mental models and, consequently, appropriate automation use. Similarly, they are not in line with the research on a digital tutor for ACC by Simon [33], which showed fewer unnecessary take-overs among users of the digital tutor. Interestingly, however, that study also showed a slight increase in unnecessary take-overs during the third driving session in specific scenarios. One would expect the feedback of the DIT to lead to fewer unnecessary take-overs, just as the lack of feedback for the IB group should lead to over- or under-reliance, depending on whether safe driving situations or crashes are experienced.
The number of unnecessary take-overs in the DIT group might be explained by Signal Detection Theory [59][60][61]. In our study, correct take-over and correct reliance correspond respectively to a 'hit' and a 'correct rejection', while incorrect take-over and incorrect reliance correspond to a 'false alarm' and a 'miss'. The information and explicit feedback of the DIT repeatedly stressed the limitations of the automation. This may have made drivers shift their criterion and take a more conservative attitude when judging situations as being inside the Operational Design Domain (ODD) of the automation, consequently increasing the number of incorrect take-overs (false alarms) and reducing the number of incorrect reliance decisions (misses). Another explanation is that drivers were still forming their core mental models of the automation by the third session [33]. It is important to realize that unnecessary take-overs are not necessarily dangerous and are arguably preferred in ambiguous situations. Still, unnecessary take-overs need to be limited so that the automation can be used to its full potential. If drivers constantly disengage the automation when this is unnecessary, potential benefits of the automation such as increased traffic safety and driver comfort may not be achieved.
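This criterion shift can be quantified with the standard SDT measures of sensitivity (d') and response bias (c). The sketch below uses the outcome mapping described above, with purely hypothetical counts; a log-linear correction keeps the z-transform finite when a cell count is zero.

```python
from statistics import NormalDist

def sdt_measures(hits: int, misses: int,
                 false_alarms: int, correct_rejections: int):
    """Sensitivity d' = z(H) - z(F) and criterion c = -(z(H) + z(F)) / 2.

    A log-linear correction (add 0.5 to each cell, 1 to each total) keeps
    hit/false-alarm rates away from exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical counts: hits = correct take-overs, misses = incorrect reliance,
# false alarms = incorrect take-overs, correct rejections = correct reliance.
# c < 0 indicates a bias toward responding 'unsafe', i.e. toward taking over.
print(sdt_measures(40, 6, 15, 39))
```

Under this mapping, the DIT group's pattern in Session 3 (more false alarms, fewer misses than the IB group) would register as a criterion shifted toward taking over, not necessarily as a change in sensitivity.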
Challenging scenarios. Two particular driving situations were very difficult for both groups: ACC1 and T6 (see 2.3.5). It was safe to leave the automation on in both situations. ACC1 was the very first scenario that all drivers encountered during the study. As discussed earlier, drivers need repeated experience and feedback to develop a calibrated level of trust [7,8,62]. While reassurance feedback may support a higher initial level of trust, a DIT should never suggest that the automation can perfectly handle a situation. Scenario T6 was a signalized intersection with crossing traffic. The automated car would detect the priority signs and stop to let the crossing cars pass. Drivers did not rely on the car as they thought that the houses were too close to the street and might block the view of the car's cameras. This suggests that the drivers were well aware of the limitations (blocked cameras) and capabilities (detecting priority signs) of the automation. However, as no specific camera ranges were provided during the training, this particular situation became ambiguous for the drivers. Taking back control was then arguably the safest decision.
Vehicle control. We expected to see better vehicle control for the DIT group after disengaging the automation in situations that required taking back control [63,64]. For example, Simon [33] found less intense braking behaviour among users of the digital ACC tutor. In our study, the DIT group took back control significantly earlier, and braked less hard, than the IB group during the first session. However, no significant differences were found between the groups in the second and third sessions. Still, the minimum Time To Collision at take-over was consistently larger, and the maximum deceleration smaller, for the DIT group. While overall no differences between groups were found in the lateral acceleration after take-over, one scenario surprisingly showed a larger lateral acceleration for the DIT group. The possibility of Type II errors needs to be taken into consideration for the vehicle control variables, as these tests had limited power.

Acceptance.
Our results show that participants found the DIT easy to use. Participants also indicated that the DIT made learning about, and using, the automation easier. They felt positively about the DIT and confident in using it. Participants indicated an intent to use the DIT, but did not think that their peers and family felt that they should use it.

Limitations
Certain limitations of this study have to be taken into account. First, participants in the control group were asked to read the brochure carefully before entering the driving simulator. In real life, however, a large share of drivers does not read the owner's manual or look up any other information about the automation in their car [1,5]. This control group is therefore not representative of all drivers. A brochure was chosen for the control group as this is often used by car sellers as the main (and only) method of providing customers with information about the automation in their new car [5]. An additional study with a control group that does not receive any information about the automation before driving may be required for an improved representation of current drivers.
Second, the visual cues may have contributed to the differences between groups during Session 1 through a priming effect. Although the visuals were a core part of the DIT prototype, as they allowed the DIT to address the systems' limitations in the current driving situation, further research is necessary to determine how the presentation of the information influences learning. For example, it is unclear whether a strictly auditory DIT would have similar effects.
Third, participants could only turn off the automation by pressing a button on the steering wheel. It is possible that the inability to disengage the automation through the brake caused confusion among drivers in time-critical situations. However, participants were reminded multiple times throughout the driving sessions that they had to disengage the automation through the button, and not the pedals.
Last, the current between-subject set-up did not allow us to compare the acceptance between the DIT and an information brochure. Additional studies with a within-subject design are required to examine the acceptance of the DIT more extensively.

Future Research
The results of this study provide multiple opportunities for further research. First, it is necessary to further investigate the specific information that needs to be included during the introduction of a new system. For example, it is unclear if it is necessary to include the technical equipment specifications.
Second, the effects of a DIT on driver distraction need to be assessed. By projecting the transparent images on the windscreen, the driver does not have to continuously shift their attention from the road to a secondary screen. However, the images are still expected to introduce glances away from the centre of the road and to take up cognitive resources. They therefore need to be further refined so that they facilitate optimal learning while limiting distraction from the road. For example, the images may need to be located closer to the centre of the driver's field of view, without causing visual clutter [65,66], to adhere to the NHTSA guidelines on the number and duration of glances away from the centre of the road [67,68].
Last, while the concept prototype used the entire windscreen to project the images on, more practical implementations need to be explored. For example, the DIT may be implemented in an off-the-shelf head-up display device.

Conclusions
During the first driving session, in which the DIT was active for the experimental group, users of the DIT showed more correct automation use (correct reliance and correct take-overs) and higher-quality take-overs. This first driving session represented the initial on-road contact with both the automation and the DIT. However, the differences in correct automation use diminished over time and had disappeared by the last driving session, which took place two weeks after the first: the IB group appeared to catch up with the DIT group and reached a similar level of correct automation use. Still, as the DIT is intended to be used in drivers' own cars during regular drives, safe automation use is extremely important right from the start. The DIT specifically led to less incorrect reliance behaviour throughout the driving sessions, behaviour that would otherwise lead to immediate safety issues. While the IB and DIT groups both showed a decrease in incorrect reliance over the course of the driving sessions, the overall incorrect reliance was significantly lower in the DIT group throughout the sessions. That means that drivers relied less on the automation in situations that were outside its Operational Design Domain. Still, further research is necessary on the precise content a DIT requires, and on how the way of presenting the DIT information influences learning. The results further indicated a possible under-trust of the automation among users of the DIT. While under-trust may be less dangerous than over-trust, it may hinder the adoption (and prospective benefits) of automated driving. It is therefore necessary to investigate how to address this under-trust without risking overreliance. Finally, drivers found the DIT easy to use and useful, and felt confident in using it. Overall, this study provides an initial insight into the effects of a Digital In-Car Tutor on the appropriate use of complex car automation.
The concept of a DIT shows some potential as a low-cost, time-efficient, situated, and long-term method for learning about partially automated cars, with additional benefits for instructing drivers after overnight software updates. Therefore, additional research is advised to further explore DIT content and form.

Conflicts of Interest:
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Appendix A -Acceptance questionnaire
The following acceptance questionnaire was completed by participants of the DIT group after the first session.
The following questions are specifically about the training system you experienced!

Perceived ease of use.
1. I find the training system easy to use
2. Learning how to use the training system is easy for me
3. It is easy to become skillful at using the training system
4. The training system makes learning about the automated car systems easier
5. The training system makes using the automated car systems easier
6. The training system makes using the automated car systems safer
7. Using the training system in an automated car is a good idea
8. I am positive towards using the training system in an automated car
9. Using the training system is annoying
10. Using the training system is frustrating

Intention to use. Imagine that you own the partially automated car that you experienced today. Please indicate for each statement to what extent you (dis)agree. (1-Strongly agree, 7-Strongly disagree)
11. I would actively use the training system in my partially automated car

Self-efficacy. Please indicate for each statement to what extent you (dis)agree. (1-Strongly agree, 7-Strongly disagree)
12. I feel confident in using the training system
13. I have the necessary skills to use the training system

Social norm. Imagine that you own the partially automated car that you experienced today. Please indicate for each statement to what extent you (dis)agree. (1-Strongly agree, 7-Strongly disagree)
14. People who are important to me think I should use the training system