Takeover Safety Analysis with Driver Monitoring Systems and Driver-Vehicle Interfaces in Highly Automated Vehicles

According to SAE J3016, autonomous driving can be divided into six levels, and partially automated driving is possible from level three up. A partially or highly automated vehicle can encounter situations involving total system failure. Here, we studied a strategy for safe takeover in such situations. A human-in-the-loop simulator, driver-vehicle interface, and driver monitoring system were developed, and takeover experiments were performed using various driving scenarios and realistic autonomous driving situations. The experiments allowed us to draw the following conclusions. The visual–auditory–haptic complex alarm effectively delivered warnings and correlated clearly with the users' subjective preferences. There were scenario types in which the system had to enter minimum risk maneuvers or emergency maneuvers immediately, without requesting takeover. Lastly, a driver monitoring system that prevents the driver from becoming completely immersed in non-driving-related tasks can reduce the risk of accidents. From these results, we propose a safe takeover strategy that provides meaningful guidance for the development of autonomous vehicles. Considering the subjective questionnaire evaluations of users, it is also expected to improve the acceptance, and thereby the adoption, of autonomous vehicles.


Introduction
Autonomous cars are expected to become a new solution for safety, environmental, and traffic-related problems. Further, autonomous driving is an important technology related to all core industries of the 4th industrial revolution, including the sharing economy, artificial intelligence, and the Internet of Things [1]. The popularization of autonomous vehicles combined with electric or fuel cell technology is expected to play a key role in efficient energy consumption and the construction of sustainable, eco-friendly smart cities [2]. According to the SAE J3016 standard, autonomous driving can be classified into six levels, from Level 0 (completely manual) to Level 5 (completely autonomous); the classification criteria include the driving responsibilities of the driver and the driving support function level of the system [3]. Up to Level 2, the driver must always supervise vehicle control. At Level 3, the autonomous driving system can temporarily take over control of the vehicle under certain conditions. Nevertheless, at Level 3, the driver must be able to take control at any time while keeping an eye on the driving situation, and the autonomous driving system must determine whether the driver can start driving [4]. Recently, UNECE/WP.29, the international vehicle safety standard-setting organization (World Forum for Harmonization of Vehicle Regulations), announced the world's first binding international regulation on Level 3 autonomous driving, the Automated Lane Keeping Systems (ALKS) regulation [5,8,9].
Takeover control between the system and the driver is one of the most critical functions for the safety of a highly automated vehicle. The first step in this process is determining how ready the driver is to start driving [10]. The driver monitoring system (DMS) has been included as a primary safety item in Euro NCAP's 2025 Roadmap [11]. Research using heart rate [12], respiration rate [13], and gaze response [14,15] is in progress to measure the cognitive load, distraction, and drowsiness of the person in the driver's seat. Approaches in which sensors are attached to the driver's body suffer reduced accuracy due to movement and cause inconvenience when driving. While these shortcomings are being addressed [16], camera-based monitoring systems have already been commercialized.
The takeover process proceeds with non-driving-related tasks (NDRTs), a disengagement scenario, and warnings raised through the HMI, as shown in Figure 2 [17,18]. Quantitatively evaluating how much the driver concentrates on other tasks in the autonomous driving mode is essential. Tasks such as phone calls, radio listening, video viewing, text entry, N-back tasks, and surrogate reference tasks (SuRTs) are assigned to drivers, and metrics such as the NASA task load index (NASA-TLX) are used to evaluate the workload [19].
Next, based on statistical data, the disengagement scenarios are implemented under a sensor or system failure scenario or in other predefined scenarios [20,21]. Finally, the optic, acoustic, and haptic HMI are configured to warn the driver to return to driving, and the effectiveness of each notification method is analyzed [22][23][24]. The effects of the complexity of surrounding traffic or the type of NDRT on the takeover quality have been analyzed. The probability of collision and the tendency to avoid accidents through steering increased significantly under high-density traffic conditions [25,26]. The type of NDRT did not significantly affect the time required for the takeover, but it affected the takeover quality [27]. In autonomous driving mode, the driver's gaze behavior reflected the level of distraction, and specific gaze parameters were suitable to assess the adequacy of the driver's monitoring strategy [28]. Since studies on HMI configuration and driver response focus on average or median values or only on extremely urgent scenarios, differences among individuals must be considered [29].
In this study, we propose a strategy for increasing takeover safety in conditionally automated vehicle system failure situations. Various driver-vehicle interfaces (DVI) introduced in previous studies [17] and the UNECE ALKS proposal [5] were comprehensively analyzed. A realistic driving simulator was developed to conduct experiments and acquire driver data on high-risk situations where accidents may occur. Event scenarios were designed based on disengagement reports and traffic accident statistics to simulate situations in which takeover could occur. By having the participants perform NDRTs based on general time-consuming statistical data, the takeover requests were provided in realistic autonomous driving situations. As suggested for future work in previous studies, we devised a method to utilize DMS to improve driver readiness or reaction speed. Finally, we analyzed the characteristics of scenarios that are too risky to request takeover requests despite these efforts. Based on the overall experimental results, a safe takeover strategy was designed.
The paper proceeds as follows: Section 2 explains the development of an experimental environment that includes the driving simulator, DMS and DVI configuration, event scenarios, and the process of experiments. In Section 3, we discuss our analysis of the factors (DVI configuration, scenario types, DMS) affecting the safety of the transition process based on the experimental results. Finally, we propose a takeover strategy to maximize the safety of the takeover of highly automated vehicles.

Human-In-The-Loop Vehicle Simulator
Using a driving simulator for control transition or HMI design has the following advantages: various event scenarios can be created and provided identically to multiple participants, and experiments can be carried out even in high-risk situations where accidents may occur. For these reasons, related studies have mainly used driving simulators [30][31][32][33][34]. To provide real-vehicle experiences in a driving simulator, visual, auditory, and tactile cues and a vehicle motion generation system are important [35]. Driving simulators can also be effectively used to study abnormal driver patterns such as drowsiness, fatigue, drunkenness, and drug use [36,37]. The validity of experiments using driving simulators was verified in several studies by comparing driver responsiveness in the simulator and in a real-world vehicle [38][39][40][41][42]. In a paper comparing the results of fixed-base driving simulations with field tests, the validity of the simulation was verified in terms of surrogate task performance, visual attention, and driving performance [38]. The feasibility of driving simulators for speed, car-following distance, and reaction latency in work zones was demonstrated, although some driving behaviors were slightly more aggressive in the simulator [39]. It was suggested that low-cost or fixed-base simulators can achieve effects similar to high-cost simulators and that the most important elements of a successful behavioral validation study are a carefully designed experimental procedure and correct interpretation of the results [40]. It was also found that there was no statistically significant difference between the simulator and an on-road vehicle for the human factors associated with the automated vehicle [41]. Figure 3 shows the HITL simulator used in this study and depicts the parts interacting with the driver and the actual hardware.
The driver's steering, gear, and pedal inputs were collected by a laptop and a real-time processor and transmitted to the main PC on which the simulator ran. The virtual driving environment, visually very similar to the view from the driver's seat of an actual vehicle, was presented to the driver through a large-screen (55 in) 4K resolution TV. A touch monitor for NDRTs was placed between the driver and the TV. Static environments such as roads, traffic lights, and buildings, as well as dynamic obstacles such as vehicles, pedestrians, and cyclists, were implemented using IPG CarMaker. This software provides solutions for virtual test driving, including basic vehicle systems such as dynamics and powertrain, driving scenario generators, autonomous driving sensor models, and realistic visual graphics. In the driving simulator, the steering torque feedback and pedal haptic feedback considerably influenced the driving accuracy, environmental awareness, and realism [43,44]. The overall configuration of the steering system and controller is illustrated in Figure 4. A steering system that can output a maximum torque of 17 Nm was designed to implement real-vehicle torque feedback by connecting a reduction gearbox to a smart motor with a nominal power of 615 W. An encoder and a torque sensor mounted on the shaft were used to measure the steering input of the driver.
In autonomous driving, the control algorithm manipulated the steering hardware, and the steering angle (as measured by the encoder) was input into the simulation. In manual driving, the steering was controlled by a torque control mode consisting of a road reaction torque calculated in the vehicle dynamics simulator (CarMaker) and an assist torque that reduced the steering load. The latter was inversely proportional to the driving speed, which provided comfortable steering at low speeds and stable control at high speeds.
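The speed-dependent assist described above can be sketched in a few lines. Only the 17 Nm actuator limit comes from the text; the gain shape, the 5 m/s roll-off constant, and the function names are illustrative assumptions:

```python
MAX_TORQUE_NM = 17.0  # actuator limit of the simulator's steering motor (from the text)

def assist_gain(speed_mps, v0=5.0):
    # Assist fraction inversely related to speed: full assist when stopped,
    # fading out at highway speed (v0 is an assumed roll-off constant).
    return v0 / (v0 + speed_mps)

def steering_feedback_torque(road_reaction_nm, speed_mps):
    # Net torque felt by the driver: road reaction minus the assist share,
    # saturated at the motor's torque limit.
    net = road_reaction_nm * (1.0 - assist_gain(speed_mps))
    return max(-MAX_TORQUE_NM, min(MAX_TORQUE_NM, net))
```

With this shape, steering feels light at low speed (large assist) and heavy and stable at high speed (small assist), matching the behavior the paragraph describes.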

Driver Monitoring System (DMS)
Research has been conducted to determine the drowsiness, fatigue, and distraction of drivers using a DMS [45,46]. In manual driving, methods to use steering and pedal input patterns without additional sensors have been proposed [47,48]. The state of the driver must be determined by measuring vital signs such as gaze, electroencephalograms (EEGs), electrocardiograms (ECGs), skin temperature, and respiration because driver control input is not present in the autonomous driving mode [49]. Moreover, determining abnormal conditions such as the driver's excitement or drowsiness by calculating the heart rate (HR) and heart rate variability (HRV) is also possible with ECG data [50,51].
Methods to measure the driver's workload or drowsiness using a breathing pattern have been proposed [52,53]. Some studies have reported that sleep can stabilize the rhythm without causing significant changes in the respiratory rate [54]. Therefore, monitoring the driver's condition using biometric signals requires further analysis of experimental data. In addition, a sensing technology that can acquire data in a noncontact manner must be developed [55].
In this study, a respiration sensor and an ECG sensor were attached to the driver to analyze data patterns in the manual driving, autonomous driving, and takeover scenarios. As shown in Figure 5, biosignal data were acquired while driving, and peaks were detected in real time to calculate the heart rate and respiration rate. According to the UNECE ALKS regulation [5], four conditions (driver control input, eye blinking, eye closing, and conscious head or body movement) are cited as examples for determining driver availability. Furthermore, it has been suggested that gaze direction toward the front road or rear-view mirror, or head movement toward the driving task, can be considered a criterion for determining the driver's attentiveness. Studies predicting driving behavior from gaze patterns [56] or measuring cognitive workload during driving [57][58][59] have been conducted. In this study, the driver's gaze data were acquired using a Tobii eye tracker with an accuracy of less than 0.1° and a sampling rate of 90 Hz. After calibration as in Figure 6a, the location that the driver is watching on the simulation screen could thus be used for this research [60]. The eye-tracking system was developed by referring to the hardware mounting conditions and calibration method proposed in Gibaldi's study [41]. Figure 6b shows the results of eye tracking while manually driving the experimental course, visualized using the MATLAB heatmap function. In this figure, area A represents the front simulator screen, area B is the cluster monitor between the steering wheel and the screen, and area C is the location of the touch display used for NDRT work.
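The real-time rate computation from the biosignals can be illustrated with a minimal, dependency-free sketch: detect peaks above a threshold with a refractory gap, then convert the mean inter-peak interval to a per-minute rate. The threshold, gap, and the synthetic 72 bpm signal are assumptions for demonstration, not the study's actual pipeline:

```python
import math

def detect_peaks(x, threshold, min_gap):
    # Local maxima above `threshold`, at least `min_gap` samples apart.
    peaks = []
    for i in range(1, len(x) - 1):
        if x[i] >= threshold and x[i] >= x[i - 1] and x[i] > x[i + 1]:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return peaks

def rate_bpm(x, fs, threshold=0.5, min_interval_s=0.4):
    # Beats/breaths per minute from the mean inter-peak interval.
    peaks = detect_peaks(x, threshold, int(min_interval_s * fs))
    if len(peaks) < 2:
        return None
    mean_interval = (peaks[-1] - peaks[0]) / (len(peaks) - 1) / fs
    return 60.0 / mean_interval

# Synthetic "ECG": sharp peaks at 1.2 Hz (i.e. 72 bpm), sampled at 250 Hz.
fs = 250
ecg = [math.sin(2 * math.pi * 1.2 * n / fs) ** 15 for n in range(10 * fs)]
bpm = rate_bpm(ecg, fs)  # close to 72 bpm
```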

Driver-Vehicle Interface (DVI)
In 2016, the National Highway Traffic Safety Administration (NHTSA) proposed overall DVI development guidelines [61] including visual, auditory, and haptic interfaces. In 2018, additional human factor design guidance was mentioned, especially for Level 2 and Level 3 automated vehicles [62].
Studies on the use and content of text, shape and color of icons, and display position for visual notifications have been conducted [63][64][65]. For auditory notifications, the type of sound (voice message or simple beep), volume, period, and direction were considered [66]. For haptic notification, the part where feedback is provided (steering, pedal, seat, or seat belt), the strength of the feedback, and the provision of spatial information were researched [66,67].
In this study, the visual, auditory, and haptic interfaces were constructed in the simulator by referring to the NHTSA DVI design guidelines [62], the UNECE ALKS proposal [5], and previous studies [63][64][65][66][67]. Visual information was provided by the vehicle cluster monitor shown in Figure 7a and the heads-up display (HUD) on the front driving screen shown in Figure 7b. The cluster monitor can be vertically divided into three areas, A, B, and C; the primary information provided by each area was as follows:
1. Vehicle speed, gear status, and gaze recognition status LED (green = open eyes, red = closed eyes).
2. Autonomous driving system status (text message, image), emergency stoplight, and driving map.
3. Steering control/forward lookup request (red = request, off = do not request) and autonomous driving mode LED (green = autonomous, off = manual).
The current vehicle speed and the steering control request icon were displayed through the HUD. The auditory notification was provided as three different simple beeps: driving mode change, forward gaze request, and takeover request. The takeover request alarm gradually increased in frequency depending on the time remaining until autonomous driving became incapacitated, thereby alerting the driver. Finally, the haptic warning was delivered by strengthening the vibration intensity and frequency of the vibration module installed in the driver's seat of the simulator as the time until system failure decreased. According to NHTSA's DVI guidelines, vibrotactile seat displays are highly likely to be detected by most drivers.
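The alarm escalation described above (faster beeps and stronger seat vibration as the deadline approaches) could be parameterized roughly as below. The 5 s window matches the notification lead time used in the experiments; all other constants are illustrative assumptions:

```python
def alarm_params(ttd_s, ttd_max=5.0):
    # Map time-to-disengagement (TTD) to an urgency in [0, 1], then derive
    # the beep repetition interval and seat-vibration level from it.
    urgency = max(0.0, min(1.0, 1.0 - ttd_s / ttd_max))
    beep_interval = 1.0 - 0.8 * urgency   # 1.0 s down to 0.2 s (assumed range)
    vibration = 0.3 + 0.7 * urgency       # 30% up to 100% intensity (assumed range)
    return beep_interval, vibration
```

At TTD = 5 s the alarm starts slow and gentle; as TTD approaches zero the beeps repeat five times faster and the vibration reaches full intensity.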

Autonomous Driving Disengagement Simulations
A total of 39 participants (20 women, 19 men) between 24 and 49 years of age (M = 34.69, SD = 8.24) with driving experience ranging from 1 to 29 years (M = 9.95, SD = 8.56) participated in the simulation. They were university students, homemakers, teachers, office workers, or people in driving-related occupations. As shown in Figure 8, the overall experimental process followed the order: predrive questionnaire, sensor attachment (calibration), practice driving, partially automated driving experiment, and postdrive questionnaire. The partially automated driving experiment was conducted on a complex urban and high-speed road with a total length of approximately 40 km. Drivers watched their favorite YouTube videos and other content or used their smartphones in the autonomous driving mode. In most related studies, participants performed specific secondary tasks such as N-back and surrogate reference tasks. However, as automated vehicles spread and the driver's available time increases, a gradual increase in in-vehicle time consumption, as shown in Figure 9, is also expected [68]. It was considered highly likely that participants would use smartphones or tablet PCs to perform simple tasks such as online meetings and mailing, or use applications such as SNS, games, streaming services, and the Internet.
Scenarios to which the ALKS can respond are defined in the UNECE proposal. In this experiment, events in which the autonomous driving system failed to operate, at a level outside that scenario scope, were configured on the driving course. The categories of the disengagement scenarios were as follows: A: a small dynamic obstacle suddenly enters the driving lane from the side; B: a vehicle with abnormal behavior poses a risk of collision from the front or side; C: driving lanes are reduced because of long construction areas or objects falling in front of the vehicle. During one round of the course, 12 scenarios of these three types occurred. In three repeated driving tasks, a single modality notification and two or three complex modality notifications were used to provide takeover requests 5 s before the time to collision (TTC) or time to disengagement (TTD). Following the recommendations of the NHTSA DVI guidance [62] and the UNECE ALKS proposal [5], when both buttons located on the steering wheel were pressed simultaneously for more than a certain period (1 s here), the mode switched from manual to autonomous if the operation was possible. Conversely, if both buttons were pressed, the steering was operated above a specific torque, or the brake pedal was pressed, the driving mode changed to manual. After completing the driving, the participants filled out a subjective questionnaire evaluation focusing on the sense of stability, comfort, willingness to use, and commercialization of the automated driving system.
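The mode-switching logic above can be sketched as a small state machine. The 1 s button hold matches the text; the torque threshold value and the class layout are assumptions for illustration:

```python
AUTON, MANUAL = "autonomous", "manual"
HOLD_S = 1.0               # both steering buttons held this long -> autonomous
TORQUE_OVERRIDE_NM = 3.0   # assumed driver-torque threshold for manual override

class ModeSwitch:
    def __init__(self):
        self.mode = MANUAL
        self.hold_time = 0.0

    def update(self, dt, both_buttons, steer_torque, brake_pressed, auto_ok):
        if self.mode == MANUAL:
            # Engage only after a sustained press, and only if the system
            # reports that autonomous operation is currently possible.
            self.hold_time = self.hold_time + dt if both_buttons else 0.0
            if self.hold_time >= HOLD_S and auto_ok:
                self.mode, self.hold_time = AUTON, 0.0
        else:
            # Any explicit driver action hands control back immediately.
            if both_buttons or brake_pressed or abs(steer_torque) > TORQUE_OVERRIDE_NM:
                self.mode = MANUAL
        return self.mode
```

Note the asymmetry the text describes: engaging automation requires a deliberate, debounced gesture, while disengaging is immediate on any driver input.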

Takeover Experiment Results and Discussion
When comparing driver responsiveness, a few abnormally large or small values can significantly distort the average because each dependent variable differs slightly. Outliers in the collected data were therefore removed using the median absolute deviation (MAD), a filtering technique that uses the median of the sample data and assumes an approximately normal distribution. The MAD of a data set X is calculated as MAD = b · median(|x_i − median(X)|), with the consistency constant b = 1.4826 for normally distributed data, and each sample is standardized as z_i = (x_i − median(X)) / MAD; samples whose |z_i| exceeded a threshold were treated as outliers. In a scenario where the autonomous driving system is urgently canceled (while the driver is focusing on an NDRT), the minimum required time for disengagement and effective notification methods can be determined by analyzing the time to takeover, eye movement, and vehicle control inputs (Table 1). The mean and SD values of the drivers' response variables for each notification method are presented in Tables 2 and 3. The data distribution is visualized as box-and-whisker plots. In addition, for each variable, we evaluated whether there was a significant difference according to the notification method through analysis of variance (ANOVA). Regardless of the notification method, the scenario types that recorded a high accident rate in the takeover process were classified and analyzed for common characteristics.
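The MAD-based outlier removal can be sketched as follows, assuming the standard 1.4826 consistency constant and a cutoff of 3 (the exact threshold used in the study is not stated):

```python
import statistics

def mad_filter(data, threshold=3.0, scale=1.4826):
    # Remove outliers via the median absolute deviation (MAD).
    # scale=1.4826 makes the MAD consistent with the standard deviation
    # for normally distributed data; samples whose robust z-score exceeds
    # `threshold` are discarded.
    med = statistics.median(data)
    mad = statistics.median(abs(x - med) for x in data)
    if mad == 0:
        return list(data)
    return [x for x in data if abs(x - med) / (scale * mad) <= threshold]
```

Because the median is insensitive to extreme values, a single abnormal takeover time cannot shift the cutoff the way it would shift a mean-and-standard-deviation filter.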
In addition to the numerical simulation results, a questionnaire was evaluated for each notification method considering five items: the degree of stability, concentration improvement, understanding of the notification, overall satisfaction, and applicability to actual autonomous vehicles. A score from 1 (bad) to 7 (good) was selected for each item. The survey results can be used as a basis for choosing a notification method when the drivers' response times are similar.
We determined whether the corresponding biometric data could serve as a criterion for determining driving concentration by analyzing the difference between the driver's breathing and heart rate in manual and autonomous driving. Finally, the change in takeover safety was observed by evaluating driver availability at regular intervals and providing a warning by referring to the UNECE ALKS proposal.

Single Modality Takeover Request
As described in Figure 8 and Section 2.4, single modality takeover experiments were conducted in the following order: predrive questionnaire, sensor attachment, practice manual driving, and partially automated driving experiment. Twelve event scenarios occurred while driving the course in Figure 3, and one of three single notifications was provided. Figure 10 shows the drivers' response to 264 takeover scenarios using a single modality notification method of visual, auditory, or haptic notification. Since we analyzed driver reaction after the notification was provided, the case wherein the driver was looking ahead for more than 1 s before the notification was excluded from the analysis.
None of the six variables showed distinct differences depending on the notification method, as assessed with one-way analysis of variance (ANOVA) (tTOR, p = 0.14; tEyeIn, p = 0.556; tEyeMv, p = 0.903; Ap, p = 0.58; Bp, p = 0.73; Steer, p = 0.479). Analysis of variance is a hypothesis testing method used when comparing three or more groups. One-way ANOVA tests for significant differences between groups under the conditions of one independent variable and one dependent variable; this experiment evaluated whether the "type of notification" caused a difference in each dependent variable. As shown in the results of Table 2, the single modality notification methods took about 2.8 s of takeover time and about 2 s of gaze road fixation time on average. Recipients of all three notification methods tended to press the brake pedal before operating the steering (70% applied the brake first, and the overall average difference was 0.24 s); the time difference between the two inputs was shortest for the visual notification, at 0.17 s. Contrary to the result that the single visual notification yielded the shortest time to takeover, the questionnaire evaluation showed that it had the most negative results among the three methods (Figure 11, average scores were 3.4, 4.9, 5.4). Most drivers with negative responses indicated that they could not freely focus on their secondary tasks in the autonomous driving mode, as they had to constantly keep track of the cluster monitor to avoid missing the visual notification (76% named the single visual notification as an inefficient method). The haptic notification delivered through the vibrotactile seat was the best in terms of improving concentration; however, there were cases of surprise or discomfort, as shown in Figure 12.
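The one-way ANOVA used throughout this section reduces to comparing between-group and within-group variance. A minimal sketch of the F statistic follows; obtaining the p-values quoted in the text would additionally require the F distribution, as provided by standard statistics packages:

```python
def one_way_anova_f(*groups):
    # F statistic for a one-way ANOVA: mean square between groups
    # divided by mean square within groups.
    all_x = [x for g in groups for x in g]
    grand = sum(all_x) / len(all_x)
    k, n = len(groups), len(all_x)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

Here each group would hold one dependent variable (e.g. takeover time) for one notification method; a large F indicates the group means differ more than their internal scatter would explain.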

Multimodality Takeover Request
As shown in Figure 8, the multimodality takeover experiments were conducted on the same course as the single modality experiment. One of the complex notification methods was provided while driving the course twice. When complex notifications combining visual, auditory, and haptic features were provided, the driver reaction to the takeover scenario was observed 377 times; the results are shown in Figure 13. In the case of a complex modality composed of two notifications, no significant difference in any of the dependent variables was observed with one-way ANOVA (tTOR, p = 0.065; tEyeIn, p = 0.22; tEyeMv, p = 0.98; Ap, p = 0.65; Bp, p = 0.10; Steer, p = 0.29), as with the single modality result. However, a difference was observed in the switching time and the brake and steering operation timing, which are important variables in the takeover response. When all three notifications were included in the multimodal notification, the brake timing (p = 0.034) and steering timing (p = 0.006) were distinct, while the takeover time (p = 0.058) differed slightly.
Regardless of the detailed notification configuration, when the data were divided into single, dual, and triple modalities, apparent statistical differences were observed (tTOR, tEyeIn, Bp, Steer: p < 0.0001; tEyeMv: p < 0.05). The results confirmed that driver responsiveness differed depending on the complexity of the notification rather than its type. The multimodal notification method took about 2 s of takeover time, and the road fixation time was within 1.5 s. Therefore, the warning delivery effect to the driver was superior to that of the single modality notification (Table 3). As with the single modality results, the experimental results were good when a visual notification was included; however, there were many negative responses in the questionnaire, and the score was relatively low (Figure 14, average scores were 5.26, 5.31, 6.15, 6.52). In the case of auditory and haptic notifications, the overall score was similar to that when all notifications were provided, and this combination ranked second in the short-answer question about the effectiveness of the notification method in Figure 12. Among the ten questions in the postdrive questionnaire, items with similar meanings were classified into four categories: "efficient/helpful", "inefficient/uncomfortable", "familiar/preferred", and "applicable to automated vehicles", as shown in Figure 12. Overall, the most positively scored methods for takeover notification were, in order, all three modes, sound and vibration, and sound and visual notification. Many responses indicated that visual notifications were easy to miss if the participants did not keep an eye on the screen. Furthermore, there were cases wherein sound notifications were not delivered well because of heavy driving noise or because they were mixed with the sound of the NDRT content.
The vibrotactile seat notification, which is always in contact with the driver, was the most effective for getting the driver's attention, making it the most distinguishable from the general driving scenario. However, there were negative evaluations of this notification method, pointing out that it caused discomfort or anxiety (due to surprise) depending on the location or intensity of the vibration.
We analyzed the correlation between personal preference and takeover performance, which had been suggested as a subject for further study. Each participant's quantitative questionnaire evaluation results were regarded as personal preferences, and the correlation with gaze response and takeover time was verified using Pearson correlation coefficients. There was a clear correlation between the subjective preference for a notification method and the gaze road fixation time: approximately 31% of participants showed r > 0.7, and 61.5% showed r > 0.3. There was also a very strong correlation between road fixation time and takeover time (54% with r > 0.7 and 39% with r > 0.3). Therefore, it is valid to reflect subjective preference to improve takeover performance. Takeover warnings can be delivered effectively by providing visual, sound, and vibration notifications sequentially depending on the degree of urgency based on TTC or TTD. In addition, each person can be notified more effectively by selecting the type or location of the notification within a specific range according to driver preference.
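The Pearson correlation analysis reduces to a few lines. The per-participant numbers below are hypothetical illustrations of preference scores versus fixation times, not the study's data:

```python
import math

def pearson_r(x, y):
    # Pearson correlation coefficient: covariance normalized by the
    # product of the standard deviations.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data for one participant: questionnaire preference score
# per notification method vs. gaze road fixation time in seconds.
pref = [3, 4, 5, 6, 7]
fixation = [2.6, 2.3, 2.1, 1.8, 1.5]
r = pearson_r(pref, fixation)  # strongly negative: preferred -> faster fixation
```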

Accident Rate of Scenario
In this section, the accident rate over a total of 641 takeover situations in the single and multimodality takeover alarm experiments is discussed for each scenario. When an accident occurred within 3 s after the driver who received the request started manual control, the takeover process was considered to have failed (Figure 15a). Scenarios with a takeover failure rate of more than 5% included B3 (small dark obstacle on the forward driving road), A1/B2 (combined scenario: inability to drive autonomously due to road construction/worker crossing from the side), A5 (small wild animals dashing into the roadway), and A2 (unexpected pedestrian crossing at the bus stop). The scenarios with a high accident rate had a common feature: a small, inconspicuous obstruction directly obstructed the path of the autonomous vehicle. Drivers who received the warning had difficulty understanding the situation even though the same amount of time was available as in low accident rate scenarios such as loss of lanes and construction sections. Therefore, the system must decide whether to execute the emergency behavior depending on both the time to disengagement and the type of cause. For high-risk cases, advancing the timing of the minimum risk maneuver (MRM) is necessary when the driver does not respond to the takeover request, and an emergency maneuver (EM) is required to avoid imminent collision risk.
Among the 641 takeover cases, 37 accidents (5.8%) occurred, and in the case of multimodality notification, it took approximately 2 s to take over. Therefore, there was some margin relative to the time at which the notification was provided (TTD < 5 s). Nevertheless, about 47.6% of drivers assessed the notification time as insufficient and requested additional time of about 1.67 s on average (Figure 15b).

Driver Availability
The UNECE ALKS proposal recommends checking driver availability at regular intervals based on eye blinking and head or body movement and providing an immediate warning if these signals are unavailable. We analyzed the difference in driver responses when driver availability warnings were provided periodically and when they were not. The driver availability experiment was conducted with 19 of the 39 participants, recruited in the second experiment schedule. The experimental scenarios were designed with more serious avoidance difficulty than the previous takeover experiments to observe the difference in the accident rate. The takeover request was provided in the same way as a multimodal notification, and the experimental sequence was randomized to exclude the learning effect of repeated experiments. Figure 16 shows the success rate of the takeover, the response time, and the concentration on the NDRT according to the driver availability warning. When the driver was forced to look ahead periodically, the reaction time was more than 0.9 s shorter, and the accident rate was 23 percentage points lower (Table 4). The focus on NDRT work during the entire driving time decreased by nearly 50 percentage points, from approximately 80% to 30%. Therefore, the safety of highly automated vehicles can be increased by applying a function that requests the driver to periodically check the driving situation. Current commercialized partially automated vehicles only check whether the driver is holding the steering wheel. However, it is also essential to check whether the driving situation is observed through the gaze or head direction for a certain amount of time (3 s here). A follow-up study is needed on methods that guide the driver to grasp the current driving situation more rapidly and effectively.
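The periodic availability check (a warning at a fixed interval, cleared by sustained on-road gaze) could be implemented along these lines. The 3 s gaze requirement comes from the text; the 30 s check interval and the class structure are assumed placeholders:

```python
CHECK_INTERVAL_S = 30.0   # assumed period between availability checks
REQUIRED_GAZE_S = 3.0     # continuous on-road gaze needed to clear a warning

class AvailabilityMonitor:
    def __init__(self):
        self.since_check = 0.0
        self.on_road_time = 0.0
        self.warning = False

    def update(self, dt, gaze_on_road):
        self.since_check += dt
        if self.warning:
            # Warning active: require REQUIRED_GAZE_S of continuous
            # on-road gaze before clearing it and restarting the timer.
            self.on_road_time = self.on_road_time + dt if gaze_on_road else 0.0
            if self.on_road_time >= REQUIRED_GAZE_S:
                self.warning, self.since_check, self.on_road_time = False, 0.0, 0.0
        elif self.since_check >= CHECK_INTERVAL_S:
            self.warning = True
        return self.warning
```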

Driver Biosignals
During all the takeover experiments, we collected driver biosignal data and reduced motion-induced error through a filtering process. The raw data of each sensor described in Section 2.2 were processed to obtain the respiration rate, heart rate, and NDRT concentration while driving (Figure 17). We conducted t-tests to determine whether there were significant differences in the drivers' heart rate or respiration rate between manual and autonomous driving. First, since each data set contained 30 or more points, normality was assumed by the central limit theorem, and homogeneity of variance was confirmed with Levene's test. For both heart rate and respiration rate, the difference according to driving mode was then examined with an independent-samples t-test under the assumption of equal variance. Figure 18 shows the average heart rate and its change for each driver over a total of 1021 min of manual driving and 1476 min of autonomous driving. The heart rate was generally lower during autonomous driving (M = 62.96, SD = 18.27) than during manual driving (M = 64.29, SD = 18.81), but the difference was not statistically significant (p = 0.81). The respiration rate also decreased slightly during autonomous driving (M = 20.35, SD = 3.49) compared with manual driving (M = 20.73, SD = 3.35), and this difference was likewise not significant (p = 0.49). Therefore, heart rate and respiration rate are inappropriate as direct criteria for judging concentration on the driving task; they are better used for identifying abnormal conditions such as drowsiness or shock.
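The two-step test procedure above (Levene's test for homogeneity of variance, then an independent-samples t-test) can be reproduced with standard tools. The sketch below uses synthetic per-minute heart-rate samples whose means, standard deviations, and sample sizes mimic the values reported here; it is an illustration of the analysis pipeline, not our measured data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic samples shaped like the reported data:
# manual driving (1021 min, M = 64.29, SD = 18.81) and
# autonomous driving (1476 min, M = 62.96, SD = 18.27).
hr_manual = rng.normal(64.29, 18.81, size=1021)
hr_auto = rng.normal(62.96, 18.27, size=1476)

# Step 1: Levene's test for homogeneity of variance.
lev_stat, lev_p = stats.levene(hr_manual, hr_auto)
equal_var = lev_p > 0.05  # fail to reject -> equal variances assumed

# Step 2: independent-samples t-test for the mode difference.
t_stat, t_p = stats.ttest_ind(hr_manual, hr_auto, equal_var=equal_var)
print(f"Levene p = {lev_p:.3f}, t-test p = {t_p:.3f}")
```

The same pattern applies to the respiration-rate comparison; only the samples change.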

Takeover Strategy
Experimental results on the DVI configuration method, the accident rate by scenario type, and the effectiveness of the DMS in the takeover process are summarized as follows:
- The effect of the takeover request notification type is clearly distinguished for single/dual/triple modalities (Section 3.2; tTOR, tEyeIn, Bp, Steer: p < 0.0001; tEyeMv: p < 0.05, ANOVA across single/dual/triple modalities).
- Notification modalities preferred by drivers are better at inducing driver concentration (Section 3.2; driver preference vs. gaze road fixation time, 31% of drivers with r > 0.7 and 61.5% with r > 0.3, Pearson's r).
- Depending on the cause of the system failure, the timing of the transition to emergency behavior (MRM, EM) must differ (Section 3.3; cases with high driver recognition difficulty).
- The safety of the transition process improves when the DMS prompts the driver to check the driving situation periodically (Section 3.4; improvement in reaction time and takeover failure rate with the DMS).
A flow chart of the takeover process of a highly automated vehicle, based on the results of all the experiments, is provided in Figure 19. At the beginning of the procedure, the system receives the driver's intent to use it through the DVI, checks that the seat belt is worn, and verifies that the operational design domain (ODD) is satisfied; it then switches to the autonomous driving mode. While the autonomous mode is active, the system continuously checks driver availability using gaze and head-orientation information as well as the system operating conditions. The system periodically makes the driver aware of the driving environment to improve the reaction speed and success rate in takeover scenarios.
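The entry and continuation conditions of the flow chart reduce to two simple predicates. The sketch below is a schematic rendering of those conditions only; the field and function names are illustrative and not part of any implementation described in this paper.

```python
from dataclasses import dataclass

@dataclass
class SystemState:
    driver_opted_in: bool      # driver requested autonomous mode via the DVI
    seat_belt_fastened: bool
    odd_satisfied: bool        # operational design domain conditions hold
    driver_available: bool     # DMS gaze/head-orientation check passed

def may_enter_autonomous_mode(s: SystemState) -> bool:
    # Entry gate: driver request, seat belt, and ODD must all be satisfied.
    return s.driver_opted_in and s.seat_belt_fastened and s.odd_satisfied

def may_stay_autonomous(s: SystemState) -> bool:
    # While the mode is active, the ODD must continue to hold and the
    # driver must remain available for a takeover.
    return s.odd_satisfied and s.driver_available
```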
When the autonomous driving system detects a failure, two responses are possible. First, if there is sufficient time for a takeover request, visual, auditory, and haptic notifications are added progressively based on the TTD (e.g., visual → visual and auditory → visual, auditory, and haptic). The warning effect can be further enhanced by adapting the notification intensity and the order of addition to driver preference within a certain range. If the driver cannot respond in time, the situation worsens, or a severe system failure occurs, the MRM must be initiated immediately. Second, the EM should be carried out in scenarios where it is difficult for the driver to determine the cause of the failure or where there is an imminent collision risk. The EM should not be terminated until the imminent collision risk disappears or the driver deactivates the system. The proposed strategy can improve the safety of the takeover process, i.e., the transfer of control rights, which may be the most dangerous moment for an autonomous driving system of Level 3 or higher. Furthermore, providing a sense of psychological stability to the user can improve the willingness to use an autonomous vehicle.
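The failure-response branch of the strategy can be summarized as a small decision function. This is a schematic sketch under assumed parameters: the 5 s lead-time threshold follows the TTD < 5 s condition used in our experiments, and the one-modality-per-second escalation schedule is a hypothetical example of progressively adding notifications, not a value measured in this study.

```python
from enum import Enum, auto

class Action(Enum):
    TAKEOVER_REQUEST = auto()   # warn the driver with escalating modalities
    MRM = auto()                # minimum risk maneuver
    EM = auto()                 # emergency maneuver

def respond_to_failure(ttd_s: float,
                       cause_hard_to_recognize: bool,
                       imminent_collision: bool,
                       min_request_lead_s: float = 5.0) -> Action:
    # EM: the driver cannot be expected to resolve the situation in time,
    # either because the cause is hard to recognize or a collision is imminent.
    if imminent_collision or cause_hard_to_recognize:
        return Action.EM
    # Enough lead time: request a takeover with escalating notifications.
    if ttd_s >= min_request_lead_s:
        return Action.TAKEOVER_REQUEST
    # Too little time to hand over safely: fall back to the MRM.
    return Action.MRM

def notification_modalities(elapsed_s: float, step_s: float = 1.0) -> list:
    # Hypothetical escalation schedule: one modality is added per step.
    stages = ["visual", "auditory", "haptic"]
    return stages[: min(len(stages), 1 + int(elapsed_s // step_s))]
```

For example, a failure with 8 s of lead time yields a takeover request, the same failure with an unrecognizable cause yields the EM, and only 2 s of lead time yields the MRM.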

Conclusions
The purpose of this study was to develop a takeover strategy that maximizes safety by considering the factors affecting the takeover process that were only partially covered in previous studies. We attempted to improve on the secondary-task missions and limited driving scenarios previously used for quantitative evaluation. To reproduce situations in which a highly automated vehicle is used as realistically as possible, we designed driving scenarios in which disengagement occurs, referring to disengagement reports and traffic accident statistics. In the autonomous driving mode, drivers were asked to take over while using a smartphone or watching content on a monitor, as in real usage scenarios.
There are three critical considerations for takeover safety:
1. Delivering notifications through a complex configuration of the driver-vehicle interface is effective. The effectiveness of notification delivery may vary according to driver preferences.
2. In situations where it is difficult for the driver to immediately understand the cause of the disengagement, it is better to enter the emergency behavior right away.
3. The risk of accidents can be reduced by using a driver monitoring system to ensure that the driver does not become completely detached from the driving task.
In future studies, we plan to analyze in detail the types of scenarios that require emergency behavior and to specifically design and evaluate minimum risk maneuvers. In addition, research is needed on a DVI that helps the driver understand the situation effectively when a driving situation check or takeover request is provided.

Institutional Review Board Statement: Ethical review and approval were waived for this study since there was no threat to the health and life of the participants. The participants did not take any medications, drugs, or other medical treatments.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.