Effect of Proactive Interaction on Trust in Autonomous Vehicles

With rapid advancements in autonomous vehicles (AVs), mistrust between humans and autonomous driving systems has become a focal concern for users. Meanwhile, proactive interaction (PI), as a means to enhance the efficiency and satisfaction of human–machine collaboration, is increasingly being applied in the field of intelligent driving. Our study investigated the influence of varying degrees of PI on driver trust in Level 4 (L4) AVs against a virtual reality (VR)-simulated driving backdrop. An experiment with 55 participants revealed that, in an autonomous driving scenario without interference, elevated PI levels fostered increased trust in AVs among drivers. In task scenarios, low PI resulted in greater trust than PI characterized by information provision. Compared to females, males demonstrated reduced trust in medium PI. Drivers with elevated extroversion levels exhibited the highest trust in advanced PI; however, the difference between very highly and moderately extroverted participants was not significant. Our findings provide guidance for interaction designs that increase trust, thereby enhancing the acceptance and sustainability of AVs.


Introduction
Trust is crucial for understanding system reliance decisions [1]. The correct use of modern technology highly depends on human trust [2]. As computer technology has become increasingly complex, artificial intelligence (AI) has rapidly expanded into everyday human life. Trust may reduce negative emotions toward AI [3] and plays a key role in increasing AI acceptance, making the importance of human trust in AI even more significant [4]. AI is a key technology for achieving the functionality and performance of autonomous vehicles (AVs) and can improve energy efficiency by optimizing travel speed and reducing fuel consumption through avoiding unnecessary acceleration and braking [5]. Therefore, AVs represent a significant step toward sustainable development [6]. As the level of autonomous driving gradually increases, the relationship between humans and vehicles has transformed from traditional "human control" to "shared control." The vehicle gains partial control, and the human's role shifts from "driver" to a more advanced "supervisor." When humans have to monitor the system or act outside the driving task, the question arises of whether to trust and delegate the driving task to an autonomous driving system.
The proactive interaction (PI) theory originated from the concept of proactive behavior in sociology. With the development of AI technology, proactive behavior models based on machine learning have endowed robots with the capability for specific situational behaviors, such as sociability and seeking feedback [7]. However, proactive voice interaction may increase cognitive load [8]. In order to improve user acceptance, this human-like proactive behavior has been applied to systems with social attributes to enhance interaction efficiency with users. Current research has explored the effectiveness of proactive behavior in social robots [9] and perceptions of anthropomorphism at different levels of proactive behavior [7]. Peng et al. designed the proactivity of social robots in three dimensions based on the autonomy of social robots [10]. Kraus et al. classified the PI levels in robot assistants from low to high as none, notification, suggestion, and intervention, and demonstrated the potential impact of different PI levels on user trust [11]. Research on robotic assistants involves both how to determine the level of initiative and when to take initiative. For example, in the studies of Grosinger et al. [12] and Liu et al. [13], the system can use human-interaction data as training input to determine when to perform PI under appropriate conditions and times.
The application of PI has become increasingly common in the field of AVs. For example, smart cars use sensors or facial recognition technology to greet users through voice, light, and other media, or provide appropriate route solutions based on users' travel needs. Among various PI methods, voice proactive interaction is particularly noteworthy for its natural and intuitive communication. AVs interact proactively with drivers through a series of strategies and algorithms, including vehicle and road condition information collection, data fusion and cognitive decision-making, and big data analysis [14,15]. Through PI, information feedback between humans and vehicles becomes timelier and more effective, providing passengers with a unique and convenient interactive experience.
The Society of Automotive Engineers (SAE) classifies AVs into six levels ranging from L0 to L5 [16]. In June 2022, Mercedes-Benz announced the world's first fully certified Level 3 AV system, Drive Pilot, which received approval from the German Department of Transportation. Level 4 (L4) AVs are expected to overcome technological constraints and various safety and legal issues, thereby attracting public attention. However, a recent study found that as AV levels advance, public acceptance of AVs has declined [17]. The trust crisis between humans and AVs is exacerbated by frequent accidents involving AVs. Technical system failures are not the only causes of accidents: a significant factor is the low vigilance of some drivers [18], resulting in the majority of accidents being caused by drivers rather than the AV systems [19]. Although studies have confirmed that highly automated AVs are safer than human drivers [20], public trust in this new technology for autonomous driving assistance is decreasing. This significantly affects drivers' effective use of autonomous driving functions and public acceptance of AVs [21,22]. Therefore, determining how trust works in AVs is crucial to enhancing consumers' willingness to purchase AVs and the effectiveness of autonomous driving.

Related Work
Trust research in human-machine collaboration is largely based on psychological perspectives, such as Mayer et al.'s definition of trust as one party's willingness to be vulnerable to another's actions [23]. The domain of human-machine trust research has also expanded to the design of interactions between humans and systems, particularly regarding system characteristics, personal traits, and specific scenarios [24,25]. To establish trustworthy systems, researchers focus on the design, modeling, and implementation of AI systems, where three main categories of factors shape trust: system, personal, and situational factors.

System Factors
System factors primarily revolve around interfaces and human interaction. For example, studies have shown that in initial interactions with a system, its appearance may be more important than its reliability in gaining trust [26]. Semantic symbols in feedback interfaces can influence changes in user trust compared to non-semantic symbols [27]. Systems that adhere to good human etiquette can compensate for low trust due to low automation reliability [28]. Anthropomorphized systems may also promote the perception of machine agents as social agents, thereby increasing trust in the system [29] and leading to more frequent interactions with the device [30]. Perceived capabilities may also affect trust in AVs [31]. These system characteristics do not necessarily enhance the system's reliability, but they can strengthen users' trust in the system. However, despite some developments in research on trust in AVs, such as adjusting drivers' expectations to enhance trust [32], few studies have attempted to increase driver trust through the interaction method itself; existing work has mainly explored the relationship between visual design elements of the human-vehicle interaction interface and trust [29] and the explainability of user interface information [33]. Research on how the form of PI shapes trust in AVs is still lacking.

Personal Factors
Researchers have found that user trust in a system is significantly influenced by personal characteristics such as income level [34] and cultural background [35]. Additionally, men may have lower levels of trust in automated systems [36]. Users' trust in system tools increasingly resembles trust in another human being, making differences in the personalities of human trustees increasingly apparent [2]. A previous study reported that, when faced with system errors, people with a relational orientation and those with utilitarian tendencies showed higher trust in the interactive behaviors of apology and compensation, respectively [37]. Users' innovative traits are positively correlated with their trust level in the system; users who are more conscientious tend to dislike socializing and are more inclined to trust text interfaces with weaker social attributes [38]. Personal prior experience is also an important factor affecting the degree of trust in a system; people generally have positive expectations of unfamiliar systems, considering them trustworthy and even expecting them to outperform human beings [8]. Additionally, research shows that individuals with high self-efficacy are more likely to trust and use a system because they believe they can master the technology and effectively handle possible challenges [2].
In addition, researchers have found that users prefer system tools that match their own personalities [39]. User trust in automated systems is promoted when the moral judgments of automated systems are similar to users' preferred values [40]. In the most widely used personality structure model, the Big Five Personality Model, highly extroverted personalities exhibit sociable characteristics, unlike introverts [41]. There may be a correlation between a person's proactive behavior tendencies and trust in proactive conversation assistants [42]. Therefore, personal characteristics should be fully considered to achieve the positive effectiveness of human-machine collaboration [43].

Situational Factors
As AI gradually extends to many aspects of life, people's trust perceptions of a system may vary across situations, emphasizing the increasing importance of studying the factors of human-machine trust in real environments [24]. For example, the behavior of a system in non-emergency phases does not affect the trust decisions people make in emergency situations [44]. Atoyan et al. experimentally demonstrated that when people face complex multitasking environments beyond their capabilities, they may develop excessive trust in cooperative AI systems [45]. In L4 AVs, complex situations involving multitasking are common. Owing to the high complexity of trust-influencing factors, conclusions regarding trust in AVs drawn from single driving-task scenarios remain insufficient.
In summary, to investigate the effect of PI on trust among drivers with different characteristics in L4 AVs under more realistic multitasking scenarios, this study simulated an autonomous driving environment to accurately understand and assess drivers' trust in AVs.

Materials and Methods
Based on the theory of PI and studies of PI behavior in AVs, different levels of interaction behavior have been widely applied in intelligent vehicles. For example, in the case of road congestion, the system proactively asks the user whether they need to replan the route; it informs the user that it has rerouted when the driver deviates from the route; and the vehicle directly adjusts its speed without any interaction with the user when the Adaptive Cruise Control (ACC) system is active. Thus, we propose three levels of voice-activated PI for AVs, considering system autonomy and the frequency of interaction with the driver, as shown in Figure 1. (i) High PI: The system informs the driver of the surrounding situation or traffic circumstances based on information gathered from sensors. It then suggests a decision according to those sensory inputs and requests confirmation from the driver. After receiving the driver's feedback, it acknowledges the response and executes or withholds the proposed decision accordingly. (ii) Medium PI: The system informs the driver of the surrounding situation or traffic circumstances based on information gathered from sensors, provides the driver with a decision according to those inputs, and then directly executes it. (iii) Low PI: The system executes decisions directly, without interacting with the driver.
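The three PI levels can be sketched as a simple decision routine. This is an illustrative model only; the names (`PILevel`, `handle_event`) are our own and not part of any vehicle API:

```python
from enum import Enum

class PILevel(Enum):
    LOW = 1     # execute directly, no interaction
    MEDIUM = 2  # inform the driver, then execute
    HIGH = 3    # inform, suggest, ask for confirmation, then act on feedback

def handle_event(level, driver_confirms=None):
    """Return the ordered system actions for one sensed traffic event."""
    if level is PILevel.LOW:
        return ["execute"]
    if level is PILevel.MEDIUM:
        return ["inform", "execute"]
    # High PI: decision authority stays with the driver
    actions = ["inform", "suggest", "ask_confirmation", "acknowledge"]
    actions.append("execute" if driver_confirms else "abstain")
    return actions
```

Note how the levels differ only in how much of the decision loop is exposed to the driver: low PI skips straight to execution, while high PI executes only after explicit confirmation.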
By introducing the concept of PI levels in AVs, we explored the impact of PI level and of personal factors such as gender and personality on trust in systems at each PI level. The entire experiment was conducted in a virtual reality (VR) environment.

Hypotheses
For different PI levels in L4 AVs, voice interaction with the driver is expected to increase the perception of the system's capabilities [10]. As the number of interactions with the system increases, the level of trust among users is also likely to increase [1]. However, attention may be diverted when drivers perform other tasks. Regarding individual differences, males may exhibit lower trust tendencies when faced with an automated system [36]. Additionally, because people prefer systems that match their personalities [39], extroverts may trust systems that actively interact with them because of their social traits [41]. Therefore, the following four hypotheses are formulated:

H1. In a no-task scenario, the higher the PI level, the more trust drivers have in AVs.

H2. In the task scenario, there is no significant difference in driver trust in the system at different PI levels.

H3. Under different tasks, females have higher trust than males at different PI levels.

H4. Extroverted drivers trust AVs more with higher PI levels than introverted drivers do.

Scenario Design
In the VR environment, to control the variable of risk perception, we designed a scenario in which a vehicle merges onto a highway from a ramp. Ramp merging is a familiar driving scenario for participants, reducing the likelihood of increased risk perception due to unfamiliarity. As Car1 was about to merge into the lane, Car2 on the main road was in the driver's view. The autonomous driving system detected that Car2 was decelerating to a speed lower than Car1's, as shown in Figure 2. In this situation, whether the driver chose to accelerate or maintain speed, a safe distance between the two cars was ensured. Thus, the risk was relatively moderate, avoiding the impact of excessive trust by participants on the experimental results. After Car1 merged onto the highway, the experiment ended.

Interaction Methods
The experiment used the Wizard-of-Oz method, in which participants believed they were interacting with an autonomous system; in reality, the system was partially or fully operated by the experimenter [38]. Based on the design of the AV PI shown in Figure 1, three PI levels were set: none (low PI, direct execution without interaction), notification (medium PI), and inquiry (high PI). The specific manifestations in the experimental scenario were as follows. (i) Inquiry (high PI): The AV asked the participant, "We have a vehicle slowing down on the main road. Do you want to accelerate?" After the participant gave feedback (such as "Yes" or "Do not speed up"), the system replied "Okay" and accelerated or did not accelerate based on the participant's feedback. (ii) Notification (medium PI): The AV informed the participant, "We have a vehicle slowing down on the main road. We're about to speed up," and then accelerated to pass. (iii) None (low PI): The AV did not notify the participant and directly accelerated to pass.

Task Design
To simulate the behavior of drivers in realistic L4 AV scenarios, such as using personal smartphones, a monetary incentive task [46] was introduced into the scene: an arithmetic judgment task to distract participants' attention. The task involved simple addition and subtraction with numbers below 20. To encourage participants to engage deeply in the arithmetic task and enhance the effectiveness of completing the subtask, we promised participants before the experiment that they would be rewarded with 0.5 RMB for each correct answer, and we emphasized that the calculation results were unrelated to the driving behavior and decision-making of the AVs.
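A minimal sketch of such an arithmetic judgment task generator, assuming each trial is a true/false judgment of a displayed equation; the helper names and the wrong-answer offsets are our own illustration, not the exact stimuli used in the study:

```python
import random

def make_trial(rng: random.Random):
    """Generate one arithmetic judgment trial with all values below 20.

    Returns the displayed expression and whether the shown answer is correct.
    """
    while True:
        a, b = rng.randint(1, 19), rng.randint(1, 19)
        op = rng.choice(["+", "-"])
        result = a + b if op == "+" else a - b
        if 0 <= result < 20:
            break
    # Roughly half the trials display a wrong answer the participant must reject
    shown = result if rng.random() < 0.5 else result + rng.choice([-2, -1, 1, 2])
    return f"{a} {op} {b} = {shown}", shown == result

def payout(correct_answers: int, reward_per_item: float = 0.5) -> float:
    """Monetary incentive: 0.5 RMB per correct judgment."""
    return correct_answers * reward_per_item
```

The loop simply resamples until the result falls in the stated range, keeping every displayed number below 20 as described above.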
The interface for participants without the arithmetic task is shown in Figure 3. There is no arithmetic judgment board in the scene, and other steps are identical to those for participants in the arithmetic task.

Participants
A purposive sampling method was used to recruit 55 participants (32 females and 23 males) from Tianjin University through a questionnaire survey, excluding those with no driving experience. All participants possessed a driver's license and had no experience with AVs at any level. The ages of the participants ranged from 19 to 44 years (M = 23.95, SD = 3.997). Participants signed an informed consent form prior to the experiment.

Stimuli and Apparatus
As shown in Figure 4, a VR device (PICO Neo 3, Pico, Qingdao, China) and two accompanying controllers were used. The PICO Neo 3 headset (4K resolution, 120 Hz refresh rate, 98° FOV) features integrated spatial audio and high-quality stereo speakers, which played white noise from the car engine during the experiment to maximally simulate a real driving environment. The experimenter operated one controller to provide secondary feedback for voice interaction and to control vehicle acceleration based on that feedback, monitoring the experimental scenario in real time on a monitor. The other controller was used by participants in the task group to simulate clicking buttons on the screen in the VR environment by pressing the trigger button.

Experimental Procedure
The total duration of the experiment was less than 20 min. The overall experimental process is illustrated in Figure 5.

Step 1: Survey.
Participants completed the task in a quiet room. A single-item survey collected basic demographic details (gender, age, and occupation) and driving experience, including years of driving, total kilometers driven, and a self-assessment of driving proficiency, to fully characterize the participants.
The experimenter briefly introduced AVs and the precautions for the experimental process: "Hello! Autonomous driving is divided into six levels from 0 to 5. Level 4, high driving automation, refers to highly automated AVs that can drive autonomously in specific environments and tasks without driver intervention; however, manual control may still be required. During the driving process, the system can verbally interact with you. If necessary, you only need to respond with simple commands such as 'yes' or 'no need'."
After introducing the background of AVs, the participants were asked to complete a trust questionnaire based on their prior understanding of AVs and the experimenter's introduction. The questionnaire contained a commonly used automation trust scale [47,48] to measure participants' trust in the system.
Participants were seated in a spacious area and randomly assigned to either the task or non-task group. All participants were required to complete a Big Five personality questionnaire [41], which classifies each dimension into five levels, numbered 1-5 from lowest to highest. Subsequently, the experimenter instructed the participants in the task group on how to use the controller. Afterwards, all participants were assisted in wearing the VR device and adjusting their seating position to ensure comfort and clear visibility of the simulated environment on the screen.
After preparation, the participants entered the prebuilt AV scenario and familiarized themselves with the driving process of the AV on the ramp. Participants in the non-task group only needed to focus on the driving behavior of the system, whereas those in the task group had to complete a subtask displayed on the screen inside the vehicle. The familiarization process lasted approximately 30 s. The system then engaged participants with the three PI levels in random order, and a trust scale questionnaire was completed after each scenario.
After the experiment, participants were asked to freely express their opinions on the three forms of interaction.

Results
This study employed SPSS 27.0 for repeated-measures analyses of variance (ANOVA) to determine significant differences in trust in AVs across PI levels, as well as differences in trust across PI levels between drivers of the two genders and among drivers with varying personality traits. Following the ANOVA, the least significant difference (LSD) test was conducted for post hoc analysis of the main effects.
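For a one-way repeated-measures design like this, the F-statistic follows from partitioning the total sum of squares into condition, subject, and error components. The sketch below (our own NumPy/SciPy illustration, not the SPSS procedure used in the study) shows the computation, including the partial η² effect size reported later:

```python
import numpy as np
from scipy import stats

def rm_anova(data: np.ndarray):
    """One-way repeated-measures ANOVA.

    data: (n_subjects, k_conditions) matrix, e.g. trust scores per PI level.
    Returns (F, p, partial eta squared).
    """
    n, k = data.shape
    grand = data.mean()
    # Partition the total variability
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()   # between conditions
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_total = ((data - grand) ** 2).sum()
    ss_err = ss_total - ss_cond - ss_subj                    # residual
    df_cond, df_err = k - 1, (k - 1) * (n - 1)
    f = (ss_cond / df_cond) / (ss_err / df_err)
    p = stats.f.sf(f, df_cond, df_err)          # upper tail of the F distribution
    eta_p2 = ss_cond / (ss_cond + ss_err)       # partial eta squared
    return f, p, eta_p2
```

Subtracting the subject sum of squares is what distinguishes this from a between-subjects ANOVA: variability due to stable individual differences in trust is removed from the error term.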

Task and Non-Task
Table 1 shows the results of the repeated-measures ANOVA and LSD tests. The ANOVA revealed significant between-group effects (F(3, 51) = 6.703, p < 0.001). Significant main effects were also observed for both the task group (F(3, 25) = 15.549, p < 0.001) and the non-task group (F(3, 24) = 12.507, p < 0.001), indicating that the performance differences within these groups were consistent and not random; effect sizes (partial η²) ranged from medium to large. For the non-task group, the LSD tests revealed significant differences in trust across the pretest (M = 4.722, SD = 0.886), low-PI (M = 4.790, SD = 1.090), medium-PI (M = 5.130, SD = 0.941), and high-PI (M = 5.438, SD = 0.744) conditions: significant differences were found between the pretest and high PI (p < 0.001), low and medium PI (p = 0.011), low and high PI (p < 0.001), and medium and high PI (p = 0.046). No significant differences were observed between the pretest and low PI (p = 0.761) or the pretest and medium PI (p = 0.115). These differences are illustrated in Figure 6: participants had low trust in the system before experiencing autonomous driving, with no significant difference between the pretest and low-PI conditions; trust in the medium-PI condition was slightly higher than in the low-PI condition, and trust under high-PI interaction was significantly higher.

For the task group, Figure 6 shows that trust in both the low-PI and high-PI conditions increased significantly compared with the pretest, but there was no significant difference between the low- and high-PI conditions, nor between the pretest and medium-PI conditions.

Gender
To further explore whether there were significant differences in responses to different PI levels among participants of different genders under task and non-task conditions, we conducted a repeated-measures ANOVA. The results showed a significant main effect of gender (p < 0.05) and a significant three-way interaction between PI level, the presence of other tasks, and gender (F(3, 49) = 3.165, p = 0.033, partial η² = 0.162). In the simple-effects analysis, significant simple effects were found for male participants' trust in different PI levels without other tasks (F = 7.59, p < 0.001) but not for female participants (F = 2.14, p = 0.098). However, when participants focused on other tasks, significant simple effects were found for both males (F = 3.041, p = 0.031) and females (F = 8.73, p < 0.001).
Without other tasks, the male participants showed significant differences in their responses to different PI levels, whereas the female participants did not.This indicates that males were more sensitive to PI levels than were females in the no-task condition.When participants focused on calculation tasks, both males and females showed significant differences in their responses to different PI levels, suggesting the need to explore whether the differences in responses to different PI levels vary by gender.
To further analyze the data, we conducted independent-samples t-tests for the task and no-task groups to evaluate whether trust differed significantly between the two gender groups under specific conditions. Table 2 shows the results of the independent-samples t-tests. There were no significant gender differences in trust at any PI level in the no-task group. However, in the task group, trust in the medium-PI condition differed significantly between males (M = 4.451, SD = 0.938) and females (M = 5.448, SD = 0.808) (p = 0.006), with males showing lower trust than females. No significant differences were found between males (M = 4.618, SD = 0.612) and females (M = 4.656, SD = 0.824) in the pretest (p = 0.894), between males (M = 4.979, SD = 0.717) and females (M = 5.563, SD = 0.830) in the low-PI condition (p = 0.062), or between males (M = 5.111, SD = 0.856) and females (M = 5.625, SD = 0.991) in the high-PI condition (p = 0.162). These differences are illustrated in Figure 7.
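The gender comparison within a single condition reduces to an independent-samples t-test. A brief SciPy sketch, using hypothetical trust scores rather than the study's data:

```python
from scipy import stats

# Hypothetical 7-point trust scores for the medium-PI task condition
# (illustrative values only, not the study's data)
male_trust = [4.5, 4.0, 5.2, 4.3, 4.8, 3.9, 4.6]
female_trust = [5.6, 5.2, 5.8, 5.4, 5.0, 5.7, 5.3]

# Independent-samples t-test; a negative t means the male mean is lower
t, p = stats.ttest_ind(male_trust, female_trust)
```

The sign of t encodes the direction of the difference between group means, so a significant negative t here would mirror the reported pattern of males trusting medium PI less than females.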

Extroversion
To further analyze the data, we conducted a one-way ANOVA on extroversion for the non-task group only, because the distribution of extroversion in the task group was uneven. Table 3 shows the results of the one-way ANOVA and post hoc comparisons. Levels 3, 4, and 5 on the extraversion scale of the Big Five personality questionnaire correspond to moderate, high, and very high degrees of extraversion, respectively. The results indicated no significant differences in trust for the none (low-PI) condition between participants with extroversion levels of 3 (M = 4.681, SD = 0.480), 4 (M = 5.092, SD = 0.372), and 5 (M = 4.750, SD = 0.445) (p = 0.853), or for the notification (medium-PI) condition between participants with extroversion levels of 3 (M = 5.042, SD = 0.407), 4 (M = 5.233, SD = 0.315), and 5 (M = 5.143, SD = 0.377) (p = 0.956). However, significant differences were found in trust for the inquiry (high-PI) condition between participants with extroversion levels of 3 (M = 5.000, SD = 0.294), 4 (M = 5.808, SD = 0.228), and 5 (M = 4.810, SD = 0.272) (p = 0.050). Post hoc tests revealed significant differences between participants with extroversion levels of 3 and 4 (p = 0.042) and between levels 4 and 5 (p = 0.011). These differences are illustrated in Figure 8: participants with different levels of extroversion mostly showed no differences in trust for low and medium PI; however, for high PI, participants with an extroversion level of 4 had the highest trust in the system.
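The extroversion analysis pairs a one-way ANOVA with uncorrected pairwise comparisons, which is the essence of the LSD approach. A SciPy sketch with hypothetical group scores (the data below are illustrative only):

```python
from itertools import combinations
from scipy import stats

# Hypothetical high-PI trust scores by extroversion level (Big Five, levels 3-5)
groups = {
    3: [5.0, 4.8, 5.2, 5.1, 4.9],
    4: [5.8, 5.9, 5.7, 5.8, 5.9],
    5: [4.8, 4.9, 4.7, 4.8, 5.0],
}

# Omnibus one-way ANOVA across the three extroversion groups
f, p = stats.f_oneway(*groups.values())

# LSD-style post hoc: uncorrected pairwise t-tests between groups
posthoc = {pair: stats.ttest_ind(groups[pair[0]], groups[pair[1]]).pvalue
           for pair in combinations(groups, 2)}
```

Because LSD applies no multiplicity correction, the pairwise p-values are interpreted only after a significant omnibus F, as in the analysis above.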

Discussion
The main effects of PI level in the non-task group were significant, indicating that different levels of proactive behavior had a significant impact on drivers' trust in AVs. There were significant differences between the three PI levels (low, medium, and high), with higher levels of interaction leading to higher trust in the system, thus supporting H1. The main effects of PI level in the task group were also significant, leading to the rejection of H2. Further exploration revealed that in more realistic L4 autonomous driving scenarios, when drivers performed other tasks simultaneously, different levels of proactive behavior had different impacts on trust: drivers trusted notification-based medium PI less than direct-execution low PI, and males trusted notification-based medium PI significantly less than females did. Meanwhile, the significant three-way interaction between PI level, presence of tasks, and gender supported H3. Finally, we found that drivers' extroversion impacted trust: drivers with higher extroversion levels trusted high-level PI more, but there was no significant difference between very highly and moderately extroverted participants, leading to the rejection of H4.
According to the experimental results, systems without PI were the least trusted when the driver was merely focused on the driving task, which is consistent with previous findings, possibly because situational awareness [49] and the perceived capabilities of AVs affect trust in the system [31]. Among the systems with PI, high PI was trusted more than medium PI. Many participants mentioned in the interviews that making their own decisions regarding driving behavior gave them a greater sense of control over the AV.
However, when participants concentrated on other tasks, trust in AVs with low PI was higher than in AVs with medium PI and did not differ from trust in AVs with high PI. Furthermore, a subsequent analysis of gender differences revealed that males trusted notification-based medium PI less than females did, which is consistent with previous results showing that men are less likely to trust automated systems [36]. This phenomenon could have several explanations. First, proactive voice interaction increases cognitive load [8], and being busy may also lead to aversion to PI systems [39]; this potentially outweighs the positive effects of perceived system capabilities, thereby reducing trust in AVs. Additionally, drivers trusted systems with high PI more in the absence of distractions than when multitasking, which, beyond cognitive load, could also be related to their level of awareness of AVs: because most participants were college students and staff, they might have understood that environmental perception is part of the system's capabilities, and PI in distracting environments might be perceived as a lack of intelligence in the AV, leading to lower trust.
Consistent with previous research, the level of trust in AVs may be related to extroversion [50]. For high-level PI, participants with higher extroversion tended to trust AVs more, but there was no significant difference in trust between very highly and moderately extroverted participants.

Conclusions
Our study designed different PI levels for AVs, subdivided voice PI behavior, and explored differences in trust toward different PI levels among driving supervisors, as well as the impact of personal factors on trust in AVs. The results provide practical guidance for designing AVs that increase drivers' trust. Research on gender and personality traits helps explain how individual characteristics modulate the relationship between interaction behaviors and trust, offering a new perspective for personalized design and optimization of interaction strategies in AVs. The key recommendations for AV interaction design from our study are as follows: (i) Trust in the system can be enhanced by increasing the PI level. During voice interaction with the driver, trust increases when AVs proactively report current situational changes to the driver, delegate authority for driving behavior decisions to the driver under suitable conditions, and inquire whether to execute the provided suggestions. (ii) When L4 AVs detect that the driver is handling other tasks, they may need to be cautious about informing the driver of driving decisions, especially for male drivers, as this may negatively impact trust; instead, AVs can choose to inquire or simply execute the decision directly without any interaction. (iii) It is also helpful to gather information about the driver's personality beforehand to select interaction schemes for different extroversion groups: for drivers with extroverted but not extremely extroverted personalities, a higher level of PI can be used more frequently to enhance their trust in AVs.
The results of our study can serve as a reference for PI between drivers and AVs, increasing drivers' trust and technology acceptance and thereby supporting the sustainability of AVs. However, this study has some limitations. Although we asked participants to complete the trust scale immediately after each experimental scenario to improve the timeliness of the data, the evaluations could only reflect participants' retrospective assessments at the time of completion rather than their real-time psychological state during the experiment. This is an unavoidable limitation of scale-based measures. In future research, physiological measurement techniques such as electroencephalography (EEG) and eye tracking could be used to obtain participants' real-time trust in the system.

Figure 2. Schematic diagram of the experimental scenario.

Figure 3. Driver's view in the VR environment: (a) task group; (b) non-task group.

Figure 5. Processes of the experiment.

Figure 6. Differences in perceived trust under different task conditions.

Figure 7. Differences in trust in AV systems between genders.

Figure 8. Differences in trust among individuals with varying levels of extroversion.

Table 1. Trust levels under different task conditions. Mean values (M) and standard deviations (SD) are presented.

Table 2. Comparison of perceived trust in PI between genders.

Table 3. Comparison of trust in PI among participants with different levels of extroversion.