Article

Effect of Proactive Interaction on Trust in Autonomous Vehicles

1 Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, Tianjin 300354, China
2 School of Intelligent Media and Design Arts, Tianjin Ren’ai College, Tianjin 301636, China
* Author to whom correspondence should be addressed.
Sustainability 2024, 16(8), 3404; https://doi.org/10.3390/su16083404
Submission received: 7 March 2024 / Revised: 4 April 2024 / Accepted: 16 April 2024 / Published: 18 April 2024
(This article belongs to the Section Economic and Business Aspects of Sustainability)

Abstract

With rapid advancements in autonomous vehicles (AVs), mistrust between humans and autonomous driving systems has become a focal concern for users. Meanwhile, proactive interaction (PI), as a means to enhance the efficiency and satisfaction of human–machine collaboration, is increasingly being applied in the field of intelligent driving. Our study investigated the influence of varying degrees of PI on driver trust in Level 4 (L4) AVs set against a virtual reality (VR)-simulated driving backdrop. An experiment with 55 participants revealed that, within an autonomous driving scenario without interference, elevated PI levels fostered increased trust in AVs among drivers. Within task scenarios, low PI resulted in greater trust than the notification-based medium PI. Compared to females, males demonstrated reduced trust in the medium-PI condition. Drivers with high extroversion exhibited the highest trust in the high-PI condition; however, the difference between very highly and moderately extroverted participants was not significant. Our findings provide guidance for interaction designs that increase trust, thereby enhancing the acceptance and sustainability of AVs.

1. Introduction

Trust is crucial for understanding system reliance decisions [1]. The correct use of modern technology highly depends on human trust [2]. As computer technology has become increasingly complex, artificial intelligence (AI) has rapidly expanded into everyday human life. Trust may reduce negative emotions toward AI [3] and play a key role in increasing AI acceptance, making the importance of human trust in AI even more significant [4]. AI is a key technology for achieving the functionality and performance of autonomous vehicles (AVs) and can improve energy efficiency by optimizing travel speed and by reducing fuel consumption through the avoidance of unnecessary acceleration and braking [5]. Therefore, AVs represent a significant step toward sustainable development [6]. As the level of autonomous driving gradually increases, the relationship between humans and vehicles has transformed from traditional “human control” to “shared control.” The vehicle gains partial control, and the human’s role shifts from “driver” to a more advanced “supervisor.” When humans have to monitor the system or act outside the driving task, the question of whether to trust and delegate the driving task to an autonomous driving system arises.
The proactive interaction (PI) theory originated from the concept of proactive behavior in sociology. With the development of AI technology, proactive behavior models based on machine learning have endowed robots with the capability for specific situational behaviors, such as sociability and seeking feedback [7]. However, proactive voice interaction may increase cognitive load [8]. In order to improve user acceptance, this human-like proactive behavior has been applied to systems with social attributes to enhance interaction efficiency with users. Current research has explored the effectiveness of proactive behavior in social robots [9] and perceptions of anthropomorphism at different levels of proactive behavior [7]. Peng et al. designed the proactivity of social robots in three dimensions based on the autonomy of social robots [10]. Kraus et al. classified the PI levels in robot assistants from low to high as none, notification, suggestion, and intervention and demonstrated the potential impact of different PI levels on user trust [11]. Research on robotic assistants involves both how to determine the level of initiative and when to take initiative. For example, in the studies of Grosinger et al. [12] and Liu et al. [13], the system can use human-interaction data as training input to determine when to perform PI under appropriate conditions and times.
The application of PI has become increasingly common in the field of AVs. For example, smart cars use sensors or facial recognition technology to greet users through voice, light, and other media or provide appropriate route solutions based on users’ travel needs. Among various PI methods, voice proactive interaction is particularly noteworthy for its natural and intuitive communication. AVs interact proactively with drivers through a series of strategies and algorithms, including vehicle and road condition information collection, data fusion and cognitive decision-making, and big data analysis [14,15]. Through PI, information feedback between humans and vehicles becomes timelier and more effective, providing passengers with a unique and convenient interactive experience.
The Society of Automotive Engineers (SAE) classifies AVs into six levels ranging from L0 to L5 [16]. In June 2022, Mercedes-Benz announced the world’s first fully certified Level 3 AV Drive Pilot, which received approval from the German Department of Transportation. The advent of Level 4 (L4) AVs is expected to overcome technological constraints and various safety and legal issues, thereby attracting public attention. However, a recent study found that with the advancement of AV levels, public acceptance of AVs has declined [17]. The trust crisis between humans and AVs is exacerbated by frequent accidents involving AVs. Technical system failures are not the only causes of accidents. A significant factor is the low vigilance of some drivers [18], resulting in the majority of accidents being caused by drivers rather than the AV systems [19]. Although studies have confirmed that highly automated AVs are safer than human drivers [20], public trust in this new technology for autonomous driving assistance is decreasing. This significantly affects the effectiveness of the use of autonomous driving functions by drivers and public acceptance of AVs [21,22]. Therefore, determining how trust works in AVs is crucial to enhance consumers’ willingness to purchase AVs and the effectiveness of autonomous driving.

2. Related Work

Trust research in human–machine collaboration is largely based on psychological perspectives, such as Mayer et al.’s definition of trust as one party’s willingness to be vulnerable to another’s actions [23]. The domain of human–machine trust research has also expanded to the design of interactions between humans and systems, particularly regarding system characteristics, personal traits, and specific scenarios [24,25]. To establish trustworthy systems, people focus on the design, modeling, and implementation of AI systems, where there are three main factors for trust: system, personal, and situational factors.

2.1. System Factors

System factors primarily revolve around interfaces and human interaction. For example, studies have shown that in initial interactions with a system, its appearance may be more important than its reliability in gaining trust [26]. Semantic symbols in feedback interfaces can influence changes in user trust compared to non-semantic symbols [27]. Systems that adhere to good human etiquette can compensate for low trust due to low levels of automation reliability [28]. Anthropomorphized systems may also promote the perception of machine agents as social agents, thereby increasing trust in the system [29] and leading to more frequent interactions with the device [30]. Perceived capabilities may also affect trust in AVs [31]. These system characteristics do not necessarily enhance the system’s reliability but can strengthen users’ trust in the system. However, despite some developments in research on trust in AVs, such as adjusting drivers’ expectations to enhance trust [32], only a few studies have attempted to increase driver trust through the interaction method itself, for example, by exploring the relationship between visual design elements of the human–vehicle interaction interface and trust [29] or the explainability of user interface information [33]. Research on how PI forms affect trust in AVs is still lacking.

2.2. Personal Factors

Researchers have found that user trust in a system is significantly influenced by personal characteristics such as income level [34] and cultural background [35]. Additionally, men may have lower levels of trust in automated systems [36]. As human–machine trust increasingly resembles trust between people, individual differences in users’ personalities have become more apparent [2]. A previous study reported that people with a relational orientation and utilitarian tendencies showed higher trust in the interactive behaviors of apology and compensation, respectively, when faced with system errors [37]. Users’ innovative traits are positively correlated with their trust level in the system; users who are more conscientious tend to dislike socializing and are more inclined to trust text interfaces with poorer social attributes [38]. Personal prior experience is also an important factor affecting the degree of trust in a system; people generally have positive expectations for unfamiliar systems, considering them trustworthy and even expecting them to be better than human beings [8]. Additionally, research shows that individuals with high self-efficacy are more likely to trust and use a system because they believe they can master the technology and effectively handle possible challenges [2].
In addition, researchers have found that users prefer system tools that match their own personalities [39]. User trust in automated systems is promoted when moral judgments of automated systems are similar to their preferred values [40]. In the most widely used personality structure model, the Big Five Personality Model, highly extroverted personalities exhibit sociable characteristics, unlike introverts [41]. There may be a correlation between a person’s proactive behavior tendencies and trust in proactive conversation assistants [42]. Therefore, personal characteristics should be fully considered to achieve the positive effectiveness of a human–machine collaboration [43].

2.3. Situational Factors

As AI gradually extends to many aspects of life, people’s trust perceptions of the system may vary in different situations, emphasizing the increasing importance of studying the factors of human–machine trust in real environments [24]. For example, the behavior of the system in non-emergency phases does not affect the trust decisions people make in emergency situations [44]. Atoyan et al. experimentally demonstrated that when people face complex multitasking environments beyond their capabilities, they may develop excessive trust in cooperative AI systems [45]. In L4 AVs, complex situations involving multitasking are common. Owing to the high complexity of trust-influencing factors, conclusions about trust in AVs drawn solely from single-task driving scenarios remain insufficient.
In summary, to investigate the effect of PI on trust among drivers with different characteristics in L4 AVs under more realistic multitasking scenarios, this study simulated an autonomous driving environment to accurately understand and assess drivers’ trust in AVs.

3. Materials and Methods

Based on the theory of PI and studies of PI behavior in AVs, different levels of interaction behavior have been widely applied in intelligent vehicles. For example, when the road is congested, the system proactively asks the user whether the route should be replanned; when the driver deviates from the route, it informs the user that the route has been rerouted; and when the Adaptive Cruise Control (ACC) system is active, the vehicle adjusts its speed directly without any interaction with the user. Thus, we propose three levels of voice-based PI for AVs, defined by the system’s autonomy and the frequency of interaction with the driver, as shown in Figure 1 and sketched in code after the following list.
(i)
High PI: The system informs the driver of the surrounding situation or traffic circumstances based on information gathered from sensors, suggests a decision according to those sensory inputs, and requests confirmation from the driver. After receiving the driver’s feedback, it acknowledges the response and executes or withholds the suggested decision accordingly.
(ii)
Medium PI: The system informs the driver of the surrounding situation or traffic circumstances based on information gathered from sensors, announces a decision according to those sensory inputs, and then executes it directly.
(iii)
Low PI: The system executes decisions directly without interactions with the driver.
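To make the three levels concrete, the following is a minimal Python sketch of how such a PI policy could be expressed; the `say` and `ask_driver` functions are hypothetical placeholders for the vehicle’s text-to-speech output and voice-recognition input, not part of the study’s implementation.

```python
from enum import Enum

class PILevel(Enum):
    LOW = "none"             # execute directly, no interaction
    MEDIUM = "notification"  # announce the decision, then execute
    HIGH = "inquiry"         # announce, suggest, and wait for confirmation

def say(text: str) -> None:
    """Placeholder for the AV's text-to-speech output (assumed interface)."""
    print(f"[AV voice] {text}")

def handle_event(level: PILevel, situation: str, action: str, ask_driver) -> bool:
    """Decide whether to execute `action`; returns True if it should run.

    `ask_driver` is a hypothetical callback that blocks until the driver
    answers the inquiry (e.g., "yes" -> True, "no need" -> False).
    """
    if level is PILevel.LOW:
        return True                                   # no interaction at all
    if level is PILevel.MEDIUM:
        say(f"{situation}. {action} now.")            # notify, then execute
        return True
    say(f"{situation}. Do you want to {action}?")     # high PI: ask first
    approved = ask_driver()
    say("Okay.")
    return approved

# Example: the ramp-merging scenario used in the experiment
if handle_event(PILevel.HIGH,
                "The vehicle ahead on the main road is slowing down",
                "accelerate",
                ask_driver=lambda: True):
    pass  # trigger the acceleration maneuver here
```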
By introducing the concept of PI levels in AVs, we explored the impact of different PI levels and personal factors such as gender and personality on trust in systems with different PI levels. The entire experiment was conducted in a virtual reality (VR) environment.

3.1. Hypotheses

For different PI levels in L4 AVs, voice interaction with the driver is expected to increase the perception of the system’s capabilities [10]. As the number of interactions with the system increases, the level of trust among users is also likely to increase [1]. However, attention may be diverted when the drivers perform other tasks. Regarding individual differences, males may exhibit lower trust tendencies when faced with an automated system [36]. Additionally, because people prefer systems that match their personalities [39], extroverts may trust systems that actively interact with them because of their social traits [41]. Therefore, the following four hypotheses are formulated:
H1. 
In a no-task scenario, the higher the PI level, the more trust drivers have in AVs.
H2. 
In the task scenario, there is no significant difference in driver trust in the system at different PI levels.
H3. 
Under different tasks, females have higher trust than males at different PI levels.
H4. 
Extroverted drivers trust AVs more with higher PI levels than introverted drivers do.

3.2. Experimental Design

3.2.1. Scenario Design

In the VR environment, to control the variable of risk perception, we designed a scenario in which a vehicle merges onto a highway from a ramp. Ramp merging is a familiar driving scenario for participants, reducing the likelihood of increased risk perception due to unfamiliarity. As Car1 is about to merge into the lane, Car2 on the main road is in the driver’s view. The autonomous driving system detects that Car2 is decelerating at a speed lower than Car1, as shown in Figure 2. In this situation, whether the driver chose to accelerate or maintain speed, a safe distance between the two cars was ensured. Thus, the risk was relatively moderate, avoiding the impact of excessive trust by participants on the experimental results. After Car1 merged with the highway, the experiment ended.

3.2.2. Interaction Methods

The experiment used the Wizard-of-Oz method, in which participants believed they were interacting with an autonomous system. However, in reality, the system was partially or fully operated by the experimenter [38]. Based on the design of the AV PI shown in Figure 3, three PI levels were set: direct execution without PI—none (low PI), notification (medium PI), and inquiry (high PI). The specific manifestations of the experimental scenario are as follows.
(i)
Inquiry (high PI): AVs asked the participant, “We have the vehicle slowing down on the main road. Do you want to accelerate?” After the participant gave different feedback (such as “Yes” or “Do not speed up”), the system replied “Okay” and executed acceleration or no acceleration based on the participant’s feedback.
(ii)
Notification (medium PI): AVs informed the participant, “We have the vehicle slowing down on the main road. We’re about to speed up” and then accelerated to pass.
(iii)
None (low PI): AVs did not notify the participant and directly accelerated to pass.

3.2.3. Task Design

To simulate the behavior of drivers in realistic L4 AV scenarios, such as using personal smartphones, a monetary incentive task [46] was introduced into the scene: an arithmetic judgment task used to distract participants’ attention. The task involved simple addition and subtraction problems with values below 20. To encourage participants to engage deeply in the arithmetic task and enhance the effectiveness of completing the subtask, we promised participants before the experiment that they would be rewarded with 0.5 RMB for each correct answer and emphasized that the calculation results were unrelated to the driving behavior and decision-making of the AVs.
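As an illustration of how such a distraction task could be generated and rewarded, here is a small Python sketch; the `get_response` function is a hypothetical stand-in for the VR controller input, and the exact problem format used in the study may differ.

```python
import random

def make_problem(limit: int = 20):
    """Create a simple addition/subtraction problem with all values below `limit`."""
    while True:
        a, b = random.randint(0, limit - 1), random.randint(0, limit - 1)
        op = random.choice(("+", "-"))
        answer = a + b if op == "+" else a - b
        if 0 <= answer < limit:
            return f"{a} {op} {b}", answer

def get_response(expression: str, shown: int) -> bool:
    """Hypothetical input handler; in the study this was a trigger press in VR."""
    return input(f"Is {expression} = {shown} correct? (y/n) ").strip().lower() == "y"

def run_task(n_trials: int = 10, reward_per_correct: float = 0.5):
    """Present judgment trials and tally the monetary reward in RMB."""
    correct = 0
    for _ in range(n_trials):
        expression, answer = make_problem()
        # Show either the true result or a result that is off by one
        shown = answer if random.random() < 0.5 else answer + random.choice((-1, 1))
        if get_response(expression, shown) == (shown == answer):
            correct += 1
    return correct, correct * reward_per_correct

# correct_count, reward = run_task()
```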
The interface for participants without the arithmetic task is shown in Figure 3. There is no arithmetic judgment board in the scene, and other steps are identical to those for participants in the arithmetic task.

3.3. Participants

A purposive sampling method was used to recruit 55 participants (32 females and 23 males) from Tianjin University through a questionnaire survey, excluding those with no driving experience. All participants possessed a driver’s license and had no experience with AVs at any level. The ages of the participants ranged from 19 to 44 years (M = 23.95, SD = 3.997). Participants signed an informed consent form prior to the experiment.

3.4. Stimuli and Apparatus

As shown in Figure 4, a VR device (PICO Neo 3, Pico, Qingdao, China) and two accompanying controllers were used. The PICO Neo 3 headset (4 K resolution, 120 Hz refresh rate, 98° FOV) features integrated spatial audio and high-quality stereo speakers, capable of playing white noise from the car engine during the experiment to maximally simulate a real driving environment. The experimenter operated one controller to provide secondary feedback for voice interaction and to control vehicle acceleration based on the feedback. The experimenter monitored the experimental scenario in real time using a monitor. The other controller was used by the participants in the task group to simulate the clicking of buttons on the screen in a VR environment by pressing the trigger button.

3.5. Experimental Procedure

The total duration of the experiment was less than 20 min. The overall experimental process is illustrated in Figure 5.
  • Step 1 Survey.
Participants completed the survey in a quiet room. They answered single-item questions on basic demographic details (gender, age, and occupation) and driving experience, including years of driving, total kilometers driven, and a self-assessment of driving proficiency, to fully characterize the participants.
  • Step 2 Introduction.
The experimenter briefly introduced the AVs and the precautions during the experimental process:
“Hello! Autonomous driving is divided into six levels from 0 to 5. Level 4, high driving automation, refers to highly automated AVs that can drive autonomously in specific environments and tasks without driver intervention; however, manual control may still be required. During the driving process, the system can verbally interact with the user. If necessary, one only needs to respond with simple commands such as ‘yes’ or ‘no need’”.
After introducing the background of AVs, the participants were asked to complete a trust survey questionnaire based on their previous understanding of AVs and the experimenter’s introduction. The questionnaire content contained a commonly used automation trust scale [47,48] to measure participants’ trust in the system.
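For reference, a minimal sketch of how ratings on such an automation trust scale can be scored is shown below; it assumes 7-point Likert items in which negatively worded (distrust) items are reverse-coded and all items are averaged into a single trust score, which may differ from the authors’ exact procedure.

```python
def score_trust(ratings, reverse_items, scale_max: int = 7) -> float:
    """Average 7-point Likert ratings into one trust score.

    ratings: dict mapping item index -> rating (1..scale_max)
    reverse_items: indices of negatively worded (distrust) items to reverse-code
    (which items are reversed is an assumption, not taken from the paper)
    """
    adjusted = [
        (scale_max + 1 - r) if i in reverse_items else r
        for i, r in ratings.items()
    ]
    return sum(adjusted) / len(adjusted)

# Example with made-up ratings for a 12-item checklist
example = {i: 5 for i in range(1, 13)}
print(score_trust(example, reverse_items={1, 2, 3, 4, 5}))
```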
  • Step 3 Preparation.
Participants were seated in a spacious area and randomly assigned to either the interference or non-interference group. All participants were required to complete a Big Five personality questionnaire [41], which classifies each dimension into five stages, numbered 1–5 from lowest to highest. Subsequently, the experimenter instructed the participants in the interference group on how to use the controller. Afterwards, all participants were assisted in wearing the VR device and adjusting their seating position to ensure comfort and clear visibility of the simulated environment on the screen.
  • Step 4 Experiment.
After preparation, the participants entered the prebuilt AV scenario and familiarized themselves with the driving process of the AVs on the ramp. Participants in the non-task group only needed to focus on the driving behavior of the system, whereas those in the task (interference) group had to complete a subtask displayed on the screen inside the vehicle. The familiarization process lasted approximately 30 s. The system then interactively engaged participants with three different PI levels in random order, and a trust scale questionnaire was completed after each scenario.
  • Step 5 Interview.
After the experiment, participants were asked to freely express their opinions on the three forms of interaction.

4. Results

This study employed SPSS 27.0 for a repeated-measures analysis of variance (ANOVA) to determine whether drivers’ trust in AVs differed significantly across PI levels, as well as whether trust at different PI levels differed significantly between genders and among drivers with varying personality traits. Following the ANOVA, the least significant difference (LSD) test was conducted for a post hoc analysis of the main effects.
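The analysis was run in SPSS, but an equivalent workflow can be reproduced in Python; the sketch below assumes a long-format table with hypothetical column names (`participant`, `condition`, `trust`) and approximates the LSD post hoc test with uncorrected pairwise paired t-tests.

```python
import pandas as pd
from itertools import combinations
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Long-format data: one trust rating per participant and condition
# (hypothetical file and column names)
df = pd.read_csv("trust_ratings.csv")  # columns: participant, condition, trust

# Repeated-measures ANOVA over the four within-subject conditions
# (pretest, low PI, medium PI, high PI)
print(AnovaRM(df, depvar="trust", subject="participant",
              within=["condition"]).fit())

# LSD-style post hoc comparisons: uncorrected pairwise paired t-tests
wide = df.pivot(index="participant", columns="condition", values="trust")
for a, b in combinations(wide.columns, 2):
    t, p = stats.ttest_rel(wide[a], wide[b])
    print(f"{a} vs. {b}: t = {t:.3f}, p = {p:.4f}")
```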

4.1. Task and Non-Task

Table 1 shows the results of the repeated-measures ANOVA and LSD tests. The ANOVA indicated a significant between-group effect (F(3, 51) = 6.703); the F-statistic is the ratio of the variance between groups to the variance within groups. In the context of our study, the p-value indicates the likelihood that the observed differences could have occurred by chance: a p-value of 0.05 or less (p ≤ 0.05) is considered evidence of a statistically significant difference, and p ≤ 0.01 or p ≤ 0.001 represents increasingly stronger evidence against the null hypothesis of no effect. The between-group effect was highly significant (p < 0.001). Specifically, significant main effects were observed for both the task group (F(3, 25) = 15.549, p < 0.001) and the non-task group (F(3, 24) = 12.507, p < 0.001), suggesting that the performance differences within these groups were consistent and not random. The effect sizes, measured by partial η2, ranged from medium to large, implying a substantial impact of the independent variable on the dependent variable. The t-value reflects the difference between the two groups: a larger absolute t-value signifies a stronger difference, and its sign indicates whether the task group’s mean is higher or lower than the non-task group’s.
The LSD results revealed significant differences in trust among the non-task group for the pretest (M = 4.722, SD = 0.886), low-PI (M = 4.790, SD = 1.090), medium-PI (M = 5.130, SD = 0.941), and high-PI (M = 5.438, SD = 0.744) conditions (p < 0.001). Significant differences were found between the pretest and high PI (p < 0.001), low PI and medium PI (p = 0.011), low PI and high PI (p < 0.001), and medium PI and high PI (p = 0.046). No significant differences were observed between the pretest and low PI (p = 0.761) or pretest and medium PI (p = 0.115). These differences are illustrated in Figure 6.
Figure 6 shows that the participants had low trust in the system before experiencing autonomous driving, with no significant difference between the pretest and low PI conditions. Trust in the medium-PI condition was slightly higher than that in the low-PI condition, and trust in the high-PI interaction was significantly higher.
In the task group, significant differences in trust were found for the pretest (M = 4.640, SD = 0.728), low-PI (M = 5.313, SD = 0.824), medium-PI (M = 5.021, SD = 0.987), and high-PI (M = 5.405, SD = 0.954) conditions (p < 0.001). Significant differences were observed between the pretest and high PI (p < 0.001), pretest and low PI (p < 0.001), low and medium PIs (p = 0.021), and medium and high PIs (p = 0.012). No significant differences were found between the pretest and medium PI (p = 0.072) or between low PI and high PI (p = 0.500). These differences are illustrated in Figure 6.
As shown in Figure 6, trust in both the low-PI and high-PI conditions significantly increased compared to before the experience, but there was no significant difference in trust between the low- and high-PI conditions, and there was no difference between the pretest and medium-PI conditions.

4.2. Gender

To further explore whether there were significant differences in the responses to different levels of interaction among participants of different genders under task and non-task conditions, we conducted a repeated-measures ANOVA. The results showed a significant main effect of gender (p < 0.05) and a significant three-way interaction between the PI level, the presence of other tasks, and gender (F(3, 49) = 3.165, p = 0.033, partial η2 = 0.162). In the simple effect analysis, significant simple effects were found for male participants’ trust in different PI levels without other tasks (F = 7.59, p < 0.001) but not for female participants (F = 2.14, p = 0.098). However, when the participants focused on other tasks, significant simple effects were found for both males (F = 3.041, p = 0.031) and females (F = 8.73, p < 0.001).
Without other tasks, the male participants showed significant differences in their responses to different PI levels, whereas the female participants did not. This indicates that males were more sensitive to PI levels than were females in the no-task condition. When participants focused on calculation tasks, both males and females showed significant differences in their responses to different PI levels, suggesting the need to explore whether the differences in responses to different PI levels vary by gender.
To further analyze the data, we conducted independent-sample t-tests for the task and no-task groups to evaluate whether there was a significant difference in trust between the two gender groups under specific conditions. Table 2 shows the results of the independent-sample t-tests. The results indicated no significant differences in trust perceptions at different PI levels between genders in the no-task group. However, in the task group, there was a significant difference in trust in the medium-PI condition between males (M = 4.451, SD = 0.938) and females (M = 5.448, SD = 0.808) (p = 0.006), with males showing lower trust than females. No significant differences were found between males (M = 4.618, SD = 0.612) and females (M = 4.656, SD = 0.824) in the pretest (p = 0.894), between males (M = 4.979, SD = 0.717) and females (M = 5.563, SD = 0.830) in the low-PI condition (p = 0.062), or between males (M = 5.111, SD = 0.856) and females (M = 5.625, SD = 0.991) in the high-PI condition (p = 0.162). These differences are illustrated in Figure 7.
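The effect size r reported in Table 2 can be derived from the t statistic and degrees of freedom; the short sketch below illustrates one common conversion, r = sqrt(t² / (t² + df)), applied to made-up ratings rather than the study data, and it is an assumption that the authors used this particular formula.

```python
import numpy as np
from scipy import stats

def compare_genders(male_scores, female_scores):
    """Independent-samples t-test plus the effect size r = sqrt(t^2 / (t^2 + df))."""
    t, p = stats.ttest_ind(male_scores, female_scores)
    df = len(male_scores) + len(female_scores) - 2
    r = np.sqrt(t ** 2 / (t ** 2 + df))
    return t, p, r

# Illustration only: made-up trust ratings, not the study data
male = [4.2, 4.8, 5.1, 4.4, 4.0, 4.6]
female = [5.3, 5.6, 5.0, 5.7, 5.2, 5.4]
t, p, r = compare_genders(male, female)
print(f"t = {t:.3f}, p = {p:.3f}, r = {r:.3f}")
```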

4.3. Extroversion

To further analyze the data, we conducted a one-way ANOVA for participants’ extroversion in the non-task group because the distribution of extroversion in the task group was uneven. Table 3 shows the results of the one-way ANOVA and post hoc comparisons. Levels 3, 4, and 5 on the extraversion scale of the Big Five personality questionnaire correspond to moderate, high, and very high degrees of extraversion, respectively.
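A corresponding one-way ANOVA with LSD-style post hoc comparisons can be sketched as follows; the group values are made up for illustration, and the LSD test is approximated by uncorrected pairwise independent t-tests.

```python
from itertools import combinations
from scipy import stats

# Trust ratings in the high-PI condition grouped by extroversion level
# (made-up values for illustration, not the study data)
groups = {
    3: [4.8, 5.1, 4.9, 5.2],
    4: [5.7, 5.9, 5.8, 5.6],
    5: [4.7, 5.0, 4.8, 4.9],
}

F, p = stats.f_oneway(*groups.values())
print(f"One-way ANOVA: F = {F:.3f}, p = {p:.3f}")

# LSD-style post hoc: uncorrected pairwise independent t-tests
for a, b in combinations(groups, 2):
    t, p_pair = stats.ttest_ind(groups[a], groups[b])
    print(f"level {a} vs. {b}: t = {t:.3f}, p = {p_pair:.3f}")
```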
The results indicated no significant differences in trust perception in the none (low-PI) condition between participants with extroversion levels of 3 (M = 4.681, SD = 0.480), 4 (M = 5.092, SD = 0.372), and 5 (M = 4.750, SD = 0.445) (p = 0.853) or in the notification (medium-PI) condition between participants with extroversion levels of 3 (M = 5.042, SD = 0.407), 4 (M = 5.233, SD = 0.315), and 5 (M = 5.143, SD = 0.377) (p = 0.956). However, significant differences were found in trust perception in the inquiry (high-PI) condition between participants with extroversion levels of 3 (M = 5.000, SD = 0.294), 4 (M = 5.808, SD = 0.228), and 5 (M = 4.810, SD = 0.272) (p = 0.050). Post hoc tests revealed significant differences between participants with extroversion levels of 3 and 4 (p = 0.042) and between those with levels of 4 and 5 (p = 0.011). These differences are illustrated in Figure 8.
As shown in Figure 8, participants with different levels of extroversion mostly showed no differences in trust perception for low and medium PI; however, for high PI, participants with an extroversion level of 4 had the highest trust in the system.

5. Discussion

The main effects of different PI levels in the non-task group were significant, indicating that different levels of proactive behavior had a significant impact on drivers’ trust in AVs. There were significant differences between the three PI levels (low, medium, and high), with higher levels of interaction leading to higher trust in the system, thus supporting H1. The main effects of the different PI levels in the task group were also significant, leading to the rejection of H2. However, further exploration revealed that in more realistic L4 autonomous driving scenarios, when drivers were performing other tasks simultaneously, different levels of proactive behavior had different impacts on trust. Drivers trusted the notification-based medium PI less than the direct-execution low PI, and males trusted the notification-based medium PI significantly less than females. Meanwhile, the significant three-way interaction between the PI level, presence of tasks, and gender supported H3. We also found that drivers’ extroversion affected trust: drivers with higher extroversion levels trusted high-level PI more, but there was no significant difference between very highly and moderately extroverted participants, leading to the rejection of H4.
According to the experimental results, systems without PI were the least trusted when the driver was merely focused on the driving task, which is consistent with previous findings, possibly due to situational awareness [49] and the perceived capabilities of AVs affecting trust in the system [31]. Among the systems with a PI, a high PI was more trusted than a medium PI. Many participants mentioned in the interviews that making their own decisions regarding driving behavior made them feel a greater sense of control over AVs.
However, when participants concentrated on other tasks, trust in AVs with low PI was higher than in those with medium PI and did not differ from trust in those with high PI. Furthermore, a subsequent analysis of gender differences revealed that males trusted the notification-based medium PI less than females, which is consistent with previous results that men are less likely to trust automated systems [36]. This phenomenon could be due to several reasons: First, proactive voice interaction increases cognitive load [8]. Being busy may also lead to aversion to PI systems [39]. This potentially outweighs the positive effects of perceived system capabilities, thereby reducing trust in AVs. Additionally, drivers trusted systems with a high PI more in the absence of distractions than when multitasking, which, in addition to the cognitive load, could also be related to the level of awareness of AVs. Because most participants were college students and staff, they might have understood that environmental perception is part of the system’s capabilities, and PI in distracting environments might be perceived as a lack of intelligence in AVs, leading to lower trust.
Consistent with previous research, the level of trust in AVs may be related to extroversion [50]. For high-level PI, participants with higher extroversion tended to trust AVs more, but there was no significant difference in trust between participants with very high and moderate extroversion.

6. Conclusions

Our study designed different PI levels for AVs, subdivided voice PI behavior, and explored the differences in trust toward different PI levels among driving supervisors as well as the impact of personal factors on trust in AVs. The results provide practical guidance for the design of AVs that will increase drivers’ trust. Research on gender and personality traits helps clarify how individual characteristics modulate the relationship between interaction behaviors and trust, offering a new perspective for personalized design and optimization of interaction strategies in AVs. The key recommendations for AV interaction design based on our study are as follows:
(i)
In the future, trust in the system can be enhanced by increasing the PI level. During voice interaction with the driver, it will be more helpful for increasing trust when AVs proactively report current situational changes to the driver, delegate the authority for driving behavior decisions to the driver under suitable conditions, and inquire whether to execute the provided suggestions.
(ii)
When L4 AVs detect that the driver is handling other tasks, they may need to be cautious about notifying the driver of driving decisions, especially for male drivers, as this may have a negative impact on trust. Instead, AVs can choose to inquire or simply execute the decision directly without any interaction.
(iii)
It is also helpful to gather information about the driver’s personality beforehand to select interaction schemes for different extroverted groups. For drivers with extroverted but not extremely extroverted personalities, a higher level of PI can be utilized more frequently to enhance their trust in AVs.
The results of our study could serve as a reference for PI between drivers and AVs, which will increase drivers’ trust, acceptance of technology, and sustainability. However, this study has some limitations. Although we asked participants to complete the trust scale immediately after each experimental scenario to improve the timeliness of the data, the evaluations could only reflect the retrospective assessment of the participants at the time of completion, instead of the real-time psychological state during the experiment. This is an unavoidable limitation of the scale. In future research, physiological measurement techniques such as electroencephalography (EEG) and eye tracking can be used to obtain real-time trust in the system from participants.

Author Contributions

Conceptualization, J.S.; methodology, J.S.; software, J.S.; validation, Y.H., X.H., H.Z., and J.Z.; formal analysis, H.Z.; investigation, J.S.; resources, J.S.; data curation, H.Z.; writing—original draft preparation, J.S.; writing—review and editing, Y.H. and X.H.; visualization, J.Z.; supervision, Y.H.; project administration, Y.H.; funding acquisition, Y.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant No. 51875399).

Institutional Review Board Statement

The study was approved by the Ethics Committee of Tianjin University (protocol code TJUE-2023-026 on 26 February 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

The authors wish to acknowledge all participants for their support in the experiment.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Dzindolet, M.T.; Peterson, S.A.; Pomranky, R.A.; Pierce, L.G.; Beck, H.P. The Role of Trust in Automation Reliance. Int. J. Hum. Comput. Stud. 2003, 58, 697–718. [Google Scholar] [CrossRef]
  2. Sheridan, T.B. Individual Differences in Attributes of Trust in Automation: Measurement and Application to System Design. Front. Psychol. 2019, 10, 1117. [Google Scholar] [CrossRef]
  3. Zhang, S.; Meng, Z.; Chen, B.; Yang, X.; Zhao, X. Motivation, Social Emotion, and the Acceptance of Artificial Intelligence Virtual Assistants—Trust-Based Mediating Effects. Front. Psychol. 2021, 12, 728495. [Google Scholar] [CrossRef] [PubMed]
  4. Lee, J.D.; See, K.A. Trust in Automation: Designing for Appropriate Reliance. Hum. Factors 2004, 46, 50–80. [Google Scholar] [CrossRef] [PubMed]
  5. Chen, Y.; Shiwakoti, N.; Stasinopoulos, P.; Khan, S.K. State-of-the-Art of Factors Affecting the Adoption of Automated Vehicles. Sustainability 2022, 14, 6697. [Google Scholar] [CrossRef]
  6. Tomasevic, N.; Young, K.L.; Horberry, T.; Fildes, B. A Path towards Sustainable Vehicle Automation: Willingness to Engage in Level 3 Automated Driving. Sustainability 2022, 14, 4602. [Google Scholar] [CrossRef]
  7. Tan, H.; Zhao, Y.; Li, S.; Wang, W.; Zhu, M.; Hong, J.; Yuan, X. Relationship between Social Robot Proactive Behavior and the Human Perception of Anthropomorphic Attributes. Adv. Robot. 2020, 34, 1324–1336. [Google Scholar] [CrossRef]
  8. Samson, K.; Kostyszyn, P. Effects of Cognitive Load on Trusting Behavior—An Experiment Using the Trust Game. PLoS ONE 2015, 10, e0127680. [Google Scholar] [CrossRef] [PubMed]
  9. Satake, S.; Glas, D.F.; Imai, M.; Ishiguro, H.; Hagita, N. How to approach humans?: Strategies for social robots to initiate interaction. In Proceedings of the HRI09: International Conference on Human Robot Interaction, La Jolla, CA, USA, 9–13 March 2009. [Google Scholar] [CrossRef]
  10. Peng, Z.; Kwon, Y.; Lu, J.; Wu, Z.; Ma, X. Design and Evaluation of Service Robot’s Proactivity in Decision-Making Support Process. In Proceedings of the CHI’19: CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, 4–9 May 2019. [Google Scholar] [CrossRef]
  11. Kraus, M.; Wagner, N.; Minker, W. Effects of Proactive Dialogue Strategies on Human-Computer Trust. In Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization, New York, NY, USA, 13 July 2020; Association for Computing Machinery: New York, NY, USA; pp. 107–116.
  12. Grosinger, J.; Pecora, F.; Saffiotti, A. Making Robots Proactive through Equilibrium Maintenance. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16), New York, NY, USA, 9–15 July 2016. [Google Scholar]
  13. Brünken, R.; Steinbacher, S.; Plass, J.L.; Leutner, D. Assessment of Cognitive Load in Multimedia Learning Using Dual-Task Methodology. Exp. Psychol. 2002, 49, 109–119. [Google Scholar] [CrossRef]
  14. Jo, K.; Kim, J.; Kim, D.; Jang, C.; Sunwoo, M. Development of Autonomous Car—Part I: Distributed System Architecture and Development Process. IEEE Trans. Ind. Electron. 2014, 61, 7131–7140. [Google Scholar] [CrossRef]
  15. He, W.; Yan, G.; Xu, L. Developing Vehicular Data Cloud Services in the IoT Environment. IEEE Trans. Ind. Inform. 2014, 10, 1587–1595. [Google Scholar] [CrossRef]
  16. J3016_202104: Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles—SAE International. Available online: https://www.sae.org/standards/content/j3016_202104/ (accessed on 24 February 2023).
  17. Schoettle, B.; Sivak, M. A Survey of Public Opinion about Autonomous and Self-Driving Vehicles in the U.S., the U.K., and Australia; University of Michigan, Transportation Research Institute: Ann Arbor, MI, USA, 2014. [Google Scholar]
  18. Shneiderman, B. Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy. Int. J. Hum. Comput. Interact. 2020, 36, 495–504. [Google Scholar] [CrossRef]
  19. Favarò, F.M.; Nader, N.; Eurich, S.O.; Tripp, M.; Varadaraju, N. Examining Accident Reports Involving Autonomous Vehicles in California. PLoS ONE 2017, 12, e0184952. [Google Scholar] [CrossRef] [PubMed]
  20. Kalra, N.; Paddock, S.M. Driving to Safety: How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability? Transp. Res. Part A Policy Pract. 2016, 94, 182–193. [Google Scholar] [CrossRef]
  21. Kyriakidis, M.; Happee, R.; de Winter, J. Public Opinion on Automated Driving: Results of an International Questionnaire among 5000 Respondents. Transp. Res. Part F Traffic Psychol. Behav. 2015, 32, 127–140. [Google Scholar] [CrossRef]
  22. Schoettle, B.; Sivak, M. Motorists’ Preferences for Different Levels of Vehicle Automation; University of Michigan, Transportation Research Institute: Ann Arbor, MI, USA, 2015. [Google Scholar]
  23. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An Integrative Model Of Organizational Trust. AMR 1995, 20, 709–734. [Google Scholar] [CrossRef]
  24. Hoff, K.A.; Bashir, M. Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust. Hum. Factors 2015, 57, 407–434. [Google Scholar] [CrossRef]
  25. Chi, O.H.; Jia, S.; Li, Y.; Gursoy, D. Developing a Formative Scale to Measure Consumers’ Trust toward Interaction with Artificially Intelligent (AI) Social Robots in Service Delivery. Comput. Hum. Behav. 2021, 118, 106700. [Google Scholar] [CrossRef]
  26. Yuksel, B.F.; Collisson, P.; Czerwinski, M. Brains or Beauty: How to Engender Trust in User-Agent Interactions. ACM Trans. Internet Technol. 2017, 17, 1–20. [Google Scholar] [CrossRef]
  27. Desai, M.; Kaniarasu, P.; Medvedev, M.; Steinfeld, A.; Yanco, H. Impact of Robot Failures and Feedback on Real-Time Trust. In Proceedings of the 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Tokyo, Japan, 3–6 March 2013; pp. 251–258. [Google Scholar]
  28. Parasuraman, R.; Miller, C.A. Trust and Etiquette in High-Criticality Automated Systems. Commun. ACM 2004, 47, 51–55. [Google Scholar] [CrossRef]
  29. Niu, D.; Terken, J.; Eggen, B. Anthropomorphizing Information to Enhance Trust in Autonomous Vehicles. Hum. Factors Ergon. 2018, 28, 352–359. [Google Scholar] [CrossRef]
  30. Young, A.D. Autonomous Morals: Inferences of Mind Predict Acceptance of AI Behavior in Sacrificial Moral Dilemmas. J. Exp. Soc. Psychol. 2019, 7, 103870. [Google Scholar] [CrossRef]
  31. King-Casas, B.; Tomlin, D.; Anen, C.; Camerer, C.F.; Quartz, S.R.; Montague, P.R. Getting to Know You: Reputation and Trust in a Two-Person Economic Exchange. Science 2005, 308, 78–83. [Google Scholar] [CrossRef]
  32. Humans and Intelligent Vehicles: The Hope, the Help, and the Harm. Available online: https://ieeexplore.ieee.org/abstract/document/7467508/ (accessed on 22 February 2023).
  33. Ma, J.; Feng, X. Analysing the Effects of Scenario-Based Explanations on Automated Vehicle HMIs from Objective and Subjective Perspectives. Sustainability 2024, 16, 63. [Google Scholar] [CrossRef]
  34. Omrani, N.; Rivieccio, G.; Fiore, U.; Schiavone, F.; Agreda, S.G. To Trust or Not to Trust? An Assessment of Trust in AI-Based Systems: Concerns, Ethics and Contexts. Technol. Forecast. Soc. Chang. 2022, 181, 121763. [Google Scholar] [CrossRef]
  35. Zhang, J.-D.; Liu, L.A.; Liu, W. Trust and Deception in Negotiation: Culturally Divergent Effects. Manag. Organ. Rev. 2015, 11, 123–144. [Google Scholar] [CrossRef]
  36. De Graaf, M.M.; Allouch, S.B. Exploring Influencing Variables for the Acceptance of Social Robots. Robot. Auton. Syst. 2013, 61, 1476–1486. [Google Scholar] [CrossRef]
  37. Lee, M.K.; Kiesler, S.; Forlizzi, J.; Srinivasa, S.; Rybski, P. Gracefully Mitigating Breakdowns in Robotic Services. In Proceedings of the 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Osaka, Japan, 2–5 March 2010; pp. 203–210. [Google Scholar]
  38. Looije, R.; Neerincx, M.A.; Cnossen, F. Persuasive Robotic Assistant for Health Self-Management of Older Adults: Design and Evaluation of Social Behaviors. Int. J. Hum. Comput. Stud. 2010, 68, 386–397. [Google Scholar] [CrossRef]
  39. Liao, Q.V.; Davis, M.; Geyer, W.; Muller, M.; Shami, N.S. What Can You Do?: Studying Social-Agent Orientation and Agent Proactive Interactions with an Agent for Employees. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems, Brisbane, QLD, Australia, 4 June 2016; ACM: New York, NY, USA; pp. 264–275. [Google Scholar]
  40. Yokoi, R.; Nakayachi, K. The Effect of Value Similarity on Trust in the Automation Systems: A Case Of transportation and Medical Care. Int. J. Hum. Comput. Interact. 2021, 37, 1269–1282. [Google Scholar] [CrossRef]
  41. Goldberg, L.R. The Development of Markers for the Big-Five Factor Structure. Psychol. Assess. 1992, 4, 26–42. [Google Scholar] [CrossRef]
  42. Kraus, M.; Wagner, N.; Callejas, Z.; Minker, W. The Role of Trust in Proactive Conversational Assistants. IEEE Access 2021, 9, 112821–112836. [Google Scholar] [CrossRef]
  43. Parasuraman, R.; Riley, V. Humans and Automation: Use, Misuse, Disuse, Abuse. Hum. Factors 1997, 39, 230–253. [Google Scholar] [CrossRef]
  44. Robinette, P.; Li, W.; Allen, R.; Howard, A.M.; Wagner, A.R. Overtrust of robots in emergency evacuation scenarios. In Proceedings of the HRI 2016—11th ACM/IEEE International Conference on Human Robot Interaction, Christchurch, New Zealand, 7–10 March 2016. [Google Scholar] [CrossRef]
  45. Atoyan, H.; Duquet, J.-R.; Robert, J.-M. Trust in New Decision Aid Systems. In Proceedings of the 18th International Conference on Association Francophone d’Interaction Homme-Machine—IHM’06, Montreal, QC, Canada, 18–21 April 2006; ACM Press: New York, NY, USA; pp. 115–122.
  46. Robinette, P.; Howard, A.M.; Wagner, A.R. Effect of Robot Performance on Human–Robot Trust in Time-Critical Situations. IEEE Trans. Hum. Mach. Syst. 2017, 47, 425–436. [Google Scholar] [CrossRef]
  47. Jian, J.-Y.; Bisantz, A.M.; Drury, C.G. Foundations for an Empirically Determined Scale of Trust in Automated Systems. Int. J. Cogn. Ergon. 2000, 4, 53–71. [Google Scholar] [CrossRef]
  48. Choi, J.K.; Ji, Y.G. Investigating the Importance of Trust on Adopting an Autonomous Vehicle. Int. J. Hum. Comput. Interact. 2015, 31, 692–702. [Google Scholar] [CrossRef]
  49. Petersen, L.; Robert, L.; Yang, X.J.; Tilbury, D.M. Situational Awareness, Driver’s Trust in Automated Driving Systems and Secondary Task Performance. arXiv 2019, arXiv:1903.05251. [Google Scholar]
  50. Sarkar, S.; Araiza-Illan, D.; Eder, K. Effects of Faults, Experience, and Personality on Trust in a Robot Co-Worker. arXiv 2017, arXiv:1703.02335. [Google Scholar]
Figure 1. PI levels in AVs.
Figure 2. Schematic diagram of the experimental scenario.
Figure 3. Driver’s view in the VR environment: (a) task group; (b) non-task group.
Figure 4. Apparatus of the experiment: (a) task group; (b) non-task group; (c) experimental setup.
Figure 5. Processes of the experiment.
Figure 6. Differences in perceived trust under different task conditions.
Figure 7. Differences in trust in AV systems between genders.
Figure 8. Differences in trust among individuals with varying levels of extroversion.
Table 1. Trust levels under different task conditions. Mean values (M) and standard deviations (SD) are presented.

Types of Interaction | Task M (SD) | Non-Task M (SD) | t | p
Pretest | 4.640 (0.728) | 4.722 (0.886) | −0.082 | 0.078
Low PI | 5.313 (0.824) | 4.790 (1.090) | 0.522 | 0.885
Medium PI | 5.021 (0.987) | 5.130 (0.941) | −0.109 | 0.678
High PI | 5.405 (0.954) | 5.438 (0.744) | −0.033 | 0.050
F | 15.549 | 12.507 | |
p | <0.01 | <0.01 | |
Partial η2 | 0.651 | 0.610 | |
Table 2. Comparison of perceived trust in PI between genders.

Group | Types of Interaction | Male M (SD) | Female M (SD) | t | p | Effect Size (r)
Task | Pretest | 4.618 (0.612) | 4.656 (0.824) | −0.135 | 0.894 | 0.026
Task | Low PI | 4.979 (0.717) | 5.563 (0.830) | −1.948 | 0.062 | 0.356
Task | Medium PI | 4.451 (0.938) | 5.448 (0.808) | −3.015 | 0.006 | 0.509
Task | High PI | 5.111 (0.856) | 5.625 (0.991) | −1.438 | 0.162 | 0.271
Non-task | Pretest | 4.326 (0.934) | 4.995 (0.766) | −2.041 | 0.052 | 0.378
Non-task | Low PI | 4.439 (1.184) | 5.031 (0.986) | −1.412 | 0.170 | 0.271
Non-task | Medium PI | 5.098 (1.094) | 5.151 (0.858) | −0.140 | 0.890 | 0.028
Non-task | High PI | 5.371 (0.670) | 5.484 (0.810) | −0.382 | 0.706 | 0.076
Table 3. Comparison of trust in PI among participants with different levels of extroversion.

Types of Interaction | Extraversion 3 M (SD) | Extraversion 4 M (SD) | Extraversion 5 M (SD) | F | p | η2 | LSD Post Hoc Analysis
Pretest | 4.625 (0.369) | 5.133 (0.286) | 4.238 (0.342) | 1.522 | 0.239 | 0.186 | N/A
Low PI | 4.681 (0.480) | 5.092 (0.372) | 4.750 (0.445) | 0.260 | 0.853 | 0.038 | N/A
Medium PI | 5.042 (0.407) | 5.233 (0.315) | 5.143 (0.377) | 0.106 | 0.956 | 0.016 | N/A
High PI | 5.000 (0.294) | 5.808 (0.228) | 4.810 (0.272) | 3.091 | 0.050 | 0.317 | 3 vs. 4: p = 0.042; 4 vs. 5: p = 0.011