
Driving Style: How Should an Automated Vehicle Behave?

WMG, University of Warwick, Coventry CV4 7AL, UK
Jaguar Land Rover, Coventry CV4 7AL, UK
Author to whom correspondence should be addressed.
Information 2019, 10(6), 219;
Submission received: 3 May 2019 / Revised: 17 June 2019 / Accepted: 19 June 2019 / Published: 25 June 2019
(This article belongs to the Special Issue Automotive User Interfaces and Interactions in Automated Driving)


This article reports on a study investigating how the driving behaviour of autonomous vehicles influences trust and acceptance. Two different designs were presented to two groups of participants (n = 22/21), using actual autonomously driving vehicles. The first was a vehicle programmed to drive similarly to a human, “peeking” when approaching road junctions as if it was looking before proceeding. The second design had a vehicle programmed to convey the impression that it was communicating with other vehicles and infrastructure and “knew” if the junction was clear, so it could proceed without ever stopping or slowing down. Results showed non-significant differences in trust between the two vehicle behaviours. However, trust scores increased significantly for both designs as the trials progressed. Post-interaction interviews indicated that there were pros and cons for both driving styles, and participants suggested which aspects of the driving styles could be improved. This paper presents user-informed recommendations for the design and programming of driving systems for autonomous vehicles, with the aim of improving their users’ trust and acceptance.

1. Introduction

Technological developments in driving systems are making it possible for automated vehicles (AVs) to become a reality in the near future. We use here the definition from [1], where AVs refer to vehicles equipped with any driving automation system capable of performing dynamic driving tasks on a sustained basis, and which are often labelled as driverless, self-driving or autonomous vehicles. Manufacturers, tech companies and research centres are investing heavily in AVs and associated technologies, with early deployments and trials already happening across the globe, e.g., [2,3]. These vehicles have the potential to revolutionise transportation through increased mobility and safety, and reduced congestion, emissions and travel costs [4,5,6]. The extent of the potential benefits brought about by AVs will depend significantly on people’s adoption of these technologies, and thus on their trust and acceptance of such vehicles.
A substantial body of literature focuses on trust in automation, user expectations and the development of adequate trust in systems [7]. Particular interest is placed on trust in autonomous vehicles, given the number of factors influencing trust and its dimensions, such as overtrust, undertrust and mistrust [8,9], and the risks associated with inadequate trust levels [10]. Previous research shows that system transparency and ease of use help build trust in automation [11]. More recently, studies have focused on applying trust calibration to AVs to make sure drivers understand the limitations of the system [12] and do not overtrust the technology [9]. Having the correct level of trust is especially important for SAE level 1–3 assisted driving [1], when the vehicle may need to hand control back to drivers during parts of the journey [13,14,15]. For level 4–5 AVs [1], however, when vehicles can handle all traffic situations, there will be no handover process [16]. Nevertheless, user trust will be essential to guarantee acceptance and adoption.
Trust in technology tends to increase with repeated interactions, as users become more familiar with the systems in question [11,17,18]. However, there are still reservations [19] and modest long-term projections for the adoption of AVs [20]. Initial user experiences influence levels of trust and acceptance of technology [21], making it imperative that interactions are positive the first time around. In addition, well-designed and aesthetically pleasing interfaces tend to be perceived as more trustworthy [22]. As with any new technology, it is necessary to obtain a deep understanding of the reasons why people trust it or not, in order to redesign and adapt AVs and improve their chances of acceptance and “domestication” [23].
AVs can provide a “first mile/last mile” transportation solution and be available on demand [24]. These vehicles have the potential to facilitate access to and from transport hubs and to traverse semi-pedestrianised areas such as city centres [25]. The expected benefits of AVs, such as less traffic, fewer emissions and lower costs, may require that users share vehicles instead of owning them [26] and that passengers use ride-sharing schemes [27]. Studies have been experimenting with scheduling and dispatching services to optimise the efficiency of these pods [28]. These vehicles have been used in recent research projects investigating trust in automation, for example, to assess usefulness, ease of use and intention to use on-demand transportation [29,30].
Traffic efficiency and reduced congestion can be obtained through the implementation of technological features such as communication from one vehicle to another (V2V) and between vehicles and roadside infrastructure (V2I) [31]. AVs can implement collaborative “perception” or data sharing about hazards or obstacles [32,33]. There is also the potential for AVs to safely drive more closely to each other than human-driven cars. With collaborative perception, AVs can negotiate lanes or junctions faster and more efficiently [34]. Platooning is also a possible feature: if vehicles drive in a fleet, it can save costs and enable smoother traffic flow [35]. Another expected capability of AVs is for them to ‘see around corners’ through advanced sensing technologies [36] so they can drive more assertively even when a human driver would not be able to directly see the environment.
This increasing complexity of the systems controlling AVs poses interesting challenges for information sharing and processing. Occupants of AVs may have difficulty making sense of how their control systems work [37] and may therefore form incorrect mental models, defined as representations of the world held in the mind [38,39]. The way a vehicle behaves and the reasons behind its actions have the potential to affect trust and acceptance. Although studies comparing human vs. machine driving exist, both in simulators [40] and in the real world [30], they have not compared the driving styles of two AVs controlled by different systems. The development of this research was motivated by the need to understand how people feel when being driven by these complex systems.

2. Literature Review

2.1. AVs vs. Pedestrians

A number of studies have investigated the communications between AVs and vulnerable road users (e.g., pedestrians and cyclists) to better understand preferred messages and the most effective methods of delivery [41,42]. Böckle et al. [43] used a VR environment with vehicles driving past a pedestrian crossing and evaluated the impact of a vehicle’s external lights on the user experience. A similar study simulated AVs with ‘eyes’ on the headlights that give the impression that the AV can see pedestrians and indicate intention to stop [44]. One extensive study of users of short-distance AVs focused on how the vehicle should communicate its intentions to pedestrians and cyclists via external human-machine interaction [45]. A parallel study evaluated user expectations about the behaviour and reactions of AVs and what information from vehicles is needed [46]. Vulnerable road users prefer vehicles to drive slowly and at a distance [47], and want to have priority over AVs in shared public spaces [41]. Another recent example tested projections on the floor to improve communication from the vehicle during ambiguous traffic situations [48].
External communication tools on the vehicle can minimise the likelihood of conflict when vehicles and pedestrians share the same environment. However, Dey and Terken [49] observed hundreds of interactions between pedestrians and regular vehicles and established that explicit communication is seldom used. Pedestrians tend to rely more on the motion patterns and behaviours of vehicles to make decisions during traffic negotiations. One lab study presented a vehicle with different rates of acceleration, deceleration and stopping distances, and concluded that AVs should present obvious expressions to be clear about their intent in traffic [50]. Mahadevan et al. [51] suggest that an AV’s movement patterns are key for safe and effective interaction with pedestrians, and that this information could be reinforced by other explicit communication cues.

2.2. Human Driver vs. Human Driver

The study of how people perceive the behaviour of other drivers and other vehicles is important to guide the programming of automated driving systems. Studies have evaluated how human drivers interact among themselves on the road and how they indicate intentions when negotiating complex traffic situations. Drivers typically make sense of the evolution of each traffic scene by observing and interpreting the behaviours of other vehicles and consider vehicles as whole entities, or ‘animate human-vehicles’ [52]. Another example of previous research asked participants to drive to intersections and assessed how they negotiated complex scenarios with other vehicles. Drivers preferred somebody else to be proactive and felt “more confident if they do not have to be the first driver to cross the intersection” [53].

2.3. AV vs. Human-Driven Vehicles

Studies have also been conducted in what can be considered a “transition period” consisting of mixed cooperative traffic situations between AVs and human-driven vehicles. Drivers were asked to evaluate AVs’ behaviours in diverse traffic situations such as lane changes, with this information used to inform the design of driving systems with higher chances of acceptance [54,55]. In a recent example, researchers watched several publicly available videos of AVs to evaluate the interactions on the road, how cars communicate through their movements, and how other people interpret this [56]. They suggested that movements performed by AVs should be clear and easy to understand by occupants of the vehicle and other vehicles, and not just part of the mechanical means of travelling towards a destination.

2.4. AV vs. AV

The communication between two or more vehicles is a subject of growing interest, given its applications for automated driving. A seminal model of traffic lane changes suggests that a ‘forced’ behaviour can result in shorter travel times than a more cooperative negotiation of the lane change [57]. One study simulated cooperative vehicle systems at road intersections and evaluated diverse scenarios, for example involving emergency vehicles [58], concluding that a digital decision-making system could improve safety at junctions. Furthermore, V2V/V2I may imply that no visible communication between vehicles is needed at all, as all traffic negotiations could be pre-arranged [59].

2.5. AV’s Driving Style

Early studies that set out to develop driving styles for AVs include attempts to define the behaviours that would feel natural in a driving simulator [60]. Automated driving styles have gathered attention in recent years [61], generally focusing on occupants’ comfort [62]. Occupants of AVs feel these vehicles need to control steering and speed precisely to generate a smooth ride, similarly to how humans drive [63]. Rapid changes in acceleration or direction can compromise comfort and cause motion sickness [62], which may impact driver performance, an especially important concern in handover situations [64]. However, a recent user study shows that the preferred AV driving style may not correspond to the way humans drive, particularly regarding deceleration: when the simulator vehicle decelerated most in the first part of the manoeuvre, as human drivers do, users tended to feel uncomfortable [65].

2.6. Anthropomorphism

Human-robot interactions are generally preferred if the machines present human-like features or behaviours [66]. Robots can display these behaviours through motion or gaze, with some arrangements being perceived by humans as being more natural and competent than others [67]. One literature review indicates that trust is most influenced by characteristics of the robot such as anthropomorphism, transparency, politeness, and ease of use [11]. Previous research has examined how driving agents could increase trust with more human-like appearance and behaviour, and be interpreted intuitively by the driver [68,69]. Some studies have been trying to make AVs better at reproducing human-like driving styles to increase safety when interacting with human-driven vehicles [70]. Anthropomorphism has been shown to evoke feelings of social presence and allow AVs to be perceived as safer, more intelligent and trustworthy [71]. These examples are in line with the more overarching issues of human-robot interactions. One extensive literature review analysed studies of the behaviour of robots as they interacted with humans [72], concluding that humans need to receive intuitive and effective signals from robots, and that robots should act as intelligent and thoughtful entities during interactions.

3. Aims

Emerging driverless technologies can make transportation safer and more efficient, but there are concerns from pedestrians and other drivers, and questions about how these vehicles will interact with each other. The systems governing AVs need to be programmed to behave in specific ways to be trusted and accepted. For example, AVs can adopt a driving style similar to humans, relying on the fact that people tend to trust agents that look or behave similarly to humans [68,69]. Conversely, they can be more assertive, making use of V2V and V2I communication [31]. Human-like robots may be seen as less efficient in negotiating junctions; assertive robots may be perceived as unsafe or unnatural. It is necessary to increase the safety and efficiency of traffic via the use of AVs, but at the same time improve trust and guarantee acceptance. However, studies testing these styles using real automated driving vehicles were not found in the literature. Therefore, we formulated the following research questions:
RQ1: How would two different driving styles affect trust for the occupants of AVs?
RQ2: How should an AV drive, and how can the acceptance of this driving behaviour be improved?
This study was designed to answer these questions by testing different vehicle behaviours and evaluating user feedback using actual automated driving vehicles. The aim was to assess passengers’ levels of trust and acceptance of different driving styles to understand preferences. Surveys and interviews were conducted to obtain impressions and opinions from participants after they were driven by SAE level 4 AVs [1] which used two types of driving behaviours. We hypothesise that (H1) the manoeuvres from a human-like style would be preferred, and that (H2) familiar driving behaviour characteristics should be added to the control systems governing AVs.

4. Methods

This experiment was performed in the Urban Development Lab in Coventry, UK, a large warehouse designed to resemble a pedestrianised area in a town centre. It has 2-metre tall partitions dividing the internal space into junctions and corners where small vehicles can drive autonomously (Figure 1). Participants (N = 43) were invited to be passengers in SAE level 4 [1] AVs (i.e., vehicles capable of handling all driving functions under certain circumstances). There are no pedals or steering wheel in the test vehicles, and the occupant has no control beyond an emergency-stop button. The vehicles drove in highly automated mode within a defined area with no safety driver inside the vehicle, but they were remotely supervised.
We used a mixed experimental design with repeated measures for within- and between-groups comparisons. The intention was to obtain a sample size of 36, to achieve reasonable statistical power and a strong dataset for qualitative analysis [73]. As is customary in user research, we sent some extra invites to account for participant no-shows [74]. The turnout was surprisingly good, and we scheduled one extra day for data collection, hence ending up with 43 participants. These were randomly assigned to two groups: the first 22 participants experienced the pod using “human-like” driving behaviour, while the other 21 participants rode the pods configured with the “machine-like” driving. Participants were not briefed about the behaviour of the pods, to avoid possible bias.
Participants were recruited via internal emails sent to employees of a large car manufacturer based in the UK, targeting mainly personnel in administrative roles; we intended to avoid those whose main jobs involved engineering or vehicle design. Of the 43 participants, seven were female, and ages ranged from 22 to 60 (M = 37). Two of the participants did not complete the final survey due to technical mishaps and were therefore removed from the quantitative dataset. No incentives were given to participants.
The scenarios were scripted to give participants the impression that the vehicles were making decisions and interacting in real time. Although the pod is a highly automated SAE level 4 vehicle [1], the routes used were pre-defined to ensure experimental control: the pods followed a pre-determined path and displayed specific behaviours that conveyed real-time interaction. The experiment lasted from 45 minutes up to one hour per participant, and participants were fully debriefed at the end of the trial.

4.1. Vehicle Programming

We programmed two vehicles to drive in this environment simultaneously, four times for approximately four minutes each time, with one participant in each vehicle. There were six crucial moments where the vehicles interacted with each other by “negotiating” manoeuvres at T-junctions (Figure 2). The layout of the partitions meant that it was not always possible for participants inside the vehicles to see whether the other vehicle was also approaching a junction or corner. For example, when driving towards a junction from the internal road, the pod had to stop and let the vehicle on the outer perimeter road pass first (Figure 3). The behaviour at the junction could be of two types, described below.

4.1.1. Human-Like Behaviour

For the human-like driving, both pods in the arena displayed the same behaviour: they reduced speed and “crept” out of junctions as if “looking if it was safe to proceed”. In this condition, the pods slowed and edged out into junctions as if being cautious, or unsure whether the other pod was present or a potential hazard, as a human driver might do. This “creeping” manoeuvre could also be interpreted as slowly “unmasking” the pod’s sensors around a physical obstacle. If the other vehicle was approaching, the pod would stop to give way before exiting.

4.1.2. Machine-Like Behaviour

For the machine-like driving condition, the behaviour of the vehicles was designed to convey the impression that the pods’ control systems already “knew” where the other vehicle was at all times and were thus communicating and negotiating the junction beforehand. For half of the interactions, the pod stopped at junctions and waited for the other vehicle (which was not yet visible) to pass. For the other interactions, the pod would unhesitatingly manoeuvre through the junction since it already “knew” in advance that the other pod could not be a hazard or obstacle.
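The two conditions can be summarised as simple decision rules for a pod approaching a junction. The sketch below is purely illustrative: all function names, speeds and action strings are hypothetical, and the study’s pods in fact followed pre-programmed routes rather than running on-board logic like this.

```python
# Illustrative sketch of the two junction-approach policies described above.
# Speeds and action descriptions are hypothetical, not the pods' actual values.

CRUISE_SPEED = 2.0   # m/s, assumed cruising speed
CREEP_SPEED = 0.5    # m/s, slow "peeking" speed for the human-like style

def human_like_junction(other_pod_visible: bool) -> list[str]:
    """Slow down, creep out as if 'looking', and give way if the other pod appears."""
    actions = [f"slow to {CREEP_SPEED} m/s", "creep forward to unmask sensors"]
    if other_pod_visible:
        actions.append("stop and give way")
    actions.append(f"proceed at {CRUISE_SPEED} m/s")
    return actions

def machine_like_junction(other_pod_has_priority: bool) -> list[str]:
    """Act on pre-arranged (V2V-style) knowledge: either wait or go without slowing."""
    if other_pod_has_priority:
        return ["stop at junction",
                "wait for (not yet visible) pod to pass",
                f"proceed at {CRUISE_SPEED} m/s"]
    return [f"proceed through junction at {CRUISE_SPEED} m/s without slowing"]
```

Note the key contrast: the human-like policy always slows and reacts to what its “sensors” reveal, whereas the machine-like policy commits in advance based on information the occupant cannot see.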

4.2. Activities

Participants were in the vehicle alone, with no specific task to perform. They had a radio to communicate with the research team should they need it. Two volunteers took part in the study at the same time, one in each vehicle. After each of the four journeys, participants exited the vehicle to complete surveys indicating their trust in the vehicle. Both participants were escorted to a waiting room to fill in these surveys electronically on tablets while another two participants rode in the pods.

4.2.1. Trust

The main instrument used to evaluate trust was the Scale of Trust in Automated Systems [75]. The questionnaire contains 12 items assessing concepts such as security, dependability, reliability and familiarity. Participants rated statements such as “The autonomous pod is reliable” or “I am suspicious of the autonomous pod’s intent, action or outputs” on a 7-point scale. Seven items measure trust, and the remaining five assess distrust in the technology. Distrust responses are reverse-scored and combined with the trust responses to produce the overall trust score, as instructed by [75]. Results from the surveys were statistically analysed using SPSS 24.
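As a concrete illustration of this scoring procedure, the sketch below reverse-scores the five distrust items on the 7-point scale and combines them with the seven trust items into a single mean score. The item ordering (distrust items first) is an assumption for illustration, not necessarily the layout used in the study.

```python
# Sketch of scoring a 12-item trust-in-automation questionnaire as described
# above: 5 distrust items are reverse-scored (1<->7) and combined with the
# 7 trust items. The item indices are assumptions for illustration.

DISTRUST_ITEMS = range(0, 5)   # assume the first five items measure distrust
TRUST_ITEMS = range(5, 12)     # assume the remaining seven measure trust

def overall_trust(responses: list[int]) -> float:
    """Return one participant's mean overall trust score from 12 responses (1-7)."""
    assert len(responses) == 12 and all(1 <= r <= 7 for r in responses)
    reversed_distrust = [8 - responses[i] for i in DISTRUST_ITEMS]  # 7->1, 1->7
    trust = [responses[i] for i in TRUST_ITEMS]
    return sum(reversed_distrust + trust) / 12

# e.g. a participant answering 1 on every distrust item and 7 on every
# trust item scores the maximum:
score = overall_trust([1] * 5 + [7] * 7)   # -> 7.0
```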

4.2.2. Acceptance

To obtain qualitative data and assess user acceptance, we asked participants a few questions during a brief semi-structured interview in which they could describe the experience. We were particularly interested in whether they noticed the behaviour of the pod approaching corners when the other pod passed in front, and how the pods negotiated the junctions. We also asked whether participants could explain why the vehicle behaved the way it did. The interviews were transcribed and imported into the QSR International NVivo software to be coded into nodes, the units of information based on participants’ statements [76]. Nodes were then grouped into categories, integrated and correlated to indicate relationships and develop conclusions [77].

5. Results

5.1. Quantitative Data

We conducted a 4×2 repeated-measures ANOVA on the responses to the Scale of Trust in Automated Systems [75], using the trust scores for each of the four trips as the dependent variables and the two driving styles as a between-groups factor. We ran the Kolmogorov-Smirnov test for normality on these trust variables; there were no significant deviations, so the data were treated as normally distributed. There were no group differences between the human-like and machine-like driving styles (F(1,39) = 1.711, p = 0.20), with low observed power and effect size (0.248 and 0.042, respectively). There was a main effect of trust scores across trips, F(3,117) = 25.403, p < 0.0001, partial-eta² = 0.394, where trust increased across the four journeys irrespective of driving style (Figure 4), with the standard deviations shown in Table 1. As there were no group differences, we used paired t-tests to identify the post-hoc differences in trust scores across the four trips. Only the comparison between the 1st and 2nd trust scores was non-significant (t(40) = -1.980, p = 0.055) (Table 2), with all other paired comparisons showing significant differences.
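The post-hoc comparisons rely on the standard paired-samples t statistic: the mean of the per-participant differences divided by its standard error. A minimal stdlib sketch, using illustrative data rather than the study’s actual scores:

```python
# Minimal sketch of the paired-samples t-test used for the post-hoc
# comparisons of trust scores between trips. The data below are
# illustrative only, not the study's scores.
from math import sqrt
from statistics import mean, stdev

def paired_t(before: list[float], after: list[float]) -> float:
    """t statistic for paired samples: mean difference over its standard error."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# e.g. trust scores for five (hypothetical) participants on trips 1 and 2:
trip1 = [4.2, 3.8, 5.0, 4.5, 4.0]
trip2 = [4.8, 4.1, 5.4, 4.9, 4.6]
t = paired_t(trip1, trip2)   # positive t: trust rose from trip 1 to trip 2
```

In practice the p-value for t with n − 1 degrees of freedom would come from a statistics package (the authors used SPSS 24), but the statistic itself is this simple ratio.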
Separate analyses of the two sub-factor constructs, trust and distrust, showed slightly different trends. Although there were no interaction effects, Distrust appears stable over the first two runs, falls towards the third journey in the vehicle, and returns to a steady score by the final run (Figure 5). Post-hoc differences were non-significant only between journeys 1 and 2, and 3 and 4 (Table 3). Conversely, the Trust subset rises steadily across all runs, as can be seen in Figure 6; differences here were significant between all journeys (Table 4).

5.2. Qualitative Data

The qualitative analysis of the interviews produced 85 nodes across three main themes: the reassuring human driver, the assertive machine, and the incomplete mental model presented by participants. Opinions were divided on which of the two designs was the optimal way for the vehicle to behave. Each participant experienced only one of the driving styles, but we received, from both groups, arguments in favour of aspects of both the human-like behaviour and the machine-like driving style.

5.2.1. Reassuring Human

In the human-like driving style condition, the vehicles’ behaviour was designed to make it appear that they were “looking”, or using sensors of some kind, before proceeding through the T-junctions. The vehicles reduced speed every time, and stopped to give way if the other vehicle was approaching. When asked to describe the vehicle’s behaviour, participant 11 [P11] declared: “I assume it’s got to get, the cameras need to be out to see where it was going to, just like us, it can’t see around corners, it noses out a little bit so it can actually see what it’s doing”. P15 complemented this: “it knows it has to give way at that point and just check if anything is coming”.
P16 felt comfortable with the driving behaviour presented by the vehicle, saying that it was “probably trying to inspire confidence in the passenger, I’m guessing, in terms of like the way it behaved, kind of quite similar to a human, it’s only ever going to inspire confidence I think it’s because that’s what we’re used to”. After being debriefed about the study and the possibility of the pod reducing speed and ‘looking’ at junctions, P32 added concerns about vulnerable road users, such as “pedestrians or cyclists that could have been there that don’t communicate with the pod. That may be a safer way of doing it rather than flying around the corner”.

5.2.2. Assertive Machine

For the machine-like condition, the design was intended to convey that the vehicles were communicating with each other. The vehicles would manoeuvre through junctions without reducing speed, and stop only if another vehicle was approaching. This design was perceived correctly by some participants, as P28 explains: “it stopped at a junction, because I assume it knew that something was coming, as opposed to it reacting to seeing something coming”.
However, there was also the feeling that the traffic needed a more efficient approach, and that the vehicle could have been more assertive. P40 said that “sometimes I didn’t expect it to stop, because I thought the other pod was a bit further away but then it did, so I guess it’s cautious… if I was driving I’d probably have gone”.
Interestingly, P19, who tested the human-like version, commented that a machine driving like a human and trying to look around the corners seemed unnatural: “I think it was a bit unexpected because my expectation with the pods is that that there would be some unnaturalism to it rather than a human driver”. P21 complemented with their wish for the pod to be more assertive: “If I was in an autonomous pod with sensors giving a 360-degree view at all times, I’d expect the vehicle to instantaneously know whether it was safe or not, and not need to edge out”.
One common complaint was that the vehicles were performing sharp turns, due to the way we purposefully designed the driving behaviour. This feeling was present in both conditions, but more noticeable with the machine-like driving style condition. The relationship between speed and sharp steering caused a few negative reactions from participants: “what you’d expect from a driver is a bit of a gradual turn” [P34] and “there were moments where it was accelerating around corners, I think it catches you unaware” [P41].

5.2.3. Incomplete Mental Model

Unfamiliarity with automated vehicles and their capabilities led participants to be unsure about their driving style and the reasons behind their behaviours. For some participants, it was not very clear how the vehicles navigated the environment, or why they behaved as they did. Some participants seemed to be unaware of the possibility of vehicle-to-vehicle communication. For example, P22 declared that “the [other] car hasn’t even appeared but my car had already stopped in advance, and there wasn’t a light or anything, just stopping because they knew that a car or something would pass in front of it, but in a way that it was impossible for it to have detected”. Likewise, P04 was unsure about the reasons behind the vehicle behaviours: “I just assume it was a radar in front, it’s not obvious what is making the mechanism workings, the inside, it’s just how I understand how it works. I just consider the lights when the other is on the way, they may interact like a car, it’s not completely obvious how they move”. P43, who tested the machine-like driving, was also unsure about why the vehicle behaved as it did, and commented that they felt uncomfortable when it took the corner without ‘looking’, which seemed unsafe:
Normally, when you drive, you stop at the junction and check if there’s another car coming or another driver and then will go, but here it didn’t stop, it just went. I did, in my mind, I knew there wasn’t anything coming, but if it would be the real, in real life, I would be a bit cautious, I’d be feeling a bit, ‘why it didn’t stop?’ it was ok at this time, but I wouldn’t feel safe, because it may be other vehicles coming.
These uncertainties, together with limitations with the design of the journeys, led to some participants correctly suspecting that the vehicles were pre-programmed to follow a specific route. Ten participants mentioned their suspicions during the interviews. P27 illustrates: “So without knowing how it all linked together and how it is integrated I assume that there is a preconceived path that the pod has to follow, and if that’s the case then one pod is always going to know where the other pod is”.
After a discussion about vehicle-to-vehicle communication, P26 questioned: “how would that work for other cars? I don’t know, for pods that works, for other cars you can’t expect that everyone’s going to have, immediately have cars that all communicate between each other, overnight”. P31 added the concept of familiarity and domestication of the technology, which may eventually happen: “when people get used to it, when people grow up with it, I don’t think it will be a consideration anymore. I think it will be assumed, that’s it, and it does that”.

6. Discussion

This study demonstrated that there were no statistically significant differences in reported trust scores between the two driving style conditions, as measured by the trust questionnaire [75]. This result was corroborated by the qualitative analysis of the interview responses. Participants’ opinions were divided between the two driving styles, and they could list advantages and disadvantages of both without a strong preference for either. Therefore, our first research question could not be answered conclusively, and the first hypothesis was not supported: the manoeuvres from one driving system were not necessarily preferred over the other by our participants.
We showed that trust increased with time for both driving styles, being higher on the final run once users built familiarity with the system. Previous research also indicated that trust evolves and stabilizes over time [78]. There is evidence that trust can be learned, as users evaluate their current and past experiences about the system’s performance [11]. Especially if interactions are positive, users can learn to trust technology [21]. Our result probably reflects the growing familiarity with the technology as it proved itself safe.
Overall trust and the sub-factors of trust and distrust in the machine-like driving style increased steadily throughout the four journeys in the vehicle. The human-like behaviour, however, showed a steep change in these scores between runs 2 and 3. Although the reasons are not completely clear, we suggest that the behaviour of the vehicle could have been perceived as awkward by participants, who therefore took a couple of runs to figure out this driving style and become accustomed to the vehicle, and only then increased their trust in the technology.
Current traffic situations are challenging for vehicles with more assertive behaviour, as our participants pointed out. They involve interactions with diverse agents and environmental features that are not in direct communication with the vehicle’s system [79]. Users also acknowledged that future generations may be more comfortable with AVs and their features, as they learn to live with the new technology.
If the benefits of automated driving are to be obtained, e.g., less traffic congestion and improved efficiency, these vehicles should incorporate the capabilities brought about by the technology, such as platooning and collaborative perception. Vehicles will probably be able to communicate with each other, share knowledge of hazards and obstacles, drive in platoons, slot in between each other at junctions, and make decisions based on information beyond the occupants’ field of view. Driving efficiently may sometimes involve performing manoeuvres that can be considered misconduct or may make people uneasy [50]. Users may be unsure of the reasons behind vehicle behaviours, and assertive driving may seem unsafe. However, some level of rule breaking is acceptable and even expected, for example, when a vehicle has to cross the road’s dividing line to overtake a stopped car [80].
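As a purely illustrative sketch (not the control logic used in this study), the kind of cooperative junction decision described above could be expressed as an occupancy-window check over V2V broadcasts; all message fields, names and thresholds here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class V2VMessage:
    """Hypothetical broadcast from another vehicle approaching the junction."""
    vehicle_id: str
    eta_at_junction: float   # seconds until the sender reaches the junction
    clearance_time: float    # seconds the sender needs to clear it

def can_proceed_without_stopping(own_eta: float, own_clearance: float,
                                 broadcasts: list[V2VMessage],
                                 margin: float = 1.0) -> bool:
    """Proceed at speed only if our occupancy window at the junction does
    not overlap any other vehicle's window (plus a safety margin)."""
    own_window = (own_eta - margin, own_eta + own_clearance + margin)
    for msg in broadcasts:
        other_window = (msg.eta_at_junction,
                        msg.eta_at_junction + msg.clearance_time)
        # Overlapping intervals indicate a potential conflict, so yield.
        if own_window[0] < other_window[1] and other_window[0] < own_window[1]:
            return False
    return True
```

With these assumed values, a pod 3 s from the junction would proceed without slowing if the only other pod is 10 s away, but would yield if the other pod is 3.5 s away.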

6.1. Suggestions

To answer our second research question, we provide here an indication of how an AV should drive, and some recommendations to improve the acceptance of AVs’ driving behaviours. Early versions of the control systems governing AVs could start off more conservative, behaving similarly to humans and “leveraging the instinctive human ability to react to dangerous situations” [81]. After repeated exposure, once users have become familiar with AVs, their behaviour could become more assertive, progressing to a machine-like driving style. The speed of this transition could be left to the occupants to define, thereby gradually increasing comfort and acceptance. These suggestions indicate that our second hypothesis is only partially supported. We had hypothesised that AVs should present a familiar driving behaviour. However, a longitudinal perspective on trust and acceptance implies that a familiar, human-like driving style could be gradually replaced by a more machine-like behaviour.
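One way to realise such an occupant-controlled transition is a single “assertiveness” parameter that blends a conservative, human-like parameter set with an assertive, machine-like one; the parameter names and values below are invented purely for illustration:

```python
def blended_style(assertiveness: float) -> dict:
    """Linearly interpolate between a conservative 'human-like' parameter
    set and an assertive 'machine-like' one. The assertiveness value in
    [0, 1] could be raised gradually with exposure, or set by the occupant.
    All parameter names and values are illustrative assumptions only."""
    human_like   = {"junction_approach_mps": 2.0,  # slow, cautious approach
                    "stop_at_junction_s": 3.0,     # pause and 'peek'
                    "peek_manoeuvre": 1.0}         # perform the peek (on)
    machine_like = {"junction_approach_mps": 5.0,  # proceed at speed
                    "stop_at_junction_s": 0.0,     # no stop when clear
                    "peek_manoeuvre": 0.0}         # no peek (off)
    a = min(max(assertiveness, 0.0), 1.0)  # clamp to the valid range
    return {k: (1 - a) * human_like[k] + a * machine_like[k]
            for k in human_like}
```

For example, `blended_style(0.0)` reproduces the human-like behaviour, `blended_style(1.0)` the machine-like one, and intermediate values give a gradual progression.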
We also suggest that occupants should form a mental model of how AVs work beforehand, and progressively come to understand the more advanced features, given that an appropriate level of trust tends to start forming long before the actual user interaction [82]. Users should also be made aware of the details of the systems and the reasons behind vehicles’ behaviours, as this can increase their situation awareness [83]. Users would also benefit from knowing that the AV is sharing overarching knowledge that governs traffic for the common good. Rather than waiting for users to learn how the control systems work over time, they should be given a comprehensive mental model of the decision-making processes embedded in the vehicles in order to build trust [37].
People tend to produce highly varied mental models of AVs (not always correct or comprehensive), and gradually add concepts and links as they experience journeys [84]. It is possible to design procedures that encourage appropriate trust, for example, communicating the system’s capabilities and limitations to users beforehand [12]. An industry-wide effort to communicate the capabilities of vehicles may be needed. Occupants of AVs could be shown that vehicles have ‘seen’ possible hazards and the related system performance [85], and also be reassured when a vehicle is communicating with another vehicle or with infrastructure. By doing so, users would be more likely to accept what could otherwise be deemed a risky driving manoeuvre.
AVs’ communication capabilities could be displayed on internal [86] and external [46] interfaces available to users, as this interaction improves the understanding of the vehicle over time [87]. However, the design of the information delivered to the occupants of AVs should take the associated workload into consideration, as too much information can have a negative effect, making users anxious [88]. Pre-training could be used to improve the understanding of the advanced features of AVs, for example, via interactive tutorials [89].

6.2. Limitations and Future Work

The design of this study presented numerous challenges, which resulted in some limitations, described here. Firstly, the process of defining the laps required a meticulous and time-consuming design of the journeys through the arena. This was coupled with the challenge of coordinating the behaviour of one vehicle with the other, with the paths to be followed and the timings of each start and stop in perfect sync. Minor deviations from the expected ideal driving behaviour have been shown to lead to AVs being perceived as awkward [56], hence our participants’ complaints about sharp, unnatural turns and acceleration profiles, and long stop times.
The recruitment of participants and the experiments conducted for each condition happened in two phases, one after the other, approximately one week apart. This lack of randomisation may have affected the results of the study, for example, if participants had been primed by incidents involving AVs. However, no high-profile accidents were in the news during the data collection phase. Additionally, the demographics of our participants may not represent the target population for these vehicles, since opportunistic sampling yielded mainly male, able-bodied participants working for a car manufacturer. We attempted to minimise the influence of previous knowledge and experience by excluding engineers and designers from the recruitment process. Nevertheless, the general population, with a more balanced gender ratio and a controlled age distribution, could be invited to participate in future studies. Previous studies show that occupants’ comfort with and acceptance of a certain driving style may differ according to their demographic characteristics [90].
This research could have benefited from longer piloting, testing and validation of the designed driving behaviours, to increase the chances of all participants perceiving the driving styles as we designed them. Future studies should also find better ways of designing the laps, perhaps observing how humans drive to find the precise ideal path [62,91,92]. It is possible to use computational methods to interpolate the curves and define the paths followed by AVs [34], taking into consideration that a technically ideal trajectory may not coincide with occupants’ preferences [61]. Vehicles should also spend only the minimum necessary time stopped at junctions, so as not to compromise perceived efficiency. These details may explain why some of our participants suspected that the pods followed a pre-programmed path. Further research could also compare acceptance and trust between lay users and those who went through a training programme about the capabilities of the vehicles prior to the interactions [89].
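As one simple instance of such curve interpolation (a sketch only, not the method used by the pods), a sharp polyline corner can be replaced by a cubic Bézier segment evaluated with De Casteljau’s algorithm; the control points below are arbitrary:

```python
def bezier_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1] using
    De Casteljau's algorithm (repeated linear interpolation)."""
    def lerp(a, b, t):
        return tuple((1 - t) * ai + t * bi for ai, bi in zip(a, b))
    q0, q1, q2 = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    r0, r1 = lerp(q0, q1, t), lerp(q1, q2, t)
    return lerp(r0, r1, t)

def smooth_path(p0, p1, p2, p3, n=20):
    """Sample n + 1 points along the curve, giving the vehicle controller
    a smooth reference trajectory instead of a sharp polyline corner."""
    return [bezier_point(p0, p1, p2, p3, i / n) for i in range(n + 1)]
```

The curve starts at `p0`, ends at `p3`, and is pulled towards the intermediate control points, so turn sharpness can be tuned by moving `p1` and `p2`, subject to the caveat above that a geometrically smooth trajectory may still not match occupants’ preferences.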
The evolution of trust during the current study indicates an avenue of research within technology acceptance and automotive user interaction. Studies could test trust and distrust levels using longer journeys or a larger number of runs, to define how scores improve over time. It would also be interesting to understand when trust ‘saturates’. Future studies could also include a staged negative incident, to evaluate how it would affect trust levels and whether trust is ever rebuilt.
Finally, this research raised one interesting point: that of individual choices versus a common good [93]. Stopping for no apparent reason seemed inefficient and made participants complain. Would users accept this behaviour if they knew that their vehicle was being held because it is more efficient to let a whole platoon of vehicles go by at speed than to let vehicles negotiate the junction one by one? Will this represent the end of etiquette and courtesy as vehicles “get down to business” regardless of users’ preferences [94]? More research will be needed to identify these psychological aspects of individualism in traffic subject to a (possible) governing system controlling all AVs.
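A toy delay calculation, under entirely assumed timings, illustrates the efficiency argument behind holding a single vehicle while a platoon passes:

```python
def total_delay_one_by_one(n_vehicles: int, negotiation_s: float = 4.0) -> float:
    """Every vehicle stops and negotiates the junction in turn, so vehicle i
    waits for the i vehicles ahead of it. (Illustrative model; the 4 s
    per-vehicle negotiation time is an assumption.)"""
    return sum(i * negotiation_s for i in range(n_vehicles))

def total_delay_platoon(held_vehicle_wait_s: float = 8.0) -> float:
    """The platoon crosses at speed without stopping; only the single
    held vehicle accrues any delay. (Assumed 8 s hold.)"""
    return held_vehicle_wait_s
```

Under these assumptions, ten vehicles negotiating one by one accumulate 180 s of waiting, against 8 s for the single held vehicle, so system-level efficiency favours the behaviour that an individual occupant experiences as an unexplained stop.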

7. Conclusions

This study presents a contribution to the design of AVs, working towards future driving systems that are more acceptable and trustworthy. Two highly automated driving pods were used simultaneously to test user trust and acceptance in relation to two distinct driving behaviours. Human-like behaviour inspires confidence due to familiarity. However, to reduce traffic congestion and improve efficiency, AVs will have to behave more like machines, driving in platoons and negotiating gaps and junctions automatically. They are likely to make use of collaborative perception and share information that vehicle occupants are unable to obtain directly. Consequently, AVs may be more assertive in traffic than humans are, and such behaviours are generally seen as unsafe or may make people uneasy.
To improve the trust and acceptance of the automated driving systems of the future, the design recommendations obtained from this research are the following:
  • Explain to the general public the details of the driving systems, for example, recent technological features such as V2V/V2I communication.
  • Help create realistic mental models of the complex interactions between vehicles, their sensors, other road users and infrastructure.
  • Present the features progressively, so occupants can build this knowledge over time.
  • Convey to occupants the sensed hazards and the shared knowledge received from other vehicles or infrastructure, so users can acknowledge that the system is aware of hazards beyond their field of view.
Users may need to form new and more realistic mental models of how AVs work, either through an iterative process of experiencing the systems or via pre-training about the features and capabilities of AVs. Once users better understand the driving systems and become familiar with the technology and the reasons behind its behaviour, they will be more trusting, accepting and likely to ‘let it do its job’.

Author Contributions

Conceptualization, L.O., K.P., C.G.B.; methodology, L.O., K.P., C.G.B.; formal analysis, L.O., K.P., C.G.B.; investigation, L.O., C.G.B., K.P.; data curation, L.O., K.P., C.G.B.; writing—original draft preparation, L.O.; writing—review and editing, C.G.B., K.P., S.B.; visualization, L.O.; supervision, S.B.; project administration, S.B.; funding acquisition, S.B.

Funding

This project was funded by Innovate UK, an agency that finds and drives science and technology innovations. Grant competition code: 1407_CRD1_TRANS_DCAR.

Acknowledgments

This study was part of the UK Autodrive, a flagship, multi-partner project focusing on the development of Human Machine Interface (HMI) strategies and on performing real-world trials of these technologies in low-speed AVs. The authors would like to acknowledge the vast amount of work that RDM Group/Aurrigo put into this study.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. SAE. J3016: Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems; SAE Int.: Warrendale, PA, USA, 2014; Available online: (accessed on 13 April 2018).
  2. Eden, G.; Nanchen, B.; Ramseyer, R.; Evéquoz, F. On the Road with an Autonomous Passenger Shuttle: Integration in Public Spaces. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 1569–1576. [Google Scholar] [CrossRef]
  3. Nordhoff, S.; de Winter, J.; Madigan, R.; Merat, N.; van Arem, B.; Happee, R. User acceptance of automated shuttles in Berlin-Schöneberg: A questionnaire study. Transp. Res. Part F Traffic Psychol. Behav. 2018, 58, 843–854. [Google Scholar] [CrossRef]
  4. Meyer, J.; Becker, H.; Bösch, P.M.; Axhausen, K.W. Autonomous vehicles: The next jump in accessibilities? Res. Transp. Econ. 2017, 62, 80–91. [Google Scholar] [CrossRef] [Green Version]
  5. Fagnant, D.J.; Kockelman, K. Preparing a nation for autonomous vehicles: Opportunities, barriers and policy recommendations. Transp. Res. Part A Policy Pract. 2015, 77, 167–181. [Google Scholar] [CrossRef]
  6. Wadud, Z.; MacKenzie, D.; Leiby, P. Help or hindrance? The travel, energy and carbon impacts of highly automated vehicles. Transp. Res. Part A Policy Pract. 2016, 86, 1–18. [Google Scholar] [CrossRef] [Green Version]
  7. Lee, J.D.; See, K.A. Trust in Automation: Designing for Appropriate Reliance. Hum. Factors J. Hum. Factors Ergon. Soc. 2004, 46, 50–80. [Google Scholar] [CrossRef] [PubMed]
  8. Merritt, S.M.; Heimbaugh, H.; LaChapell, J.; Lee, D. I Trust It, but I don’t Know Why: Effects of Implicit Attitudes Toward Automation on Trust in an Automated System. Hum. Factors J. Hum. Factors Ergon. Soc. 2013, 55, 520–534. [Google Scholar] [CrossRef]
  9. Mirnig, A.G.; Wintersberger, P.; Sutter, C.; Ziegler, J. A Framework for Analyzing and Calibrating Trust in Automated Vehicles. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Ann Arbor, MI, USA, 24–26 October 2016; pp. 33–38. [Google Scholar] [CrossRef]
  10. Kundinger, T.; Wintersberger, P.; Riener, A. (Over)Trust in Automated Driving: The Sleeping Pill of Tomorrow? In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems; CHI’19; ACM Press: New York, NY, USA, 2019; pp. 1–6. [Google Scholar]
  11. Hoff, K.A.; Bashir, M. Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust. Hum. Factors J. Hum. Factors Ergon. Soc. 2015, 57, 407–434. [Google Scholar] [CrossRef] [PubMed]
  12. Khastgir, S.; Birrell, S.; Dhadyalla, G.; Jennings, P. Calibrating trust through knowledge: Introducing the concept of informed safety for automation in vehicles. Transp. Res. Part C Emerg. Technol. 2018, 96, 290–303. [Google Scholar] [CrossRef] [Green Version]
  13. Helldin, T.; Falkman, G.; Riveiro, M.; Davidsson, S. Presenting system uncertainty in automotive UIs for supporting trust calibration in autonomous driving. In Proceedings of the 5th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Eindhoven, The Netherlands, 28–30 October 2013; pp. 210–217. [Google Scholar] [CrossRef]
  14. Lyons, J.B. Being transparent about transparency: A model for human-robot interaction. In Proceedings of the AAAI Spring Symposium, Stanford, CA, USA, 25–27 March 2013; pp. 48–53. [Google Scholar]
  15. Kunze, A.; Summerskill, S.J.; Marshall, R.; Filtness, A.J. Automation transparency: Implications of uncertainty communication for human-automation interaction and interfaces. Ergonomics 2019, 62, 345–360. [Google Scholar] [CrossRef]
  16. Haeuslschmid, R.; Shou, Y.; O’Donovan, J.; Burnett, G.; Butz, A. First Steps towards a View Management Concept for Large-sized Head-up Displays with Continuous Depth. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications—Automotive’UI 16, Ann Arbor, MI, USA, 24–26 October 2016; pp. 1–8. [Google Scholar] [CrossRef]
  17. Sibi, S.; Baiters, S.; Mok, B.; Steiner, M.; Ju, W. Assessing driver cortical activity under varying levels of automation with functional near infrared spectroscopy. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Redondo Beach, CA, USA, 11–14 June 2017; pp. 1509–1516. [Google Scholar]
  18. Gustavsson, P.; Victor, T.W.; Johansson, J.; Tivesten, E.; Johansson, R.; Aust, L. What were they thinking? Subjective experiences associated with automation expectation mismatch. In Proceedings of the 6th Driver Distraction and Inattention conference, Gothenburg, Sweden, 15–17 October 2018; pp. 1–12. [Google Scholar]
  19. Haboucha, C.J.; Ishaq, R.; Shiftan, Y. User preferences regarding autonomous vehicles. Transp. Res. Part C Emerg. Technol. 2017, 78, 37–49. [Google Scholar] [CrossRef]
  20. Bansal, P.; Kockelman, K.M. Forecasting Americans’ long-term adoption of connected and autonomous vehicle technologies. Transp. Res. Part A Policy Pract. 2017, 95, 49–63. [Google Scholar] [CrossRef]
  21. Hartwich, F.; Witzlack, C.; Beggiato, M.; Krems, J.F. The first impression counts—A combined driving simulator and test track study on the development of trust and acceptance of highly automated driving. Transp. Res. Part F Traffic Psychol. Behav. 2018, in press. [Google Scholar] [CrossRef]
  22. Frison, A.; Wintersberger, P.; Riener, A.; Schartmüller, C.; Boyle, L.N.; Miller, E.; Weigl, K. In UX We Trust: Investigation of Aesthetics and Usability of Driver-Vehicle Interfaces and Their Impact on the Perception of Automated Driving. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems—CHI ’19, New York, NY, USA, 4–9 May 2019; pp. 1–13. [Google Scholar]
  23. Smits, M. Taming monsters: The cultural domestication of new technology. Technol. Soc. 2006, 28, 489–504. [Google Scholar] [CrossRef]
  24. Mirnig, A.; Gärtner, M.; Meschtscherjakov, A.; Gärtner, M. Autonomous Driving: A Dream on Rails? In Mensch und Comput 2017-Workshopband; Digitalen Bibliothek der Gesellschaft für Informatik: Regensburg, Germany, 2017. [Google Scholar]
  25. Chong, Z.J.; Qin, B.; Bandyopadhyay, T.; Wongpiromsarn, T.; Rebsamen, B.; Dai, P.; Rankin, E.S.; Ang, M.H., Jr. Autonomy for Mobility on Demand. In Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2013; pp. 671–682. [Google Scholar]
  26. Moorthy, A.; De Kleine, R.; Keoleian, G.; Good, J.; Lewis, G. Shared Autonomous Vehicles as a Sustainable Solution to the Last Mile Problem: A Case Study of Ann Arbor-Detroit Area. SAE Int. J. Passeng. Cars-Electron. Electr. Syst. 2017, 10, 328–336. [Google Scholar] [CrossRef]
  27. Krueger, R.; Rashidi, T.H.; Rose, J.M. Preferences for shared autonomous vehicles. Transp. Res. Part C Emerg. Technol. 2016, 69, 343–355. [Google Scholar] [CrossRef]
  28. Fu, X.; Vernier, M.; Kurt, A.; Redmill, K.; Ozguner, U. Smooth: Improved Short-distance Mobility for a Smarter City. In Proceedings of the 2nd International Workshop on Science of Smart City Operations and Platforms Engineering, Pittsburgh, Pennsylvania, 18–21 April 2017; pp. 46–51. [Google Scholar] [CrossRef]
  29. Distler, V.; Lallemand, C.; Bellet, T. Acceptability and Acceptance of Autonomous Mobility on Demand: The Impact of an Immersive Experience. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–10. [Google Scholar]
  30. Wintersberger, P.; Frison, A.-K.; Riener, A. Man vs. Machine: Comparing a Fully Automated Bus Shuttle with a Manually Driven Group Taxi in a Field Study. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Toronto, ON, Canada, 23–25 September 2018; pp. 215–220. [Google Scholar]
  31. Qiu, H.; Ahmad, F.; Govindan, R.; Gruteser, M.; Bai, F.; Kar, G. Augmented Vehicular Reality: Enabling Extended Vision for Future Vehicles. In Proceedings of the 18th International Workshop on Mobile Computing Systems and Applications, Sonoma, CA, USA, 21–22 February 2017; pp. 67–72. [Google Scholar] [CrossRef]
  32. Arnold, E.; Al-Jarrah, O.Y.; Dianati, M.; Fallah, S.; Oxtoby, D.; Mouzakitis, A. A Survey on 3D Object Detection Methods for Autonomous Driving Applications. IEEE Trans. Intell. Transp. Syst. 2019, 1–14. [Google Scholar] [CrossRef]
  33. Kuutti, S.; Fallah, S.; Katsaros, K.; Dianati, M.; Mccullough, F.; Mouzakitis, A. A Survey of the State-of-the-Art Localization Techniques and Their Potentials for Autonomous Vehicle Applications. IEEE Internet Things J. 2018, 5, 829–846. [Google Scholar] [CrossRef]
  34. Gonzalez, D.; Perez, J.; Milanes, V.; Nashashibi, F.A. Review of Motion Planning Techniques for Automated Vehicles. IEEE Trans. Intell. Transp. Syst. 2016, 17, 1135–1145. [Google Scholar] [CrossRef]
  35. Lu, K.; Higgins, M.; Woodman, R.; Birrell, S. Dynamic platooning for autonomous vehicles: Real-time, En-route Optimisation. Transp. Res. Part B Methodol. 2019, submitted. [Google Scholar]
  36. O’Toole, M.; Lindell, D.B.; Wetzstein, G. Confocal non-line-of-sight imaging based on the light-cone transform. Nature 2018, 555, 338–341. [Google Scholar] [CrossRef]
  37. Beggiato, M.; Krems, J.F. The evolution of mental model, trust and acceptance of adaptive cruise control in relation to initial information. Transp. Res. Part F Traffic Psychol. Behav. 2013, 18, 47–57. [Google Scholar] [CrossRef]
  38. Rasmussen, J. Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models. IEEE Trans. Syst. Man Cybern. 1983, 257–266. [Google Scholar] [CrossRef]
  39. Revell, K.M.A.; Stanton, N.A. When energy saving advice leads to more, rather than less, consumption. Int. J. Sustain. Energy 2017, 36, 1–19. [Google Scholar] [CrossRef]
  40. Wintersberger, P.; Riener, A.; Frison, A.-K. Automated Driving System, Male, or Female Driver: Who’D You Prefer? Comparative Analysis of Passengers’ Mental Conditions, Emotional States & Qualitative Feedback. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Ann Arbor, MI, USA, 24–26 October 2016; pp. 51–58. [Google Scholar] [CrossRef]
  41. Fridman, L.; Mehler, B.; Xia, L.; Yang, Y.; Facusse, L.Y.; Reimer, B. To Walk or Not to Walk: Crowdsourced Assessment of External Vehicle-to-Pedestrian Displays. arXiv 2017, arXiv:1707.02698. [Google Scholar]
  42. Song, Y.E.; Lehsing, C.; Fuest, T.; Bengler, K. External HMIs and Their Effect on the Interaction Between Pedestrians and Automated Vehicles. Adv. Intell. Syst. Comput. 2018, 722, 13–18. [Google Scholar] [CrossRef]
  43. Böckle, M.-P.; Brenden, A.P.; Klingegård, M.; Habibovic, A.; Bout, M. SAV2P – Exploring the Impact of an Interface for Shared Automated Vehicles on Pedestrians’ Experience. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications Adjunct, Oldenburg, Germany, 24–27 September 2017; pp. 136–140. [Google Scholar] [CrossRef]
  44. Chang, C.; Toda, K.; Sakamoto, D.; Igarashi, T. Eyes on a Car: An Interface Design for Communication between an Autonomous Car and a Pedestrian. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Oldenburg, Germany, 24–27 September 2017; pp. 65–73. [Google Scholar] [CrossRef]
  45. Merat, N.; Louw, T.; Madigan, R.; Wilbrink, M.; Schieben, A. What externally presented information do VRUs require when interacting with fully Automated Road Transport Systems in shared space? Accid. Anal. Prev. 2018, 118, 244–252. [Google Scholar] [CrossRef] [PubMed]
  46. Merat, N.; Louw, T.; Madigan, R.; Wilbrink, M.; Schieben, A. Designing the interaction of automated vehicles with other traffic participants: Design considerations based on human needs and expectations. Cogn. Technol. Work 2019, 21, 69–85. [Google Scholar] [CrossRef]
  47. Burns, C.G.; Oliveira, L.; Hung, V.; Thomas, P.; Birrell, S. Pedestrian Attitudes to Shared-Space Interactions with Autonomous Vehicles—A Virtual Reality Study. In Proceedings of the 10th International Conference on Applied Human Factors and Ergonomics (AHFE), Washington, DC, USA, 24–28 July 2019; pp. 307–316. [Google Scholar]
  48. Burns, C.G.; Oliveira, L.; Birrell, S.; Iyer, S.; Thomas, P. Pedestrian Decision-Making Responses to External Human-Machine Interface Designs for Autonomous Vehicles. In Proceedings of the 30th IEEE Intelligent Vehicles Symposium, HFIV: Human Factors in Intelligent Vehicles, Paris, France, 9–12 June 2019. [Google Scholar]
  49. Dey, D.; Terken, J. Pedestrian Interaction with Vehicles: Roles of Explicit and Implicit Communication. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Oldenburg, Germany, 24–27 September 2017; pp. 109–113. [Google Scholar]
  50. Zimmermann, R.; Wettach, R. First Step into Visceral Interaction with Autonomous Vehicles. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Oldenburg, Germany, 24–27 September 2017; pp. 58–64. [Google Scholar] [CrossRef]
  51. Mahadevan, K.; Somanath, S.; Sharlin, E. Communicating Awareness and Intent in Autonomous Vehicle-Pedestrian Interaction. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal QC, Canada, 21–26 April 2018; pp. 1–12. [Google Scholar]
  52. Portouli, E.; Nathanael, D.; Marmaras, N. Drivers’ communicative interactions: On-road observations and modelling for integration in future automation systems. Ergonomics 2014, 57, 1795–1805. [Google Scholar] [CrossRef]
  53. Imbsweiler, J.; Ruesch, M.; Weinreuter, H.; Puente León, F.; Deml, B. Cooperation behaviour of road users in t-intersections during deadlock situations. Transp. Res. Part F Traffic Psychol. Behav. 2018, 58, 665–677. [Google Scholar] [CrossRef]
  54. Kauffmann, N.; Winkler, F.; Naujoks, F.; Vollrath, M. What Makes a Cooperative Driver? Identifying parameters of implicit and explicit forms of communication in a lane change scenario. Transp. Res. Part F Traffic Psychol. Behav. 2018, 58, 1031–1042. [Google Scholar] [CrossRef]
  55. Kauffmann, N.; Winkler, F.; Vollrath, M. What Makes an Automated Vehicle a Good Driver? Exploring Lane Change Announcements in Dense Traffic Situations. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal QC, Canada, 21–26 April 2018; pp. 1–9. [Google Scholar]
  56. Brown, B.; Laurier, E. The Trouble with Autopilots: Assisted and autonomous driving on the social road. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, Colorado, USA, 6–11 May 2017; pp. 416–429. [Google Scholar]
  57. Hidas, P. Modelling lane changing and merging in microscopic traffic simulation. Transp. Res. Part C Emerg. Technol. 2002, 10, 351–371. [Google Scholar] [CrossRef]
  58. Ibanez-Guzman, J.; Lefevre, S.; Mokkadem, A.; Rodhaim, S. Vehicle to vehicle communications applied to road intersection safety, field results. In Proceedings of the 13th International IEEE Conference on Intelligent Transportation Systems, Funchal, Portugal, 19–22 September 2010; pp. 192–197. [Google Scholar]
  59. Imbsweiler, J.; Stoll, T.; Ruesch, M.; Baumann, M.; Deml, B. Insight into cooperation processes for traffic scenarios: Modelling with naturalistic decision making. Cogn. Technol. Work 2018, 20, 621–635. [Google Scholar] [CrossRef]
  60. Al-Shihabi, T.; Mourant, R.R. Toward More Realistic Driving Behavior Models for Autonomous Vehicles in Driving Simulators. Transp. Res. Rec J. Transp. Res. Board 2003, 1843, 41–49. [Google Scholar] [CrossRef]
  61. Voß, G.M.I.; Keck, C.M.; Schwalm, M. Investigation of drivers’ thresholds of a subjectively accepted driving performance with a focus on automated driving. Transp. Res. Part F Traffic Psychol. Behav. 2018, 56, 280–292. [Google Scholar] [CrossRef]
  62. Bellem, H.; Schönenberg, T.; Krems, J.F.; Schrauf, M. Objective metrics of comfort: Developing a driving style for highly automated vehicles. Transp. Res. Part F Traffic Psychol. Behav. 2016, 41, 45–54. [Google Scholar] [CrossRef]
  63. Oliveira, L.; Proctor, K.; Burns, C.; Luton, J.; Mouzakitis, A. Trust and acceptance of automated vehicles: A qualitative study. In Proceedings of the INTSYS – 3rd EAI International Conference on Intelligent Transport Systems, Braga, Portugal, 4–6 December 2019. submitted for publication. [Google Scholar]
  64. Smyth, J.; Jennings, P.; Mouzakitis, A.; Birrell, S. Too Sick to Drive: How Motion Sickness Severity Impacts Human Performance. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 1787–1793. [Google Scholar]
  65. Bellem, H.; Thiel, B.; Schrauf, M.; Krems, J.F. Comfort in automated driving: An analysis of preferences for different automated driving styles and their dependence on personality traits. Transp. Res. Part F Traffic Psychol. Behav. 2018, 55, 90–100. [Google Scholar] [CrossRef]
  66. Waytz, A.; Heafner, J.; Epley, N. The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. J. Exp. Soc. Psychol. 2014, 52, 113–117. [Google Scholar] [CrossRef]
  67. Huang, C.; Mutlu, B. The Repertoire of Robot Behavior: Designing Social Behaviors to Support Human-Robot Joint Activity. J. Hum.-Robot Interact. 2013, 2, 80–102. [Google Scholar] [CrossRef]
  68. Häuslschmid, R.; von Bülow, M.; Pfleging, B.; Butz, A. Supporting Trust in Autonomous Driving. In Proceedings of the 22nd International Conference on Intelligent User Interfaces, Limassol, Cyprus, 13–16 March 2017; pp. 319–329. [Google Scholar]
  69. Zihsler, J.; Hock, P.; Walch, M.; Dzuba, K.; Schwager, D.; Szauer, P.; Rukzio, E. Carvatar: Increasing Trust in Highly-Automated Driving Through Social Cues. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications Adjunct, Ann Arbor, MI, USA, 24–26 October 2016; pp. 9–14.
  70. Zhu, M.; Wang, X.; Wang, Y. Human-like autonomous car-following model with deep reinforcement learning. Transp. Res. Part C Emerg. Technol. 2018, 97, 348–368.
  71. Lee, J.G.; Kim, K.J.; Lee, S.; Shin, D.H. Can Autonomous Vehicles Be Safe and Trustworthy? Effects of Appearance and Autonomy of Unmanned Driving Systems. Int. J. Hum. Comput. Interact. 2015, 31, 682–691.
  72. Cha, E.; Kim, Y.; Fong, T.; Mataric, M.J. A Survey of Nonverbal Signaling Methods for Non-Humanoid Robots. Found. Trends Robot. 2018, 6, 211–323.
  73. Galvin, R. How many interviews are enough? Do qualitative interviews in building energy consumption research produce reliable knowledge? J. Build. Eng. 2014, 1, 2–12.
  74. Kuniavsky, M.; Goodman, E.; Moed, A. Observing the User Experience: A Practitioner’s Guide to User Research, 2nd ed.; Elsevier: Amsterdam, The Netherlands, 2012.
  75. Jian, J.-Y.; Bisantz, A.M.; Drury, C.G. Foundations for an Empirically Determined Scale of Trust in Automated Systems. Int. J. Cogn. Ergon. 2000, 4, 53–71.
  76. Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101.
  77. Glaser, B.G. The Constant Comparative Method of Qualitative Analysis. Soc. Probl. 1965, 12, 436–445.
  78. Yang, X.J.; Unhelkar, V.V.; Li, K.; Shah, J.A. Evaluating Effects of User Experience and System Transparency on Trust in Automation. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 6–9 March 2017; pp. 408–416.
  79. Dogramadzi, S.; Giannaccini, M.E.; Harper, C.; Sobhani, M.; Woodman, R.; Choung, J. Environmental Hazard Analysis—A Variant of Preliminary Hazard Analysis for Autonomous Mobile Robots. J. Intell. Robot. Syst. 2014, 76, 73–117.
  80. Vinkhuyzen, E.; Cefkin, M. Developing socially acceptable autonomous vehicles. In Proceedings of the Ethnographic Praxis in Industry Conference, Minneapolis, MN, USA, 29 August–1 September 2016; pp. 522–534.
  81. Mahadevan, K.; Somanath, S.; Sharlin, E. “Fight-or-Flight”: Leveraging Instinctive Human Defensive Behaviors for Safe Human-Robot Interaction. In Proceedings of the Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 183–184.
  82. Ekman, F.; Johansson, M.; Sochor, J. Creating appropriate trust in automated vehicle systems: A framework for HMI design. IEEE Trans. Hum. Mach. Syst. 2018, 48, 95–101.
  83. Wiegand, G.; Schmidmaier, M.; Weber, T.; Liu, Y.; Hussmann, H. I Drive—You Trust: Explaining Driving Behavior of Autonomous Cars. In Proceedings of the Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–6.
  84. Heikoop, D.D.; de Winter, J.C.F.; van Arem, B.; Stanton, N.A. Effects of mental demands on situation awareness during platooning: A driving simulator study. Transp. Res. Part F Traffic Psychol. Behav. 2018, 58, 193–209.
  85. Kunze, A.; Summerskill, S.J.; Marshall, R.; Filtness, A.J. Evaluation of Variables for the Communication of Uncertainties Using Peripheral Awareness Displays. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Toronto, ON, Canada, 23–25 September 2018; pp. 147–153.
  86. Oliveira, L.; Luton, J.; Iyer, S.; Burns, C.; Mouzakitis, A.; Jennings, P.; Birrell, S. Evaluating How Interfaces Influence the User Interaction with Fully Autonomous Vehicles. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Toronto, ON, Canada, 23–25 September 2018; pp. 320–331.
  87. Forster, Y.; Hergeth, S.; Naujoks, F.; Beggiato, M.; Krems, J.F.; Keinath, A. Learning to use automation: Behavioral changes in interaction with automated driving systems. Transp. Res. Part F Traffic Psychol. Behav. 2019, 62, 599–614.
  88. Koo, J.; Kwac, J.; Ju, W.; Steinert, M.; Leifer, L.; Nass, C. Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance. Int. J. Interact. Des. Manuf. 2015, 9, 269–275.
  89. Forster, Y.; Hergeth, S.; Naujoks, F.; Krems, J.; Keinath, A. User Education in Automated Driving: Owner’s Manual and Interactive Tutorial Support Mental Model Formation and Human-Automation Interaction. Information 2019, 10, 22.
  90. Hartwich, F.; Beggiato, M.; Krems, J.F. Driving comfort, enjoyment and acceptance of automated driving–effects of drivers’ age and driving style familiarity. Ergonomics 2018, 61, 1017–1032.
  91. Driggs-Campbell, K.; Govindarajan, V.; Bajcsy, R. Integrating Intuitive Driver Models in Autonomous Planning for Interactive Maneuvers. IEEE Trans. Intell. Transp. Syst. 2017, 18, 3461–3472.
  92. Elbanhawi, M.; Simic, M.; Jazar, R. In the Passenger Seat: Investigating Ride Comfort Measures in Autonomous Cars. IEEE Intell. Transp. Syst. Mag. 2015, 7, 4–17.
  93. Hardin, G. The Tragedy of the Commons. Science 1968, 162, 1243–1248.
  94. Parasuraman, R.; Miller, C.A. Trust and etiquette in high-criticality automated systems. Commun. ACM 2004, 47, 51–55.
Figure 1. Vehicles used during this study, manufactured by RDM/Aurrigo, parked at opposite sides of the arena prior to the start of a user trial.
Figure 2. Diagram of the arena showing the interactions at the three T-junctions, where the vehicle going straight had priority.
Figure 3. Vehicles negotiating the T-junction during the machine-like experiment. The white vehicle (a) arrives at the junction, (b) stops and applies the brakes (as indicated by the red LEDs around the wheel wings) as it ‘knows’ the black vehicle is approaching. The black vehicle turns the corner (c) and passes in front of the white vehicle (d), which then disengages the brakes (e) and is clear to proceed (f).
Figure 4. Mean trust scores per condition across the four journeys.
Figure 5. Distrust scores, analysed as a separate sub-factor of the trust scale from [75].
Figure 6. Trust scores, analysed as a separate sub-factor of the trust scale from [75].
Table 1. Mean trust scores, as measured by the scale from [75], with standard deviations for the human-like and machine-like driving styles.
Mean Trust Scores per Run | Human | SD | Machine | SD
Table 2. Paired differences in Jian et al. [75] scores between each journey in the vehicle.
Pair | Paired Difference | Mean | Std. Dev. | Std. Error Mean | 95% CI Lower | 95% CI Upper | t | df | Sig. (2-tailed)
1 | Trust score run 1 − run 2 | −1.732 | 5.599 | 0.874 | −3.499 | 0.036 | −1.980 | 40 | 0.055
2 | Trust score run 1 − run 3 | −5.805 | 5.904 | 0.922 | −7.669 | −3.941 | −6.295 | 40 | 0.000
3 | Trust score run 1 − run 4 | −7.024 | 7.206 | 1.125 | −9.299 | −4.750 | −6.242 | 40 | 0.000
4 | Trust score run 2 − run 3 | −4.073 | 6.509 | 1.017 | −6.128 | −2.019 | −4.007 | 40 | 0.000
5 | Trust score run 2 − run 4 | −5.293 | 7.128 | 1.113 | −7.543 | −3.043 | −4.754 | 40 | 0.000
6 | Trust score run 3 − run 4 | −1.220 | 3.525 | 0.551 | −2.332 | −0.107 | −2.215 | 40 | 0.033
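The comparisons in Table 2 are standard paired-samples t-tests, with df = 40 implying n = 41 participants per pair. As a minimal sketch of how the Mean, Std. Dev., Std. Error Mean, 95% CI, and t columns relate to one another (using invented illustrative scores, not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical trust scores for the same 41 participants across two runs
# (illustrative values only; trust is assumed to drift upward with exposure).
rng = np.random.default_rng(0)
run1 = rng.normal(50, 8, size=41)
run2 = run1 + rng.normal(2, 5, size=41)

diff = run1 - run2
n = len(diff)
mean = diff.mean()                     # "Mean" column
sd = diff.std(ddof=1)                  # "Std. Dev." column
se = sd / np.sqrt(n)                   # "Std. Error Mean" column
t_stat, p_two_tailed = stats.ttest_rel(run1, run2)  # t and Sig. (2-tailed)

# 95% confidence interval of the mean difference (df = n - 1 = 40)
t_crit = stats.t.ppf(0.975, df=n - 1)
ci = (mean - t_crit * se, mean + t_crit * se)
```

Note that the t statistic is simply the mean difference divided by its standard error, which is why a CI that excludes zero coincides with a significant two-tailed test at the same alpha.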
Table 3. Post-hoc tests for the Distrust sub-factor.
Pair | Paired Difference (distrust sub-factor) | Mean | Std. Dev. | Std. Error Mean | 95% CI Lower | 95% CI Upper | t | df | Sig. (2-tailed)
1 | Run 1 − run 2 | 0.171 | 3.201 | 0.500 | −0.840 | 1.181 | 0.342 | 40 | 0.734
2 | Run 1 − run 3 | 1.195 | 3.723 | 0.581 | 0.020 | 2.370 | 2.055 | 40 | 0.046
3 | Run 1 − run 4 | 1.341 | 3.947 | 0.616 | 0.096 | 2.587 | 2.176 | 40 | 0.036
4 | Run 2 − run 3 | 1.024 | 2.318 | 0.362 | 0.293 | 1.756 | 2.829 | 40 | 0.007
5 | Run 2 − run 4 | 1.171 | 2.587 | 0.404 | 0.354 | 1.987 | 2.897 | 40 | 0.006
6 | Run 3 − run 4 | 0.146 | 1.459 | 0.228 | −0.314 | 0.607 | 0.642 | 40 | 0.524
Table 4. Post-hoc tests for the Trust sub-factor.
Pair | Paired Difference (trust sub-factor) | Mean | Std. Dev. | Std. Error Mean | 95% CI Lower | 95% CI Upper | t | df | Sig. (2-tailed)
1 | Run 1 − run 2 | −1.561 | 4.399 | 0.687 | −2.950 | −0.172 | −2.272 | 40 | 0.029
2 | Run 1 − run 3 | −4.610 | 3.625 | 0.566 | −5.754 | −3.465 | −8.142 | 40 | 0.000
3 | Run 1 − run 4 | −5.683 | 4.618 | 0.721 | −7.140 | −4.225 | −7.880 | 40 | 0.000
4 | Run 2 − run 3 | −3.049 | 4.944 | 0.772 | −4.609 | −1.488 | −3.948 | 40 | 0.000
5 | Run 2 − run 4 | −4.122 | 5.386 | 0.841 | −5.822 | −2.422 | −4.900 | 40 | 0.000
6 | Run 3 − run 4 | −1.073 | 2.936 | 0.459 | −2.000 | −0.146 | −2.341 | 40 | 0.024
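The post-hoc tables enumerate all six pairwise comparisons between the four journeys. A hedged sketch of that loop, again over hypothetical per-journey scores rather than the study's data:

```python
from itertools import combinations

import numpy as np
from scipy import stats

# Hypothetical trust sub-factor scores for 41 participants over four
# consecutive journeys, with a gentle upward drift in trust per journey.
rng = np.random.default_rng(1)
base = rng.normal(30, 5, size=41)
runs = {i: base + 1.5 * i + rng.normal(0, 3, size=41) for i in range(1, 5)}

# All six run pairs, as reported in Tables 2-4.
results = []
for a, b in combinations(runs, 2):
    t, p = stats.ttest_rel(runs[a], runs[b])
    results.append((f"run {a} - run {b}", round(t, 3), round(p, 3)))
```

With six tests on the same sample, a multiple-comparison correction (e.g. Bonferroni, dividing alpha by six) would normally be considered before interpreting the individual p-values.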

Share and Cite

Oliveira, L.; Proctor, K.; Burns, C.G.; Birrell, S. Driving Style: How Should an Automated Vehicle Behave? Information 2019, 10, 219.