Article

Smiles and Angry Faces vs. Nods and Head Shakes: Facial Expressions at the Service of Autonomous Vehicles

Department of Health, Education, and Technology, Division of Health, Medicine, and Rehabilitation, Engineering Psychology, Luleå University of Technology, 971 87 Luleå, Sweden
*
Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2023, 7(2), 10; https://doi.org/10.3390/mti7020010
Submission received: 16 September 2022 / Revised: 8 January 2023 / Accepted: 9 January 2023 / Published: 20 January 2023

Abstract

When deciding whether or not to cross the street, pedestrians take into consideration information provided both by vehicle kinematics and by the driver of an approaching vehicle. It will not be long, however, before drivers of autonomous vehicles (AVs) are unable to communicate their intention to pedestrians, as they will be engaged in activities unrelated to driving. External human–machine interfaces (eHMIs) have been developed to fill the resulting communication gap by offering pedestrians information about the situational awareness and intention of an AV. Several anthropomorphic eHMI concepts have employed facial expressions to communicate vehicle intention. The aim of the present study was to evaluate the efficiency of emotional (smile; angry expression) and conversational (nod; head shake) facial expressions in communicating vehicle intention (yielding; non-yielding). Participants completed a crossing intention task in which they decided whether or not to cross the street. Emotional expressions communicated vehicle intention more efficiently than conversational expressions, as evidenced by lower latency in the emotional than in the conversational expression condition. The implications of our findings for the development of anthropomorphic eHMIs that employ facial expressions to communicate vehicle intention are discussed.

1. Introduction

1.1. Interaction in Traffic

Interactions between road users are formally regulated by the traffic code, yet road users frequently resort to informal communication to ensure traffic safety and improve traffic flow, especially in ambiguous traffic situations where right-of-way rules are unclear and dedicated infrastructure is missing [1,2]. Communicating intention, negotiating priority, and resolving deadlocks are commonly achieved through the casual use of informal communicative cues, such as eye contact, nodding, and waving [3,4].
It will not be long, however, before drivers of autonomous vehicles (AVs) are unable to communicate their intention, as they will be engaged in activities unrelated to driving, such as reading, typing, or watching a movie [5,6]. This situation may prove especially difficult for pedestrians: according to a plethora of research conducted in different cultural contexts, such as France, China, the Czech Republic, Greece, the Netherlands, and the UK, pedestrians deciding whether or not to cross the street take into consideration information provided both by vehicle kinematics, such as speed and acceleration [7,8,9], and by the driver of an approaching vehicle, such as gaze direction and facial expression [4,10,11,12,13,14,15,16,17,18,19]. To make matters worse, there is ample evidence that pedestrians underestimate vehicle speed and overestimate the time at their disposal to attempt crossing the street safely [20,21,22].

1.2. External Human–Machine Interfaces as a Substitute

External human–machine interfaces (eHMIs), i.e., human–machine interfaces that utilize the external surface of the vehicle, such as the headlights, the radiator grille, the hood, or the roof, have been developed to fill the resulting communication gap by offering information about the situational awareness and intention of an AV, mainly to ensure that pedestrians are safe and traffic flow is unhindered, but also to promote public acceptance of the new technology [23,24,25,26,27,28,29,30]. Previous research has found AV–pedestrian interactions to be more effective and efficient, and perceived as safer and more satisfactory, when an AV is equipped with an interface than when it is not and pedestrians have to extract any relevant information solely from vehicle kinematics [31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48]; however, see [49,50,51].
Importantly, relevant work has also provided evidence for the positive effect eHMIs have on AV–driver interactions, especially in ambiguous traffic situations. More specifically, Colley et al. [52] found that, in the context of an unsignalized four-way intersection scenario, drivers reported higher perceived safety, lower mental workload, and higher understanding of AV intention when interacting with an AV equipped with an interface than when interacting with an AV that offered no external communication. Similarly, drivers who were instructed to perform a left turn at a four-way intersection while an AV was approaching from the opposite side maintained higher speed and completed their maneuver faster in the eHMI condition [53]. Finally, Rettenmaier et al. [54] evaluated an eHMI in the context of an unregulated bottleneck scenario and showed that external communication led to shorter passing times and fewer crashes.

1.3. The Case for Anthropomorphism in eHMI Development

Interface concepts that use text to communicate information to pedestrians have been shown to be easily comprehensible [15,34,36,41,55,56,57,58]. Nevertheless, such concepts tend to be effective and efficient only for pedestrians who speak the language of presentation and are limited in their international marketability due to the language barrier [23,30]. Interface concepts that employ pictorial message coding, on the other hand, transcend local specificities by using widely recognized traffic symbols. Still, these traffic symbols represent advice or instruction directed at the pedestrian, which is inadvisable from a design perspective due to possible liability issues in the event of a traffic accident [59]. Finally, eHMI concepts that employ abstract message coding in the form of novel light patterns, such as pulsating or sweeping lights, attempt to simulate the functionality of a vehicle’s turn signals and brake lights—which mainly address other drivers—and introduce it to the AV–pedestrian interaction domain. Even so, these concepts have been found lacking in comprehensibility, because they demand that pedestrians establish new mental connections that do not draw on their experience of social or traffic interaction; their design rationale therefore has to be explained beforehand, and pedestrians have to be trained to use them successfully [31,35,41,47,48,51,60].
On the contrary, eHMI concepts that employ elements of human appearance and/or behavior take advantage of prior experience to facilitate AV–pedestrian interactions and, thus, require neither explanation and training nor cultural adaptation for pedestrians to use them successfully. Several eHMI concepts make use of eyes, facial expressions, and hand gestures to communicate pedestrian acknowledgement and vehicle intention [34,37,38,41,42,56,61,62]. For instance, Chang et al. [32] evaluated a concept where the headlights of the AV serve as its “eyes” and, accordingly, turn and look at the pedestrian to communicate they have been acknowledged and they will be yielded to. Even though the participants had not received any information regarding design rationale beforehand, the interface still led to them deciding faster whether to cross or not and reporting higher perceived safety compared to the baseline condition, namely interacting with an AV that offered no external communication. Moreover, according to relevant work, an effective design approach for increasing trust in a new technology is adding anthropomorphic elements to it [63,64,65,66], which also holds true for AVs and the placement of trust in their capabilities by the general public [67]. As trust is a mediating factor for automation acceptance, an even stronger case can be made for anthropomorphism in eHMI development, especially during the initial stages of AV deployment, so that skepticism and hesitation are allayed and the public reaps the full benefits of the new technology [68,69,70,71,72,73].
Virtual human characters (VHCs) have been utilized extensively in social neuroscience research for the purpose of studying mental and neural processes in both neurotypical and neurodiverse samples [74,75,76,77,78,79,80], as they are processed similarly to fellow conspecifics [81] and evoke a strong sense of social presence [82]. VHCs have also been established as tools for human–computer interaction research in the field of affective computing [83,84,85,86,87,88,89,90] and as end-products in the form of real-world solutions to business, educational, entertainment, and healthcare challenges [91]. Despite their realism and sociability, however, they have been employed only once as an end-product in the field of eHMIs. More specifically, Furuya et al. [92] developed an eHMI concept where a full-sized VHC driver either gazes directly at the pedestrian or looks straight ahead, along the road. The participants had to cross the street at an unsignalized crosswalk while an AV that offered external communication was approaching. Results showed a higher preference for the VHC driver that engaged in eye contact over both the driver that did not and the baseline condition (no eHMI), while some participants suggested that it would be useful to add gestures to its social repertoire.

1.4. Facial Expressions and Vehicle Yielding Intention

Social neuroscience research has found mentalizing, i.e., attributing mental states, such as desires, beliefs, emotions, and intentions to others, to be crucial for the accurate interpretation or anticipation of their behavior [93,94,95]. The affective state of an expresser is generally considered to be manifested via their emotional facial expressions [96]. Furthermore, said expressions are also produced to provide information about an expresser’s cognitive state and intentions [97]. During social interactions, for example, a Duchenne smile—the universally recognized facial expression of happiness—is produced to communicate friendliness or kindness [98], whereas an angry expression—the universally recognized facial expression of anger—is produced to communicate competitiveness or aggressiveness [99]. Accordingly, the smile has been utilized to signify yielding intention in various anthropomorphic eHMI concepts on the premise that, in a hypothetical situation where right-of-way is negotiated, a driver could smile at a pedestrian to communicate yielding intention [34,37,41,42,56,62,100]. Having considered the circumstances, the pedestrian would associate friendliness/kindness with being offered safe passage and would, thus, proceed to cross first [101].
Conversational facial expressions, such as the nod and the head shake, are universally utilized for providing information about an expresser’s cognitive state and intentions [102,103,104]. For instance, during social interactions, a nod is produced to communicate agreement or cooperativeness, whereas a head shake is produced to communicate disagreement or unwillingness to cooperate [105,106,107,108,109,110]. Fittingly, a driver could also decide to nod at a pedestrian to let them know they will be yielded to [1,3,4]. In that case, the pedestrian would associate agreement/cooperativeness with relinquishing right-of-way and would, thus, go ahead and cross first [101]. Indeed, in Rouchitsas and Alm [101], the nod was shown to help pedestrians infer vehicle yielding intention highly effectively in the context of a self-paced crossing intention task, as evidenced by an accuracy of 96.3% in the nod experimental condition.

1.5. Facial Expressions and Vehicle Non-Yielding Intention

With respect to employing facial expressions for communicating not relinquishing right-of-way, several anthropomorphic eHMI concepts have employed the neutral expression to signify both non-yielding and cruising intention, i.e., intention to carry on with operating in automated mode [37,41,42,100]. A neutral expression, i.e., an alert but blank expression, is a social stimulus that is evaluated as indifferent with respect to valence and does not provide a perceiver with any insight as to what an expresser is feeling, thinking, or planning on doing [97,111,112]. Nonetheless, when employed side by side with a smile in a concept, it is reasonable for the neutral expression to act as a signifier of non-yielding or cruising intention, as it is doubtful that a pedestrian would try crossing the street without having received some confirmation from the driver first [101], and will most likely adopt a look-before-you-leap strategy instead, based on ambiguity aversion [113]. Even so, communicating safety-critical information via an ambiguous expression is inadvisable, as misjudging the intention of an oncoming vehicle may have severe consequences [114,115]. In fact, Rouchitsas and Alm [101] found the neutral expression to not communicate vehicle cruising intention effectively, as evidenced by an accuracy of only 82.3% for interpretation of vehicle intention, which is lower than the 85% criterion for effectiveness proposed by Kaß et al. [116]. Accordingly, more unambiguous expressions for communicating vehicle non-yielding and cruising intention will have to be explored as alternatives if anthropomorphic eHMI concepts are to be utilized safely in the soon-to-be mixed-autonomy traffic.
Regarding the use of emotional expressions to communicate non-yielding intention, Chang [62] evaluated an interface concept where a smile signifies yielding intention and a sad expression signifies non-yielding intention. However, it is highly unlikely that a driver would be saddened by a pedestrian [101], given that sadness is typically felt as a result of loss, while a sad facial expression communicates a need for help or comfort [97]. Accordingly, it comes as no surprise that said concept was not found to be effective in helping pedestrians decide appropriately whether to cross or not, as evidenced by an accuracy of only 75% for interpretation of vehicle intention. A sad expression is therefore not a wise design choice for signifying non-yielding intention.
On the contrary, in real-life traffic situations, it makes great sense that the reckless or inconsiderate behavior of a pedestrian would make a driver angry. In fact, previous research has shown that drivers experience anger very often when driving around the city [117]. This is to be expected, given the high traffic density that characterizes urban roads and the overabundance of situations where a driver may be delayed or even subjected to harm [99]. Interestingly, braking for a jaywalker, i.e., a pedestrian that attempts to cross the street at an illegal location, has been shown to elicit anger in drivers [118,119]. If found in this situation, a driver could produce an angry expression to communicate non-yielding intention to the potential jaywalker [101]. Having considered the circumstances, the pedestrian would associate competitiveness/aggressiveness with not relinquishing right-of-way and would, thus, not attempt to cross the street first. However, a driver could also decide to shake their head at the jaywalker to let them know they will not be yielded to [120]. In that case, the pedestrian would associate disagreement/unwillingness to cooperate with not relinquishing right-of-way and would, thus, not attempt to cross the street first. Accordingly, Rouchitsas and Alm [101] evaluated the effectiveness of the angry expression and the head shake in signifying vehicle non-yielding intention. Both expressions were shown to help pedestrians infer vehicle non-yielding intention highly effectively in the context of a self-paced crossing intention task, as evidenced by an accuracy of 96.2% for the angry expression and 99.7% for the head shake.

1.6. Aim and Approach

Having said that, in real-world traffic, deciding appropriately is not sufficient; pedestrians should decide whether to cross or not in a timely manner, so that both main ambitions of the traffic system, namely traffic safety and traffic flow, are satisfied [121]. Hence, the present study set out to evaluate the efficiency of emotional and conversational expressions in communicating vehicle intention, i.e., their effectiveness with respect to temporal resources expended [122]. Participants performed a speeded crossing intention task where latency and accuracy for crossing intention responses were measured. Latency in the smile and the nod conditions is indicative of efficiency in communicating yielding intention, whereas latency in the angry expression and the head shake conditions is indicative of efficiency in communicating non-yielding intention [123].
The employed task was performed in the laboratory, a situation that ensures maximum participant safety, and allows researchers to control confounding and extraneous factors more effectively and manipulate factors of interest more efficiently compared to field experiments employing physical prototypes of eHMIs. These qualities are invaluable at the early stages of the development of a new technology. Furthermore, to ensure that other traffic and expectations about priority would not interfere with their responses, our participants were presented with an oversimplified traffic scenario that involved only one oncoming vehicle and one pedestrian intending to cross the street at a random uncontrolled location [116]. Similarly, the stimuli were presented out of context, so that responses would not be compromised by visual elements of secondary importance or cues from vehicle kinematics [58]. Additionally, we employed a male and a female VHC as stimuli to control for possible gender effects on participant performance, as previous research has shown male and female gender cues to affect the identification of facial expressions differentially [124,125], while research on gender fluidity and androgyny in the context of human–VHC interactions is still lacking [126]. Finally, we followed the common practice in the field and framed participant behavior in binary terms, namely “cross/not cross”, because, regardless of traffic situation complexity and decision formula, crossing the street is a binary decision [127].

2. Method

2.1. Design

We employed a 2 × 2 × 2 within-subject design, where the factors “VHC gender” (male; female), “vehicle intention” (yielding; non-yielding), and “expression type” (emotional; conversational) served as the independent variables. When crossed, the three factors yielded a total of 8 experimental conditions.
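For illustration only, the full crossing of the three factors can be enumerated programmatically; the minimal Python sketch below (the factor and level labels are ours, not taken from the experiment software) makes the eight cells of the design explicit.

```python
from itertools import product

# The three within-subject factors and their levels, as described above.
factors = {
    "vhc_gender": ["male", "female"],
    "vehicle_intention": ["yielding", "non-yielding"],
    "expression_type": ["emotional", "conversational"],
}

# Fully crossing the factors yields the 2 x 2 x 2 = 8 experimental conditions.
conditions = [dict(zip(factors, levels)) for levels in product(*factors.values())]

for i, condition in enumerate(conditions, start=1):
    print(i, condition)

assert len(conditions) == 8
```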

2.2. Stimuli

A total of 10 animated sequences were used as stimuli in the experiment (see Appendix A). Eight 3D animated sequences were developed in Poser Pro 11 (Bondware Inc., Murfreesboro, TN, USA). The development and validation of the VHC stimuli used in the experiment are described in full detail in Rouchitsas and Alm [101]. An additional two sequences were developed in Poser Pro 11, in which an initially grey circle, 28.5 cm in diameter, was shown for 1500 ms and then turned either green or red for 1134 ms, before turning grey again for 500 ms. By resembling the operation of a typical traffic light, these sequences served to regularly remind the participants that the facial expressions were to be processed in the context of interaction in traffic.

2.3. Apparatus

We used a Lenovo ThinkPad P50s (Intel® Core™ i7-6500U CPU @ 2.5 GHz; 8 GB RAM; Intel HD Graphics 520; Windows 10 Education) to run the task, and a 24” Fujitsu B24W-7 LED monitor to present the stimuli. We also used a Chronos response box (Psychology Software Tools Inc.) to collect participant responses. E-Prime 3.0 (Psychology Software Tools Inc.) controlled the presentation of stimuli and the collection of data.

2.4. Procedure

The introductory and familiarization phases of the procedure were identical to the ones described in Rouchitsas and Alm [101]. After assuming a standing position—to facilitate pedestrian perspective adoption—the participants performed a crossing intention task (speeded, two-alternative forced choice). (Relevant research has found that assuming a standing position does not affect performance in a primary cognitive task [128,129]).
The traffic scenario they were presented with was the following: “Imagine you are a pedestrian about to cross a one-way street at a random uncontrolled location when you see an autonomous car approaching from the right. The autonomous car is equipped with our communication system. A smile, a nod (down-up head movement), and the green circle, mean that the autonomous car will stop to let you cross first. An angry expression, a head shake (left-right head movement), and the red circle, mean that the autonomous car will not stop to let you cross first.” (It has been shown that green is suitable for indicating that a pedestrian may cross in front of an AV, whereas red is suitable for indicating that a pedestrian may not cross in front of an AV [130]).
In each trial, a fixation point of variable duration (1000–1400 ms) appeared at the location that coincided with the center of the VHC’s interpupillary line, after which participants were presented with one of the sequences (Figure 1). Their task was to respond as quickly and accurately as possible whether they might cross or not, by pressing the correct button within the duration of the sequence. The two rightmost buttons of the Chronos response box were used for data collection, and their assignment was counterbalanced between participants. A blank white screen was presented for 1200 ms before the start of the next trial. Each sequence was presented 24 times, yielding a total of 240 experimental trials. The participants completed a round of 20 practice trials before the actual task. Feedback on performance was provided during practice, and a minimum accuracy of 0.9 was required for progressing to the actual task. The procedure lasted about 45 min.
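The task itself was implemented in E-Prime 3.0; the sketch below is not the actual experiment script but a simplified Python illustration, under assumed names and structure, of how the trial list described above could be generated: 10 sequences × 24 repetitions = 240 randomized trials, a jittered fixation interval of 1000–1400 ms, a 1200 ms inter-trial blank, and per-participant counterbalancing of the two response buttons.

```python
import random

# The ten animated sequences (see Appendix A) and their correct responses
# (sequence names are ours, chosen for readability).
SEQUENCES = {
    "male_smile": "cross", "female_smile": "cross",
    "male_nod": "cross", "female_nod": "cross",
    "male_angry": "not_cross", "female_angry": "not_cross",
    "male_headshake": "not_cross", "female_headshake": "not_cross",
    "green_circle": "cross", "red_circle": "not_cross",
}

REPETITIONS = 24   # 10 sequences x 24 repetitions = 240 experimental trials
BLANK_MS = 1200    # blank white screen between trials


def build_trial_list(participant_id: int, seed: int = 0) -> list[dict]:
    """Return a randomized list of 240 trials for one participant."""
    rng = random.Random(seed + participant_id)
    # Counterbalance the button-to-response mapping between participants.
    cross_button = "right_outer" if participant_id % 2 == 0 else "right_inner"
    trials = []
    for name, correct in SEQUENCES.items():
        for _ in range(REPETITIONS):
            trials.append({
                "sequence": name,
                "correct_response": correct,
                "cross_button": cross_button,
                "fixation_ms": rng.randint(1000, 1400),  # jittered fixation
                "blank_ms": BLANK_MS,
            })
    rng.shuffle(trials)
    return trials


trials = build_trial_list(participant_id=1)
assert len(trials) == 240
```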

2.5. Dependent Variables

We measured latency and accuracy (total correct trials/total trials) for crossing intention responses. “Cross” was programmed as the correct response for the green circle, the smile, and the nod, and “not cross” as the correct response for the red circle, the angry expression, and the head shake, in both cases irrespective of VHC gender condition.
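As a minimal illustration of this scoring scheme (the mapping keys and function names are ours), accuracy is simply the proportion of trials on which the recorded response matched the programmed correct response:

```python
# Hypothetical stimulus-to-correct-response mapping, irrespective of VHC gender.
RESPONSE_KEY = {
    "green_circle": "cross", "smile": "cross", "nod": "cross",
    "red_circle": "not_cross", "angry": "not_cross", "head_shake": "not_cross",
}

def accuracy(stimuli: list[str], responses: list[str]) -> float:
    """Accuracy = total correct trials / total trials."""
    correct = sum(RESPONSE_KEY[s] == r for s, r in zip(stimuli, responses))
    return correct / len(stimuli)

# Example: three of four responses match the programmed correct response -> 0.75.
print(accuracy(["smile", "nod", "red_circle", "angry"],
               ["cross", "cross", "not_cross", "cross"]))
```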

2.6. Participants

The Umeå Research Ethics Committee approved the study (Ref: 2020-00642). We used a convenience sample of 45 participants (30 male, 15 female; mean age = 25.8 years, SD = 6.5 years). All participants volunteered to participate in the study and provided written informed consent. All participants reported normal or corrected-to-normal vision. A total of 24 participants were Swedish nationals, 3 German, 3 Greek, 3 Italian, 2 Russian, 2 Mexican, 1 French, 1 Macedonian, 1 Colombian, 1 Brazilian, 1 Iranian, 1 Pakistani, 1 Indian, and 1 Nepalese. A total of 44 participants assumed the role of a pedestrian on a daily basis and 1 on a weekly basis.

2.7. Data Analysis

Overall mean accuracy in the task was 0.969. We excluded the data from 2 participants from further analyses due to low accuracy (>2 SDs below the overall mean). Only RTs of correct trials were analyzed [131]. Correct RTs that were generated in the first 1500 ms of each sequence, i.e., before the VHC had begun producing a facial expression or the grey circle had changed color, were discarded as premature. We excluded the data from 1 participant from further analyses due to high mean latency (>2 SDs above the overall mean). Outliers, defined as correct RTs that were 2 SDs above or below each individual mean per condition, were excluded from further analyses [132,133]. We excluded the data from 1 participant from further analyses due to a high percentage (>21%) of excluded RTs in a single condition. To test the effects of VHC gender, vehicle intention, and expression type on the dependent variables, a 2 (male; female) × 2 (yielding; non-yielding) × 2 (emotional; conversational) repeated-measures analysis of variance (ANOVA) was conducted [134]. Excel (Microsoft) was used to process the data and SPSS Statistics 28 (IBM) was used to analyze the data.
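For readers who wish to reproduce a comparable pipeline outside Excel and SPSS, the sketch below shows one possible implementation in Python with pandas and statsmodels; the file name, column names, and data layout are assumptions, and the participant-level exclusions described above are omitted for brevity.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format trial data; assumed columns:
# participant, vhc_gender, intention, expression, rt_ms, correct.
df = pd.read_csv("crossing_intention_trials.csv")

# Keep correct trials only.
correct = df[df["correct"] == 1].copy()

# Discard premature responses, i.e., those registered within the first
# 1500 ms of a sequence, before the expression or colour change had begun.
correct = correct[correct["rt_ms"] >= 1500]

# Trim outliers: correct RTs more than 2 SDs above or below each
# participant's mean in each condition.
cond = ["participant", "vhc_gender", "intention", "expression"]
grp = correct.groupby(cond)["rt_ms"]
keep = (correct["rt_ms"] - grp.transform("mean")).abs() <= 2 * grp.transform("std")
trimmed = correct[keep]

# 2 x 2 x 2 repeated-measures ANOVA on the per-condition mean RTs.
anova = AnovaRM(
    trimmed,
    depvar="rt_ms",
    subject="participant",
    within=["vhc_gender", "intention", "expression"],
    aggregate_func="mean",
).fit()
print(anova)
```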

3. Results

3.1. Latency

In the yielding intention condition, mean latency (standard error) was 595 ms (11.4) for the emotional expression (smile) and 618.1 ms (13.7) for the conversational expression (nod) (Figure 2). In the non-yielding intention condition, mean latency (standard error) was 599.7 ms (11.3) for the emotional expression (angry expression) and 632.2 ms (13.2) for the conversational expression (head shake) (Figure 2). Mean latency (standard error) was 605.7 ms (11.7) for the male VHC and 616.9 ms (10.9) for the female VHC. A 2 × 2 × 2 repeated-measures ANOVA revealed a significant main effect of expression type, F(1, 40) = 11.985, p = 0.001, ηp2 = 0.231, a significant main effect of VHC gender, F(1, 40) = 16.008, p < 0.001, ηp2 = 0.286, and a significant interaction between VHC gender and intention, F(1, 40) = 4.759, p = 0.035, ηp2 = 0.106. To unpack the interaction, we performed simple effects analyses. Results showed that the non-yielding intention was responded to significantly slower in the female VHC compared to the male VHC condition (M = 17.7, SE = 3.7), F(1, 40) = 22.594, p < 0.001, ηp2 = 0.361, which was not the case for the yielding intention (M = 4.7, SE = 4.4), F(1, 40) = 1.115, p = 0.297, ηp2 = 0.027 (Figure 3).
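As a brief aside on the reported effect sizes, partial eta squared can be recovered from each F statistic and its degrees of freedom as ηp2 = (F × df1)/(F × df1 + df2). The short Python check below, added here purely for illustration, reproduces the latency effect sizes reported in this subsection.

```python
def partial_eta_squared(f: float, df1: int, df2: int) -> float:
    """Partial eta squared from an F statistic and its degrees of freedom."""
    return (f * df1) / (f * df1 + df2)

# Latency effects reported above, all tested with df = (1, 40).
print(round(partial_eta_squared(11.985, 1, 40), 3))  # expression type             -> 0.231
print(round(partial_eta_squared(16.008, 1, 40), 3))  # VHC gender                  -> 0.286
print(round(partial_eta_squared(4.759, 1, 40), 3))   # VHC gender x intention      -> 0.106
print(round(partial_eta_squared(22.594, 1, 40), 3))  # simple effect, non-yielding -> 0.361
```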

3.2. Accuracy

In the yielding intention condition, mean accuracy (standard error) was 0.983 (0.003) for the emotional expression (smile) and 0.970 (0.004) for the conversational expression (nod). In the non-yielding intention condition, mean accuracy (standard error) was 0.974 (0.004) for the emotional expression (angry expression) and 0.983 (0.004) for the conversational expression (head shake). Mean accuracy (standard error) was 0.975 (0.004) for the male VHC and 0.979 (0.004) for the female VHC. A 2 × 2 × 2 repeated-measures ANOVA revealed a significant interaction between intention and expression type, F(1, 40) = 7.174, p = 0.011, ηp2 = 0.152. To unpack the interaction, we performed simple effects analyses. Results showed that, in the yielding intention condition, the emotional expression (smile) was responded to significantly more accurately than the conversational expression (nod) (M = 0.012, SE = 0.005), F(1, 40) = 6.988, p = 0.012, ηp2 = 0.149, whereas, in the non-yielding intention condition, the emotional expression (angry expression) was responded to less accurately compared to the conversational expression (head shake) (M = 0.009, SE = 0.006), F(1, 40) = 2.225, p = 0.144, ηp2 = 0.053, albeit non-significantly.

4. Discussion

4.1. Findings

In the present study, we evaluated emotional and conversational expressions with respect to their efficiency in signifying vehicle intention. Emotional expressions were shown to be more efficient than conversational expressions in helping pedestrians decide appropriately whether to cross or not, as evidenced by a significantly lower latency in the emotional expression compared to the conversational expression condition. More specifically, with respect to communicating yielding intention, the smile was shown to be more efficient than the nod. However, Rouchitsas and Alm [101] found that the smile does not communicate yielding intention effectively, as evidenced by an accuracy of only 76% for interpretation of vehicle intention, which is considerably lower than the 85% criterion for effectiveness proposed by Kaß et al. [116]. On the other hand, the nod was shown to be highly effective in communicating yielding intention, as evidenced by an accuracy of 96.3% for interpretation of vehicle intention. As far as communicating non-yielding intention is concerned, the angry expression was shown to be more efficient than the head shake. Although Rouchitsas and Alm [101] found both angry expression and head shake to be highly effective in communicating vehicle non-yielding intention, we report here a clear advantage for the angry expression over the head shake with respect to their efficiency in doing so.
We also found that non-yielding intention was responded to significantly faster in the male VHC compared to the female VHC condition, whereas VHC gender did not affect latency in the yielding intention condition differentially. This result is consistent with relevant research that has found anger to be more readily identified on male compared to female faces [125], and men to be perceived as more dominant and, thus, likelier to show anger [124]. Considering that both the angry expression and the head shake denote dominance and opposition by signifying non-yielding intention, a perceived incongruence between propensity and VHC gender could explain the higher latency in the female VHC condition [135].

4.2. Implications

Future implementations of anthropomorphic eHMI concepts that aim to communicate vehicle intention via facial expression would benefit from employing the angry expression as a signifier of non-yielding intention, both on grounds of superiority in effectiveness compared to the industry standard anthropomorphic signifier of not relinquishing right-of-way, namely the neutral expression, and of superiority in efficiency compared to another highly effective facial expression, namely the head shake. Moreover, designers of anthropomorphic eHMI concepts should proceed with caution when considering employing the smile to communicate yielding intention, as they may be improving traffic flow at the expense of pedestrian safety, and instead minimize ambiguousness in AV–pedestrian interactions by exploring the nod as a worthy alternative.
While Rouchitsas and Alm [101] found no effect of VHC gender on accuracy—the dependent variable that served as an indicator of effectiveness—the effect on latency we report here—the dependent variable that served as an indicator of efficiency—implies that design choices regarding VHC gender could affect traffic flow. More specifically, it appears that communication of vehicle intention via facial expression is at its most efficient, i.e., at its fastest, when realized via a male rather than a female VHC, suggesting that incorporating male gender cues into the design of anthropomorphic eHMI concepts would maximize their ability to help pedestrians decide appropriately in a timely manner and, hence, improve the flow of mixed-autonomy traffic.
Relevant research has shown that gendering social robots tends to elicit stereotypical user responses about personality traits and task appropriateness [136]. For instance, a female social robot is expected to be more compassionate than its male counterpart, whereas a male social robot is expected to be more competent than its female counterpart. Similarly, in people’s minds, heavy lifting is typically reserved for a male social robot and tending to the elderly for its female counterpart. On the other hand, however, a familiar and relatable physical appearance tends to facilitate both human–robot and human–VHC interactions and lays the foundation for the public to accept and eventually adopt these new technologies [91,137,138]. Furthermore, recent work has provided evidence for a diminishing gap in gender stereotypes regarding the personality traits and social roles of men and women [139], which, interestingly, has been shown to apply to gendered VHCs as well [126]. Therefore, given that research on gender fluidity and androgyny in the context of human–VHC interactions is still lacking, and that the traffic system is a relatively balanced environment with respect to gender representation, we deem it beneficial to assign gender to anthropomorphic eHMI concepts, especially during the initial stages of AV deployment.
Although concepts using light patterns or text do help pedestrians decide appropriately, they call for either their design rationale to be explained or their content to be adapted culturally. On top of that, introducing additional stimuli into a traffic environment that already overwhelms the senses of road users will most likely create more confusion and frustration, and affect the public acceptance of AVs negatively [30]. Anthropomorphic concepts, on the other hand, bypass explanation and adaptation by taking advantage of previous experience of social and traffic interaction. For instance, as far as pedestrians are concerned, a VHC displayed on the windshield of an AV directly references their everyday experience of resorting to informal communication in driver–pedestrian interactions to ensure traffic safety and improve traffic flow. What is more, considering that the VHC will occupy the space typically reserved for the driver of a conventional vehicle, no extra mental workload will be placed on road users.
Nevertheless, as previous research has shown that anthropomorphism affects evaluations of trustworthiness positively [140], it is probable that the VHC will lead pedestrians to overtrust the AV, i.e., to underestimate the likelihood and consequences of a malfunction, which may jeopardize pedestrian safety if the AV does not grant passage after all. Additionally, there is a high chance that the VHC will arouse considerable interest or curiosity in bystanders and lead to them being distracted while trying to navigate traffic safely and efficiently [141]. Designers of eHMIs should take such possible negative effects on trust and attention into consideration to maximize their potential for successful integration into the traffic system.

4.3. Limitations

We must acknowledge a number of limitations that could limit the generalizability of our findings [142]. Firstly, the crossing intention task was performed in the laboratory and the stimuli were presented on a monitor. This approach is characterized by poor ecological validity, on account of being minimally immersive and highly artificial, and by susceptibility to data contamination due to the absence of safety concerns. Therefore, it would be advisable to have participants also perform the task in a virtual reality (VR) environment—known for combining high experimental control with a more realistic experience—to increase the ecological validity of the findings [143]. Secondly, as the stimuli were presented out of context, no comparison could be made to a condition where an approaching AV without an interface communicates its intention solely via vehicle kinematics. For this reason, the employed VHCs should also be presented as part of more detailed virtual scenes, to evaluate their efficiency more comprehensively. Lastly, due to time constraints, we did not conduct explicitation interviews as part of the procedure [144,145], which may have deprived us of the opportunity to access valuable information regarding the mental calculations that underpinned responses. We plan to include this technique in future work, so that participants can provide detailed accounts of their subjective experience when interacting with the VHCs.

4.4. Future Work

Previous work has shown that when children interact with an AV, they tend to make street-crossing decisions based entirely on information provided by the interface, without taking into consideration any information provided by vehicle kinematics [146]. It is highly probable that an anthropomorphic eHMI will startle or intrigue children and, thus, make moving safely and efficiently through traffic more challenging for them than it already is. Accordingly, we plan to evaluate the efficiency of emotional and conversational expressions in communicating vehicle intention with a relevant sample, to determine the degree to which children may be placed in an unfavorable position during interactions with an anthropomorphic eHMI concept due to their inexperience, playfulness, and carelessness [147,148].
Previous work has also found communication and social interaction skills to be characteristically impaired in autistic individuals [77,149,150]. Considering that successful interaction with an anthropomorphic eHMI concept presupposes that the pedestrian’s socio-perceptual and socio-cognitive abilities are intact, we also plan to evaluate emotional and conversational expressions with respect to their efficiency with a relevant sample, to determine the degree to which autistic pedestrians may be placed in an unfavorable position compared to neurotypical pedestrians, especially given that pedestrians with neurodevelopmental disorders are already likelier to be injured in traffic than the general population [151].

5. Conclusions

In the context of AV-pedestrian interactions, emotional facial expressions were shown to communicate vehicle intention more efficiently than conversational facial expressions. Moreover, a male VHC was shown to communicate vehicle non-yielding intention more efficiently than its female counterpart. Therefore, designers of anthropomorphic eHMI concepts that aim to communicate vehicle intention via facial expression would maximize their potential for improving traffic flow if they were to employ emotional expressions, ideally produced by a male VHC.

Author Contributions

A.R. contributed to conceptualization, experimental design, stimuli creation, experiment building, data collection, data processing, data analysis, manuscript preparation, and manuscript revision. H.A. contributed to manuscript revision and supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Umeå Research Ethics Committee (Ref: 2020-00642).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The authors do not have permission to share data.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Stills Taken from the Animated Sequences

1. Rex looking directly at the participant while smiling.
2. Roxie looking directly at the participant while smiling.
3. Rex looking directly at the participant while nodding.
4. Roxie looking directly at the participant while nodding.
5. Rex looking directly at the participant while making an angry expression.
6. Roxie looking directly at the participant while making an angry expression.
7. Rex looking directly at the participant while shaking its head.
8. Roxie looking directly at the participant while shaking its head.
9. Green circle.
10. Red circle.

References

  1. Rasouli, A.; Kotseruba, I.; Tsotsos, J.K. Understanding pedestrian behaviour in complex traffic scenes. IEEE Trans. Intell. Veh. 2017, 3, 61–70. [Google Scholar] [CrossRef]
  2. Markkula, G.; Madigan, R.; Nathanael, D.; Portouli, E.; Lee, Y.M.; Dietrich, A.; Billington, J.; Merat, N. Defining interactions: A conceptual framework for understanding interactive behaviour in human and automated road traffic. Theor. Issues Ergon. Sci. 2020, 21, 728–752. [Google Scholar] [CrossRef] [Green Version]
  3. Färber, B. Communication and communication problems between autonomous vehicles and human drivers. In Autonomous Driving; Springer: Berlin/Heidelberg, Germany, 2016; pp. 125–144. [Google Scholar]
  4. Sucha, M.; Dostal, D.; Risser, R. Pedestrian-driver communication and decision strategies at marked crossings. Accid. Anal. Prev. 2017, 102, 41–50. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Llorca, D.F. From driving automation systems to autonomous vehicles: Clarifying the terminology. arXiv 2021, arXiv:2103.10844. [Google Scholar]
  6. SAE International. Taxonomy and Definitions of Terms Related to Driving Automation Systems for on-Road Motor Vehicles. 2021. Available online: www.sae.org (accessed on 26 January 2022).
  7. Dey, D.; Terken, J. Pedestrian interaction with vehicles: Roles of explicit and implicit communication. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Oldenburg, Germany, 24–27 September 2017; ACM: New York, NY, USA; pp. 109–113. [Google Scholar]
  8. Moore, D.; Currano, R.; Strack, G.E.; Sirkin, D. The case for implicit external human-machine interfaces for autonomous vehicles. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Utrecht, The Netherlands, 21–25 September 2019; pp. 295–307. [Google Scholar]
  9. Lee, Y.M.; Madigan, R.; Giles, O.; Garach-Morcillo, L.; Markkula, G.; Fox, C.; Camara, F.; Merat, N. Road users rarely use explicit communication when interacting in today’s traffic: Implications for automated vehicles. Cogn. Technol. Work. 2020, 23, 367–380. [Google Scholar] [CrossRef]
  10. Guéguen, N.; Eyssartier, C.; Meineri, S. A pedestrian’s smile and drivers’ behaviour: When a smile increases careful driving. J. Saf. Res. 2016, 56, 83–88. [Google Scholar] [CrossRef]
  11. Guéguen, N.; Meineri, S.; Eyssartier, C. A pedestrian’s stare and drivers’ stopping behaviour: A field experiment at the pedestrian crossing. Saf. Sci. 2015, 75, 87–89. [Google Scholar] [CrossRef]
  12. Ren, Z.; Jiang, X.; Wang, W. Analysis of the influence of pedestrians’ eye contact on drivers’ comfort boundary during the crossing conflict. Procedia Eng. 2016, 137, 399–406. [Google Scholar] [CrossRef] [Green Version]
  13. Nathanael, D.; Portouli, E.; Papakostopoulos, V.; Gkikas, K.; Amditis, A. Naturalistic Observation of Interactions Between Car Drivers and Pedestrians in High Density Urban Settings. In Congress of the International Ergonomics Association; Springer: Cham, Switzerland, 2018; pp. 389–397. [Google Scholar]
  14. Dey, D.; Walker, F.; Martens, M.; Terken, J. Gaze patterns in pedestrian interaction with vehicles: Towards effective design of external human-machine interfaces for automated vehicles. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Utrecht, The Netherlands, 21–25 September 2019; ACM: New York, NY, USA; pp. 369–378. [Google Scholar]
  15. Eisma, Y.B.; Van Bergen, S.; Ter Brake, S.M.; Hensen, M.T.T.; Tempelaar, W.J.; De Winter, J.C.F. External Human–Machine Interfaces: The Effect of Display Location on Crossing Intentions and Eye Movements. Information 2020, 11, 13. [Google Scholar] [CrossRef] [Green Version]
  16. Uttley, J.; Lee, Y.M.; Madigan, R.; Merat, N. Road user interactions in a shared space setting: Priority and communication in a UK car park. Transp. Res. Part F Traffic Psychol. Behav. 2020, 72, 32–46. [Google Scholar] [CrossRef]
  17. De Winter, J.; Bazilinskyy, P.; Wesdorp, D.; de Vlam, V.; Hopmans, B.; Visscher, J.; Dodou, D. How do pedestrians distribute their visual attention when walking through a parking garage? An eye-tracking study. Ergonomics 2021, 64, 793–805. [Google Scholar] [CrossRef] [PubMed]
  18. Kong, X.; Das, S.; Zhang, Y.; Xiao, X. Lessons learned from pedestrian-driver communication and yielding patterns. Transp. Res. Part F Traffic Psychol. Behav. 2021, 79, 35–48. [Google Scholar] [CrossRef]
  19. Onkhar, V.; Bazilinskyy, P.; Dodou, D.; De Winter, J.C.F. The effect of drivers’ eye contact on pedestrians’ perceived safety. Transp. Res. Part F Traffic Psychol. Behav. 2022, 84, 194–210. [Google Scholar] [CrossRef]
  20. Lobjois, R.; Cavallo, V. Age-related differences in street-crossing decisions: The effects of vehicle speed and time constraints on gap selection in an estimation task. Accid. Anal. Prev. 2007, 39, 934–943. [Google Scholar] [CrossRef] [PubMed]
  21. Sun, R.; Zhuang, X.; Wu, C.; Zhao, G.; Zhang, K. The estimation of vehicle speed and stopping distance by pedestrians crossing streets in a naturalistic traffic environment. Transp. Res. Part F Traffic Psychol. Behav. 2015, 30, 97–106. [Google Scholar] [CrossRef]
  22. Papić, Z.; Jović, A.; Simeunović, M.; Saulić, N.; Lazarević, M. Underestimation tendencies of vehicle speed by pedestrians when crossing unmarked roadway. Accid. Anal. Prev. 2020, 143, 105586. [Google Scholar] [CrossRef]
  23. ISO/TR 23049:2018; Road Vehicles: Ergonomic Aspects of External Visual Communication from Automated Vehicles to Other Road Users. BSI: London, UK, 2018.
  24. Merat, N.; Louw, T.; Madigan, R.; Wilbrink, M.; Schieben, A. What externally presented information do VRUs require when interacting with fully Automated Road Transport Systems in shared space? Accid. Anal. Prev. 2018, 118, 244–252. [Google Scholar] [CrossRef]
  25. Rasouli, A.; Tsotsos, J.K. Autonomous vehicles that interact with pedestrians: A survey of theory and practice. IEEE Trans. Intell. Transp. Syst. 2019, 21, 900–918. [Google Scholar] [CrossRef] [Green Version]
  26. Rouchitsas, A.; Alm, H. External human–machine interfaces for autonomous vehicle-to-pedestrian communication: A review of empirical work. Front. Psychol. 2019, 10, 2757. [Google Scholar] [CrossRef]
  27. Schieben, A.; Wilbrink, M.; Kettwich, C.; Madigan, R.; Louw, T.; Merat, N. Designing the interaction of automated vehicles with other traffic participants: Design considerations based on human needs and expectations. Cogn. Technol. Work. 2019, 21, 69–85. [Google Scholar] [CrossRef] [Green Version]
  28. Ezzati Amini, R.; Katrakazas, C.; Riener, A.; Antoniou, C. Interaction of automated driving systems with pedestrians: Challenges, current solutions, and recommendations for eHMIs. Transp. Rev. 2021, 41, 788–813. [Google Scholar] [CrossRef]
  29. Carmona, J.; Guindel, C.; Garcia, F.; de la Escalera, A. eHMI: Review and Guidelines for Deployment on Autonomous Vehicles. Sensors 2021, 21, 2912. [Google Scholar] [CrossRef] [PubMed]
  30. Tabone, W.; de Winter, J.; Ackermann, C.; Bärgman, J.; Baumann, M.; Deb, S.; Emmenegger, C.; Habibovic, A.; Hagenzieker, M.; Stanton, N.A. Vulnerable road users and the coming wave of automated vehicles: Expert perspectives. Transp. Res. Interdiscip. Perspect. 2021, 9, 100293. [Google Scholar] [CrossRef]
  31. Böckle, M.P.; Brenden, A.P.; Klingegård, M.; Habibovic, A.; Bout, M. SAV2P: Exploring the Impact of an Interface for Shared Automated Vehicles on Pedestrians’ Experience. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications Adjunct, Oldenburg, Germany, 24–27 September 2017; ACM: New York, NY, USA, 2017; pp. 136–140. [Google Scholar]
  32. Chang, C.M.; Toda, K.; Sakamoto, D.; Igarashi, T. Eyes on a Car: An Interface Design for Communication between an Autonomous Car and a Pedestrian. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Oldenburg, Germany, 24–27 September 2017; ACM: New York, NY, USA, 2017; pp. 65–73. [Google Scholar]
  33. Costa, G. Designing Framework for Human-Autonomous Vehicle Interaction. Master’s Thesis, Graduate School of Media Design, Keio University, Yokohama, Japan, 2017. [Google Scholar]
  34. Deb, S.; Strawderman, L.J.; Carruth, D.W. Investigating pedestrian suggestions for external features on fully autonomous vehicles: A virtual reality experiment. Transp. Res. Part F Traffic Psychol. Behav. 2018, 59, 135–149. [Google Scholar] [CrossRef]
  35. Habibovic, A. Communicating intent of automated vehicles to pedestrians. Front. Psychol. 2018, 9, 1336. [Google Scholar] [CrossRef] [PubMed]
  36. Hudson, C.R.; Deb, S.; Carruth, D.W.; McGinley, J.; Frey, D. Pedestrian Perception of Autonomous Vehicles with External Interacting Features. In Advances in Human Factors and Systems Interaction, Proceedings of the AHFE 2018 International Conference on Human Factors and Systems Interaction, Loews Sapphire Falls Resort at Universal Studios, Orlando, FL, USA, 21–25 July 2018; Springer: Cham, Switzerland, 2018; pp. 33–39. [Google Scholar]
  37. Mahadevan, K.; Somanath, S.; Sharlin, E. Communicating Awareness and Intent in Autonomous Vehicle-Pedestrian Interaction. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; ACM: New York, NY, USA; p. 429. [Google Scholar]
  38. Othersen, I.; Conti-Kufner, A.S.; Dietrich, A.; Maruhn, P.; Bengler, K. Designing for automated vehicle and pedestrian communication: Perspectives on eHMIs from older and younger persons. Proc. Hum. Factors Ergon. Soc. Eur. 2018, 4959, 135–148. [Google Scholar]
  39. Petzoldt, T.; Schleinitz, K.; Banse, R. Potential safety effects of a frontal brake light for motor vehicles. IET Intell. Transp. Syst. 2018, 12, 449–453. [Google Scholar] [CrossRef]
  40. Song, Y.E.; Lehsing, C.; Fuest, T.; Bengler, K. External HMIs and their effect on the interaction between pedestrians and automated vehicles. In Intelligent Human Systems Integration, Proceedings of the 1st International Conference on Intelligent Human Systems Integration (IHSI 2018): Integrating People and Intelligent Systems, Dubai, United Arab Emirates, 7–9 January 2018; Springer: Cham, Switzerland, 2018; pp. 13–18. [Google Scholar]
  41. De Clercq, K.; Dietrich, A.; Núñez Velasco, J.P.; De Winter, J.; Happee, R. External human-machine interfaces on automated vehicles: Effects on pedestrian crossing decisions. Hum. Factors 2019, 61, 1353–1370. [Google Scholar] [CrossRef] [Green Version]
  42. Holländer, K.; Colley, A.; Mai, C.; Häkkilä, J.; Alt, F.; Pfleging, B. Investigating the influence of external car displays on pedestrians’ crossing behavior in virtual reality. In Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services, Taipei, Taiwan, 1–4 October 2019; pp. 1–11. [Google Scholar]
  43. Stadler, S.; Cornet, H.; Theoto, T.N.; Frenkler, F. A Tool, not a Toy: Using Virtual Reality to Evaluate the Communication Between Autonomous Vehicles and Pedestrians. In Augmented Reality and Virtual Reality; Springer: Cham, Switzerland, 2019; pp. 203–216. [Google Scholar]
  44. Ackermans, S.C.A.; Dey, D.D.; Ruijten, P.A.; Cuijpers, R.H.; Pfleging, B. The effects of explicit intention communication, conspicuous sensors, and pedestrian attitude in interactions with automated vehicles. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; Association for Computing Machinery, Inc.: New York, NY, USA, 2020; p. 70. [Google Scholar]
  45. Faas, S.M.; Mathis, L.A.; Baumann, M. External HMI for self-driving vehicles: Which information shall be displayed? Transp. Res. Part F Traffic Psychol. Behav. 2020, 68, 171–186. [Google Scholar] [CrossRef]
  46. Singer, T.; Kobbert, J.; Zandi, B.; Khanh, T.Q. Displaying the driving state of automated vehicles to other road users: An international, virtual reality-based study as a first step for the harmonized regulations of novel signaling devices. IEEE Trans. Intell. Transp. Syst. 2020, 23, 2904–2918. [Google Scholar] [CrossRef]
  47. Lee, Y.M.; Madigan, R.; Uzondu, C.; Garcia, J.; Romano, R.; Markkula, G.; Merat, N. Learning to interpret novel eHMI: The effect of vehicle kinematics and eHMI familiarity on pedestrian’ crossing behavior. J. Saf. Res. 2022, 80, 270–280. [Google Scholar] [CrossRef] [PubMed]
  48. Wilbrink, M.; Lau, M.; Illgner, J.; Schieben, A.; Oehl, M. Impact of External Human–Machine Interface Communication Strategies of Automated Vehicles on Pedestrians’ Crossing Decisions and Behaviors in an Urban Environment. Sustainability 2021, 13, 8396. [Google Scholar] [CrossRef]
  49. Clamann, M.; Aubert, M.; Cummings, M.L. Evaluation of vehicle-to-pedestrian communication displays for autonomous vehicles (No. 17-02119). In Proceedings of the Transportation Research Board 96th Annual Meeting, Washington, DC, USA, 8–12 January 2017. [Google Scholar]
  50. Li, Y.; Dikmen, M.; Hussein, T.G.; Wang, Y.; Burns, C. To Cross or Not to Cross: Urgency-Based External Warning Displays on Autonomous Vehicles to Improve Pedestrian Crossing Safety. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Toronto, ON, Canada, 23–25 September 2018; ACM: New York, NY, USA, 2018; pp. 188–197. [Google Scholar]
  51. Hensch, A.C.; Neumann, I.; Beggiato, M.; Halama, J.; Krems, J.F. How Should Automated Vehicles Communicate? Effects of a Light-Based Communication Approach in a Wizard-of-Oz Study. In Advances in Human Factors of Transportation, Proceedings of the AHFE 2019 International Conference on Human Factors in Transportation, Washington, DC, USA, 24–28 July 2019; Springer: Cham, Switzerland, 2019; pp. 79–91. [Google Scholar]
  52. Colley, M.; Fabian, T.; Rukzio, E. Investigating the Effects of External Communication and Automation Behavior on Manual Drivers at Intersections. Proc. ACM Hum.-Comput. Interact. 2022, 6, 1–16. [Google Scholar] [CrossRef]
  53. Papakostopoulos, V.; Nathanael, D.; Portouli, E.; Amditis, A. Effect of external HMI for automated vehicles (AVs) on drivers’ ability to infer the AV motion intention: A field experiment. Transp. Res. Part F Traffic Psychol. Behav. 2021, 82, 32–42. [Google Scholar] [CrossRef]
  54. Rettenmaier, M.; Albers, D.; Bengler, K. After you?!–Use of external human-machine interfaces in road bottleneck scenarios. Transp. Res. Part F Traffic Psychol. Behav. 2020, 70, 175–190. [Google Scholar] [CrossRef]
  55. Fridman, L.; Mehler, B.; Xia, L.; Yang, Y.; Facusse, L.Y.; Reimer, B. To walk or not to walk: Crowdsourced assessment of external vehicle-to-pedestrian displays. arXiv 2017, arXiv:1707.02698. [Google Scholar]
  56. Chang, C.M.; Toda, K.; Igarashi, T.; Miyata, M.; Kobayashi, Y. A Video-based Study Comparing Communication Modalities between an Autonomous Car and a Pedestrian. In Proceedings of the Adjunct Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Toronto, ON, Canada, 23–25 September 2018; ACM: New York, NY, USA, 2018; pp. 104–109. [Google Scholar]
  57. Ackermann, C.; Beggiato, M.; Schubert, S.; Krems, J.F. An experimental study to investigate design and assessment criteria: What is important for communication between pedestrians and automated vehicles? Appl. Ergon. 2019, 75, 272–282. [Google Scholar] [CrossRef]
  58. Bazilinskyy, P.; Dodou, D.; De Winter, J. Survey on eHMI concepts: The effect of text, color, and perspective. Transp. Res. Part F: Traffic Psychol. Behav. 2019, 67, 175–194. [Google Scholar] [CrossRef]
  59. Dey, D.; Habibovic, A.; Löcken, A.; Wintersberger, P.; Pfleging, B.; Riener, A.; Martens, M.; Terken, J. Taming the eHMI jungle: A classification taxonomy to guide, compare, and assess the design principles of automated vehicles’ external human-machine interfaces. Transp. Res. Interdiscip. Perspect. 2020, 7, 100174. [Google Scholar] [CrossRef]
  60. Zhang, J.; Vinkhuyzen, E.; Cefkin, M. Evaluation of an autonomous vehicle external communication system concept: A survey study. In Proceedings of the International Conference on Applied Human Factors and Ergonomics, Los Angeles, CA, USA, 17–21 July 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 650–661. [Google Scholar]
  61. Alvarez, W.M.; de Miguel, M.Á.; García, F.; Olaverri-Monreal, C. Response of Vulnerable Road Users to Visual Information from Autonomous Vehicles in Shared Spaces. In Proceedings of the 2019 IEEE, Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 3714–3719. [Google Scholar]
  62. Chang, C.M. A Gender Study of Communication Interfaces between an Autonomous Car and a Pedestrian. In Proceedings of the 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Virtual, 21–22 September 2020; pp. 42–45. [Google Scholar]
  63. Nowak, K.L.; Rauh, C. Choose your “buddy icon” carefully: The influence of avatar androgyny, anthropomorphism, and credibility in online interactions. Comput. Hum. Behav. 2008, 24, 1473–1493. [Google Scholar] [CrossRef]
  64. De Visser, E.J.; Krueger, F.; McKnight, P.; Scheid, S.; Smith, M.; Chalk, S.; Parasuraman, R. The world is not enough: Trust in cognitive agents. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting; Sage Publications: Thousand Oaks, CA, USA, 2012; Volume 56, pp. 263–267. [Google Scholar]
  65. Pak, R.; Fink, N.; Price, M.; Bass, B.; Sturre, L. Decision support aids with anthropomorphic characteristics influence trust and performance in younger and older adults. Ergonomics 2012, 55, 1059–1072. [Google Scholar] [CrossRef] [PubMed]
  66. Hoff, K.A.; Bashir, M. Trust in automation: Integrating empirical evidence on factors that influence trust. Hum. Factors 2015, 57, 407–434. [Google Scholar] [CrossRef] [PubMed]
  67. Waytz, A.; Heafner, J.; Epley, N. The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. J. Exp. Soc. Psychol. 2014, 52, 113–117. [Google Scholar] [CrossRef]
  68. Choi, J.K.; Ji, Y.G. Investigating the importance of trust on adopting an autonomous vehicle. Int. J. Hum.-Comput. Interact. 2015, 31, 692–702. [Google Scholar] [CrossRef]
  69. Hengstler, M.; Enkel, E.; Duelli, S. Applied artificial intelligence and trust—The case of autonomous vehicles and medical assistance devices. Technol. Forecast. Soc. Chang. 2016, 105, 105–120. [Google Scholar] [CrossRef]
  70. Reig, S.; Norman, S.; Morales, C.G.; Das, S.; Steinfeld, A.; Forlizzi, J. A field study of pedestrians and autonomous vehicles. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Toronto, ON, Canada, 23–25 September 2018; pp. 198–209. [Google Scholar]
  71. Oliveira, L.; Proctor, K.; Burns, C.G.; Birrell, S. Driving style: How should an automated vehicle behave? Information 2019, 10, 219. [Google Scholar] [CrossRef] [Green Version]
  72. Olaverri-Monreal, C. Promoting trust in self-driving vehicles. Nat. Electron. 2020, 3, 292–294. [Google Scholar] [CrossRef]
  73. Wang, Y.; Hespanhol, L.; Tomitsch, M. How Can Autonomous Vehicles Convey Emotions to Pedestrians? A Review of Emotionally Expressive Non-Humanoid Robots. Multimodal Technol. Interact. 2021, 5, 84. [Google Scholar] [CrossRef]
  74. Schilbach, L.; Wohlschlaeger, A.M.; Kraemer, N.C.; Newen, A.; Shah, N.J.; Fink, G.R.; Vogeley, K. Being with virtual others: Neural correlates of social interaction. Neuropsychologia 2006, 44, 718–730. [Google Scholar] [CrossRef]
  75. Kuzmanovic, B.; Georgescu, A.L.; Eickhoff, S.B.; Shah, N.J.; Bente, G.; Fink, G.R.; Vogeley, K. Duration matters: Dissociating neural correlates of detection and evaluation of social gaze. Neuroimage 2009, 46, 1154–1163. [Google Scholar] [CrossRef]
  76. Schrammel, F.; Pannasch, S.; Graupner, S.T.; Mojzisch, A.; Velichkovsky, B.M. Virtual friend or threat? The effects of facial expression and gaze interaction on psychophysiological responses and emotional experience. Psychophysiology 2009, 46, 922–931. [Google Scholar] [CrossRef]
  77. Georgescu, A.L.; Kuzmanovic, B.; Schilbach, L.; Tepest, R.; Kulbida, R.; Bente, G.; Vogeley, K. Neural correlates of “social gaze” processing in high-functioning autism under systematic variation of gaze duration. NeuroImage Clin. 2013, 3, 340–351. [Google Scholar] [CrossRef] [Green Version]
  78. Parsons, T.D. Virtual reality for enhanced ecological validity and experimental control in the clinical, affective, and social neurosciences. Front. Hum. Neurosci. 2015, 9, 660. [Google Scholar] [CrossRef]
  79. Parsons, T.D.; Gaggioli, A.; Riva, G. Virtual reality for research in social neuroscience. Brain Sci. 2017, 7, 42. [Google Scholar] [CrossRef] [PubMed]
  80. Dobs, K.; Bülthoff, I.; Schultz, J. Use and usefulness of dynamic face stimuli for face perception studies–a review of behavioral findings and methodology. Front. Psychol. 2018, 9, 1355. [Google Scholar] [CrossRef] [PubMed]
  81. Georgescu, A.L.; Kuzmanovic, B.; Roth, D.; Bente, G.; Vogeley, K. The use of virtual characters to assess and train non-verbal communication in high-functioning autism. Front. Hum. Neurosci. 2014, 8, 807. [Google Scholar] [CrossRef] [Green Version]
  82. Biocca, F.; Harms, C.; Burgoon, J.K. Toward a more robust theory and measure of social presence: Review and suggested criteria. Presence Teleoperators Virtual Environ. 2003, 12, 456–480. [Google Scholar] [CrossRef]
  83. Cassell, J.; Thorisson, K.R. The power of a nod and a glance: Envelope vs. emotional feedback in animated conversational agents. Appl. Artif. Intell. 1999, 13, 519–538. [Google Scholar] [CrossRef]
  84. Picard, R.W. Affective Computing; MIT Press: Cambridge, MA, USA, 2000. [Google Scholar]
  85. Pütten, A.V.D.; Reipen, C.; Wiedmann, A.; Kopp, S.; Krämer, N.C. Comparing emotional vs. envelope feedback for ECAs. In International Workshop on Intelligent Virtual Agents; Springer: Berlin/Heidelberg, Germany, 2008; pp. 550–551. [Google Scholar]
  86. Ochs, M.; Niewiadomski, R.; Pelachaud, C. How a virtual agent should smile? In International Conference on Intelligent Virtual Agents; Springer: Berlin/Heidelberg, Germany, 2010; pp. 427–440. [Google Scholar]
  87. Scherer, K.R.; Bänziger, T.; Roesch, E. (Eds.) A Blueprint for Affective Computing: A Sourcebook and Manual; Oxford University Press: Oxford, UK, 2010. [Google Scholar]
  88. Wang, N.; Gratch, J. Don’t just stare at me! In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Atlanta, GA, USA, 10–15 April 2010; pp. 1241–1250. [Google Scholar]
  89. McDonnell, R.; Breidt, M.; Bülthoff, H.H. Render me real? Investigating the effect of render style on the perception of animated virtual humans. ACM Trans. Graph. (TOG) 2012, 31, 91. [Google Scholar] [CrossRef]
  90. Wong, J.W.E.; McGee, K. Frown more, talk more: Effects of facial expressions in establishing conversational rapport with virtual agents. In Intelligent Virtual Agents, Proceedings of the 12th International Conference, IVA 2012, Santa Cruz, CA, USA, 12–14 September 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 419–425. [Google Scholar]
  91. Aljaroodi, H.M.; Adam, M.T.; Chiong, R.; Teubner, T. Avatars and embodied agents in experimental information systems research: A systematic review and conceptual framework. Australas. J. Inf. Syst. 2019, 23. [Google Scholar] [CrossRef] [Green Version]
  92. Furuya, H.; Kim, K.; Bruder, G.J.; Wisniewski, P.F.; Welch, G. Autonomous Vehicle Visual Embodiment for Pedestrian Interactions in Crossing Scenarios: Virtual Drivers in AVs for Pedestrian Crossing. In Proceedings of the Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–7. [Google Scholar]
  93. Frith, C.D.; Frith, U. Interacting minds—A biological basis. Science 1999, 286, 1692–1695. [Google Scholar] [CrossRef] [Green Version]
  94. Gallagher, H.L.; Frith, C.D. Functional imaging of ‘theory of mind’. Trends Cogn. Sci. 2003, 7, 77–83. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  95. Krumhuber, E.G.; Kappas, A.; Manstead, A.S. Effects of dynamic aspects of facial expressions: A review. Emot. Rev. 2013, 5, 41–46. [Google Scholar] [CrossRef]
  96. Horstmann, G. What do facial expressions convey: Feeling states, behavioral intentions, or actions requests? Emotion 2003, 3, 150. [Google Scholar] [CrossRef]
  97. Scherer, K.R.; Grandjean, D. Facial expressions allow inference of both emotions and their components. Cogn. Emot. 2008, 22, 789–801. [Google Scholar] [CrossRef]
  98. Ekman, P. Facial expressions of emotion: New findings, new questions. Psychol. Sci. 1992, 3, 34–38. [Google Scholar] [CrossRef]
  99. Berkowitz, L.; Harmon-Jones, E. Toward an understanding of the determinants of anger. Emotion 2004, 4, 107. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  100. Semcon. The Smiling Car. 2016. Available online: https://semcon.com/uk/smilingcar/ (accessed on 21 April 2022).
  101. Rouchitsas, A.; Alm, H. Ghost on the Windshield: Employing a Virtual Human Character to Communicate Pedestrian Acknowledgement and Vehicle Intention. Information 2022, 13, 420. [Google Scholar] [CrossRef]
  102. Nusseck, M.; Cunningham, D.W.; Wallraven, C.; Bülthoff, H.H. The contribution of different facial regions to the recognition of conversational expressions. J. Vis. 2008, 8, 1. [Google Scholar] [CrossRef] [Green Version]
  103. Cunningham, D.W.; Wallraven, C. Dynamic information for the recognition of conversational expressions. J. Vis. 2009, 9, 7. [Google Scholar] [CrossRef]
  104. Kaulard, K.; Cunningham, D.W.; Bülthoff, H.H.; Wallraven, C. The MPI facial expression database—A validated database of emotional and conversational facial expressions. PLoS ONE 2012, 7, e32321. [Google Scholar] [CrossRef] [PubMed]
  105. Kendon, A. Some uses of the head shake. Gesture 2002, 2, 147–182. [Google Scholar] [CrossRef]
  106. Guidetti, M. Yes or no? How young French children combine gestures and speech to agree and refuse. J. Child Lang. 2005, 32, 911–924. [Google Scholar] [CrossRef] [PubMed]
  107. Andonova, E.; Taylor, H.A. Nodding in dis/agreement: A tale of two cultures. Cogn. Process. 2012, 13, 79–82. [Google Scholar] [CrossRef]
  108. Fusaro, M.; Vallotton, C.D.; Harris, P.L. Beside the point: Mothers’ head nodding and shaking gestures during parent–child play. Infant Behav. Dev. 2014, 37, 235–247. [Google Scholar] [CrossRef]
  109. Osugi, T.; Kawahara, J.I. Effects of Head Nodding and Shaking Motions on Perceptions of Likeability and Approachability. Perception 2018, 47, 16–29. [Google Scholar] [CrossRef] [Green Version]
  110. Moretti, S.; Greco, A. Nodding and shaking of the head as simulated approach and avoidance responses. Acta Psychol. 2020, 203, 102988. [Google Scholar] [CrossRef]
  111. Yamada, H.; Matsuda, T.; Watari, C.; Suenaga, T. Dimensions of visual information for categorizing facial expressions of emotion. Jpn. Psychol. Res. 1994, 35, 172–181. [Google Scholar] [CrossRef] [Green Version]
  112. Tottenham, N.; Tanaka, J.W.; Leon, A.C.; McCarry, T.; Nurse, M.; Hare, T.A.; Marcus, D.J.; Westerlund, A.; Nelson, C. The NimStim set of facial expressions: Judgments from untrained research participants. Psychiatry Res. 2009, 168, 242–249. [Google Scholar] [CrossRef] [Green Version]
  113. Wu, S.; Sun, S.; Camilleri, J.A.; Eickhoff, S.B.; Yu, R. Better the devil you know than the devil you don’t: Neural processing of risk and ambiguity. NeuroImage 2021, 236, 118109. [Google Scholar] [CrossRef]
  114. Bainbridge, L. Ironies of automation. In Analysis, Design, and Evaluation of Man–Machine Systems; Pergamon: Oxford, UK, 1983; pp. 129–135. [Google Scholar]
  115. Reason, J. Understanding adverse events: Human factors. BMJ Qual. Saf. 1995, 4, 80–89. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  116. Kaß, C.; Schoch, S.; Naujoks, F.; Hergeth, S.; Keinath, A.; Neukum, A. Standardized Test Procedure for External Human–Machine Interfaces of Automated Vehicles. Information 2020, 11, 173. [Google Scholar] [CrossRef] [Green Version]
  117. Weber, M.; Giacomin, J.; Malizia, A.; Skrypchuk, L.; Gkatzidou, V.; Mouzakitis, A. Investigation of the dependency of the drivers’ emotional experience on different road types and driving conditions. Transp. Res. Part F Traffic Psychol. Behav. 2019, 65, 107–120. [Google Scholar] [CrossRef]
  118. Popuşoi, S.A.; Havârneanu, G.M.; Havârneanu, C.E. “Get the f#*k out of my way!” Exploring the cathartic effect of swear words in coping with driving anger. Transp. Res. Part F Traffic Psychol. Behav. 2018, 56, 215–226. [Google Scholar]
  119. Stephens, A.N.; Lennon, A.; Bihler, C.; Trawley, S. The measure for angry drivers (MAD). Transp. Res. Part F Traffic Psychol. Behav. 2019, 64, 472–484. [Google Scholar] [CrossRef]
  120. Deffenbacher, J.L.; Lynch, R.S.; Oetting, E.R.; Swaim, R.C. The Driving Anger Expression Inventory: A measure of how people express their anger on the road. Behav. Res. Ther. 2002, 40, 717–737. [Google Scholar] [CrossRef]
  121. Utriainen, R.; Pöllänen, M. Prioritizing Safety or Traffic Flow? Qualitative Study on Highly Automated Vehicles’ Potential to Prevent Pedestrian Crashes with Two Different Ambitions. Sustainability 2020, 12, 3206. [Google Scholar] [CrossRef] [Green Version]
  122. Bevan, N.; Carter, J.; Harker, S. ISO 9241-11 revised: What have we learnt about usability since 1998? In Human-Computer Interaction: Design and Evaluation, Proceedings of the 17th International Conference, HCI International 2015, Los Angeles, CA, USA, 2–7 August 2015; Springer: Cham, Switzerland, 2015; pp. 143–151. [Google Scholar]
  123. Wiese, E.; Metta, G.; Wykowska, A. Robots as intentional agents: Using neuroscientific methods to make robots appear more social. Front. Psychol. 2017, 8, 1663. [Google Scholar] [CrossRef] [Green Version]
  124. Hess, U.; Adams, R., Jr.; Kleck, R. Who may frown and who should smile? Dominance, affiliation, and the display of happiness and anger. Cogn. Emot. 2005, 19, 515–536. [Google Scholar] [CrossRef]
  125. Becker, D.V.; Kenrick, D.T.; Neuberg, S.L.; Blackwell, K.C.; Smith, D.M. The confounded nature of angry men and happy women. J. Personal. Soc. Psychol. 2007, 92, 179. [Google Scholar] [CrossRef] [Green Version]
  126. Nag, P.; Yalçın, Ö.N. Gender stereotypes in virtual agents. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents, Virtual, 20–22 October 2020; pp. 1–8. [Google Scholar]
  127. Dey, D. External Communication for Self-Driving Cars: Designing for Encounters between Automated Vehicles and Pedestrians. Ph.D. Thesis, Technische Universiteit Eindhoven, Eindhoven, The Netherlands, 2020. [Google Scholar]
  128. Bantoft, C.; Summers, M.J.; Tranent, P.J.; Palmer, M.A.; Cooley, P.D.; Pedersen, S.J. Effect of standing or walking at a workstation on cognitive function: A randomized counterbalanced trial. Hum. Factors 2016, 58, 140–149. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  129. Kang, S.H.; Lee, J.; Jin, S. Effect of standing desk use on cognitive performance and physical workload while engaged with high cognitive demand tasks. Appl. Ergon. 2021, 92, 103306. [Google Scholar] [CrossRef] [PubMed]
  130. Bazilinskyy, P.; Kooijman, L.; Dodou, D.; de Winter, J.C.F. How should external Human-Machine Interfaces behave? Examining the effects of colour, position, message, activation distance, vehicle yielding, and visual distraction among 1434 participants. Appl. Ergon. 2021, 95, 103450. [Google Scholar] [CrossRef] [PubMed]
  131. Kyllonen, P.C.; Zu, J. Use of response time for measuring cognitive ability. J. Intell. 2016, 4, 14. [Google Scholar] [CrossRef]
  132. Ratcliff, R. Methods for dealing with reaction time outliers. Psychol. Bull. 1993, 114, 510. [Google Scholar] [CrossRef] [PubMed]
  133. Aguinis, H.; Gottfredson, R.K.; Joo, H. Best-practice recommendations for defining, identifying, and handling outliers. Organ. Res. Methods 2013, 16, 270–301. [Google Scholar] [CrossRef]
  134. Field, A. Discovering Statistics Using IBM SPSS Statistics; SAGE: Thousand Oaks, CA, USA, 2013. [Google Scholar]
  135. Kohn, N.; Fernández, G. Emotion and sex of facial stimuli modulate conditional automaticity in behavioral and neuronal interference in healthy men. Neuropsychologia 2020, 145, 106592. [Google Scholar] [CrossRef]
  136. Shaw-Garlock, G. Gendered by design: Gender codes in social robotics. In Social Robots; Routledge: London, UK, 2017; pp. 199–218. [Google Scholar]
  137. Fong, T.; Nourbakhsh, I.; Dautenhahn, K. A survey of socially interactive robots. Robot. Auton. Syst. 2003, 42, 143–166. [Google Scholar] [CrossRef] [Green Version]
  138. Suzuki, T.; Nomura, T. Gender preferences for robots and gender equality orientation in communication situations. AI Soc. 2022, 1–10. [Google Scholar] [CrossRef]
  139. Eagly, A.H.; Nater, C.; Miller, D.I.; Kaufmann, M.; Sczesny, S. Gender stereotypes have changed: A cross-temporal meta-analysis of US public opinion polls from 1946 to 2018. Am. Psychol. 2020, 75, 301. [Google Scholar] [CrossRef] [Green Version]
  140. Gong, L. How social is social responses to computers? The function of the degree of anthropomorphism in computer representations. Comput. Hum. Behav. 2008, 24, 1494–1509. [Google Scholar] [CrossRef]
  141. Tapiro, H.; Oron-Gilad, T.; Parmet, Y. Pedestrian distraction: The effects of road environment complexity and age on pedestrian’s visual attention and crossing behavior. J. Saf. Res. 2020, 72, 101–109. [Google Scholar] [CrossRef] [PubMed]
  142. Andrade, C. Internal, external, and ecological validity in research design, conduct, and evaluation. Indian J. Psychol. Med. 2018, 40, 498–499. [Google Scholar] [CrossRef]
  143. Deb, S.; Carruth, D.W.; Sween, R.; Strawderman, L.; Garrison, T.M. Efficacy of virtual reality in pedestrian safety research. Appl. Ergon. 2017, 65, 449–460. [Google Scholar] [CrossRef]
  144. Vermersch, P. Describing the practice of introspection. J. Conscious. Stud. 2009, 16, 20–57. [Google Scholar]
  145. Cahour, B.; Salembier, P.; Zouinar, M. Analyzing lived experience of activity. Le Trav. Hum. 2016, 79, 259–284. [Google Scholar] [CrossRef]
  146. Deb, S.; Carruth, D.W.; Fuad, M.; Stanley, L.M.; Frey, D. Comparison of Child and Adult Pedestrian Perspectives of External Features on Autonomous Vehicles Using Virtual Reality Experiment. In Proceedings of the International Conference on Applied Human Factors and Ergonomics, Washington, DC, USA, 24–28 July 2019; Springer: Cham, Switzerland, 2019; pp. 145–156. [Google Scholar]
  147. Tapiro, H.; Meir, A.; Parmet, Y.; Oron-Gilad, T. Visual search strategies of child-pedestrians in road crossing tasks. In Proceedings of the Human Factors and Ergonomics Society Europe Chapter 2013 Annual Conference, Torino, Italy, 16–18 October 2013; pp. 119–130. [Google Scholar]
  148. Charisi, V.; Habibovic, A.; Andersson, J.; Li, J.; Evers, V. Children’s views on identification and intention communication of self-driving vehicles. In Proceedings of the 2017 Conference on Interaction Design and Children, Stanford, CA, USA, 27–30 June 2017; pp. 399–404. [Google Scholar]
  149. Klin, A.; Jones, W.; Schultz, R.; Volkmar, F.; Cohen, D. Visual fixation patterns during viewing of naturalistic social situations as predictors of social competence in individuals with autism. Arch. Gen. Psychiatry 2002, 59, 809–816. [Google Scholar] [CrossRef] [Green Version]
  150. Crehan, E.T.; Althoff, R.R. Me looking at you, looking at me: The stare-in-the-crowd effect and autism spectrum disorder. J. Psychiatr. Res. 2021, 140, 101–109. [Google Scholar] [CrossRef]
  151. Strauss, D.; Shavelle, R.; Anderson, T.W.; Baumeister, A. External causes of death among persons with developmental disability: The effect of residential placement. Am. J. Epidemiol. 1998, 147, 855–862. [Google Scholar] [CrossRef]
Figure 1. Experiment set-up.
Figure 2. Mean RTs for crossing intention in the presence of yielding and non-yielding AVs for emotional and conversational facial expressions. Error bars represent standard error.
Figure 3. Mean RTs for crossing intention in the presence of yielding and non-yielding AVs for male and female VHCs. Error bars represent standard error.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
