Article

A Video-Based, Eye-Tracking Study to Investigate the Effect of eHMI Modalities and Locations on Pedestrian–Automated Vehicle Interaction

1 School of Business Administration, Northeastern University, Shenyang 110169, China
2 Faculty of Art and Communication, Kunming University of Science and Technology, Kunming 650500, China
* Author to whom correspondence should be addressed.
Sustainability 2022, 14(9), 5633; https://doi.org/10.3390/su14095633
Submission received: 24 March 2022 / Revised: 28 April 2022 / Accepted: 29 April 2022 / Published: 7 May 2022
(This article belongs to the Topic Intelligent Transportation Systems)

Abstract
Numerous studies have emerged on the external human–machine interface (eHMI) to facilitate the communication between automated vehicles (AVs) and other road users. However, it remains to be determined which eHMI modality and location are appropriate for pedestrian–AV interaction. Therefore, a video-based, eye-tracking study was performed to investigate how pedestrians responded to AVs with eHMIs in different modalities (flashing text, smiley, light band, sweeping pedestrian icon, arrow, and light bar) and locations (grill, windshield, and roof). Moreover, the effects of pedestrian-related factors (e.g., gender, sensation-seeking level, and traffic accident involvement) were also included and evaluated. The dependent variables included pedestrians' clarity-rating scores towards these eHMI concepts, road-crossing decision time, and gaze-based metrics (e.g., fixation counts, dwell time, and first fixation duration). The results showed that the text-, icon-, and arrow-based eHMIs resulted in the shortest decision times, highest clarity scores, and centralized visual attention. The light strip-based eHMIs yielded no significant decrease in decision time yet longer fixation times, indicating that their meaning is difficult to comprehend without prior learning. The eHMI location had no effect on pedestrians' decision time but a substantial influence on their visual searching strategy, with a roof eHMI contradicting pedestrians' inherent scanning pattern. These findings provide implications for the standardized design of future eHMIs.

1. Introduction

Automated vehicles (AVs) promise to enhance traffic efficiency, deliver environmental benefits, improve mobility for elderly and disabled users [1,2], and increase the safety of road users (RUs) by reducing traffic accidents associated with human error [3]. The realization of these anticipated benefits depends on the broader uptake of AVs, i.e., public acceptance and adoption [4]. In general, safety is one of the basic human needs and a leading factor influencing the acceptance of AVs [4,5]. Topics around safety and security include equipment and system failure, cybersecurity, and system performance in poor or unexpected conditions. For pedestrians, among the most vulnerable road users (VRUs), the sense of safety when encountering AVs also plays a critical role in their acceptance of and interaction with AVs [5]. A recent interview study involving 16 human-factors experts revealed that external human–machine interfaces (eHMIs) could enhance the interaction between AVs and VRUs [6], suggesting that eHMIs may serve as a supplementary channel to improve both interaction safety and efficiency [7].
Recently, numerous studies have emerged on eHMIs in the context of pedestrian–AV interaction. According to SAE International [8], forthcoming AVs, especially at Levels 4 and 5, will release drivers/occupants from environment monitoring and vehicle control. For example, drivers could engage in nondriving-related tasks (NDRTs) or be absent from the driver's seat. Consequently, the informal communication channels between pedestrians and drivers, including eye contact, facial expressions, and head/body movements that pedestrians rely on to perceive driver intention [9], will not always be available. One rationale for recommending eHMIs is to mitigate the uncertainty arising from the loss of these informal channels. Additionally, by equipping an eHMI, an AV can inform other RUs of its driving mode (automated/manual), current action (decelerating/accelerating/cruising), driving intention (yielding/nonyielding), and situational awareness, and can give advice [10].
In recent years, various eHMI concepts have been designed and investigated. For example, Bazilinskyy et al. [11] reviewed and examined the clarity of 28 eHMI concepts presented by the automotive industry, whilst Fridman et al. [12] presented a crowdsourced assessment of 30 eHMIs commonly seen in the literature. More broadly, Dey et al. [10] and Carmona et al. [13] provided thorough and systematic reviews of the eHMIs most frequently used in industry and academia, covering their physical characteristics and social communication aspects. According to these comprehensive reviews, existing studies differ substantially in eHMI design across several dimensions, including the conveyed messages, communication modalities, colors, and placements. Despite the differences in eHMI concepts and research protocols (e.g., image- or video-based surveys, virtual reality experiments, and "Wizard-of-Oz"/"Ghost driver" approaches) used across studies, some scientific consensus regarding eHMI design seems to be gradually emerging. For example, conveying the AV's yielding intent is generally considered the most critical information for other RUs, compared with the AV's driving mode or environmental perception [7,10,14]. As for color preference in eHMI design, white, green, cyan (or blue-green/turquoise), red, blue, and yellow are frequently used [10]. Despite this variety, cyan has been widely recommended as an appropriate option to communicate the intent of an AV, given different dimensions of light perception (such as visibility and discrimination) and compliance with current traffic regulations [13,15,16,17].
However, there remains no consensus on many other dimensions of eHMI design. Regarding the modality of communication, vision-based signals are used most frequently, including textual messages [18,19], symbols [12,20], anthropomorphic icons [21,22], light strips [23,24], and so forth. Research indicates that text-based eHMIs are regarded as the clearest owing to their ease of understanding and learning, yet they may raise issues with liability, long-range legibility, and cross-cultural feasibility [11]. By contrast, light-based eHMIs (such as flashing light bands and sweeping light strips) are reported to be somewhat ambiguous or not intuitively comprehensible [25], although eHMIs in this form are the most commonly used solution [10]. Moreover, many anthropomorphic eHMIs (e.g., eyes [26] and smileys [21,27]) are advocated for their performance in shortening pedestrians' decision time and enhancing the interactive experience. Similarly, symbol-based eHMIs, such as a zebra crossing, a walking pedestrian icon, and moving arrows, have also performed convincingly in facilitating pedestrians' understanding of the AV's intent, without raising the controversial issues associated with text- or light-based eHMIs. Regarding the locus of eHMIs, a majority of studies attached eHMIs to the external surface of the AV, mostly on the windshield, followed by the bumper, roof, and grill [10]. These locations are conceivable and justifiable for their convenience in mounting a communication device [18], such as a larger eHMI to enhance visibility from afar [28]. Recent research by Eisma et al. [18] examined the effect of eHMI location on pedestrians' crossing intention and suggested that eHMIs on the grill, windscreen, and roof were subjectively regarded as the clearest. From the existing body of literature, it is unclear, or at least inconsistent, which modality or location of the eHMI will engender the most perceived clarity and communication efficiency among pedestrians.
To sum up, limited consensus has been reached regarding eHMI design, which has in turn delayed the standardization of eHMIs for AVs to a certain degree. As stated by Joost de Winter (an expert in this field), the proliferation of eHMIs and the lack of standardization raise the question of whether eHMIs are necessary at all [29] (p. 1). One potential shortcut towards standardization is to move forward from this "limited consensus". It is therefore worthwhile to explore the unknown based on the known, e.g., by investigating the effect of cyan eHMIs showing an AV's yielding intention in different modalities and locations.
Besides the manipulation of eHMI features, studies have also varied in how they measured pedestrians' responses when interacting with an AV. Subjective evaluation methods have been used in a number of studies to obtain pedestrians' views and demands [30,31,32]. For example, Faas et al. [7] investigated the efficacy of different eHMI concepts (steady, flashing, and sweeping light signals) as indicators of an AV's yielding intention using questionnaires and structured interviews. Meanwhile, objective measures have been widely used in imagined, virtual, or real interactions between pedestrians and AVs. The benchmarks commonly mentioned in the literature include time-based metrics, such as pedestrians' decision/initiation time [27,33] and the recognition time of the AV's intention [34,35], and accuracy/error-rate-based metrics, i.e., whether pedestrians display responses compatible with the eHMI's proposition [36,37].
In several recently published studies, researchers have begun to examine pedestrians' gaze behavior as a shortcut to their perception of an AV's intention and to the efficacy of various eHMI concepts [38]. As highlighted by previous studies, pedestrians' eye movements follow a specific pattern towards oncoming human-driven vehicles; that is, their gaze tends to shift from the road surface towards the bumper, hood, and windshield of the vehicle as it approaches [39]. It is therefore interesting to ask whether this inherent visual pattern is enhanced or disrupted when interacting with an AV carrying an eHMI, which is generally light-based and visually displayed. The most relevant research in the context of AV–pedestrian interaction is the work by [18,37,40]. Using the Wizard-of-Oz protocol, Dey et al. [14] examined pedestrians' crossing willingness and gaze behavior when crossing in front of an AV with/without a turquoise light-based eHMI showing different kinematics (i.e., gentle, early, aggressive, and no braking). Eisma et al. [18] investigated pedestrians' crossing decisions and gaze dispersion towards an AV with a textual eHMI displaying the AV's yielding/nonyielding intention in different locations. In a video-based survey, Hochman et al. [37] investigated pedestrians' crossing probability, decision time, and fixations when interacting with a fully automated vehicle that intends to stop; the manipulated factors included the presence of an eHMI, message type (status/advice), modality (text/symbol), background color (red/green), and vehicle size and stopping distance. In general, these studies provide insights into applying gaze-based metrics alongside subjective and objective measures to examine specific dimensions of eHMIs. However, it remains unclear how eHMIs' different modalities, loci, and their interaction effects shape pedestrians' visual behavior.
To this end, the current study aims to investigate pedestrians’ crossing decisions, perceived clarity, and eye movements when interacting with an AV that mounts a cyan eHMI displaying a yielding intention in different modalities and locations. To our knowledge, these insights have not been previously investigated, especially from the perspective of pedestrians’ gaze activity. The manipulated factors are related to the eHMI characteristics that lack consensus from previous studies, including modality (such as text-, light-, symbol-based, and anthropomorphic icons) and location (grill, windshield, and roof of the AV). Additionally, this study utilizes a video-based protocol, and a mobile eye-tracker is employed to capture pedestrians’ gaze behavior before crossing in front of an AV with different eHMI concepts.

2. Materials and Methods

2.1. Participants

Ethical approval was obtained from the Northeastern University Research Ethics Committee. Sixty-two participants (32 females; mean age = 25.63 years, SD = 3.47, range = 19–37) were recruited via social advertisements within the university. Participants were required to have normal or corrected-to-normal eyesight and mobility. All participants provided written informed consent and received monetary compensation for their participation.

2.2. Apparatus and Materials

The experimental trials were implemented and controlled via PsychoPy [41], an open-source package for running behavioral and psychological experiments. PsychoPy also recorded participants' behavioral responses (e.g., reaction times and Likert scores) entered through input devices. During the video-based experiment, participants were presented with the stimulus materials (i.e., video clips of an AV approaching from their right) on a 65-inch Samsung screen with a resolution of 1920 × 1080 pixels (1372 × 881 mm). Participants' eye movements when interacting with the AV were recorded at 60 Hz using the SMI Eye Tracking Glasses 2.0 (SMI ETG, SensoMotoric Instruments, Teltow, Germany). This wireless eye-tracker records live gaze activity remotely, allowing wearers to move and scan freely. However, to guarantee the quality of the gaze data, participants were asked to sit still and keep their heads oriented towards the screen, placed 1.5 m in front of them. Figure 1 illustrates the experimental scenario, annotating the placement of and connections between the experimental elements.

2.3. Experimental Design

A within-subject design was used. The focus of this study was to determine how pedestrians respond to an AV that displays yielding intention with an eHMI in different modalities and locations. Accordingly, the experiment included two main independent variables: eHMI modality (six conditions) and eHMI location (three conditions). In addition, one trial in which the AV yielded for the pedestrian without an eHMI was added as a baseline. Moreover, to avoid learning effects (i.e., pedestrians expecting the AV to yield and display an eHMI in every encounter), we incorporated ten trials (half the number of yielding trials) in which the AV did not yield (without eHMI). The yielding/nonyielding behavior and eHMI setup therefore resulted in 6 × 3 + 1 + 10 = 29 trials per participant in the experimental session, for a total of 1798 trials across all 62 participants. In line with the study aim, pedestrians' subjective responses, objective reactions, and gaze behavior in the yielding conditions, especially with eHMIs, were analyzed as dependent variables. Pedestrians' behavior when interacting with a nonyielding AV was reported selectively to limit the scope of this study.
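The trial-count arithmetic above can be reproduced with a few lines of Python (a bookkeeping sketch only; the placeholder condition labels are ours, not names from the actual experimental scripts):

```python
# Sketch of the trial matrix: counts only, not the actual stimulus files.
from itertools import product

modalities = [f"modality_{i}" for i in range(1, 7)]  # six eHMI modalities
locations = ["grill", "windshield", "roof"]          # three eHMI locations

ehmi_trials = list(product(modalities, locations))   # 6 x 3 = 18 yielding trials with eHMI
baseline_trials = 1                                  # yielding trial without eHMI
nonyielding_trials = 10                              # half the number of yielding trials

trials_per_participant = len(ehmi_trials) + baseline_trials + nonyielding_trials
participants = 62

print(trials_per_participant)                # 29
print(trials_per_participant * participants) # 1798
```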

2.3.1. Independent Variables (IVs)

For the modality variable, six eHMI types were used: flashing text (FT), sweeping pedestrian icon (SPI), sweeping arrow (SA), flashing smiley (FS), flashing light band (FLB), and sweeping light bar (SLB). These eHMI concepts are commonly seen in current industry designs and academic research on pedestrian–AV interaction. It is worth noting that the textual eHMI used in this study is a Chinese version of the non-commanding text "Waiting", as commanding messages could raise potential liability concerns [10]. Regarding placement, the grill, windshield, and roof of the AV were utilized, as these locations are frequently used and deemed to "yield the best performance" [18]. Table 1 lists the combinations of these two IVs (6 × 3), together with the references that inspired and supported the choice of the target eHMI modalities and locations. The six pictures in Figure 2 illustrate how and where these eHMI concepts are mounted. Note that the AV showed only one eHMI modality at one location at a time in the experimental session (as shown in Figure 1).
Some points need extra clarification. Firstly, all of these eHMI concepts were presented in cyan (RGB = (0, 255, 255)) to exhibit the AV's yielding intent, in line with the aforementioned consensus on eHMI color and information preference. Secondly, we aimed for a rigorous experimental design by controlling the length, height, thickness, spacing, and flashing/sweeping frequency of the different eHMI concepts as far as possible. For example, eHMIs placed at the same location occupied the same area across modalities (except for the flashing light band around the grill and windshield). The flashing/sweeping frequency of these eHMIs was set at 0.5 Hz, as in [7,53]. Thirdly, the eHMIs placed at the windshield and roof were presented as holographic projections rather than on a background board as used in other studies [18,22]. On the one hand, it remains unknown what external interface future AVs will use to communicate with other road users. On the other hand, a fixed component on the roof of the AV, similar to a "taxi roof sign", might make the AV look odd and affect pedestrians' gaze behavior in trials where the eHMI is not presented on the roof.
Finally, participants' demographics (e.g., gender, age, traffic accident involvement, and driving experience) and sensation-seeking scores (SSS) [54] were collected using a postexperiment questionnaire. These pedestrian-related factors have been reported to affect road-crossing behavior and decisions to a certain degree [9]. In particular, pedestrians' sensation-seeking levels have been reported to correlate positively with risky behavior, such as reckless road-crossing [55,56]. Moreover, people with higher sensation-seeking scores are deemed more likely to trust an AV [57], which may result in quicker decisions when encountering one. Pedestrians' driving experience was also investigated, as it is interesting to see how a driver (in the role of a pedestrian) reacts to a driverless vehicle. Therefore, these variables were incorporated in the IVs to determine whether they influence pedestrians' behavior and decisions when interacting with an AV with different eHMI concepts.

2.3.2. Design of Video Clips

The experiment consisted of 29 animated video clips depicting an AV approaching from the pedestrians’ right. The videos were 20 s long each and played at 30 frames per second. A single-lane, unsignalized road (width = 3.6 m, standard width for single-lane roads in China) in a residential area was selected as the traffic scenario. The camera angle was from the eyes of a pedestrian waiting to cross the road at a predefined location. As the participants would be briefed to interact with a driverless car, the AV in the video did not contain a driver or passenger. The yielding behavior and eHMI setup of the AV were manipulated across videos. We acknowledge that the AV could generate yielding or nonyielding behavior in various ways; only one yielding/nonyielding pattern was designed and explored to simplify the scope of this study.
At the beginning of each trial, the AV appeared 35 m away from the pedestrian, travelling at 40 km/h (the speed limit for the single-lane road). Herein, we assume that the AV had already "seen" the pedestrian but did not know whether they were going to cross the street. Therefore, the AV braked with a constant deceleration (2.47 m/s² [14]) from its appearance to a standstill, with an elapsed time of 4.5 s and a braking distance of 25 m. In a nonyielding trial, the AV accelerated and passed the pedestrian's initial location immediately after its speed reached 0. No eHMI concepts were integrated into the nonyielding AVs in the current study, following relevant studies by [40]. In a yielding trial, the AV edged forward for 4 s at a speed below 5 km/h before a complete stop, covering a distance of 5 m. That is, in this condition, the AV stopped completely at a distance of 5 m from the pedestrian and waited for them to cross, with an elapsed time of 9.5 s from the AV's first appearance. On this basis, if an eHMI was used to exhibit the AV's yielding intention, it appeared along with the AV's edging behavior (from 4.5 s onwards) and lasted until the end of the video.
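The reported braking profile can be checked with elementary kinematics (a sketch; the only inputs are the speed and deceleration stated above):

```python
# Back-of-envelope check of the approach kinematics:
# 40 km/h initial speed, constant deceleration of 2.47 m/s^2 (from Dey et al. [14]).
v0 = 40 / 3.6                      # initial speed in m/s (~11.11 m/s)
a = 2.47                           # deceleration, m/s^2

braking_time = v0 / a              # time to decelerate to a standstill
braking_distance = v0**2 / (2 * a) # distance covered while braking

print(round(braking_time, 1))      # 4.5 (seconds)
print(round(braking_distance, 1))  # 25.0 (meters)

# Yielding trials: the AV then edges forward 5 m and stops 5 m from the
# pedestrian, which fully accounts for the 35 m starting gap.
print(round(braking_distance + 5 + 5))  # 35 (meters)
```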

2.3.3. Procedure and Participants’ Task

Upon arrival, participants were first asked to provide informed consent. The consent form stated the task that they needed to complete as follows:
“Imagine that you are waiting at the curb to cross to the other side of the road while you notice an AV approaching from your right. Due to the absence of a zebra or signaling facility, the AV may yield or not yield to you. The AV might display a novel external interface to convey its intentions. When you feel safe to cross the road, please press the “Enter” key on the Numpad to indicate that you are commencing crossing now. After each trial, there will be a one-item questionnaire regarding the information clarity of the AV; please indicate your rating score verbally to the experimenter.”
Once the participant took their position, they put on the mobile eye tracker under the experimenter's guidance, followed by a standard calibration. Subsequently, to familiarize participants with the experimental setup, they completed 10 practice trials (five yielding and five nonyielding, without eHMI, in randomized order), during which they had the chance to ask questions. After the practice trials, the formal experimental session containing 29 trials began, in a randomized order to counterbalance learning effects. The experimental session lasted around 20 min.
Before concluding the experiment, we conducted a short, semistructured interview with the participants about how they had experienced the task. Participants were then asked to complete an online questionnaire covering their demographics, factors influencing their road-crossing decisions (e.g., vehicle speed, distance, time-to-arrival (TTA), and eHMI presence), and the 40-item Sensation Seeking Scale [54].

2.3.4. Dependent Variables (DVs)

The following dependent variables were calculated and reported in this study:
Self-reported clarity. The pedestrians’ responses to the statement “The message conveyed by the AV is clear when I am going to cross the street” on a Likert scale from 1 (completely disagree) to 9 (completely agree).
Crossing decision time. The elapsed time from the AV's appearance to the moment the pedestrian pressed the "Enter" key (as an indicator of road-crossing initiation) was recorded for every trial by PsychoPy and is referred to as the crossing decision time.
Gaze-based metrics. For each yielding trial, areas of interest (AOIs) corresponding to the eHMI's location in that trial were drawn to extract and export the gaze-based metrics using BeGaze V3.6 (the supporting software for the SMI ETG). For the yielding trials in which the AV did not mount an eHMI, three AOIs (grill, windshield, and roof) were drawn for ease of subsequent comparison with the eHMI trials. In the nonyielding condition, no AOIs were defined, given that no eHMI concepts were integrated and the driving pattern was not comparable to the yielding condition. Based on the predefined AOIs, we report three dependent variables: (1) Dwell time: the total time participants spent looking at an AOI. In general, a longer dwell time reflects a higher level of interest in, or task relevance of, an AOI; however, it can also indicate that the AOI conveys complex or confusing information [58]. (2) Fixation count: the number of fixations within an AOI. A high number of fixations could mean that the user repeatedly revisits the same object out of interest, but it could also indicate difficulty with comprehension [59]. (3) First fixation duration: the duration of the first fixation on a particular AOI. These metrics are frequently used in other research on pedestrians' road-crossing behavior as well [60,61,62].
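To make these three definitions concrete, the sketch below computes them from a minimal, hypothetical fixation log (the real metrics were exported directly from BeGaze; the record format and values here are illustrative only):

```python
# Illustrative computation of the three gaze metrics from a fixation log.
# Each record is (AOI hit, fixation duration in seconds), in chronological order.
fixations = [
    ("windshield", 0.42),
    ("grill", 0.18),
    ("windshield", 0.55),
    ("windshield", 0.30),
]

def gaze_metrics(fixations, aoi):
    """Dwell time, fixation count, and first fixation duration for one AOI."""
    in_aoi = [d for a, d in fixations if a == aoi]
    return {
        "dwell_time": sum(in_aoi),             # total time on the AOI
        "fixation_count": len(in_aoi),         # number of fixations on the AOI
        "first_fixation_duration": in_aoi[0] if in_aoi else 0.0,
    }

m = gaze_metrics(fixations, "windshield")
print(m["fixation_count"], round(m["dwell_time"], 2))  # 3 1.27
```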

2.4. Data Preparation and Statistical Analyses

This study adopted a relatively complex grouping design, with 29 trials depicting two yielding patterns and 19 external appearances. Although briefed with the same "safe-to-cross" instruction, participants exercised their own discretion regarding perceived safety, resulting in different waiting times before making a decision. For example, early road-crossing decisions (e.g., in less than 4.5 s, while the AV was still decelerating) are perfectly reasonable for pedestrians, yet raise issues for data analysis, such as unequal sample sizes and violations of the normality/sphericity and variance-homogeneity assumptions. Therefore, the exported data were rearranged before statistical analysis.
Firstly, three trials (out of 1798) in which participants self-reported having mistouched the "Enter" key on the Numpad were excluded; a double-check found that participants had reacted in less than 0.1 s in these three trials. Secondly, trials in which participants' decision time was below 4.5 s (211 out of 1798) were excluded from the subsequent analyses, because in these trials participants commenced crossing before the AV exhibited a clear yielding/nonyielding intent or an eHMI, making it difficult to attribute their decisions to the experimental manipulations. Thirdly, participants' decision and behavioral data in the nonyielding condition (556 out of 620 trials) were reported separately and selectively, given the notable differences in driving pattern, external appearance, and measured variables compared with the yielding/eHMI conditions. With the above exclusion criteria, 1028 yielding trials (54 no-eHMI, 974 with eHMI) were included to determine the effects of the IVs. Table 2 provides detailed descriptive data for each yielding condition after data exclusion, including the number of trials, decision time, and clarity score.
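The exclusion steps tally as follows (a bookkeeping sketch; all counts are taken from the text):

```python
# Accounting for the trial exclusions described above.
total = 62 * 29        # 1798 recorded trials in the experimental session
mistouch = 3           # accidental "Enter" presses (RT < 0.1 s)
early = 211            # decisions made before 4.5 s
nonyielding = 556      # remaining nonyielding trials, reported separately

analyzed_yielding = total - mistouch - early - nonyielding
print(analyzed_yielding)  # 1028 yielding trials (54 no-eHMI + 974 with eHMI)
```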
Finally, pedestrian-related factors in the questionnaire were also coded for ease of analysis. Table 3 lists the coded results and descriptive data for each subgroup of these factors. For example, according to the distribution of participants’ sensation-seeking score (M (SD) = 15.32 (5.67), Range = 5~25), they were assigned to three groups (low SSS: 5~11; medium SSS: 12~18; high SSS: 19~25). Similarly, the driving experience was also transformed into an ordinal variable, i.e., no experience, novice driver (driving mileage below 3000 km), and veteran driver (driving mileage over 3000 km).
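The coding scheme can be expressed as simple cut-point functions (illustrative only; the cut-points come from the text, while the `has_license` flag is our assumption for distinguishing non-drivers, not a variable named in the questionnaire):

```python
# Illustrative coding of the pedestrian-related questionnaire factors.
def sss_group(score):
    """Sensation-seeking group; observed score range was 5-25."""
    if score <= 11:
        return "low"
    elif score <= 18:
        return "medium"
    return "high"

def driving_group(mileage_km, has_license):
    """Ordinal driving-experience variable (3000 km novice/veteran cut-point)."""
    if not has_license:
        return "no experience"
    return "novice" if mileage_km < 3000 else "veteran"

print(sss_group(15))              # medium
print(driving_group(5000, True))  # veteran
```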
In this context, the effects of eHMI modality, location, and demographics on the dependent variables were assessed using linear mixed-effects models (LMMs) and reported with the fixed effects of the IVs. In contrast to analysis of variance (ANOVA), an LMM does not depend on restrictive assumptions about the variance–covariance matrix and can accommodate missing data and unbalanced sample sizes (e.g., the unequal sample sizes in Table 2 and Table 3) [63,64]. The analysis was performed using IBM SPSS Statistics 24, with the significance level set at 0.05. Pairwise comparisons of estimated marginal means were carried out with a Bonferroni adjustment.
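The Bonferroni adjustment used in the pairwise comparisons amounts to multiplying each raw p-value by the number of comparisons and capping the result at 1. A minimal sketch, assuming all seven yielding conditions (six eHMI modalities plus the no-eHMI baseline) are compared pairwise:

```python
# Bonferroni adjustment for pairwise comparisons (sketch, not SPSS output).
from math import comb

conditions = 7                 # six eHMI modalities + the no-eHMI baseline
n_pairs = comb(conditions, 2)  # number of pairwise comparisons

def bonferroni(p_raw, m=n_pairs):
    # Adjusted p-value: raw p multiplied by the comparison count, capped at 1.
    return min(1.0, p_raw * m)

print(n_pairs)                       # 21
print(round(bonferroni(0.001), 3))   # 0.021 -> still significant at alpha = 0.05
print(bonferroni(0.1))               # 1.0   -> clearly not significant
```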

3. Results

3.1. Self-Reported Clarity

For the yielding condition, the LMM incorporating multiple IVs revealed significant effects of gender (F(1, 997) = 10.45, p = 0.001), accident history (F(1, 997) = 35.78, p < 0.001), and eHMI modality (F(5, 997) = 68.82, p < 0.001) on the clarity-rating scores of the information conveyed by the AV. No interaction effects were found between eHMI modality and location. Male participants provided higher ratings (M = 6.59, 95% CI: 6.43, 6.75) toward these eHMI concepts than female participants (M = 6.23, 95% CI: 6.02, 6.44). Participants who had experienced traffic accidents reported a significantly lower clarity score (M = 6.01, 95% CI: 5.76, 6.25) than those who had not (M = 6.81, 95% CI: 6.67, 6.95). Figure 3 illustrates the distribution of participants' clarity scores for the different eHMI modalities, as well as the pairwise comparisons between conditions. In general, the AV with an eHMI in the form of flashing text, a sweeping pedestrian icon, or a sweeping arrow was regarded as the clearest. The clarity scores for the flashing smiley and flashing light band were significantly lower than those for the eHMIs in the first tier (p < 0.05) but higher than those for the sweeping light bar (p < 0.05) and the no-eHMI condition (labelled NA (not applicable) in Figure 3). The SLB eHMI and NA conditions received the lowest clarity ratings, around 5 (neutral) on the 9-point Likert scale.

3.2. Decision Time

Regarding the decision time before participants commenced crossing, significant differences were found for gender (F(1, 997) = 12.35, p < 0.001), sensation-seeking score (F(2, 997) = 32.81, p < 0.001), accident history (F(1, 997) = 7.69, p = 0.006), driving experience (F(2, 997) = 6.52, p = 0.002), and eHMI modality (F(5, 997) = 9.76, p < 0.001). There were no interaction effects between eHMI modality and location on decision time. Male participants made quicker decisions (M = 9.26 s, 95% CI: 9.09, 9.43) than female participants (M = 9.68 s, 95% CI: 9.46, 9.90). Participants in the high SSS group exhibited the shortest decision time (M = 8.81 s, 95% CI: 8.57, 9.03) compared with the medium SSS (M = 9.75 s, 95% CI: 9.53, 9.96) and low SSS groups (M = 9.86 s, 95% CI: 9.64, 10.08). Moreover, participants who had experienced traffic accidents tended to wait somewhat longer (M = 9.67 s, 95% CI: 9.41, 9.92) than those who had not (M = 9.28 s, 95% CI: 9.14, 9.42). Regarding driving experience, participants with proficient driving skills tended to wait longer (M = 9.77 s, 95% CI: 9.44, 10.11) before initiating crossing than participants with no driving experience (M = 9.47 s, 95% CI: 9.26, 9.68) or limited driving mileage (M = 9.18 s, 95% CI: 9.01, 9.34).
Figure 4 provides the pairwise comparisons between the different yielding conditions. Together with Figure 3, it supports an intuitive yet justifiable inference that a clearer eHMI leads to a shorter decision time, which was also found in [18] and reported as a "strong correlation between subjective and objective performance". This was partly confirmed here, as the three eHMI concepts with the highest clarity scores resulted in the shortest decision times. However, although the FS and FLB eHMIs received higher clarity scores than the SLB modality, no significant differences were found among the FS, FLB, SLB, and no-eHMI conditions regarding their influence on participants' decision time, as shown in Figure 4. No main effect of eHMI location on participants' decision time was found.

3.3. Clarity Score and Decision Time in the Nonyielding Condition

Figure 5 shows participants' decision time and clarity ratings across a series of encounters in which the AV deliberately did not yield to the participant. In this condition, participants' decision time decreased continuously over the first few encounters and then stabilized at around 9.3 s with minor fluctuations. This could be attributed to participants becoming more familiar with the AV's behavioral patterns and thus gradually reaching a steady state. Similarly, participants' clarity ratings of the information conveyed by the nonyielding AV fluctuated at first, then declined steadily, and finally stabilized at a low level. Notably, the nonyielding AV kept the same driving pattern and external appearance (without eHMI) across trials, whilst the clarity ratings toward it demonstrated an overall downward trend. In the postexperiment interview, participants offered an explanation for this apparent paradox. At the beginning of the experiment, they largely followed the road-crossing strategy and rating criterion developed in the practice session; hence, relatively high clarity ratings, with declining decision times, were seen in the first handful of trials. However, as the experiment went on, they noticed and understood the yielding intent conveyed by some of the eHMIs. Compared with the eHMI conditions, the nonyielding AV without an eHMI was not as clear, thus receiving decreasing rating scores over time. Some participants stated that they had been expecting an eHMI on the nonyielding AV as well.

3.4. Gaze-Based Metrics

Table 4 shows the effects of the IVs on participants’ gaze-based metrics, including their dwell time, first fixation duration, and fixation count, when they were looking at the AOIs corresponding to the eHMI locus. Gender had a significant effect on fixation count (F(1, 658) = 22.87, p < 0.001), with females exhibiting more fixations (M = 5.03, 95% CI: 4.52, 5.55) when preparing to cross the road than males (M = 3.63, 95% CI: 3.23, 4.02). Participants’ sensation-seeking score had a significant effect on dwell time (F(2, 658) = 10.37, p < 0.001) and fixation count (F(2, 658) = 11.45, p < 0.001). Pairwise comparisons showed that participants with high sensation-seeking scores had a shorter dwell time (M = 1.78 s, 95% CI: 1.54, 2.02) and a lower fixation count (M = 3.50, 95% CI: 2.98, 4.02) than the low-SSS group (dwell time: M = 2.45, 95% CI: 2.23, 2.68; fixation count: M = 5.04, 95% CI: 4.55, 5.53). The medium-SSS participants produced an intermediate fixation count (M = 4.53, 95% CI: 3.95, 4.95), significantly higher than that of the sensation seekers (p = 0.003) and lower than that of the sensation avoiders (p = 0.047). Similarly, driving experience significantly affected participants’ dwell time (F(2, 658) = 6.51, p = 0.002) and fixation count (F(2, 658) = 4.20, p = 0.015). Participants with no driving experience had a significantly shorter dwell time (M = 1.79, 95% CI: 1.57, 2.03) and a lower fixation count (M = 3.77, 95% CI: 3.26, 4.28) than participants with limited driving mileage (dwell time: M = 2.18, 95% CI: 2.01, 2.36; fixation count: M = 4.56, 95% CI: 4.18, 4.94) or proficient driving experience (dwell time: M = 2.45, 95% CI: 2.12, 2.79; fixation count: M = 4.66, 95% CI: 3.92, 5.40).
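For readers less familiar with these measures, the three gaze-based metrics can be derived from a per-trial fixation sequence; the following is a minimal, stdlib-only sketch with hypothetical fixation data (the `Fixation` structure and all values are illustrative, not the study’s processing pipeline):

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    aoi: str         # AOI hit by this fixation, e.g. "grill"
    duration: float  # fixation duration in seconds

def gaze_metrics(fixations, aoi):
    """Dwell time, first fixation duration, and fixation count for one AOI."""
    hits = [f for f in fixations if f.aoi == aoi]
    return {
        # total time spent fixating the AOI over the trial
        "dwell_time": round(sum(f.duration for f in hits), 3),
        # duration of the first fixation that landed on the AOI
        "first_fixation_duration": hits[0].duration if hits else 0.0,
        # number of fixations on the AOI
        "fixation_count": len(hits),
    }

# Hypothetical trial: five fixations, three of them on the grill eHMI.
trial = [Fixation("grill", 0.40), Fixation("windshield", 0.30),
         Fixation("grill", 0.55), Fixation("road", 0.25),
         Fixation("grill", 0.50)]
print(gaze_metrics(trial, "grill"))
# → {'dwell_time': 1.45, 'first_fixation_duration': 0.4, 'fixation_count': 3}
```

Dwell time and fixation count aggregate over the whole trial, whereas first fixation duration captures only the initial orienting response, which is why the three metrics can dissociate, as discussed below.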
Moreover, eHMI modality and location had significant effects on participants’ dwell time, first fixation duration, and fixation count. Figure 6, Figure 7 and Figure 8 display the pairwise comparison results for these three metrics between the different eHMI modalities and locations, separately. Overall, participants exhibited distributed visual attention when commencing road-crossing before an AV with no special external interface, as demonstrated by the comparisons between NA and the other eHMI modalities across these three figures. In other words, a novel eHMI displayed by the AV attracted participants’ attention, although the degree varied among eHMI concepts. The eHMI in the form of a sweeping light bar resulted in the longest dwell time, the longest first fixation duration, and the highest fixation count. Note that the SLB was regarded as one of the least clear eHMI concepts and led to the longest decision time. It is reasonable to infer that the ambiguous information conveyed by this eHMI raised issues with the perception and comprehension of the AV’s intention. Accordingly, about six fixations were observed in this particular condition, with the first fixation lasting 0.58 s and the dwell time reaching 3 s. Besides, the dwell time for the FLB eHMI (M = 2.46, 95% CI: 2.17, 2.75) was significantly longer than for the flashing text (M = 1.84, 95% CI: 1.52, 2.15), as the textual message was reported by participants as the clearest form for the road-crossing decision-making task. The first fixation duration for the flashing smiley (M = 0.53, 95% CI: 0.50, 0.62) was longer than for the sweeping pedestrian icon (M = 0.40, 95% CI: 0.32, 0.48).
Regarding the effect of the eHMI locus, the pairwise comparisons yielded seemingly contradictory results across the three metrics. For example, participants’ dwell time on the grill AOI was significantly longer, and their fixation count higher, than for the eHMI placed on the roof. However, the first fixation duration for the grill eHMI (M = 0.39, 95% CI: 0.34, 0.44) was much shorter than for the roof eHMI (M = 0.56, 95% CI: 0.49, 0.63). Similarly, the number of fixations on the windshield (M = 4.49, 95% CI: 3.99, 4.99) was significantly higher than on the roof (M = 3.59, 95% CI: 3.00, 4.18), whereas the pattern reversed for the first fixation duration (windshield: M = 0.48, 95% CI: 0.42, 0.53; roof: M = 0.56, 95% CI: 0.49, 0.63).
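Assigning fixations to the grill, windshield, and roof AOIs amounts to a point-in-rectangle test on the gaze coordinates; the following is a minimal sketch with hypothetical AOI rectangles (the coordinates are illustrative, not those used in the study, where AOIs were defined on the stimulus videos):

```python
# Hypothetical AOI rectangles in screen coordinates (x, y, width, height).
AOIS = {
    "roof":       (420, 80, 200, 60),
    "windshield": (400, 140, 240, 90),
    "grill":      (410, 230, 220, 70),
}

def classify_gaze(x, y, aois=AOIS):
    """Return the first AOI containing the gaze point, or None if off-target."""
    for name, (ax, ay, w, h) in aois.items():
        if ax <= x <= ax + w and ay <= y <= ay + h:
            return name
    return None

print(classify_gaze(500, 100))  # → 'roof'
print(classify_gaze(500, 260))  # → 'grill'
print(classify_gaze(10, 10))    # off the vehicle → None
```

In practice, AOIs on an approaching vehicle are dynamic (they grow and shift as the vehicle nears), so per-frame rectangles or tracked regions are typically used rather than a single static layout.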

4. Discussion

Using a video-based, eye-tracking experiment, the current research investigated how pedestrians react to an approaching AV that displays a yielding intention via eHMIs in different modalities and locations. Meanwhile, pedestrians’ demographics (gender, driving experience, and traffic accident involvement history) and sensation-seeking scores were also incorporated into a linear mixed model to determine their effects on pedestrians’ clarity scores, decision time, and gaze-based metrics in the different eHMI conditions.

4.1. eHMI as a Necessity

To begin with, the importance of the eHMI concept and its future application on AVs should be emphasized. Some voices state that eHMIs (or explicit cues) play a secondary or dispensable role in pedestrian–vehicle interaction compared with vehicles’ implicit cues (e.g., vehicle kinematics), which are assigned overwhelming dominance [65,66,67]. For example, Moore et al. [66] reported that vehicle kinematics alone should be sufficient to realize safe and efficient interactions in traffic. Herein, we do not deny the importance of the AV’s implicit information in shaping pedestrians’ interactive strategy. On the one hand, during the experiment, in 211 out of 1798 trials (11.74%), participants initiated crossing while the AV was decelerating, without a clear yielding intention or any extra external cues. Two participants adopted this road-crossing strategy throughout the whole experiment. On the other hand, Figure 9 provides the percentage distribution of participants’ rating scores for the factors that influenced their road-crossing decisions. The AV’s yielding behavior, speed, and distance received the largest numbers of “very important” responses.
However, the necessity of the eHMI is supported by solid evidence. Firstly, from the LMM results, eHMI concepts in the form of flashing text (FT), sweeping pedestrian icon (SPI), and sweeping arrow (SA) significantly shortened participants’ decision time compared with the no-eHMI yielding condition (see Figure 4). Secondly, in the nonyielding condition, with repeated encounters with the same AV (nonyielding, no eHMI), participants’ clarity scores toward the AV displayed an overall downward, and unexpected, trend. This contradicts the intuition that repeated exposure to the same task makes everything clearer (e.g., decreasing decision time). Participants attributed this downward trend in clarity to having seen clearer options in the trials where an eHMI was used. Thirdly, 90% of the participants rated “eHMI presence” as “very important” or “important” (Figure 9), providing convincing evidence of the eHMI’s influence. Additionally, from the perspective of pedestrians’ gaze behavior, an eHMI seems able to capture and centralize pedestrians’ visual attention in a particular area (as indicated by the pairwise comparisons in Figure 6, Figure 7 and Figure 8). Therefore, with a proper eHMI, visual searching and intent perception should be more efficient than visually tracking the state of the vehicle and scanning the road, bumper, and windshield [39]. The evidence from the current study thus sheds light on the positive influence of the eHMI in pedestrian–AV interaction from different perspectives.

4.2. Effect of eHMI Modality

Regarding the eHMI modalities employed in this research, the eHMIs shown as FT, SPI, and SA were regarded as the clearest concepts and resulted in the shortest decision times among the six modalities. Besides, the FT, SPI, and SA eHMIs were easier to understand, as reflected by shorter dwell times and first fixation durations than the other three forms. A possible explanation is that the components of the former three eHMIs have practical and concrete meanings (e.g., arrows indicating directions, a pedestrian icon depicting the information receiver), whereas the latter eHMIs use solely abstract light strips. However, participants also raised potential issues with the textual and arrow eHMIs in the postexperiment interview. For example, the textual eHMI, consisting of three Chinese characters, was somewhat complicated and difficult to recognize from a longer distance, in line with the concerns raised in [11]. In addition, some participants stated that the “sweeping arrow” functioned well in the straight-road traffic scenario but might be confusing at a T-junction or an intersection because it seemed as if the vehicle itself would turn right.
Furthermore, the eHMIs appearing as a flashing smiley (FS), flashing light band (FLB), and sweeping light bar (SLB) had no significant effect on participants’ decision time relative to the NA condition, although the former two were subjectively rated clearer. However, these three eHMIs attracted more fixations and longer fixation times than the NA condition. This might suggest that these concepts performed no better, yet consumed more time and attentional resources, than the no-eHMI condition. However, we will not jump to the arbitrary conclusion that these concepts are poorly designed or should be abandoned in future deployment. Some studies that involved only one of these three concepts reported a significant influence on participants’ decision time, such as the SLB in [14] and the FLB in [33]. One possible reason for this disagreement is that the learning effect plays an essential role in pedestrians’ comprehension of eHMI concepts, particularly the abstract ones [37]. In the current research, however, participants had no more than three encounters with each eHMI modality presented at different locations, leaving few chances for learning. By analogy, as young children, we knew little about the meaning of red/green traffic lights until we had heard “Red light, stop. Green light, go” multiple times.

4.3. Effect of eHMI Location

The eHMI location (grill, windshield, and roof) did not significantly affect participants’ clarity ratings or decision time, in accordance with the findings of [18]. However, the eye-tracker enabled us to examine the influence of the eHMI’s location on participants’ visual attention. Participants displayed longer dwell times and more fixations at the grill and windshield, yet a longer first fixation duration at the roof. This inconsistency among the three fixation-based metrics is interesting because they are thought to be comparable and compatible in revealing the attributes of a visual task from a twofold perspective. More precisely, a higher value of these metrics can indicate greater interest in and task relevance of a particular AOI, or more uncertainty and difficulty in comprehending the target information [68,69]. According to the scanning patterns reported by [39] and the implicit cues conveyed by the front of a vehicle, the grill and windshield are expected to have higher task relevance, not to mention an extra eHMI, and thus to attract more visual attention. In addition, numerous researchers hold that pedestrians’ road-crossing decisions are predominantly subject to implicit cues (e.g., vehicle kinematics including speed, distance, TTA, etc.) [70]. In this sense, it is reasonable that participants looked at the grill/bumper and windshield more frequently and for longer to perceive the vehicle’s intention through its motion pattern. By contrast, a glowing component on the roof of a vehicle, especially when noticed for the first time, redirects attention to the top of the AV and runs contrary to this habitual scanning pattern, which therefore resulted in a longer first fixation duration.
However, after the first fixation, participants appear to have shifted their gaze elsewhere, as demonstrated by the limited total fixation counts and dwell time for the roof eHMI in Figure 6 and Figure 8. Even so, we still hold that the efficacy of the roof eHMI should be further investigated, especially in complex traffic environments with multiple in-line vehicles. It has been reported that a front display may not be sufficient in urban scenarios, as it can be occluded if not presented on the first vehicle [71]. In such cases, an eHMI mounted on the roof may help address this issue given its superior visibility.

4.4. Potential Issues with eHMI Design from Gaze Behavior

Notwithstanding the case made above for the eHMI as a necessity, potential issues with eHMI design revealed by pedestrians’ gaze behavior should be noted.
The current eye-tracking study found that eHMIs can reshape the distribution of pedestrians’ visual attention. However, it remains unknown how this reallocation of limited visual resources will affect interaction efficiency and safety simultaneously, particularly in complex traffic scenarios. For some eHMI concepts (e.g., the SPI), a significantly shorter decision time was recorded, along with a longer dwell time and a higher fixation count in the eHMI area. It is reasonable to attribute this improvement in decision-making efficiency to the redirection and centralization of pedestrians’ visual attention. However, in complex traffic scenes, e.g., multiple vehicles approaching from more than one direction at an intersection, a novel eHMI may lead to “cognitive/attentional tunnelling” [72]. As posed by [66], the eHMI risks distracting pedestrians from other relevant cues, such as other cars and road users. It remains to be determined whether and how AVs with eHMIs will affect pedestrians’ perception of other traffic participants.
Moreover, a new information channel, i.e., the eHMI as an additional stimulus, may induce extra cognitive load on pedestrians [66]. This can be caused by the vagueness of, and inadequate learning of, particular eHMI concepts, or by the overuse of various eHMIs in one AV or across AVs. Taking the SLB eHMI as an example, this concept induced a decision time and clarity score similar to those of the no-eHMI yielding condition, yet resulted in the longest dwell time, the longest first fixation duration, and the highest fixation count. The abstract information conveyed by this eHMI seems hard to comprehend and therefore takes additional cognitive resources to process, especially within very limited encounters. Besides, an excessive number and variety of eHMI modalities could lead to misinterpretation or information overload [10], requiring still more visual/cognitive resources.
These potential issues in eHMI design and evaluation remain to be addressed. Eye-tracking devices provide a chance to investigate and interpret pedestrians’ visual and cognitive responses to various eHMIs, AVs, and scenarios.

4.5. Factors Related to Pedestrians

Finally, pedestrian-related factors also influenced participants’ subjective and objective responses. In terms of gender, male participants gave higher clarity scores to the eHMI concepts and made quicker road-crossing decisions, with fewer fixations and shorter fixation times, than females. This finding is supported by other research [73]. Besides, participants who had been involved in traffic accidents were more cautious in rating the clarity of the eHMIs and would rather wait longer to ensure a safe crossing [74]. Intuitively, pedestrians’ behavior and decisions would seem to have little to do with whether they can drive. In this experiment, however, pedestrians’ driving skills did significantly affect their gaze behavior and decision time. In particular, participants with no driving experience displayed fewer fixations, shorter dwell times, and quicker decisions. By contrast, participants who were veteran drivers behaved far more prudently, which might be due to risk aversion originating from a better understanding of vehicle-related threats.
Additionally, sensation-seeking traits significantly influenced pedestrians’ decision time and gaze behavior. When crossing before an approaching AV, the sensation seekers produced fewer fixations and shorter dwell and decision times than the sensation avoiders. This difference can be explained in two ways. On the one hand, sensation seeking has been proven to be a determinant of reckless behavior [54], such as red-light running [75]. On the other hand, previous investigations [57] and meta-analyses [76] indicate that individuals with higher sensation-seeking scores are more likely to trust and accept AVs. Therefore, less visual attention toward the AV and quicker decisions were recorded.

5. Limitations and Recommendations

The current study highlights the necessity of the eHMI and shows how its modality and placement influence pedestrians’ subjective and objective responses, through an elaborate video-based, eye-tracking experiment. However, several limitations should be noted. A major limitation is the fixed traffic scenario, consisting of a one-way road, one AV, and one pedestrian. Although this setup allowed us to examine the behavioral parameters of individual pedestrians in depth, it ignores the complexity and dynamics of real-world road-crossing. For example, pedestrians may produce varied behavioral patterns when crossing before in-line vehicles (with the eHMI possibly occluded) at intersections. Secondly, to limit the scope of the research, the eHMI was displayed at only one location in one modality at a time. The effects of eHMIs in multiple modalities at multiple locations should be further investigated. Thirdly, only a limited number of encounters was assigned to each eHMI concept, given the research purpose and the complex grouping design. This design cannot reveal the learning effect for a particular eHMI across encounters, especially for the solely light-strip-based eHMIs, which may bias the results. It would be worthwhile to examine how the learning effect reshapes pedestrians’ visual attention and perceived clarity over a series of encounters. Additionally, due to COVID-19 restrictions, the experiment used a convenience sample of undergraduates and graduates, who are expected to have normal perceptual, cognitive, and motor abilities. How children and older people perceive and interpret these eHMI concepts remains to be explored. Last but not least, the eHMI concepts used in the experiment were obtained from the existing research but modified to a certain degree; therefore, we advocate that these findings be assessed across scenarios, ages, and cultures.

Author Contributions

Conceptualization, W.L. and F.G.; methodology, W.L., Z.R., M.L. and Z.L.; formal analysis, W.L.; data curation, W.L.; writing—original draft preparation, W.L.; writing—review and editing, W.L. and Z.R.; visualization, W.L.; supervision, F.G.; project administration, F.G.; funding acquisition, F.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 71771045 and 72071035.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Northeastern University Research Ethics Committee (approval code: NEU-EC-2020B003, 24 February 2022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data available on request due to privacy restrictions.

Acknowledgments

The authors would like to thank Natasha Merat from the University of Leeds for her inspiring work, which has shaped our experimental design.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Fagnant, D.J.; Kockelman, K. Preparing a Nation for Autonomous Vehicles: Opportunities, Barriers and Policy Recommendations. Transp. Res. Part A Policy Pract. 2015, 77, 167–181. [Google Scholar] [CrossRef]
  2. Milakis, D.; Van Arem, B.; Van Wee, B. Policy and Society Related Implications of Automated Driving: A Review of Literature and Directions for Future Research. J. Intell. Transp. Syst. Technol. Plan. Oper. 2017, 21, 324–348. [Google Scholar] [CrossRef]
  3. Kaye, S.A.; Li, X.; Oviedo-Trespalacios, O.; Pooyan Afghari, A. Getting in the Path of the Robot: Pedestrians Acceptance of Crossing Roads near Fully Automated Vehicles. Travel Behav. Soc. 2022, 26, 1–8. [Google Scholar] [CrossRef]
  4. Bansal, P.; Kockelman, K.M.; Singh, A. Assessing Public Opinions of and Interest in New Vehicle Technologies: An Austin Perspective. Transp. Res. Part C Emerg. Technol. 2016, 67, 1–14. [Google Scholar] [CrossRef]
  5. Nordhoff, S.; Kyriakidis, M.; van Arem, B.; Happee, R. A Multi-Level Model on Automated Vehicle Acceptance (MAVA): A Review-Based Study. Theor. Issues Ergon. Sci. 2019, 20, 682–710. [Google Scholar] [CrossRef] [Green Version]
  6. Tabone, W.; de Winter, J.; Ackermann, C.; Bärgman, J.; Baumann, M.; Deb, S.; Emmenegger, C.; Habibovic, A.; Hagenzieker, M.; Hancock, P.A.; et al. Vulnerable Road Users and the Coming Wave of Automated Vehicles: Expert Perspectives. Transp. Res. Interdiscip. Perspect. 2021, 9, 100293. [Google Scholar] [CrossRef]
  7. Faas, S.M.; Mathis, L.; Baumann, M. External HMI for Self-Driving Vehicles: Which Information Shall Be Displayed? Transp. Res. Part F Psychol. Behav. 2020, 68, 171–186. [Google Scholar] [CrossRef]
  8. SAE International. Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles; SAE International: Warrendale, PA, USA, 2018; Volume 4970. [Google Scholar]
  9. Rasouli, A.; Tsotsos, J.K. Autonomous Vehicles That Interact with Pedestrians: A Survey of Theory and Practice. IEEE Trans. Intell. Transp. Syst. 2020, 21, 900–918. [Google Scholar] [CrossRef] [Green Version]
  10. Dey, D.; Habibovic, A.; Löcken, A.; Wintersberger, P.; Pfleging, B.; Riener, A.; Martens, M.; Terken, J. Taming the EHMI Jungle: A Classification Taxonomy to Guide, Compare, and Assess the Design Principles of Automated Vehicles’ External Human-Machine Interfaces. Transp. Res. Interdiscip. Perspect. 2020, 7, 100174–100198. [Google Scholar] [CrossRef]
  11. Bazilinskyy, P.; Dodou, D.; de Winter, J. Survey on EHMI Concepts: The Effect of Text, Color, and Perspective. Transp. Res. Part F Psychol. Behav. 2019, 67, 175–194. [Google Scholar] [CrossRef]
  12. Fridman, L.; Mehler, B.; Xia, L.; Yang, Y.; Facusse, L.Y.; Reimer, B. To Walk or Not to Walk: Crowdsourced Assessment of External Vehicle-to-Pedestrian Displays. arXiv Prepr. 2017, arXiv:1707.02698. [Google Scholar] [CrossRef]
  13. Carmona, J.; Guindel, C.; Garcia, F.; de la Escalera, A. Ehmi: Review and Guidelines for Deployment on Autonomous Vehicles. Sensors 2021, 21, 2912. [Google Scholar] [CrossRef]
  14. Dey, D.; Matviienko, A.; Berger, M.; Martens, M.; Pfleging, B.; Terken, J. Communicating the Intention of an Automated Vehicle to Pedestrians: The Contributions of EHMI and Vehicle Behavior. IT-Inf. Technol. 2021, 63, 123–141. [Google Scholar] [CrossRef]
  15. Dey, D.; Habibovic, A.; Pfleging, B.; Martens, M.; Terken, J. Color and Animation Preferences for a Light Band EHMI in Interactions between Automated Vehicles and Pedestrians. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; Association for Computing Machinery: New York, NY, USA, 2020. [Google Scholar]
  16. Tiesler-Wittig, H. Functional Application, Regulatory Requirements and Their Future Opportunities for Lighting of Automated Driving Systems. 2019. Available online: https://www.sae.org/publications/technical-papers/content/2019-01-0848/ (accessed on 12 March 2022).
  17. Werner, A. New Colours for Autonomous Driving: An Evaluation of Chromaticities for the External Lighting Equipment of Autonomous Vehicles. Colour Turn 2018, 1, 1–15. [Google Scholar] [CrossRef]
  18. Eisma, Y.B.; van Bergen, S.; ter Brake, S.M.; Hensen, M.T.T.; Tempelaar, W.J.; de Winter, J.C.F. External Human-Machine Interfaces: The Effect of Display Location on Crossing Intentions and Eye Movements. Information 2020, 11, 13. [Google Scholar] [CrossRef] [Green Version]
  19. NISSAN. Nissan IDS Concept: Nissan’s Vision for the Future of EVs and Autonomous Driving. Available online: https://www.nissan-global.com/EN/DESIGN/NISSAN/DESIGNWORKS/CONCEPTCAR/IDS/ (accessed on 12 December 2021).
  20. Daimler Autonomous Concept Car Smart Vision EQ Fortwo: Welcome to the Future Ofcar Sharing-Daimler Global Media Site. Available online: https://group-media.mercedes-benz.com/marsMediaSite/en/instance/ko/Autonomous-concept-car-smart-vision-EQ-fortwo-Welcome-to-the-future-of-car-sharing.xhtml?oid=29042725 (accessed on 12 March 2022).
  21. Semcon. The Smiling Car. Available online: https://semcon.com/smilingcar/ (accessed on 10 March 2022).
  22. Dou, J.; Chen, S.; Tang, Z.; Xu, C.; Xue, C. Evaluation of Multimodal External Human–Machine Interface for Driverless Vehicles in Virtual Reality. Symmetry (Basel) 2021, 13, 687. [Google Scholar] [CrossRef]
  23. Kaleefathullah, A.A.; Merat, N.; Lee, Y.M.; Eisma, Y.B.; Madigan, R.; Garcia, J.; de Winter, J. External Human–Machine Interfaces Can Be Misleading: An Examination of Trust Development and Misuse in a CAVE-Based Pedestrian Simulation Environment. Hum. Factors 2020, 1–16. [Google Scholar] [CrossRef]
  24. Ford Virginia Tech Go Undercover to Develop Signals That Enable Autonomous Vehicles to Communicate with People. Available online: https://media.ford.com/content/fordmedia/fna/us/en/news/2017/09/13/ford-virginia-tech-autonomous-vehicle-human-testing.html (accessed on 16 November 2020).
  25. Hensch, C.; Neumann, I.; Beggiato, M.; Halama, J.; Krems, J.F. How Should Automated Vehicles Communicate?–Effects of a Light-Based Communication Approach in a Wizard-of-Oz Study. In Proceedings of the International conference on applied human factors and ergonomics, Washington, DC, USA, 24–28 July 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 79–91. [Google Scholar]
  26. Chang, C.-M.; Toda, K.; Igarashi, T.; Miyata, M.; Kobayashi, Y. A Video-Based Study Comparing Communication Modalities between an Autonomous Car and a Pedestrian. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Toronto, ON, Canada, 23–25 September 2018; ACM: New York, NY, USA, 2018; pp. 104–109. [Google Scholar]
  27. Holländer, K.; Colley, A.; Mai, C.; Häkkilä, J.; Alt, F.; Pfleging, B. Investigating the Influence of External Car Displays on Pedestrians’ Crossing Behavior in Virtual Reality. In Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services, Taipei, China, 1–4 October 2019; Association for Computing Machinery: Taipei, China, 2019; pp. 1–11. [Google Scholar]
  28. Ackermann, C.; Beggiato, M.; Schubert, S.; Krems, J.F. An Experimental Study to Investigate Design and Assessment Criteria: What Is Important for Communication between Pedestrians and Automated Vehicles? Appl. Ergon. 2019, 75, 272–282. [Google Scholar] [CrossRef]
  29. de Winter, J.; Dodou, D. External Human-Machine Interfaces: Gimmick or Necessity? Delft University of Technology, Delft, The Netherlands. 2022; submitted. [Google Scholar]
  30. Faas, S.M.; Baumann, M. Yielding Light Signal Evaluation for Self-Driving Vehicle and Pedestrian Interaction. In Proceedings of the International Conference on Human Systems Engineering and Design: Future Trends and Applications, Munich, Germany, 16–18 September 2019; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; pp. 189–194. [Google Scholar]
  31. de Clercq, K.; Dietrich, A.; Núñez Velasco, J.P.; de Winter, J.; Happee, R. External Human-Machine Interfaces on Automated Vehicles: Effects on Pedestrian Crossing Decisions. Hum. Factors 2019, 61, 1353–1370. [Google Scholar] [CrossRef] [Green Version]
  32. Lee, Y.M.; Madigan, R.; Garcia, J.; Tomlinson, A.; Solernou, A.; Romano, R.; Markkula, G.; Merat, N.; Uttley, J. Understanding the Messages Conveyed by Automated Vehicles. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2019, Utrecht, The Netherlands, 21–25 September 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 134–143. [Google Scholar] [CrossRef]
  33. Lee, Y.M.; Madigan, R.; Uzondu, C.; Garcia, J.; Romano, R.; Markkula, G.; Merat, N. Learning to Interpret Novel EHMI: The Effect of Vehicle Kinematics and EHMI Familiarity on Pedestrians’ Crossing Behavior. J. Saf. Res. 2022, 80, 270–280. [Google Scholar] [CrossRef]
  34. Fuest, T.; Michalowski, L.; Träris, L.; Bellem, H.; Bengler, K. Using the Driving Behavior of an Automated Vehicle to Communicate Intentions-A Wizard of Oz Study. In Proceedings of the 21st IEEE International Conference on Intelligent Transportation Systems, ITSC 2018, Maui, HI, USA, 4–7 November 2018; Institute of Electrical and Electronics Engineers Inc.: Garching, Germany, 2018; Volume 2018, pp. 3596–3601. [Google Scholar]
  35. Fuest, T.; Maier, A.S.; Bellem, H.; Bengler, K. How Should an Automated Vehicle Communicate Its Intention to a Pedestrian?–A Virtual Reality Study. In Proceedings of the International Conference on Human Systems Engineering and Design: Future Trends and Applications, Munich, Germany, 16–18 September 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 195–201. [Google Scholar]
  36. Ackermans, S.; Dey, D.; Ruijten, P.; Cuijpers, R.H.; Pfleging, B. The Effects of Explicit Intention Communication, Conspicuous Sensors, and Pedestrian Attitude in Interactions with Automated Vehicles. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; Association for Computing Machinery: New York, NY, USA, 2020. [Google Scholar]
  37. Hochman, M.; Parmet, Y.; Oron-Gilad, T. Pedestrians’ Understanding of a Fully Autonomous Vehicle’s Intent to Stop: A Learning Effect Over Time. Front. Psychol. 2020, 11, 3407–3418. [Google Scholar] [CrossRef]
  38. Lévêque, L.; Ranchet, M.; Deniel, J.; Bornard, J.C.; Bellet, T. Where Do Pedestrians Look When Crossing? A State of the Art of the Eye-Tracking Studies. IEEE Access 2020, 8, 164833–164843. [Google Scholar] [CrossRef]
  39. Dey, D.; Walker, F.; Martens, M.; Terken, J. Gaze Patterns in Pedestrian Interaction with Vehicles: Towards Effective Design of External Human-Machine Interfaces for Automated Vehicles. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2019, Utrecht, The Netherlands, 21–25 September 2019; ACM: New York, NY, USA, 2019; pp. 369–378. [Google Scholar]
  40. Bazilinskyy, P.; Kooijman, L.; Dodou, D.; De Winter, J.C.F. Coupled Simulator for Research on the Interaction between Pedestrians and (Automated) Vehicles. In Proceedings of the 19th Driving Simulation Conference (DSC), Antibes, France, 9–11 September 2020; Driving Simulation Association (DSA): Antibes, France, 2020; pp. 1–7. [Google Scholar]
  41. Peirce, J.W. PsychoPy—Psychophysics Software in Python. J. Neurosci. Methods 2007, 162, 8–13. [Google Scholar] [CrossRef] [PubMed] [Green Version]
42. Hudson, C.; Deb, S.; Carruth, D.W.; McGinley, J.; Frey, D. Pedestrian Perception of Autonomous Vehicles with External Interacting Features. In Proceedings of the International Conference on Applied Human Factors and Ergonomics, Orlando, FL, USA, 21–25 July 2018; Springer: Berlin/Heidelberg, Germany, 2018; Volume 781, pp. 33–39.
43. Fillenberg, S.; Pinkow, S. Continental: Holistic Human-Machine Interaction for Autonomous Vehicles. Available online: https://www.continental.com/en/press/press-releases/2019-12-12-hmi-cube/ (accessed on 11 March 2022).
44. Othersen, I.; Conti-Kufner, A.S.; Dietrich, A.; Maruhn, P.; Bengler, K. Designing for Automated Vehicle and Pedestrian Communication: Perspectives on EHMIs from Older and Younger Persons. Proc. Hum. Factors Ergon. Soc. Eur. Chapter 2018 Annu. Conf. 2018, 4959, 135–148.
45. Chang, C.M. A Gender Study of Communication Interfaces between an Autonomous Car and a Pedestrian. In Proceedings of the 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2020, Virtual Event, USA, 21–22 September 2020; ACM: New York, NY, USA, 2020; pp. 42–45.
46. Knight, W. New Self-Driving Car Tells Pedestrians When It’s Safe to Cross the Street. Available online: https://www.technologyreview.com/2016/08/30/7287/new-self-driving-car-tells-pedestrians-when-its-safe-to-cross-the-street/ (accessed on 8 March 2022).
47. Deb, S.; Strawderman, L.J.; Carruth, D.W. Investigating Pedestrian Suggestions for External Features on Fully Autonomous Vehicles: A Virtual Reality Experiment. Transp. Res. Part F Traffic Psychol. Behav. 2018, 59, 135–149.
48. Wang, P.; Motamedi, S.; Bajo, T.C.; Zhou, X.; Qi, S.; Whitney, D.; Chan, C.-Y. Safety Implications of Automated Vehicles Providing External Communication to Pedestrians; University of California: Berkeley, CA, USA, 2019.
49. Wilbrink, M.; Lau, M.; Illgner, J.; Schieben, A.; Oehl, M. Impact of External Human–Machine Interface Communication Strategies of Automated Vehicles on Pedestrians’ Crossing Decisions and Behaviors in an Urban Environment. Sustainability 2021, 13, 8396.
50. Schieben, A.; Wilbrink, M.; Kettwich, C.; Dodiya, J.; Sorokin, L.; Merat, N.; Dietrich, A.; Bengler, K.; Kaup, M. Testing External HMI Designs for Automated Vehicles–An Overview on User Study Results from the EU Project InterACT. In Proceedings of the 19 Tagung Automatisiertes Fahren, Munich, Germany, 21–22 November 2019; TüV: Munich, Germany, 2019; pp. 1–7.
51. Dey, D.; Van Vastenhoven, A.; Cuijpers, R.H.; Martens, M.; Pfleging, B. Towards Scalable EHMIs: Designing for AV-VRU Communication beyond One Pedestrian. In Proceedings of the 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2021, Leeds, UK, 9–14 September 2021; ACM: New York, NY, USA, 2021; pp. 274–286.
52. Faas, S.M.; Kraus, J.; Schoenhals, A.; Baumann, M. Calibrating Pedestrians’ Trust in Automated Vehicles: Does an Intent Display in an External HMI Support Trust Calibration and Safe Crossing Behavior? In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; ACM: New York, NY, USA, 2021; pp. 1–17.
53. Faas, S.M.; Kao, A.C.; Baumann, M. A Longitudinal Video Study on Communicating Status and Intent for Self-Driving Vehicle–Pedestrian Interaction. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; Association for Computing Machinery: New York, NY, USA, 2020.
54. Zuckerman, M.; Kolin, E.A.; Price, L.; Zoob, I. Development of a Sensation-Seeking Scale. J. Consult. Psychol. 1964, 28, 477–482.
55. Rosenbloom, T.; Mandel, R.; Rosner, Y.; Eldror, E. Hazard Perception Test for Pedestrians. Accid. Anal. Prev. 2015, 79, 160–169.
56. Herrero-Fernández, D.; Macía-Guerrero, P.; Silvano-Chaparro, L.; Merino, L.; Jenchura, E.C. Risky Behavior in Young Adult Pedestrians: Personality Determinants, Correlates with Risk Perception, and Gender Differences. Transp. Res. Part F Traffic Psychol. Behav. 2016, 36, 14–24.
57. Zhang, T.; Tao, D.; Qu, X.; Zhang, X.; Zeng, J.; Zhu, H.; Zhu, H. Automated Vehicle Acceptance in China: Social Influence and Initial Trust Are Key Determinants. Transp. Res. Part C Emerg. Technol. 2020, 112, 220–233.
58. Oculid. Main Metrics in Eye Tracking. Available online: https://www.oculid.com/oculid-blog/main-metrics-in-eye-tracking (accessed on 14 March 2022).
59. Tobii. Metrics for Eye Tracking Analytics. Available online: https://vr.tobii.com/sdk/learn/analytics/fundamentals/metrics/ (accessed on 14 March 2022).
60. Zito, G.A.; Cazzoli, D.; Scheffler, L.; Jäger, M.; Müri, R.M.; Mosimann, U.P.; Nyffeler, T.; Mast, F.W.; Nef, T. Street Crossing Behavior in Younger and Older Pedestrians: An Eye- and Head-Tracking Study. BMC Geriatr. 2015, 15, 1–10.
61. Jiang, K.; Ling, F.; Feng, Z.; Ma, C.; Kumfer, W.; Shao, C.; Wang, K. Effects of Mobile Phone Distraction on Pedestrians’ Crossing Behavior and Visual Attention Allocation at a Signalized Intersection: An Outdoor Experimental Study. Accid. Anal. Prev. 2018, 115, 170–177.
62. Gruden, C.; Ištoka Otković, I.; Šraml, M. Safety Analysis of Young Pedestrian Behavior at Signalized Intersections: An Eye-Tracking Study. Sustainability 2021, 13, 4419.
63. Smith, P.F. A Note on the Advantages of Using Linear Mixed Model Analysis with Maximal Likelihood Estimation over Repeated Measures ANOVAs in Psychopharmacology: Comment on Clark et al. J. Psychopharmacol. 2012, 26, 1605–1607.
64. Magezi, D.A. Linear Mixed-Effects Models for within-Participant Psychology Experiments: An Introductory Tutorial and Free, Graphical User Interface (LMMgui). Front. Psychol. 2015, 6, 1–7.
65. Lee, Y.M.; Madigan, R.; Giles, O.; Garach-Morcillo, L.; Markkula, G. Road Users Rarely Use Explicit Communication When Interacting in Today’s Traffic: Implications for Automated Vehicles. Cogn. Technol. Work 2020, 23, 367–380.
66. Moore, D.; Currano, R.; Strack, G.E.; Sirkin, D. The Case for Implicit External Human-Machine Interfaces for Autonomous Vehicles. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2019, Utrecht, The Netherlands, 21–25 September 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 295–307.
67. Domeyer, J.E.; Lee, J.D.; Toyoda, H. Vehicle Automation-Other Road User Communication and Coordination: Theory and Mechanisms. IEEE Access 2020, 8, 19860–19872.
68. Joseph, A.W.; Murugesh, R. Potential Eye Tracking Metrics and Indicators to Measure Cognitive Load in Human-Computer Interaction Research. J. Sci. Res. 2020, 64, 168–175.
69. Jacob, R.J.K.; Karn, K.S. Eye Tracking in Human-Computer Interaction and Usability Research: Ready to Deliver the Promises. In The Mind’s Eye; Elsevier: Amsterdam, The Netherlands, 2003; pp. 573–605.
70. Ackermann, C.; Beggiato, M.; Bluhm, L.F.; Löw, A.; Krems, J.F. Deceleration Parameters and Their Applicability as Informal Communication Signal between Pedestrians and Automated Vehicles. Transp. Res. Part F Traffic Psychol. Behav. 2019, 62, 757–768.
71. Borkowski, S.; Spalanzani, A.; Vaufreydaz, D. EHMI Positioning for Autonomous Vehicle/Pedestrians Interaction. In Proceedings of the 31st Conference on L’Interaction Homme-Machine: Adjunct, Grenoble, France, 10–13 December 2019; ACM: New York, NY, USA, 2019; pp. 1–8.
72. Jarmasz, J.; Herdman, C.M.; Johannsdottir, K.R. Object-Based Attention and Cognitive Tunneling. J. Exp. Psychol. Appl. 2005, 11, 3–12.
73. Holland, C.; Hill, R. The Effect of Age, Gender and Driver Status on Pedestrians’ Intentions to Cross the Road in Risky Situations. Accid. Anal. Prev. 2007, 39, 224–237.
74. Moyano Díaz, E. Theory of Planned Behavior and Pedestrians’ Intentions to Violate Traffic Regulations. Transp. Res. Part F Traffic Psychol. Behav. 2002, 5, 169–175.
75. Rosenbloom, T. Sensation Seeking and Pedestrian Crossing Compliance. Soc. Behav. Pers. 2006, 34, 113–122.
76. Kaye, S.A.; Somoray, K.; Rodwell, D.; Lewis, I. Users’ Acceptance of Private Automated Vehicles: A Systematic Review and Meta-Analysis. J. Saf. Res. 2021, 79, 352–367.
Figure 1. The video-based, eye-tracking experimental setup.
Figure 2. The AV with an eHMI in different modalities at the grill, windshield, and roof. The AV showed only one eHMI modality at one location at a time in the experiment. The textual message in the top left sub-figure was a Chinese version of the non-commanding text “Waiting”.
Figure 3. Participants’ clarity score toward different eHMI modalities and no-eHMI conditions for yielding trials. FT: flashing text; SPI: sweeping pedestrian icon; SA: sweeping arrow; FS: flashing smiley; FLB: flashing light band; SLB: sweeping light bar; NA: not applicable, depicting the no-eHMI yielding condition. Error bars indicate the standard error of the mean (SEM). * depicts p < 0.05, ** p < 0.01, *** p < 0.001.
Figure 4. Participants’ decision time under different eHMI modalities and no-eHMI conditions for yielding trials. Error bars indicate SEM. * depicts p < 0.05, ** p < 0.01, *** p < 0.001.
Figure 5. Participants’ decision time and clarity score in nonyielding trials. Error bars indicate SEM.
Figure 6. Participants’ dwell time in different eHMI modalities and locations. Error bars indicate SEM. * depicts p < 0.05, ** p < 0.01, *** p < 0.001.
Figure 7. Participants’ first fixation duration in different eHMI modalities and locations. Error bars indicate SEM. * depicts p < 0.05, ** p < 0.01, *** p < 0.001.
Figure 8. Participants’ fixation counts in different eHMI modalities and locations. Error bars indicate SEM. * depicts p < 0.05, ** p < 0.01, *** p < 0.001.
Figure 9. Participants’ ratings of the factors influencing their road-crossing decision.
Table 1. Supportive references for the selection of eHMI modality and location combinations.
| Reference | Description | Grill | Windshield | Roof |
|---|---|---|---|---|
| FT | characters flashing at 0.5 Hz | [18,40] | [19,28] | [18,40] |
| SPI | icons sweeping from right to left (from pedestrian’s view) | [12,27] | [42,43] | [37] |
| SA | arrows sweeping from right to left (from pedestrian’s view) | [44,45] | [12,22] | [46] |
| FS | smiley flashing at 0.5 Hz | [21,27] | [22,47] | [48] |
| FLB | isosceles trapezoid (rectangle if on the roof) flashing at 0.5 Hz | [23] | [33,49] | [50] |
| SLB | light bars sweeping from both sides to the middle | [14,51] | [24,44] | [25,52] |
Table 2. Descriptive data for each yielding condition.
| eHMI Modality | eHMI Location | No. of Trials | Decision Time M (s) | Decision Time SD (s) | Clarity Score M | Clarity Score SD |
|---|---|---|---|---|---|---|
| Flashing text (FT) | Grill | 55 | 8.46 | 1.56 | 7.87 | 1.19 |
| Flashing text (FT) | Windshield | 56 | 8.72 | 1.80 | 7.96 | 1.03 |
| Flashing text (FT) | Roof | 55 | 8.77 | 1.60 | 7.78 | 1.33 |
| Sweeping pedestrian icon (SPI) | Grill | 53 | 8.86 | 1.58 | 7.77 | 1.23 |
| Sweeping pedestrian icon (SPI) | Windshield | 54 | 9.07 | 1.56 | 7.94 | 1.45 |
| Sweeping pedestrian icon (SPI) | Roof | 52 | 9.07 | 1.38 | 7.73 | 1.17 |
| Sweeping arrow (SA) | Grill | 53 | 9.01 | 1.63 | 7.51 | 1.31 |
| Sweeping arrow (SA) | Windshield | 56 | 8.93 | 1.78 | 7.71 | 1.22 |
| Sweeping arrow (SA) | Roof | 57 | 9.02 | 1.64 | 7.19 | 1.59 |
| Flashing smiley (FS) | Grill | 55 | 9.24 | 1.68 | 6.11 | 1.89 |
| Flashing smiley (FS) | Windshield | 54 | 9.48 | 2.36 | 6.17 | 1.75 |
| Flashing smiley (FS) | Roof | 56 | 9.39 | 1.67 | 5.98 | 2.00 |
| Flashing light band (FLB) | Grill | 55 | 9.38 | 2.05 | 5.87 | 2.24 |
| Flashing light band (FLB) | Windshield | 51 | 9.64 | 1.93 | 6.18 | 2.01 |
| Flashing light band (FLB) | Roof | 53 | 9.74 | 1.88 | 5.94 | 2.11 |
| Sweeping light bar (SLB) | Grill | 54 | 9.85 | 2.16 | 5.48 | 1.91 |
| Sweeping light bar (SLB) | Windshield | 53 | 9.78 | 1.79 | 5.45 | 1.86 |
| Sweeping light bar (SLB) | Roof | 52 | 9.70 | 1.91 | 5.29 | 1.85 |
| No eHMI (NA) | n/a | 54 | 9.92 | 1.51 | 5.37 | 2.09 |
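The error bars in Figures 3–8 indicate the standard error of the mean (SEM), whereas Table 2 reports SDs and trial counts; the SEM for any cell can be recovered as SD/√n. A minimal stdlib Python sketch (the numbers are taken from the flashing-text/grill row of Table 2):

```python
import math

def sem(sd: float, n: int) -> float:
    """Standard error of the mean: sample SD divided by the square root of n."""
    return sd / math.sqrt(n)

# Flashing text (FT) at the grill, Table 2:
# decision time M = 8.46 s, SD = 1.56 s, over 55 trials.
decision_time_sem = sem(1.56, 55)  # roughly 0.21 s
```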
Table 3. Descriptive data for pedestrian-related factors.
| Factor | Coding | Freq. | M (SD) |
|---|---|---|---|
| Gender | male | 30 | |
| Gender | female | 32 | |
| Traffic accident history | no | 50 | |
| Traffic accident history | yes | 12 | |
| Sensation-seeking score (SSS) | low SSS | 18 | 9.00 (1.88) |
| Sensation-seeking score (SSS) | medium SSS | 24 | 14.67 (2.01) |
| Sensation-seeking score (SSS) | high SSS | 20 | 21.80 (1.91) |
| Driving experience (mileage) | none | 18 | 0 (0) |
| Driving experience (mileage) | novice driver | 36 | 1547.23 (88.92) |
| Driving experience (mileage) | veteran driver | 8 | 3620.18 (147.26) |
Table 4. Inferential statistics of the LMM for the dependent variables “dwell time”, “first fixation duration”, and “fixation count”.
| Effect | df1 | df2 | Dwell Time F | Dwell Time p | First Fixation Duration F | First Fixation Duration p | Fixation Count F | Fixation Count p |
|---|---|---|---|---|---|---|---|---|
| Gender | 1 | 658 | 3.32 | 0.069 | 3.75 | 0.053 | 22.87 | <0.001 |
| SSS | 2 | 658 | 10.37 | <0.001 | 2.97 | 0.052 | 11.45 | <0.001 |
| Accident history | 1 | 658 | 1.17 | 0.279 | 0.06 | 0.812 | 0.21 | 0.646 |
| Driving experience | 2 | 658 | 6.51 | 0.002 | 1.20 | 0.303 | 4.20 | 0.015 |
| eHMI modality | 6 | 658 | 9.34 | <0.001 | 3.11 | 0.005 | 4.89 | <0.001 |
| eHMI location | 2 | 658 | 3.18 | 0.042 | 9.70 | <0.001 | 8.14 | <0.001 |
| eHMI modality × eHMI location | 12 | 658 | 0.30 | 0.990 | 0.85 | 0.595 | 0.81 | 0.636 |
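The numerator degrees of freedom (df1) in Table 4 follow the standard counting rule for fixed effects: k − 1 for a k-level factor, and the product of the main-effect dfs for the interaction. The sketch below checks Table 4's df1 column against the factor levels reported in Tables 2 and 3; it assumes the no-eHMI condition enters the model as a seventh modality level, which is consistent with df1 = 6 for eHMI modality.

```python
# Levels of each fixed effect in the linear mixed model (from Tables 2 and 3).
factor_levels = {
    "Gender": 2,               # male, female
    "SSS": 3,                  # low, medium, high sensation seeking
    "Accident history": 2,     # no, yes
    "Driving experience": 3,   # none, novice, veteran
    "eHMI modality": 7,        # FT, SPI, SA, FS, FLB, SLB, no eHMI
    "eHMI location": 3,        # grill, windshield, roof
}

# Main effect: df1 = k - 1; interaction: product of the main-effect dfs.
df1 = {effect: k - 1 for effect, k in factor_levels.items()}
df1["eHMI modality x eHMI location"] = df1["eHMI modality"] * df1["eHMI location"]
```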
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Guo, F.; Lyu, W.; Ren, Z.; Li, M.; Liu, Z. A Video-Based, Eye-Tracking Study to Investigate the Effect of eHMI Modalities and Locations on Pedestrian–Automated Vehicle Interaction. Sustainability 2022, 14, 5633. https://doi.org/10.3390/su14095633
