Article

Evaluating Visual eHMI Formats for Pedestrian Crossing Confirmation in Electric Autonomous Vehicles: A Comprehension-Time Study with Simulation and Preliminary Field Validation

by Nuksit Noomwongs 1,2, Natchanon Kitpramongsri 1, Sunhapos Chantranuwathana 1,2,* and Gridsada Phanomchoeng 1,2,3,*

1 Department of Mechanical Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok 10330, Thailand
2 Smart Mobility Research Unit, Faculty of Engineering, Chulalongkorn University, Bangkok 10330, Thailand
3 Micro/Nano Electromechanical Integrated Device Research Unit, Faculty of Engineering, Chulalongkorn University, Bangkok 10330, Thailand
* Authors to whom correspondence should be addressed.
World Electr. Veh. J. 2025, 16(9), 485; https://doi.org/10.3390/wevj16090485
Submission received: 26 July 2025 / Revised: 13 August 2025 / Accepted: 21 August 2025 / Published: 25 August 2025

Abstract

Effective communication between electric autonomous vehicles (EAVs) and pedestrians is critical for safety, yet the absence of a driver removes traditional cues such as eye contact or gestures. While external human–machine interfaces (eHMIs) have been proposed, few studies have systematically compared visual formats across demographic groups and validated findings in both simulation and real-world settings. This study addresses this gap by evaluating various eHMI designs using combinations of textual cues (“WALK” and “CROSS”), symbolic indicators (pedestrian and arrow icons), and display colors (white and green). Twenty simulated scenarios were developed in the CARLA simulator, where 100 participants observed an EAV equipped with eHMIs and responded by pressing a button upon understanding the vehicle’s intention. The results showed that green displays facilitated faster comprehension than white, “WALK” was understood more quickly than “CROSS,” and pedestrian symbols outperformed arrows in clarity. The fastest overall comprehension occurred with the green pedestrian symbol paired with the word “WALK.” A subsequent field experiment using a Level 3 autonomous vehicle with a smaller participant group and differing speed/distance conditions provided preliminary support for the consistency of these observed trends. The novelty of this work lies in combining simulation with preliminary field validation, using comprehension time as the primary metric, and comparing results across four age groups to derive evidence-based eHMI design recommendations. These findings offer practical guidance for enhancing pedestrian safety, comprehension, and trust in EAV–pedestrian interactions.

1. Background

1.1. Motivation

Electric autonomous vehicles (EAVs) are increasingly recognized as a key solution for improving road safety and minimizing human error in traffic systems. With the advancement of automation technologies, particularly at Society of Automotive Engineers (SAE) Levels 4 and 5 [1], the absence of a human driver removes conventional social interaction cues, such as eye contact, hand gestures, and verbal signals, that pedestrians typically rely on to judge crossing safety. This creates a critical gap in vehicle-to-pedestrian (V2P) communication, particularly in complex urban environments where hesitation or misinterpretation can lead to unsafe crossing behaviors. Motivated by this gap, this section also reviews relevant literature on external human–machine interfaces (eHMIs) and their design elements, providing the background needed to position the present study within current research trends.

1.2. Literature Review

To address this communication barrier, eHMIs have been proposed as a means for EAVs to convey their intentions to nearby pedestrians. eHMIs encompass visual signals such as text, icons, and light patterns projected on the vehicle’s exterior, and aim to replicate or substitute the communicative role previously fulfilled by human drivers. As the deployment of EAVs increases, ensuring that these interfaces are intuitive and universally understood becomes crucial for public safety and trust.
Pedestrian safety continues to be a major concern worldwide. The World Health Organization (WHO) [2] reported 32.7 pedestrian deaths per 100,000 people in 2018, with Thailand ranking third in total road fatalities. Between 2013 and 2017, Thailand’s Consumer Council [3] found that pedestrian–vehicle incidents had the highest severity index, averaging 55 deaths per 100 accidents. These alarming figures underscore the urgent need for effective pedestrian–EAV interaction strategies, particularly at zebra crossings, which remain hotspots for accidents due to unclear mutual intentions [4].
In response to this challenge, eHMI designs have evolved considerably, ranging from simple road projections [5,6] and symbolic displays [7,8] to animated visual elements and culturally adapted formats [9,10,11,12,13,14]. Dey et al. [10] proposed a taxonomy of over 70 eHMI formats, reflecting diverse experimental approaches and design philosophies. Yet, despite this growing body of research, a universally accepted standard has yet to emerge. This is partly due to varying assessment methodologies: some studies rely on observed crossing decisions [12], others on subjective ratings of trust and safety [15], and only a few directly evaluate the clarity or speed of message comprehension.
Beyond purely visual formats, multimodal eHMI designs have been investigated to enhance clarity and robustness under varying environmental conditions. Auditory cues, such as tones or voice messages [8], can convey urgency or confirmation even when visibility is limited; haptic signals delivered via wearable or mobile devices, such as smartphone vibrations [16], can provide discreet, personal feedback; and augmented reality (AR) overlays can project vehicle intent directly into a pedestrian’s field of view, offering context-aware guidance. Each modality presents distinct benefits and limitations depending on environmental context, technological access, and pedestrian needs. The present study focuses on visual eHMIs because of their high compatibility with current EAV hardware and infrastructure readiness, but future work should explore multimodal systems that offer broader inclusivity and adaptability. Moreover, ongoing international standardization efforts, such as ISO/TC 204 on Intelligent Transport Systems and the UNECE WP.29 GRVA framework on EAV signaling, underscore the urgency of evaluating and harmonizing visual eHMI formats for EAVs.
Recent work by Man et al. (2025) [17] offers a bibliometric and systematic review of pedestrian interactions with eHMI-equipped EAVs, identifying dominant research themes, such as comprehension metrics, interface typologies, and communication modalities, while emphasizing the need for greater field validation and demographic diversity in evaluations. Complementarily, Wang et al. (2024) [18] explored how generative AI can support autonomous driving development, particularly through synthetic scenario generation and automated behavioral modeling, which could enhance the design and testing of future eHMIs. These studies reinforce the importance of grounding visual eHMI designs in both empirical evidence and emerging AI-driven methodologies.

1.3. Research Gap

However, despite the breadth of existing work, few studies have systematically compared different visual eHMI configurations while controlling for user demographics, and even fewer have applied an objective, time-based measure of comprehension to assess interface clarity. To address this gap, the present study focuses on comprehension time as the primary evaluation metric.
This study adopts comprehension time, the duration from eHMI onset to participant understanding, as the primary metric for evaluating eHMI effectiveness. Unlike crossing behavior, which can be influenced by external variables such as personal risk tolerance or social pressure, comprehension time offers a more direct and quantifiable measure of how quickly a pedestrian decodes a vehicle’s intention [19]. This approach also facilitates finer-grained comparisons across visual formats and demographic groups, allowing clearer insight into the cognitive accessibility of different eHMI designs.
Nonetheless, existing studies provide limited analysis of how demographic variation (e.g., age, familiarity with traffic norms) and presentation formats jointly affect comprehension performance. Additionally, most studies are conducted solely in simulated environments without real-world validation. This research therefore addresses these gaps by systematically evaluating combinations of eHMI features (text, symbol, and color) in both virtual and field experiments, with the aim of deriving practical design guidelines that improve communication between EAVs and pedestrians in real-world urban scenarios.
To the best of our knowledge, no prior study has systematically compared different visual eHMI formats while incorporating demographic variation and validating results in both simulation and real-world environments using comprehension time as the primary metric. This lack of integrated, demographically informed evaluation represents a critical research gap that the present study aims to address.

1.4. Contributions

While some individual findings in this study (e.g., the superiority of “WALK” over “CROSS” or the use of pedestrian icons over arrows) have been reported in previous work, our contribution lies in three key areas: (1) a comparative analysis across four distinct age groups to assess the demographic sensitivity of eHMI formats; (2) the use of comprehension time as a primary metric, providing more objective insight than behavioral proxies like crossing initiation; and (3) a two-stage evaluation combining virtual simulation with field experimentation, which is rarely applied in this domain. Together, these elements support the reliability, generalizability, and practical relevance of our findings for future eHMI design.
Recent advances in AI-driven EAV systems have introduced new paradigms in both perception and decision-making. For example, generative AI models have been used to create rich and diverse driving scenarios for simulation and testing, improving model robustness and rare-event coverage [20]. Similarly, foundation models have enabled scalable, generalizable representations that unify multiple EAV tasks such as object detection, trajectory prediction, and scenario generation [21]. From a safety standpoint, control-theoretic approaches have also gained attention in ensuring the stability and interpretability of AI decisions under uncertainty [22]. These emerging directions support the development of intelligent and human-aligned EAV systems and provide a broader context for enhancing eHMIs that facilitate pedestrian interaction and safety.
In summary, this section has outlined both the practical motivation for improving pedestrian–EAV communication and the current state of research on eHMI design, highlighting existing approaches, methodologies, and unresolved challenges. While prior studies have examined individual elements such as text, symbols, or color schemes, few have combined these features systematically across diverse demographic groups and validated findings in both simulation and real-world settings. Addressing this gap, the present study aims to provide an integrated evaluation of eHMI formats, measured primarily through comprehension time, to derive practical design recommendations that enhance safety, comprehension, and trust in pedestrian–EAV interactions.
In short, the novelty of this work lies in its integrated approach to evaluating visual eHMI formats for EAV–pedestrian communication: a demographically stratified comparison across four age groups, comprehension time as an objective primary metric, and a two-stage design pairing controlled simulation with preliminary field validation. Together, these elements address the current lack of systematic, demographically informed evaluations validated in both virtual and physical environments, enabling the derivation of practical, evidence-based eHMI design recommendations.
The remainder of this paper is structured as follows. Section 2 describes the experimental design, including the virtual simulation setup, data collection procedures, and field-testing methodology. Section 3 presents and discusses the results from both the simulation and field experiments, highlighting the effectiveness of different eHMI formats. Finally, Section 4 concludes the paper with key findings and recommendations for future eHMI design in EAV systems.

2. Methods

2.1. Experimental Equipment

The virtual simulation was conducted using a custom Python-based (Python 3.13) application designed for synchronized video playback and participant response logging. Video stimuli were generated using the CARLA Simulator (Car Learning to Act), an open-source autonomous driving simulation platform that allows precise control over environmental and traffic parameters. In this study, the simulator was configured to represent a two-lane urban road with a pedestrian crosswalk. Adjustable parameters included weather conditions (clear, cloudy, rainy), ambient lighting (daytime, dusk), and traffic density (low, medium, high). The Mitsubishi Fuso Rosa bus model was selected as the EAV platform, and scripted driving scenarios ensured consistent vehicle approach speed and stopping behavior across trials. All scenarios were recorded automatically in CARLA using fixed camera viewpoints, then exported at 1920 × 1080 resolution and 60 frames per second via OBS Studio. eHMI elements were digitally overlaid using Adobe After Effects to create the experimental stimuli.
The simulation workflow was semi-automated: scenario playback and parameter variation were preprogrammed in CARLA, while stimulus presentation to participants and comprehension-time logging were managed manually via the Python interface. This setup ensured that simulation playback, eHMI presentation, and response logging were consistently synchronized across all trials. Hardware included a 24-inch LG Full HD IPS monitor, Intel Core i9-10900K CPU, NVIDIA GeForce RTX 3080 GPU, 512 GB SSD, and 32 GB RAM. Participants were seated 90 cm from the screen at a fixed height to maintain a consistent viewing angle, as shown in Figure 1.
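The authors’ logging application is not published; a minimal Python sketch of its core mechanism, timing from stimulus onset to keypress with a monotonic clock, might look like the following (class and method names are hypothetical):

```python
import time

class TrialLogger:
    """Minimal sketch of a comprehension-time logger. Class and method
    names are hypothetical; the study's actual application is not
    published."""

    def __init__(self):
        self.records = []          # (scenario_label, seconds) pairs
        self._t0 = None
        self._label = None

    def start_trial(self, label):
        # Called when playback of a scenario video begins.
        self._label = label
        self._t0 = time.perf_counter()  # monotonic clock, immune to clock jumps

    def on_keypress(self):
        # Called when the participant presses Space upon understanding
        # the vehicle's intention; stores and returns elapsed seconds.
        elapsed = time.perf_counter() - self._t0
        self.records.append((self._label, elapsed))
        return elapsed
```

In a real setup this would be wired to the video player and keyboard event loop; the key point is that a monotonic clock, rather than wall-clock time, keeps the measurement unaffected by system clock adjustments.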

2.2. Experimental Procedure Design

Each participant viewed 20 simulation events: one featured no eHMI display, four used text-only eHMIs, four used symbol-only formats, eight used mixed text–symbol formats, and three no-eHMI events involved non-stopping vehicles (these three were excluded from analysis). To maintain consistency, each event followed a fixed structure: the vehicle approached at 30 km/h for 16 m, decelerated over the next 16 m, and stopped 1 m before the crosswalk, then remained stationary for 3 s. The entire sequence lasted 10 s per event.
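As a plausibility check (not part of the original study), the event timing implied by this motion profile can be reproduced under a uniform-deceleration assumption:

```python
# Timing implied by the scripted approach, assuming uniform deceleration
# over the braking zone (an assumption; the paper states only distances,
# speed, stop position, and dwell time).
V = 30 / 3.6                       # approach speed: 30 km/h ≈ 8.33 m/s
t_cruise = 16 / V                  # constant-speed phase over 16 m
t_brake = 2 * 16 / V               # braking over 16 m at mean speed V/2
a_brake = V**2 / (2 * 16)          # implied deceleration ≈ 2.17 m/s²
t_total = t_cruise + t_brake + 3   # plus the 3 s stationary dwell

print(f"cruise {t_cruise:.2f} s, brake {t_brake:.2f} s "
      f"({a_brake:.2f} m/s²), total {t_total:.2f} s")
```

This accounts for roughly 8.8 of the 10 s per event, consistent with the stated event length once a short lead-in before the vehicle begins moving is included.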
The simulation environment setup, including the crosswalk and road layout, is depicted in Figure 2. Displayed eHMI formats varied in message type (“WALK” or “CROSS”), icon (pedestrian or arrow), and color (green or white). Mixed formats combined both elements. The range of eHMI display models implemented for this experiment is illustrated in Figure 3. The eHMI was placed on the top of the windshield for optimal visibility.
It is important to note that all eHMI prompts in this experiment indicated yielding behavior. This design decision was made to allow a controlled comparison between different visual formats of yield messages. However, we acknowledge that this may have led to conditioned expectations rather than genuine interpretation. While this design facilitates consistent testing of message recognition and comprehension speed, it does not capture the full interpretive range of eHMI content. Future studies should incorporate both yield and non-yield cues to better isolate comprehension accuracy under varying vehicle behaviors.
In the present study, non-yielding conditions were deliberately excluded to maintain focus on controlled assessment of yielding eHMI formats. This choice ensured consistency in vehicle behavior and minimized potential confounding factors. Nonetheless, incorporating both yielding and non-yielding cues in future experiments is recognized as essential for capturing the ambiguity present in real-world pedestrian–vehicle interactions.
Five experimental scenarios were designed for this study: one baseline condition with no eHMI and four with different eHMI configurations (text only, symbol only, color-coded, and combined text–symbol–color). The baseline condition provided a control reference to evaluate pedestrian comprehension in the absence of visual cues, while the four eHMI formats were selected to represent distinct and widely discussed design strategies in the literature and current standardization efforts. This combination of scenarios enabled both the isolation of individual visual elements and the assessment of their combined effects on comprehension time, supporting a comprehensive comparison across formats.
To provide a clear overview of the methodological flow without the use of a flowchart, the entire procedure can be summarized as follows. The study began with the design of a controlled simulation environment replicating zebra crossing scenarios, followed by the selection and preparation of eHMI formats incorporating variations in text, symbol, and color. Participants from four distinct age groups were recruited and given a standardized briefing before the trials. Each participant completed the simulation tasks while comprehension time was recorded for every eHMI condition. After completing the virtual tests, selected participants took part in a preliminary real-world field evaluation under similar conditions to validate the simulation findings. The collected data were then processed and statistically analyzed to compare comprehension times across formats and demographics, enabling the derivation of practical recommendations for eHMI design.

2.3. Sampling Method

A total of 100 participants were recruited using stratified, institution-based sampling with age-by-sex quotas to ensure diversity in gender, age group, and driving experience. The sample included approximately equal representation of male and female participants across the following age groups: 18–30 (recruited from educational institutions), 31–45 (factories), 46–60 (government offices), and over 60 (nursing homes).
Sampling frame and procedure: (i) define age × sex quotas for four age strata; (ii) identify recruiting institutions appropriate to each stratum; (iii) invite volunteers via institutional channels and enroll consecutively until quotas were met; (iv) screen eligibility and conduct standardized orientation/practice before testing. Participant counts by age group and sex are reported in Table 1, with additional details provided in Appendix A (Table A1).
Balancing and control of institutional differences: Because recruiting institutions were aligned with age strata, institution type was collinear with age group. To mitigate potential confounding, we (a) enforced age-by-sex quotas within strata, (b) standardized the test environment and instructions (identical 24-inch display, fixed 90 cm viewing distance, identical lighting), and (c) used a within-subject design with semi-randomized trial order so that each participant experienced all eHMI formats under the same conditions.
Participant screening ensured that all individuals met the inclusion criteria: basic English proficiency (sufficient to understand the words “WALK” and “CROSS”), normal color vision verified using the Ishihara test [15], and no prior experience or training with EAV systems. All participants provided written informed consent before data collection. The study protocol was reviewed and approved by the Research Ethics Review Committee for Research Involving Human Participants (Group II–Social Sciences, Humanities, and Arts), Chulalongkorn University, under Certificate of Approval (COA) No. 163/66, Project ID 660112, titled “Communication Between Autonomous Shuttle and Pedestrians at Crosswalks”.
While age groups were deliberately stratified to ensure demographic representation, we acknowledge that cognitive level and age may jointly influence eHMI comprehension. As these variables were not independently controlled or measured, future studies should consider cognitive assessments or matched-group designs to isolate the effects of age and cognitive processing ability.

2.4. Experimental Procedure

To reduce the influence of learning and fatigue effects, the 20 scenarios were presented in a semi-randomized order for each participant. The sequences were designed to ensure that eHMI types (color, symbol, and text) were evenly distributed across the trial timeline, preventing any format from appearing consistently earlier or later in the session.
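The paper does not publish its exact sequencing algorithm; one common way to produce such a semi-randomized order, shuffling independently within blocks so that every format family appears once per block, is sketched below (the function name and block structure are assumptions):

```python
import random

def semi_randomized_order(formats, n_blocks, seed=None):
    """Return a trial order in which every format appears exactly once
    per block, shuffled independently within each block, so no format
    clusters at the start or end of the session.
    (Illustrative; the study's exact algorithm is not published.)"""
    rng = random.Random(seed)
    order = []
    for _ in range(n_blocks):
        block = list(formats)
        rng.shuffle(block)     # independent shuffle per block
        order.extend(block)
    return order
```

With four format families and five blocks, for example, this yields a 20-trial session in which each family is evenly spread across the timeline.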
Each participant received a briefing and completed practice trials. During testing, participants faced the screen and were prompted before each event. They were instructed to press the Space bar when they understood the vehicle’s intention. After each trial, they verbally confirmed the perceived message. Data accuracy was verified post-session.

2.5. Data Analysis Method

Comprehension time was the dependent variable, recorded in seconds from video start to keypress. Independent variables were the eHMI format features: color (green vs. white), text (“WALK” vs. “CROSS”), symbol (pedestrian vs. arrow), and mixed text–symbol combinations.
Four pre-specified within-subject contrasts were tested: (i) color (green vs. white), (ii) text (“WALK” vs. “CROSS”), (iii) symbol (pedestrian vs. arrow), and (iv) mixed text–symbol combinations, each analyzed separately by age group and for the pooled sample. Pairwise comparisons used paired-sample t-tests (α = 0.05). Given that the comparisons were pre-specified and limited in number, no multiplicity adjustment was applied. We acknowledge the potential for inflated Type I error and therefore report Cohen’s dz effect sizes and 95% confidence intervals (CI) for the mean differences to aid interpretation.
Descriptive statistics (mean, standard deviation, coefficient of variation) and 95% CIs were computed for each format and condition. All analyses and statistical computations were performed using MATLAB 2024a, and summary tables/figures were prepared using Microsoft Excel. Full descriptive statistics for each condition are provided in Appendix B.
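Although the analyses were performed in MATLAB 2024a, the reported quantities (paired mean difference, Cohen’s dz, and the 95% CI) can be illustrated with a small dependency-free Python sketch; the helper name and the externally supplied critical t value (≈1.984 two-sided for df = 99) are illustrative:

```python
from math import sqrt
from statistics import mean, stdev

def paired_analysis(x, y, t_crit):
    """Paired comparison of two within-subject conditions (e.g., green
    vs. white comprehension times). Returns the mean difference,
    Cohen's dz, and the 95% CI; t_crit is the two-sided critical t for
    n - 1 degrees of freedom (≈1.984 for n = 100). Helper name is
    illustrative; the study's analysis was done in MATLAB."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    md, sd = mean(d), stdev(d)       # mean and SD of paired differences
    dz = md / sd                     # Cohen's dz for paired designs
    half = t_crit * sd / sqrt(n)     # CI half-width
    return md, dz, (md - half, md + half)
```

Cohen’s dz divides by the standard deviation of the paired differences rather than the pooled SD, which matches the within-subject design used here.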

2.6. Field Experiment Design

A real-world validation was conducted using a Level 3 EAV (Turing OPAL T2) equipped with a front monitor to display eHMIs, as shown in Figure 4. Scenarios included the best-performing (green pedestrian “WALK”) and worst-performing (white “CROSS”) formats from the simulation, along with a no-eHMI condition.
Due to safety and spatial constraints, the field vehicle traveled 50 m at 15 km/h. Testers stood 1 m from the zebra crossing and turned to observe the approaching vehicle upon cue. The movement and eHMI-based interaction setup is illustrated in Figure 5. Two networked computers recorded keypress data and system timestamps, synchronized using Unix timestamps.
The term ‘turnaround time’ refers to the moment at which the EAV completed its turning motion and began facing the pedestrian path. This event was recorded using synchronized Global Positioning System (GPS) timestamps and onboard video footage. The eHMI activation was triggered immediately after this point to align the signal with the vehicle’s yielding behavior.
The eHMI was displayed immediately after the vehicle completed its turning maneuver and began decelerating toward the pedestrian zone. This timing ensured that the eHMI content would reflect the vehicle’s intention to yield after the turn had been completed, preventing ambiguity between vehicle motion and display signaling. This sequence was designed to simulate realistic interactions at intersections and enhance ecological validity.
A Transmission Control Protocol (TCP) connection and 5G Customer Premises Equipment (CPE) device were employed to minimize latency in system response and synchronization, achieving average network delays of 4 ms, 26 ms, and 21 ms during testing.
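The synchronization procedure is not fully specified in the paper; a standard two-way time-transfer estimate, which such a TCP timestamp exchange could use, is sketched below (the function name is hypothetical, and in the field setup the three timestamps would be exchanged over the TCP link between the two computers):

```python
def estimate_offset(t0, t1, t2):
    """Two-way time transfer over a request/response exchange: the
    client sends at local time t0, the server stamps its reply with its
    own clock t1, and the client receives at local time t2 (all Unix
    timestamps). Function name is hypothetical."""
    delay = t2 - t0                  # full round-trip latency
    offset = t1 - (t0 + t2) / 2      # server clock minus RTT midpoint
    return delay, offset
```

This estimate assumes symmetric network paths; its residual error is bounded by half the round-trip delay, i.e., about 2 ms at the lowest latency reported above.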
Figure 6 presents the waypoint layouts used in this study’s field testing. Figure 6a shows the original waypoint map provided by the university’s research center, replicating the Chulalongkorn University campus road layout for autonomous vehicle navigation. For this experiment, the route was modified, as shown in Figure 6b, to create a controlled 50 m test track. This adjustment ensured the EAV could accelerate from 0 to 15 km/h before the interaction zone. The final 16 m before the crosswalk were designated for controlled deceleration at −1 m/s2 while the eHMI prompt was displayed to the pedestrian positioned 1 m from the stopping point. This setup enabled repeatable, safe, and scenario-specific evaluation of pedestrian comprehension of the eHMI under real-world conditions.
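As a consistency check on these parameters (not reported in the paper), uniform deceleration at 1 m/s² from 15 km/h stops the vehicle well within the designated 16 m zone:

```python
# Stopping profile implied by the field parameters, assuming uniform
# deceleration (the paper specifies the -1 m/s² rate and the 16 m zone).
v = 15 / 3.6               # field approach speed: 15 km/h ≈ 4.17 m/s
a = 1.0                    # commanded deceleration magnitude, m/s²
d_stop = v**2 / (2 * a)    # stopping distance ≈ 8.68 m
t_stop = v / a             # stopping time ≈ 4.17 s
print(f"stops in {d_stop:.2f} m over {t_stop:.2f} s")
```

The ≈8.7 m stopping distance sits comfortably inside the 16 m zone, leaving margin for the braking onset to occur partway through the zone.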
Although the field test procedure differed from the simulation in terms of vehicle speed and distance, the cognitive task, interpreting eHMI displays, remained consistent. The goal was not procedural replication but verification of comprehension ranking consistency across modalities. The strong similarity in response patterns supports the simulation’s predictive validity.

3. Results

This section presents findings from both the virtual simulation and the real-world field experiments. A total of 100 participants were categorized by age group (18–30, 31–45, 46–60, and over 60) and evaluated for their comprehension time across various eHMI formats. These formats included text-only, symbol-only, and mixed formats (text and symbol), each presented in green and white colors.

3.1. Simulation Experiment Results

As shown in Figure 7, the CARLA simulation presented participants with scenarios in which a pedestrian confirmed the vehicle’s crossing intent; these scenes were used during the comprehension-time measurement phase and reflect the experimental setup described in Section 2.1. Across all age groups, the presence of eHMI displays improved comprehension times compared to no-eHMI conditions. In the younger groups (18–30 and 31–45), average comprehension times were generally shorter across most eHMI conditions (Figure 8a,b). For the older groups (46–60 and over 60), the improvement in comprehension time was present but less pronounced, with greater variability across formats (Figure 8c,d).
Statistical analysis (ANOVA, p < 0.05) confirmed significant differences in comprehension time between age groups, with participants aged 18–30 responding fastest and those over 60 responding slowest. Post hoc comparisons indicated that the performance gap was most pronounced between the youngest and oldest groups. This suggests that age-sensitive design considerations, such as larger symbols and higher-contrast colors, may improve accessibility for older pedestrians.
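For reference (the paper does not publish its exact ANOVA computation, which was run in MATLAB), the F statistic for a one-way comparison across independent groups such as age strata is the ratio of between- to within-group mean squares:

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA across k independent groups:
    between-group mean square over within-group mean square.
    (Illustrative reimplementation; the study's analysis used MATLAB.)"""
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / N
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((v - m) ** 2
                    for g, m in zip(groups, means) for v in g)
    return (ss_between / (k - 1)) / (ss_within / (N - k))
```

A significant F is then typically followed by post hoc pairwise tests to localize which group pairs differ, as in the post hoc comparisons reported above.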
When comparing color schemes, green displays typically resulted in shorter average comprehension times than white displays, particularly in the 18–30 age group (Figure 9a). A statistically significant improvement (p = 0.0416) was observed for the green pedestrian symbol combined with the word “WALK” in this group. In other age groups (Figure 9b–d), the trend favoring green displays was generally observed, though not always consistent. Aggregated results (Figure 9e) further supported the overall trend favoring green eHMI formats.
In terms of message content, the text “WALK” elicited faster responses than “CROSS” across all age groups (Figure 10, Figure 11, Figure 12 and Figure 13). Similarly, pedestrian icons produced shorter comprehension times than arrows, particularly in the 18–30 and 31–45 age groups. While statistical significance was not observed for all comparisons, the consistency of these trends across multiple age groups highlights their potential value for eHMI design.
For mixed-format displays (text + symbol), the combination of the green pedestrian icon with the word “WALK” yielded the shortest average comprehension times in the 18–30 and 31–45 groups (Figure 14 and Figure 15). In the 46–60 group, the green arrow paired with “WALK” resulted in the shortest response time, which differs from the other age groups (Figure 16). In the over-60 group, the green pedestrian symbol with “WALK” again led to the fastest responses (Figure 17). Combined group results confirmed the general effectiveness of this mixed-format configuration (Figure 18).
To complement the box plots and facilitate verification of the main claims (e.g., green vs. white; “WALK” vs. “CROSS”; pedestrian vs. arrow; and mixed combinations), summary descriptive statistics (mean, SD, n, and, where available, median/IQR) for each eHMI condition are provided in Appendix B (Table A2). Effect sizes (Cohen’s dz) and 95% confidence intervals for the mean differences are also provided in Appendix B to aid interpretation.

3.2. Fastest and Slowest eHMI Formats

The fastest and slowest comprehension times varied by age group. For 18–30 participants, the fastest format was the green pedestrian “WALK,” and the slowest was the white arrow symbol (Figure 19a). For 31–45, the green pedestrian “WALK” remained the fastest, while the white “CROSS” text was slowest (Figure 19b). The 46–60 group showed a preference for the green arrow with “WALK” format, while the white pedestrian symbol with “CROSS” was the slowest (Figure 19c). In the over-60 group, green pedestrian “WALK” remained the fastest, and white “CROSS” was the slowest (Figure 19d). Aggregated results (Figure 19e) reflect a consistent trend favoring green pedestrian “WALK.”

3.3. Field Experiment Results

A real-world field test was conducted with eight participants aged 18–30 to validate the simulation results. Four scenarios were tested: green pedestrian “WALK,” white “CROSS” text, white arrow symbol, and no eHMI. Comprehension time patterns were consistent with the simulation, with the green pedestrian “WALK” condition yielding the fastest comprehension and the white “CROSS” format yielding the slowest (Figure 20). Although the field experiment involved a smaller sample and some procedural differences (e.g., vehicle speed and path length), the similar response trends across both experimental setups suggest general consistency in how participants interpreted different eHMI formats.
While the field test included a limited sample of eight participants aged 18–30, it served as a preliminary validation of the simulation findings. The consistent trends observed between virtual and real-world settings support the potential generalizability of the results, though further large-scale field testing is necessary.

4. Discussion

The findings from the simulation experiment confirm that eHMIs significantly enhance pedestrian comprehension of EAV intentions, supporting the study’s hypothesis that visual cues improve message clarity compared to vehicle kinematics alone. Across all tested conditions, participants exposed to eHMIs consistently demonstrated faster comprehension times than those in no-eHMI scenarios, aligning with prior studies that emphasize the communicative potential of visual displays in EAV–pedestrian interactions [23,24,25].
While comprehension time differences between eHMI formats were observed, the semi-randomized presentation order was implemented to minimize potential bias caused by learning or fatigue. This helped ensure that differences in comprehension speed could be attributed to the eHMI characteristics themselves rather than to order effects.
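The exact balancing rule behind the semi-randomized order is not specified in the text; one simple way to implement such an order is to shuffle the scenario list and reject sequences in which one display color repeats too many times in a row. The sketch below assumes that illustrative constraint:

```python
import random

def semi_randomized_order(scenarios, max_run=2, seed=None):
    """Return a shuffled scenario order with no color run longer than
    `max_run`. The specific constraint is an illustrative assumption,
    not the study's documented procedure.

    scenarios: list of (label, color) tuples, e.g. ("WALK", "green").
    """
    rng = random.Random(seed)
    while True:
        order = scenarios[:]
        rng.shuffle(order)
        # Accept only if every window of max_run + 1 trials mixes colors.
        ok = all(
            len({color for _, color in order[i:i + max_run + 1]}) > 1
            for i in range(len(order) - max_run)
        )
        if ok:
            return order
```

Rejection sampling like this is adequate for short scenario lists; larger designs would typically use counterbalanced (e.g., Latin square) orders instead.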
Age-related differences in performance were particularly notable. The 18–30 and 31–45 age groups responded swiftly and consistently across eHMI formats, whereas participants over 60 exhibited delayed responses and greater variability. These outcomes are consistent with prior research on cognitive aging, which highlights age-related decline in visual processing speed, motion perception, and contrast sensitivity; these factors directly affect how older pedestrians interpret road signals [26,27]. Furthermore, Dommes et al. [28] found that older pedestrians exhibit more cautious and hesitant behavior in EAV crossing situations, especially in unfamiliar or ambiguous contexts, suggesting that intuitive eHMI design is critical for this demographic.
Color played a pivotal role in eHMI effectiveness. Green displays generally yielded faster comprehension than white, particularly among younger and middle-aged participants. This finding aligns with Bazilinskyy et al. [24], who noted that green is widely associated with safety and movement owing to its established role in global traffic signaling. However, the superior performance of green may also stem from its inherent visual salience, such as higher luminance contrast and visibility under varied screen conditions; perceptual factors, including ambient lighting and individual sensitivity to brightness, may have further influenced recognition times. Additionally, the diminished responsiveness to color cues observed in older participants is consistent with findings by Paramei [27], who reported age-related declines in color discrimination. These results suggest that color alone may not ensure effective communication across all age groups, highlighting the importance of integrating textual or symbolic reinforcement to enhance eHMI clarity, particularly for elderly pedestrians.
Textual messages also impacted comprehension. The word “WALK” consistently outperformed “CROSS” across all age groups, likely due to its brevity, clarity, and stronger directive connotation. Chen et al. [25] similarly recommend the use of short, imperative language in EAV signals to minimize cognitive load and ensure rapid understanding. Symbols, particularly pedestrian icons, also enhanced comprehension more effectively than arrows. This effect was amplified when symbols were paired with supporting text, consistent with the dual-channel processing theory that suggests multimodal signals (text + symbol) improve information retention and interpretation.
Interestingly, the 46–60 age group favored the green arrow combined with “WALK,” diverging from other age groups that preferred pedestrian icons. This pattern may reflect generational familiarity with traditional road signage or different interpretive heuristics. The over-60 group also displayed smaller performance gaps between formats, suggesting that design variation may have a reduced impact on their decision-making, a finding that warrants further investigation using behavioral metrics beyond comprehension time alone.
Validation through field experimentation further supports the robustness of the simulation findings. Despite the field study’s limited sample size, the comprehension rankings closely mirrored those observed in the simulation, indicating good ecological alignment between virtual and real-world environments. However, the small sample size reduces the ability to generalize these findings to broader populations. As Dommes et al. [28] emphasized, real-world pedestrian behavior involves dynamic and embodied processes such as gaze shifts, hesitation, and spatial negotiation, none of which were captured in the current button-press paradigm. Future research should therefore explore immersive setups, such as virtual reality or staged crossings, to complement cognitive metrics with behaviorally grounded observations.
Overall, this study reinforces the effectiveness of eHMI formats that incorporate green color, the word “WALK,” and pedestrian symbols, particularly when used in combination. However, it also highlights the need for age-adaptive design and multimodal redundancy to accommodate perceptual and cognitive variation across demographics.
These findings have practical implications for multiple stakeholders. For vehicle manufacturers, the demonstrated advantages of green color, short imperative text, and recognizable symbols can inform the development of eHMI systems that accommodate both younger and older pedestrians. Urban planners and transportation authorities can integrate these insights into crosswalk design guidelines and AV deployment strategies to enhance pedestrian safety in mixed-traffic environments. Policymakers may use this evidence to shape regulatory standards for eHMI formats, ensuring consistency and accessibility across jurisdictions. Finally, researchers can build on these results to explore adaptive, context-aware eHMI systems that dynamically adjust display formats to match environmental conditions and demographic needs.
Recruitment was institution-aligned with age strata, so institution type is collinear with age. Although we used age-by-sex quotas, standardized procedures, and a within-subject, semi-randomized presentation to mitigate bias, residual confounding (e.g., education, cognitive processing speed, familiarity with technology) may remain. Future work will adopt matched-group recruitment and/or cognitive screening to better isolate age effects.
This study did not address conflict scenarios where the eHMI signal from an EAV may be contradicted by the behavior of other road users, such as a non-autonomous vehicle ignoring traffic rules and proceeding through a crosswalk. Such situations could undermine pedestrian trust in eHMIs and pose safety risks. Future research should explore multi-agent interaction contexts, incorporating unpredictable human driver behavior into simulation and field experiments to evaluate how pedestrians interpret and act upon potentially conflicting cues. Additionally, the scope of the present study was limited to a controlled set of crossing scenarios under consistent environmental conditions, which may not capture the variability of lighting, weather, and traffic density in real-world settings. Moreover, participants were primarily drawn from specific institutional contexts, which may limit the generalizability of the findings to broader, more diverse populations.

5. Conclusions

This study offers a novel contribution by providing a systematic, age-stratified evaluation of multiple eHMI formats, integrating both simulation and real-world field testing on an actual Level 3 EAV. Unlike previous works that often focus on a single format or simulated context, this research compares multiple combinations of color, text, and symbols in a controlled yet ecologically validated framework. The findings contribute to the body of knowledge by identifying design configurations that not only achieve statistical significance but also demonstrate practical, real-world applicability across diverse age groups.
In line with this aim, this study evaluated whether eHMIs improve pedestrian comprehension of EAV intentions compared to vehicle kinematics alone. Simulation results from 100 participants in four age groups showed that all eHMI formats reduced comprehension time relative to no-eHMI scenarios. Among formats, green displays generally yielded faster responses than white, “WALK” was more effective than “CROSS,” and pedestrian symbols outperformed arrows. Mixed-format designs combining a green pedestrian symbol with “WALK” achieved the fastest and most consistent comprehension across groups. Field tests on a Level 3 EAV confirmed that comprehension rankings closely matched simulation results.
Based on these findings, the following design recommendations are proposed:
  • Use green as the primary display color for faster recognition.
  • Implement “WALK” over “CROSS” for textual cues.
  • Employ pedestrian symbols rather than arrows.
  • Combine text and symbols (e.g., green pedestrian icon with “WALK”) for optimal clarity.
Limitations include the exclusive use of yielding signals, possible demographic confounding from recruitment across different institutions, the small field-test sample (which limits generalizability), reliance on 2D simulations, and the omission of behavioral measures such as gaze shifts. Future work should test both yielding and non-yielding signals, use matched-group recruitment, and adopt immersive or interactive methods with more diverse participants to enhance ecological validity. In addition, future research could explore distributed communication frameworks that enable real-time information exchange between vehicles, pedestrians, and other road users. Such systems could complement eHMI designs by providing multi-source, synchronized cues, thereby improving safety and comprehension in complex traffic environments.

Author Contributions

Conceptualization, N.N., S.C. and G.P.; data curation, N.K.; formal analysis, N.K., S.C. and G.P.; funding acquisition, N.N.; investigation, N.K.; methodology, N.K.; project administration, N.N.; resources, N.N.; software, N.K.; supervision, N.N., S.C. and G.P.; validation, N.K.; visualization, N.N., S.C. and G.P.; writing—original draft, N.N., S.C. and G.P.; writing—review and editing, N.N., S.C. and G.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research project is supported by the Thailand Science Research and Innovation Fund Chulalongkorn University (IND_FF_68_007_2100_001).

Data Availability Statement

Inquiries regarding the data may be directed to the corresponding authors.

Acknowledgments

Ethical approval for this study was obtained from the Research Ethics Review Committee for Research Involving Human Participants (Group II—Social Sciences, Humanities, and Arts), Chulalongkorn University, under Certificate of Approval (COA) No. 163/66, Project ID 660112, titled “Communication Between Autonomous Shuttle and Pedestrians at Crosswalks”. The study was conducted in compliance with the Declaration of Helsinki, the Belmont Report, CIOMS guidelines, and the International Conference on Harmonization—Good Clinical Practice (ICH-GCP). All participants provided written informed consent prior to participation. During the preparation of this manuscript, the authors used ChatGPT (GPT-5, OpenAI, 2025) to assist in language editing and refinement of the text. The authors have reviewed and edited the output and take full responsibility for the content of this publication. This study was partially supported financially from the Special Task Force for Activating Research (STAR), Ratchadaphiseksomphot Endowment Fund, Chulalongkorn University. The tested vehicle was supported by Broadcasting and Telecommunications Research and Development Fund for Public Interest. The tested equipment was supported by Smart Mobility Research Center, Chulalongkorn University.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Sampling Details and Recruiting Sources

Appendix A.1. Procedure Overview

Participants were recruited using stratified, institution-based sampling with age-by-sex quotas. The sampling frame comprised four age strata (18–30, 31–45, 46–60, >60). Recruitment proceeded in four steps:
  • Define age × sex quotas for each stratum.
  • Identify recruiting institutions appropriate to each stratum.
  • Invite volunteers via institutional channels and enroll consecutively until quotas were met.
  • Screen for eligibility and conduct standardized orientation/practice before testing.
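The quota-filling logic in steps 1 and 3 above can be sketched as follows. This is a minimal illustration, not the study's recruitment software; the quota values and field names are hypothetical (the study's actual counts appear in Table A1):

```python
def enroll(volunteers, quotas):
    """Consecutively enroll volunteers until every (age stratum, sex)
    quota is filled; volunteers arriving after a cell is full are skipped.

    volunteers: iterable of dicts with "age_stratum" and "sex" keys.
    quotas: {(age_stratum, sex): target count}.
    """
    enrolled = []
    filled = {key: 0 for key in quotas}
    for v in volunteers:
        key = (v["age_stratum"], v["sex"])
        if key in quotas and filled[key] < quotas[key]:
            enrolled.append(v)
            filled[key] += 1
        if all(filled[k] >= quotas[k] for k in quotas):
            break  # all cells full; stop recruiting
    return enrolled
```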

Appendix A.2. Recruiting Sources by Stratum

  • 18–30 years: Universities and student housing
  • 31–45 years: Factories
  • 46–60 years: Government offices
  • >60 years: Nursing homes and households with older adults
These sources were selected to match the target age ranges and to facilitate participant access.

Appendix A.3. Participant Counts by Age Group, Sex, and Driving License Status

Table A1. Participant counts by age group, sex, and license status (n = 100).
Age Group | Female Licensed | Female Unlicensed | Male Licensed | Male Unlicensed | Total
18–30 | 7 | 6 | 7 | 6 | 26
31–45 | 7 | 6 | 7 | 6 | 26
46–60 | 6 | 6 | 6 | 6 | 24
>60 | 6 | 6 | 6 | 6 | 24
Total | 26 | 24 | 26 | 24 | 100

Appendix A.4. Controls for Institutional Differences

Because recruiting institutions were aligned with age strata, institution type was collinear with age group. To reduce potential confounding:
  • Age-by-sex quotas were enforced within strata.
  • Test environment and instructions were standardized (identical 24-inch display, fixed 90 cm viewing distance, identical lighting).
  • A within-subject design with semi-randomized trial order was used so that each participant experienced all eHMI formats under the same conditions.
Residual unmeasured differences (education, cognitive processing, technology familiarity) are acknowledged in the Limitations, with plans for matched-group recruitment or cognitive screening in future work.

Appendix B. Summary Descriptive Statistics

Table A2 summarizes the descriptive statistics (n, mean ± standard deviation, and median with interquartile range) for comprehension time (in seconds) across all eHMI display conditions.
Table A2. Summary descriptive statistics for comprehension time by eHMI display condition.
Condition (eHMI Display) | n | Mean (SD), s | Median (IQR), s
Text-only messages
“WALK” text—Green | 100 | 3.76 (0.94) | ~3.59 (1.05)
“WALK” text—White | 100 | 3.94 (1.15) | ~3.73 (1.50)
“CROSS” text—Green | 100 | 4.01 (1.06) | ~3.95 (1.47)
“CROSS” text—White | 100 | 4.16 (1.32) | ~4.10 (1.76)
Symbol-only messages
Pedestrian symbol—Green | 100 | 3.73 (0.84) | ~3.66 (1.09)
Pedestrian symbol—White | 100 | 3.92 (1.04) | ~3.98 (1.48)
Arrow symbol—Green | 100 | 3.91 (0.96) | ~3.90 (0.96)
Arrow symbol—White | 100 | 4.03 (1.12) | ~4.27 (1.61)
Combined text + symbol
Pedestrian + “WALK”—Green | 100 | 3.61 (0.72) | ~3.77 (1.24)
Pedestrian + “WALK”—White | 100 | 3.89 (1.06) | ~4.15 (1.08)
Pedestrian + “CROSS”—Green | 100 | 3.84 (0.98) | ~3.73 (1.34)
Pedestrian + “CROSS”—White | 100 | 4.04 (1.17) | ~4.09 (1.65)
Arrow + “WALK”—Green | 100 | 3.71 (0.97) | ~3.54 (1.19)
Arrow + “WALK”—White | 100 | 3.86 (1.16) | ~3.86 (1.56)
Arrow + “CROSS”—Green | 100 | 3.73 (1.04) | ~3.76 (1.70)
Arrow + “CROSS”—White | 100 | 3.93 (1.07) | ~3.93 (1.39)
Note: Medians and IQR are approximated from the aggregated data; all conditions had n = 100 participants.
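Per-condition summaries of the kind reported in Table A2 can be computed as follows. This is an illustrative sketch using Python's standard library (the study's actual analysis pipeline is not described); the sample times below are hypothetical:

```python
import statistics

def summarize(times):
    """n, mean (SD), and median (IQR) for one condition's
    comprehension times in seconds, as laid out in Table A2."""
    n = len(times)
    q = statistics.quantiles(times, n=4)     # quartiles Q1, Q2, Q3
    return {
        "n": n,
        "mean": round(statistics.mean(times), 2),
        "sd": round(statistics.stdev(times), 2),
        "median": round(statistics.median(times), 2),
        "iqr": round(q[2] - q[0], 2),        # Q3 - Q1
    }
```

Note that `statistics.quantiles` defaults to the exclusive method, so the IQR may differ slightly from spreadsheet conventions; this matches the approximate (~) medians noted in the table.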

References

  1. On-Road Automated Driving (ORAD) Committee. Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles; SAE International: Warrendale, PA, USA, 2021; Available online: https://www.sae.org/standards/content/j3016_202104/ (accessed on 12 August 2025).
  2. Segui-Gomez, M.; Luo, F.; Tingvall, C.; Taylor, M.P. Assessing the impact of the WHO Global Status Reports on Road Safety. Inj. Prev. 2025. ahead of print. [Google Scholar] [CrossRef] [PubMed]
  3. Choocharukul, K.; Sriroongvikrai, K. Road safety awareness and comprehension of road signs from international tourist’s perspectives: A case study of Thailand. Transp. Res. Procedia 2017, 25, 4518–4528. [Google Scholar] [CrossRef]
  4. Yue, L.; Abdel-Aty, M.; Wu, Y.; Zheng, O.; Yuan, J. In-depth approach for identifying crash causation patterns and its implications for pedestrian crash prevention. J. Saf. Res. 2020, 73, 119–132. [Google Scholar] [CrossRef] [PubMed]
  5. Mitsubishi Electric Corporation. Mitsubishi Electric Introduces Road-Illuminating Directional Indicators. Illuminated Projections on Road Surfaces Expected to Help Avoid Accidents; News Release. 23 October 2015. Available online: https://www.mitsubishielectric.com/news/2015/pdf/1023.pdf (accessed on 12 August 2025).
  6. Pereira, M.B. Mercedes-Benz Group AG Equity Research: Are Luxury Electric Vehicles Its Future? Master’s Thesis, Universidade NOVA de Lisboa, Lisbon, Portugal, 2024. Available online: https://www.proquest.com/openview/b005654ed4308db013595d823e5429aa/1?pq-origsite=gscholar&cbl=2026366&diss=y&casa_token=PZIgpyZfvfIAAAAA:5NYcaefBiDjGNtPOLzlMJQofNB2W6Lo1jZx4s2_Io9F5ip5QtOJHxLNomF6KkJOCpUiPAg (accessed on 12 August 2025).
  7. Chang, C.M.; Toda, K.; Sakamoto, D.; Igarashi, T. Eyes on a Car: An Interface Design for Communication between an Autonomous Car and a Pedestrian. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Oldenburg, Germany, 24–27 September 2017; pp. 65–73. [Google Scholar] [CrossRef]
  8. Onkhar, V.; Bazilinskyy, P.; Dodou, D.; De Winter, J.C.F. The effect of drivers’ eye contact on pedestrians’ perceived safety. Transp. Res. Part F Traffic Psychol. Behav. 2022, 84, 194–210. [Google Scholar] [CrossRef]
  9. Dey, D.; Habibovic, A.; Löcken, A.; Wintersberger, P.; Pfleging, B.; Riener, A.; Martens, M.; Terken, J. Taming the eHMI jungle: A classification taxonomy to guide, compare, and assess the design principles of automated vehicles’ external human-machine interfaces. Transp. Res. Interdiscip. Perspect. 2020, 7, 100174. [Google Scholar] [CrossRef]
  10. Dey, D.; Habibovic, A.; Pfleging, B.; Martens, M.; Terken, J. Color and animation preferences for a light band eHMI in interactions between automated vehicles and pedestrians. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–13. [Google Scholar] [CrossRef]
  11. Joisten, P.; Liu, Z.; Theobald, N.; Webler, A.; Abendroth, B. Communication of automated vehicles and pedestrian groups: An intercultural study on pedestrians’ street crossing decisions. In Proceedings of the Mensch und Computer 2021, Ingolstadt, Germany, 5–8 September 2021; pp. 49–53. [Google Scholar] [CrossRef]
  12. Eisma, Y.B.; van Bergen, S.; Ter Brake, S.M.; Hensen, M.T.T.; Tempelaar, W.J.; de Winter, J.C. External human–machine interfaces: The effect of display location on crossing intentions and eye movements. Information 2019, 11, 13. [Google Scholar] [CrossRef]
  13. Bazilinskyy, P.; Kooijman, L.; Dodou, D.; de Winter, J.C. How should external Human-Machine Interfaces behave? Examining the effects of colour, position, message, activation distance, vehicle yielding, and visual distraction among 1434 participants. Appl. Ergon. 2021, 95, 103450. [Google Scholar] [CrossRef] [PubMed]
  14. Guo, J.; Yuan, Q.; Yu, J.; Chen, X.; Yu, W.; Cheng, Q.; Wang, W.; Luo, W.; Jiang, X. External human–machine interfaces for autonomous vehicles from pedestrians’ perspective: A survey study. Sensors 2022, 22, 3339. [Google Scholar] [CrossRef] [PubMed]
  15. Ishihara, S. Ishihara’s Test Chart Book: 38 Plates Original Edition; The Memory Guru of India: Kanpur, India, 2014. [Google Scholar]
  16. Brill, S.; Payre, W.; Debnath, A.; Horan, B.; Birrell, S. External human–machine interfaces for automated vehicles in shared spaces: A review of the human–computer interaction literature. Sensors 2023, 23, 4454. [Google Scholar] [CrossRef] [PubMed]
  17. Man, S.S.; Huang, C.; Ye, Q.; Chang, F.; Chan, A.H.S. Pedestrians’ interaction with eHMI-equipped autonomous vehicles: A bibliometric analysis and systematic review. Accid. Anal. Prev. 2025, 209, 107826. [Google Scholar] [CrossRef] [PubMed]
  18. Winter, K.; Vivekanandan, A.; Polley, R.; Shen, Y.; Schlauch, C.; Bouzidi, M.K.; Derajic, B.; Grabowsky, N.; Mariani, A.; Rochau, D.; et al. Generative AI for Autonomous Driving: A Review. arXiv 2025, arXiv:2505.15863. [Google Scholar] [CrossRef]
  19. Lundgren, V.M.; Habibovic, A.; Andersson, J.; Lagström, T.; Nilsson, M.; Sirkka, A.; Fagerlönn, J.; Fredriksson, R.; Edgren, C.; Krupenia, S.; et al. Will there be new communication needs when introducing automated vehicles to the urban context? In Advances in Human Aspects of Transportation, Proceedings of the AHFE 2016 International Conference on Human Factors in Transportation, Walt Disney World®, Lake Buena Vista, FL, USA, 27–31 July 2016; Springer International Publishing: Cham, Switzerland, 2016; pp. 485–497. [Google Scholar] [CrossRef]
  20. Wang, Y.; Xing, S.; Can, C.; Li, R.; Hua, H.; Tian, K.; Mo, Z.; Gao, X.; Wu, K.; Zhou, S.; et al. Generative ai for autonomous driving: Frontiers and opportunities. arXiv 2025, arXiv:2505.08854. [Google Scholar] [CrossRef]
  21. Gao, Y.; Piccinini, M.; Zhang, Y.; Wang, D.; Moller, K.; Brusnicki, R.; Zarrouki, B.; Gambi, A.; Totz, J.F.; Storms, K.; et al. Foundation Models in Autonomous Driving: A Survey on Scenario Generation and Scenario Analysis. arXiv 2025, arXiv:2506.11526. [Google Scholar] [CrossRef]
  22. Ullrich, L.; Zimmer, W.; Greer, R.; Graichen, K.; Knoll, A.C.; Trivedi, M. A New Perspective On AI Safety Through Control Theory Methodologies. IEEE Open J. Intell. Transp. Syst. 2025, 6, 938–966. [Google Scholar] [CrossRef]
  23. Kato, S.; Takeuchi, E.; Ishiguro, Y.; Ninomiya, Y.; Takeda, K.; Hamada, T. An open approach to autonomous vehicles. IEEE Micro 2015, 35, 60–68. [Google Scholar] [CrossRef]
  24. Bazilinskyy, P.; Dodou, D.; De Winter, J. Survey on eHMI concepts: The effect of text, color, and perspective. Transp. Res. Part F Traffic Psychol. Behav. 2019, 67, 175–194. [Google Scholar] [CrossRef]
  25. Chen, H.; Cohen, R.; Dautenhahn, K.; Law, E.; Czarnecki, K. Autonomous vehicle visual signals for pedestrians: Experiments and design recommendations. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 1819–1826. [Google Scholar] [CrossRef]
  26. Owsley, C. Aging and vision. Vis. Res. 2011, 51, 1610–1622. [Google Scholar] [CrossRef] [PubMed]
  27. Paramei, G.V. Color discrimination across four life decades assessed by the Cambridge Colour Test. J. Opt. Soc. Am. A 2012, 29, A290–A297. [Google Scholar] [CrossRef] [PubMed]
  28. Dommès, A.; Merlhiot, G.; Lobjois, R.; Dang, N.T.; Vienne, F.; Boulo, J.; Oliver, A.H.; Cretual, A.; Cavallo, V. Young and older adult pedestrians’ behavior when crossing a street in front of conventional and self-driving cars. Accid. Anal. Prev. 2021, 159, 106256. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Experimental setup showing the pedestrian crossing interface on a 24-inch monitor at a viewing distance of 90 cm.
Figure 2. Simulation environment location used in the CARLA simulator for pedestrian crossing confirmation experiments.
Figure 3. Various eHMI display models designed and implemented for pedestrian communication in EAV scenarios.
Figure 4. The eHMI display of the Turing OPAL T2 EAV, providing visual cues to pedestrians for safe road-crossing confirmation.
Figure 5. Diagram of the field experiment simulation setup used to evaluate pedestrian crossing confirmation via eHMI communication.
Figure 6. Waypoint layouts for the field testing route: (a) Original waypoint map provided by the university research center, replicating the Chulalongkorn University campus road layout for autonomous vehicle navigation, (b) Modified 50 m route with acceleration and deceleration zones for eHMI–pedestrian interaction testing.
Figure 7. Experimental setup in the CARLA simulator for pedestrian–vehicle interaction.
Figure 8. Box plot comparing pedestrian response times in scenarios without eHMI and with eHMI communication.
Figure 9. Box plot comparing pedestrian response times for white versus green eHMI signal colors.
Figure 10. Box plot comparing pedestrian response times to eHMI displays in green and white colors for the 18–30 age group.
Figure 11. Box plot comparing pedestrian response times to eHMI displays in green and white colors for the 31–45 age group.
Figure 12. Box plot comparing pedestrian response times to eHMI displays in green and white colors for the 46–60 age group.
Figure 13. Box plot comparing pedestrian response times to eHMI displays in green and white colors for the over-60 age group.
Figure 14. Box plot comparing pedestrian response times to eHMI mixed-format displays combining text and symbols in green and white colors for the 18–30 age group.
Figure 15. Box plot comparing pedestrian response times to eHMI mixed-format displays combining text and symbols in green and white colors for the 31–45 age group.
Figure 16. Box plot comparing pedestrian response times to eHMI mixed-format displays combining text and symbols in green and white colors for the 46–60 age group.
Figure 17. Box plot comparing pedestrian response times to eHMI mixed-format displays combining text and symbols in green and white colors for the over-60 age group.
Figure 18. Box plot comparing pedestrian response times to eHMI mixed-format displays combining text and symbols in green and white colors across all age groups.
Figure 19. Box plot comparing the fastest and slowest eHMI formats in terms of pedestrian comprehension time.
Figure 20. Box plot showing variation in performance between the fastest and slowest field experiment formats.
Table 1. Participant counts by age group and sex (n = 100).
Age Group | Female (n) | Male (n) | Total (n)
18–30 | 13 | 13 | 26
31–45 | 13 | 13 | 26
46–60 | 12 | 12 | 24
>60 | 12 | 12 | 24
Total | 50 | 50 | 100
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
