Article

Game-Based Simulation and Study of Pedestrian-Automated Vehicle Interactions

by Georgios Pappas 1,2,3,†, Joshua E. Siegel 4,*,†, Eva Kassens-Noor 5, Jacob Rutkowski 4, Konstantinos Politopoulos 2 and Antonis A. Zorpas 6
1 Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48824, USA
2 Department of Electrical and Computer Engineering, National Technical University of Athens, Zografou, 15780 Athens, Greece
3 Laboratory of Educational Material and Methodology, Open University of Cyprus, Latsia, Nicosia 2220, Cyprus
4 Department of Computer Science and Engineering, Michigan State University, East Lansing, MI 48824, USA
5 Institute of Transport Planning and Traffic Engineering, TU Darmstadt, 64287 Darmstadt, Germany
6 Laboratory of Chemical Engineering and Engineering Sustainability, Faculty of Pure and Applied Sciences, Open University of Cyprus, Latsia, Nicosia 2252, Cyprus
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Automation 2022, 3(3), 315-336; https://doi.org/10.3390/automation3030017
Submission received: 23 May 2022 / Revised: 17 June 2022 / Accepted: 27 June 2022 / Published: 29 June 2022

Abstract

We identify the need for enhanced pedestrian–vehicle simulation tools and build such a tool to explore the interaction between pedestrian “players” and virtual human-driven and automated vehicles across different scenarios taking place in an urban environment. We first present contemporary research tools and then propose the design and development of a new desktop application that facilitates pedestrian-point-of-view research. We then conduct a three-step user experience experiment, in which a small number of participants answer questions before and after using the application to interact with virtual human-driven and automated vehicles in diverse road-crossing scenarios. Behavioral results observed in virtuality, especially when motivated by consequence, tend to simulate real life sufficiently well to inform design choices. From the simulation, we gained valuable insights into human–vehicle interactions. Upon completing this preliminary testing, we iterated the tool’s design and ultimately conducted an 89-participant study of human–vehicle interactions in three scenarios taking place in a virtual environment. Our tool raised participant awareness of autonomous vehicles and their capabilities and limitations, which is an important step in overcoming public distrust of AVs. We additionally saw that participants trust humans and technology less as drivers than in other contexts, and that pedestrians feel safer around vehicles with autonomy indicators. Further, we note that study participants increasingly feel safe with automated vehicles as exposure increases. These preliminary results, as well as the efficacy of the tool’s design, may inform future socio-technical design for automated vehicles and their human interactions.

1. Introduction

Automated Vehicles (AVs) are a next step in the evolution of the transportation system. AVs have the potential to drive positive societal change, including reduced emissions, travel costs, and transit times [1,2,3,4,5,6]. Beyond these impacts, AVs bring about the potential for enhanced safety, as human error is a leading cause of road accidents [2]. Despite AVs’ potential benefits, there remains distrust about their safety [3,7,8,9,10,11,12]. AV–pedestrian interaction in particular poses uncertainty [13] due to the complexity of capturing real-world data from pedestrians and vehicles in potentially hazardous scenarios. To conduct such research safely, realistic simulation might instead be used; realistic simulation has been used to great effect in other critical problem domains [14].
To meet this need, AV development relies on broadly accessible simulators such as AirSim [15], CARLA [16], and purpose-built data capture tools [17]; these examples emulate data from the perspective of the AV itself. Other studies have demonstrated simulators’ abilities to generate realistic data for AV–pedestrian interactions [18,19]. We see an opportunity to merge elements of these tools to build a pedestrian-centric, freely-available simulation tool for studying human–AV interaction. Such a tool would capture data informing not only the development of AV technology, but also the “policy” and design of automated vehicles to minimize adoption friction.
Addressing this need, our team has developed a first-person simulator that allows participants to control a human avatar within an urban environment, letting individuals interact with virtual automated and human-operated vehicles as they attempt to cross a street. This simulator has the potential to help researchers better study human–vehicle interactions and their implications for social welfare. For example, one might study whether humans, comfortable with the knowledge that automated vehicles see and will respond to their behaviors, will choose to cross unsafely with the certainty that they will not be hit, no matter the social cost to the occupants of those vehicles or to the surrounding traffic. That is, if a human knows a vehicle is automated, and that automated vehicles prioritize pedestrian safety, they might abuse automated vehicles and create dangerous situations and traffic through repeated unsafe crossings. This tool allows researchers to capture and permute over such representative interactions and may help raise participant awareness of AV capabilities while building trust, easing the safe and efficient adoption of driverless technologies.
In this article, we conduct a survey of prior art in AV–pedestrian simulators; describe the process by which our simulator was developed, tested, and validated; and include preliminary results from a study of AV–pedestrian interactions conducted within the simulator. These results aim to answer core questions related to human–AV perception and interaction, particularly in urban street crossing contexts. In describing our tool, we detail multiple design iterations leading up to the current version that underpinned an international study of three pedestrian–AV interaction scenarios. We close by exploring how our tool can be applied to advance the efforts of the AV industry, particularly related to human interaction.
The point of the newly-developed tool is to build on previously-developed XR capabilities and prior human–AV interaction studies to capture data from scenarios not yet explored in depth, with the aim of better understanding human–AV interactions. Specific goals include highlighting considerations, concerns, and opportunities in adapting vehicle designs and behaviors to facilitate cordial interactions and overall social well-being, e.g., whether an AV should ever intentionally cause concern, real or imagined, for pedestrians as a means of maintaining mutual respect, or whether AV self-identification does more to drive trust or recklessness.

2. Background and Motivation

AVs have the potential to benefit the efficiency, environmental friendliness, and safety of our world [2,3,4,8,20]. Surveyed individuals would consider using automated vehicles when they become available for general public use, so long as they do not exacerbate societal inequity [2,7,21]. However, some people do not trust AVs, citing concerns about privacy, safety, and uncertainties with regard to algorithm performance [2,22,23,24,25]. The likelihood of AV adoption depends largely on individual trust [9,26,27]. Reig et al. concluded that those in the general public who do not support AVs may not understand AVs’ capabilities and lack education on the topic [28], suggesting that education may drive support. Even this, however, may not be sufficient to comfort individuals concerned about uncertain algorithm performance, such as how vehicles will perform in the absence of sensor data or external communications [29,30]. Studies, therefore, explore the best ways for autonomous vehicles to communicate information with human drivers and pedestrians in real time, better allowing humans to predict the algorithm’s behavior and plan accordingly [4,31,32,33,34].
Pedestrian–AV safety is one such ongoing research concern. Exploration in the area of pedestrian safety has historically been limited by the inability to put real human subjects into dangerous situations [18]. Due to the risk of AVs causing property damage, injury, or loss of life, there are also ethical ramifications that must be considered. One dilemma facing AVs is whether or not they should follow traffic laws perfectly under all circumstances. One potential solution comes from a study by Thornton et al. [35], who found that law-abidance should be based on the role of the vehicle, so that vehicles such as ambulances carrying a patient to the hospital are allowed to break the law to increase mobility, whereas taxis should follow laws strictly [35]. Another moral dilemma is the trolley problem, recast for automated vehicles facing an unavoidable crash. From the perspective of Mirnig and Meschtscherjakov [36], the trolley problem is a moral dilemma that should simply be used to think about the issue further. Keeling, however, finds that the trolley problem should be considered for driving cars, but in terms of minimizing damage rather than of what is morally right or wrong [37].
Recently, Deb et al. explored the use of virtual environments to research pedestrian–automated vehicle interactions, concluding that virtual environments can simulate pedestrian actions with sufficient realism to mirror real-world results [18]. Jayaraman et al. applied a similar concept to develop a simulator that collected information on pedestrian–AV interactions [38], creating “an accurate long-term pedestrian crosswalk behavior (trajectory) prediction model” from synthetic data [38].
Many of these tools and applications are designed using popular game engines. The Unity Game Engine is one option that has been used successfully in pedestrian simulations [38]. This multi-platform engine allows developers to create a simulated environment in which they can spawn or destroy (Game)Objects [39,40]. The design and development process has become easier since a variety of 3D objects can be acquired through Unity’s extensive asset store [39,41]. This flexibility allows researchers to develop simulations [17], including gamified solutions [6,17,40], iteratively, particularly at early stages [42].
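As a concrete illustration of this spawn/destroy workflow, the following minimal Unity C# sketch periodically instantiates a vehicle prefab at an intersection and removes it after a set lifetime. The class, field, and prefab names are illustrative placeholders and not code from the tool described in this article.

```csharp
using UnityEngine;

// Minimal sketch of Unity's spawn/destroy workflow for traffic GameObjects.
// VehicleSpawner, vehiclePrefab, and the timing values are assumptions for
// illustration only.
public class VehicleSpawner : MonoBehaviour
{
    public GameObject vehiclePrefab;   // assigned in the Unity Inspector
    public Transform spawnPoint;       // where new vehicles appear
    public float spawnInterval = 5f;   // seconds between spawns
    public float vehicleLifetime = 60f;

    private float timer;

    void Update()
    {
        timer += Time.deltaTime;
        if (timer >= spawnInterval)
        {
            timer = 0f;
            // Instantiate a new vehicle GameObject at the spawn point.
            GameObject car = Instantiate(vehiclePrefab, spawnPoint.position, spawnPoint.rotation);
            // Remove it automatically once it has presumably left the scene.
            Destroy(car, vehicleLifetime);
        }
    }
}
```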
Another important factor in conducting virtual research is determining the ideal mode of interaction (2D, 3D, or some form of XR including VR, AR, and MR). Virtual Reality (VR) allows participants to feel as though their actions have real consequences [41,43]. However, working with virtual reality can cause “virtual reality sickness” as a result of prolonged time spent in virtual environments [18], suggesting that breaks may need to be part of game design. Despite this, VR has been used successfully in AV studies such as Shahrdar et al.’s, in which participants immersed themselves in a VR simulation that tested their trust of virtual AVs making observable mistakes [22]. The study found that most participants were able to regain trust in an AV shortly after the vehicle had made an error [22]. Löcken et al. also conducted a comparative analysis of AV–pedestrian interaction in VR [44]. In general, eXtended Reality (XR) or Mixed Reality (MR) can also be used in future experiments involving AVs [45], allowing for more information to be communicated to passengers and pedestrians as well as helping to build trust between humans and AVs [46].

2.1. Embodiment in Virtual Environments and Video Games

Virtual Environments and video games captivate their audiences by allowing individuals access to non-typical experiences. This corresponds to the idea of embodiment, in which a player assumes the role of a virtual character and, in so doing, subconsciously extends themselves to inherit that character’s goals and aspirations, strengths, and limitations [47,48]. Embodiment then makes the character’s experience that of the player. Embodiment is an important concept in video games, as player engagement depends upon how well the player can take control of their virtual character; without sufficient interactivity, games become similar to other forms of media such as films or books. In order to create a strong sense of embodiment, one must design a virtual world around the character that helps fully realize the character’s goals while also playing to the character’s strengths [47]. This allows the player to use the character’s affordances effectively to achieve an end goal.
Embodiment is also important to gamified research, although there is debate about its efficacy and its limitations on immersion. Alton raises the question of whether one should refer to current virtual character–player relationships as embodiment at all, because embodiment requires physical reactions and feelings that come as a result of the virtual experience [49]. It is debatable whether these criteria are met by VR or even video games in general, which leads to some ambiguity in terms of embodiment in virtual gaming. Although this raises concerns about the idea of embodiment in gamified research, embodiment has proven effective in virtual environments that allow users to take control of an avatar and explore; e.g., a study conducted by Kiela et al. found that embodiment in video games is a valid first step toward developing AI [50]. In that study, humans virtually simulate what a human would and should do in a situation, and an AI then uncovers patterns from the humans, building upon these to refine an algorithm that can simulate what a human would do in the same situation [50]. This allows algorithms to improve continually without feedback from human subjects in the virtual world. Another study, by Gonzalez-Franco and Peck, found that the use of embodied avatars in VR leads to higher levels of user immersion inside simulations [51]. This is important because the more immersed a user is in a simulation, the more realistic the simulation will feel, leading to the creation of more representative real-world data.

2.2. Existing AV Simulators

There have been prior efforts to study pedestrian–AV interaction, including approaches incorporating both the notions of gamification and embodiment. In considering such efforts, we now explore three representative applications: the first two are among the most popular in the field of automated vehicle simulation, and the third was selected for its stronger focus on pedestrian-centric data generation. The first application is CARLA [16], developed by the CARLA Team. This open-source automated vehicle simulator is built with the Unreal Game Engine and works with the OpenDRIVE standard to define roads and other urban scenery. CARLA has grown to be one of the most widely used options for automated vehicle simulations, and it has built a large and growing community.
The second application examined is AirSim [15], developed by Microsoft Research. AirSim is a computer-based simulator built for drone and self-driving car research. Similar to CARLA, this simulator is built upon the Unreal Game Engine, but there is also an experimental version based on Unity. This simulator is primarily used for research purposes surrounding AI and experimentation with deep learning, computer vision, and reinforcement learning.
The final application that we looked at was a tool developed by Deb et al. for research at Mississippi State University [3,18]. This simulation is not publicly available, though from its published descriptions we can conclude that it focuses on pedestrians crossing an intersection in virtual reality using the HTC Vive headset. The tool was developed with the Unity Game Engine, setting it apart from the other two applications.
These three applications are not an exhaustive list of simulators that serve to study human–vehicle interactions; however, they are broadly representative of the contemporary research field.
To compare these applications, we analyzed each application based on five categories:
  • Availability—Availability refers to which simulators are publicly available for further research and whether the simulators are free and/or open source. Broad availability increases the likelihood of public use, and therefore increases the research utility of the tool.
  • Engines and Technical Development—This category refers to the underlying technical elements enabling the simulators. Most simulators are built upon game engines as a means of enabling realistic physics and graphics with relatively straightforward development techniques. The dominant game engines are Unity and Unreal, although other options are available. Depending on the game engine, different programming knowledge may be needed for tool development or modification: Unity requires C# (or JavaScript), and Unreal requires C++. Similarly, the game engine determines the supported platforms for tool use. Broader platform support tends to increase the availability of the tool for research use among diverse audiences.
  • Usability, Type of Application, and Target Audience—This refers to the type of application each simulator is, namely whether it is a desktop or virtual reality application, which dictates the cost and complexity of the requisite software and hardware. We then identify, in coarse and subjective terms, the complexity of setting up and running the simulator. Because this research combines social and technical elements, it is not reasonable to assume a high level of technical familiarity from all application users—even survey facilitators. We therefore also identify the target audience for which the simulator may be readily used to conduct research.
  • Realistic Graphics Environment—This category identifies the subjective visual quality of the 3D environment and models or prefabs used in the simulation. Graphics are an important element towards increasing the engagement of a simulator and a user’s perceived embodiment. The more plausible the graphics, the better and more realistic the experience the tool will offer. Increased realism is often seen as essential to generating plausible and repeatable research data from game-based tools.
  • Pedestrian Point of View—Pedestrian Point of View denotes whether a tool is well-suited to pedestrian-centric research and development application. Since we explore human interaction with AVs, and not the other way around, it is important to identify which simulators offer the ability for players to “become” a pedestrian.
Comparing these tools allows us to identify unmet development needs to inform our own tool design.
Table 1 shows that the CARLA Simulator and AirSim are both free and open source. This gives these tools an advantage over the Mississippi State University Tool, as each has had years to build a community supporting its application. The Mississippi State University tool is not publicly available.
Table 1 indicates that CARLA is built upon the Unreal Engine and is therefore accessible to users with knowledge of C++. Some knowledge of Python is also required to implement some forms of Deep Learning within the simulator. AirSim is also built upon the Unreal Engine and therefore requires users to have the same C++ and Python knowledge in order to perform similar development. However, AirSim also has experimental compatibility with the Unity Game Engine, thereby broadening its community by allowing development in the C# programming language. The Mississippi State University Tool also uses the Unity Game Engine, but it is not publicly available for open development.
Usability and target audience analysis (Table 1) show that the CARLA Simulator and AirSim require strong programming skills in multiple programming languages and are relatively hard to set up, especially for inexperienced users. This limits the audience of these applications to programmers and engineers. The Mississippi State University Tool succeeds comparatively due to its ease of use and setup, as the application is a pre-built export application. This allows the tool to be used by programmers, engineers, enthusiasts, and researchers who may want to conduct research. The use of virtual reality is another unique differentiator for research.
The next category, which is more subjective, compares the graphical realism of each simulator (Table 1). The CARLA Simulator and AirSim have multiple scenes and use high-quality 3D assets, allowing for an immersive experience within the simulation. The Mississippi State University Tool falls relatively short in this category. We use this in part as a proxy for the perceived realism of, and therefore the transferability of findings from, the experimental simulator.
The final category is the implementation of the pedestrian point of view in the application (Table 1), which is the most important category for studying under-explored areas of AV–pedestrian interaction. Both CARLA and AirSim fall short in this category, since their focus is on automated vehicles rather than pedestrian research. As of this writing, there is no direct user control of pedestrians, as they are AI-oriented. Both of these applications do, however, allow for customizable scenarios with pedestrians, such as the volume and diversity of pedestrian populations. The Mississippi State University Tool focuses on the pedestrian point of view, allowing users to embody the pedestrian in virtual reality.
This comparative analysis identified an opportunity to create a new, readily-adaptable, easy-to-use tool focused on pedestrian-centered movement. This tool should ideally also be publicly available (if not open source), as this would grant the largest possible user base without unnecessarily increasing complexity of use. Ease of use was a priority for our team, as we need our tool to be used not only by those well versed in the technical fields but also by participants in research studies and other scientists with diverse backgrounds. The graphics should also be realistic in order to better simulate the experience and immerse users in it.
In the following section, we more explicitly describe the unmet opportunity motivating the development of a new, realistic, easy-to-use, and pedestrian-centric simulator for capturing data from simulated human–AV interactions. We also introduce preliminary research questions that we try to answer with the tool as a means of demonstrating the simulator’s plausibility as a research-enabler.

2.3. Unmet Opportunity

Having identified related research tools and explored them in Section 2.2, we noted the lack of tools that combine a freely available, easy-to-use simulator with a pedestrian-centric viewpoint, realistic graphics, and straightforward game-development tools. In particular, we see an opportunity to build a cross-platform tool capable of capturing data from within the game environment, without any requisite technical expertise.
Given these needs and the yet-unrealized potential to use game-based tools to address pedestrian–automated vehicle interactions from a new angle, we set out to design, develop, and validate through a small-scale survey an easy-to-use pedestrian point-of-view simulator. We do so by creating and demonstrating the use of a tool to answer questions that have been under-explored in prior studies:
  • What are typical pedestrian–vehicle interaction patterns for varying degrees of autonomy?
  • Can goal-driven humans effectively “coexist” with goal-driven automated vehicles, e.g., in a social context?
  • Does the degree to which a pedestrian is familiar with autonomy change these interactions? How?
  • Does the knowledge that a vehicle is automated change a pedestrian’s interaction patterns? How?
  • Does exposure to automated vehicles increase or decrease pedestrian trust in those vehicles?
These questions are important because they will provide valuable insights from the pedestrian perspective, captured from a large and diverse spectrum of study participants. We will specifically be able to explore the stage of building awareness of AVs, increasing trust towards self-driving cars, and even identify possible “abuse” resulting from pedestrians acting with hostility towards AVs. The most prominent example, borne out in reality for years, is that pedestrians may seek to “challenge” AVs with seemingly-unsafe crossings, causing the vehicle to swerve, stop, and/or wait. A better understanding of the motivation behind and potential response to such hostility will help to inform technical decisions and perhaps policy, helping move towards the broader adoption of automated vehicles and the resulting social welfare.
Towards the aim of addressing these and other questions, we set out to develop “The Pedestrian Tool” (Appendix A.2), an easy-to-use, Unity-based simulator focused on pedestrian embodiment and pedestrian–vehicle interactions. This will allow users to better understand interaction patterns between humans and automated vehicles through data collection using the tool. The tool will also grow in importance as automated vehicles begin to be introduced to the general public, which raises many questions about their interactions with humans. In doing so, this research crucially contributes toward addressing a gap in the current social science literature, which ignores the period of transition when both AVs and human drivers share the road space [52]. There is a general distrust surrounding automated vehicles due to a lack of understanding of how they work or what they will do, so this tool will allow researchers to better understand how to combat this distrust while also allowing the general public to interact with automated vehicles in a safe and harmless environment. The stigma labeling automated vehicles as ’unpredictable machines’ could create additional barriers to their involvement in society and even lead to commercial failure, though we believe that their benefits far outweigh their drawbacks. One of the main goals of the tool is to raise awareness and improve public perception of automated vehicles and their safety.
The following sections describe the materials and methods used to develop a game-based tool aimed at answering these questions and describe the data capture and analysis from a small cohort to prove the tool’s utility.

3. Materials and Methods

This section describes the Pedestrian Tool’s design and development.
Following the principles of research-creation [53], we aimed to create a meaningful [54] interactive user experience and thereby increase the likelihood of generating accurate, useful, and repeatable results. We therefore aimed to design a serious gamified tool [55] that builds awareness and at the same time educates [56] participants about autonomous vehicles through interactions taking place within a realistic simulated environment. Embodiment was particularly important, as users were encouraged to take on the role of a virtual pedestrian.
Our application was developed using the agile methodology and the Scrum model [57]. Working in sprints and iterating [58] until we reached stable builds, we developed and conducted parts of our research with each stable build of the tool. The tool was developed in the Unity Game Engine using both free and purchased assets from the Unity Asset Store. Early builds, including survey results and demographic data, are described in detail in [19].

3.1. Initial Build and Play Testing

The authors first built a basic tool in which participants are dropped into a simulated urban environment and asked to explore and cross the road. The first build of our application featured only an urban scene, and the users could walk and cross intersections freely without any consequence even if they were ’virtually’ hit by a vehicle. We surveyed users formally and informally about their response to the game, as well as to automated vehicles.
First, participants were surveyed about their feelings toward self-driving vehicles, including their level of familiarity, whether they think self-driving vehicles will become a safer alternative to human-driven vehicles, and questions exploring various plausible human–AV crosswalk interactions. The respondents were generally familiar with automated vehicles, were positive to neutral about AVs’ potential, claimed to obey traffic laws, and were uncertain about how safe they would feel should an AV approach them while crossing the road. Participants were then asked to use the simulator for five minutes with the goal of crossing intersections safely, after which a followup survey was conducted. From this survey, we learned that participants may not check for safety prior to crossing and tend to find the concept of being hit by a virtual AV either fun or not scary. This lent credence to the idea that a participant may try to “speed run” across the road in the absence of consequence and suggested the need for consequence in future studies as a means of discouraging such behavior and better reflecting reality.

3.2. Second Build and Play Testing

A second iteration of the simulator added a scoring mechanism using in-game “collider” elements (Figure 1) to motivate faster crossing and to shift the focus of “fun” [59] from being hit by a car to maximizing points collected. For every successful crossing, a user obtained 100 points. We also implemented a timer to limit participant study time. In this build, “careless” users also lose the game if they are hit by a vehicle. Penalization for contact ends the game and transitions to a Game Over scene (Figure 2), where the final score is shown.
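A scoring and penalty mechanism of this kind maps naturally onto Unity trigger colliders. The following minimal C# sketch shows one way it could be wired up; the class, tag, and scene names are illustrative assumptions rather than the tool’s actual implementation.

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Minimal sketch of collider-based scoring, a scenario timer, and a
// game-over penalty on vehicle contact. Names are placeholders.
public class PedestrianScoring : MonoBehaviour
{
    public int score = 0;
    public float timeRemaining = 120f;  // per-scenario time limit, in seconds

    void Update()
    {
        timeRemaining -= Time.deltaTime;
        if (timeRemaining <= 0f)
            EndGame();
    }

    // Trigger colliders on the far side of each crosswalk award points;
    // contact with a vehicle collider ends the game.
    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("CrossingGoal"))
            score += 100;   // successful crossing
        else if (other.CompareTag("Vehicle"))
            EndGame();      // "hit" by a car: penalization ends the game
    }

    void EndGame()
    {
        // Transition to a Game Over scene that displays the final score.
        SceneManager.LoadScene("GameOver");
    }
}
```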
Additional surveys were conducted with four different testers who had more gaming experience and less familiarity with automated vehicles than the previous group (as self-identified). In testing this version, we learned that the user experience depended greatly on the computer used, and game-play fluidity was associated with feelings of immersion. For the first time, we also witnessed players fully engaging with the game world; this behavior is known as the “explorer” player type [60,61]. Importantly, we noted that user behavior towards vehicles changed: users started to avoid cars and follow pedestrian crossing signals. This suggested that supervision—either human or through additional in-game scoring mechanisms—was necessary to capture data more reflective of a reality with consequence.

3.3. Third Build and Play Testing

A third experimental build was play tested with nine participants. This version built upon the second build, which subjective play testing and objective surveys had supported as being realistic and conducive to embodiment, and added three unique two-minute play scenarios. The first scenario included vehicles that always stopped for pedestrians; the second scenario included some vehicles that would stop for pedestrians and some that would not; and the third scenario likewise included some vehicles that would stop and others that would not, but among these, automated vehicles indicated their presence with a bright green light whenever they came within 15 m of the player.
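Within Unity, such a proximity-based indicator can be realized in a few lines of C#. The following is a minimal sketch under assumed object and field names; it is illustrative rather than the tool’s actual code.

```csharp
using UnityEngine;

// Minimal sketch of a proximity-based autonomy indicator: a green light
// under the AV is shown only while the pedestrian is within 15 m.
// Field and object names are illustrative placeholders.
public class AutonomyIndicator : MonoBehaviour
{
    public Transform player;          // the pedestrian avatar
    public GameObject greenLight;     // light object mounted under the AV
    public float triggerDistance = 15f;

    void Update()
    {
        float distance = Vector3.Distance(transform.position, player.position);
        greenLight.SetActive(distance <= triggerDistance);
    }
}
```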
A more expansive set of survey questions was captured, including questions related to real-world scenarios, pedestrian behaviors, and technological and autonomous vehicle familiarity. Intermediate surveys were captured between each play scenario. Following the final scenario, additional questions were answered regarding user experience, usability [62], realism [63], and demographics.
Pre-test data suggested that most (71%) participants would never try to cross a real-world intersection without checking traffic lights. A majority of participants also indicated that they would not feel safe if an automated vehicle approached them in an intersection, even if that vehicle indicated that it saw people in the intersection and planned to stop. Additional data indicated that participants were somewhat unlikely to trust AVs to stop for an object or person in the road (M = 4.7, Likert scale 1–10) and slightly unfamiliar with AVs at intake (M = 4, Likert scale 1–10).
Post-test data indicated that participants gained awareness of AVs throughout the process, with 53% noticing AVs in scenario two and 69% noticing AVs in scenario three. In scenario one, 37% of participants reported noticing self-driving vehicles, though none were present. When asked about the clarity of self-driving vehicle intentions, participants in scenario two were mostly unsure (57%), while a majority of those in scenario three reported that intentions were clear (57%). Participants’ likelihood of following signals decreased from neutral to moderately unlikely as participants encountered scenarios with an increased presence and visibility of self-driving vehicles. Participants also became less trusting of the self-driving vehicles as their presence increased across the scenarios (scenario one: slightly likely; scenario two: slightly unlikely; scenario three: moderately unlikely) and increased the likelihood with which their avatar “ran” across the road. Additionally, participants reported becoming increasingly cautious at the onset of scenario three, as opposed to careless in their behaviors: in scenarios one and two, 86% and 71% of participants, respectively, reported being careless, while only 43% reported carelessness in scenario three.
User data confirmed that the simulation was easy to use and realistic; this, combined with the earlier-noted trends and user feedback, led to the creation of the simulator’s final version featuring both the play testing and sampling phases from within the tool.

3.4. Final Build and Play Testing

In the fourth and final build, we implemented survey questions in-game (Figure 3, Figure 4, Figure 5 and Figure 6) to facilitate ease of use. This build comprises a first-person perspective gamified application and features a consent form (Figure 7, text in Appendix A.1), a main menu (Figure 8), three different scenarios (adapted from the prior build) within an invariant urban environment (Figure 9) along with their guidelines (Figure 10), four in-game surveys (Figure 6) that auto-populate external Google Forms, and a game over scene (Figure 2).
Upon launching, users are presented with the Tool’s Terms of Service. If users consent, the game loads to an Intake survey (first survey—Figure 6) and, from there, to the main menu where they select one of the three scenarios. These three scenarios take place in a common environment, but the cars in each (virtual AVs and human driven) comprise different mixes and capabilities.
The in-game surveys are associated with four Google Forms that are auto-filled when users submit their selections from within the game. Information capture comprises:
  • the survey answers to the questions,
  • the location data (country/city) that is automatically extracted from the users’ IPs using a free API service, and
  • some in-game info extracted by events during the scenarios (score, being “hit” by a car, and time of “hit”).
Our tool specifically informs all participants of the data collection and is compliant with Michigan State University IRB requirements (exempted under STUDY00005920) and with the GDPR. The capture mechanism is presented in Figure 11, and a minimal sketch of the form-submission step is given after the scenario definitions below. The three scenarios are defined as follows:
  • Scenario 1: All cars, human or AI operated, stop safely before hitting the user-pedestrian if physically possible. The possibility of being “hit” is low.
  • Scenario 2: Some cars may stop before colliding with the user-pedestrian. Users do not know whether any given car (AV or human driven) will stop. The possibility of being “hit” is increased.
  • Scenario 3: Some cars may stop before the user-pedestrian while others will not. In this case, the AVs are identified by a green light indication that appears under the vehicles. This indication starts when the pedestrian–AV distance is less than 15 m. The AVs will always stop before hitting the pedestrian if physically possible (Figure 12).
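Returning to the in-game survey capture described above, one common way to auto-fill a Google Form from Unity is to POST the answers to the form’s response endpoint. The sketch below illustrates this approach; the form URL and the entry.* field identifiers are placeholders, and the tool’s actual endpoints and field names are not published here.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Minimal sketch of posting in-game survey answers to a Google Form.
// FormUrl and the entry.* IDs are hypothetical placeholders.
public class SurveyUploader : MonoBehaviour
{
    private const string FormUrl =
        "https://docs.google.com/forms/d/e/EXAMPLE_FORM_ID/formResponse";

    public void Submit(string participantKey, string answer, int score, bool wasHit)
    {
        StartCoroutine(Post(participantKey, answer, score, wasHit));
    }

    private IEnumerator Post(string participantKey, string answer, int score, bool wasHit)
    {
        WWWForm form = new WWWForm();
        form.AddField("entry.1000001", participantKey); // anonymous per-player key
        form.AddField("entry.1000002", answer);         // survey response
        form.AddField("entry.1000003", score.ToString());
        form.AddField("entry.1000004", wasHit ? "hit" : "not hit");

        using (UnityWebRequest request = UnityWebRequest.Post(FormUrl, form))
        {
            yield return request.SendWebRequest();
            // Unity 2020+ result check; log rather than block the player on failure.
            if (request.result != UnityWebRequest.Result.Success)
                Debug.LogWarning("Survey upload failed: " + request.error);
        }
    }
}
```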
Developing multiple versions of the tool with participant observation and surveys allowed us to build a tool that better meets researcher and participant needs than existing ones, particularly as its ease of use and ease of data capture outstrip those of prior solutions.

4. Results

An 89-person study was conducted using the fourth and most recent version of the tool. Though a larger study would be preferable to establish more robust and statistically significant results, the sample was seen as being of appropriate size to gain preliminary insights into the function of the tool as well as into human–vehicle interactions. We are seeking external support for this work and intend to continue capturing data using this tool after this manuscript’s publication; we will continue to share updated datasets and, if appropriate, updated articles, as new findings emerge.
All study participants were recruited by word of mouth. There were no exclusion criteria, except for eliminating data captured by study designers who may have been biased due to over-familiarity with the simulator and/or automated vehicles and their social and technical capabilities and limitations.
Data were captured unsupervised, typically using the participant’s own computer. All participants agreed to participate in the study using opt-in consent and language reviewed and approved for IRB exemption by Michigan State University. This exemption indicates that the IRB had no significant concerns about the ethics or potential consequences arising from the study. The study was also conducted within the guidelines of the GDPR to allow for the capture of data from European participants.
The participants reflected views and experiences from four countries and 27 cities. The questions and results are presented in Table 2 and Table 3, respectively. Scenarios are as defined in Section 3.4.
Given the preliminary nature of the study, analysis was limited to capturing the mean and standard deviation in response to each question. For further analysis, sanitized pedestrian data (no unique identification or timestamp) have been made available for download alongside this manuscript (Appendix A.2). Additional data will be added to the repository as the tool’s use continues.
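To make the summary statistics concrete, the following minimal C# sketch computes the mean and sample standard deviation for one question’s Likert responses; the values shown are placeholders, not data from the study.

```csharp
using System;
using System.Linq;

// Minimal sketch of the per-question summary statistics (mean and sample
// standard deviation). The response values below are placeholders.
class SummaryStats
{
    static void Main()
    {
        double[] responses = { 4, 7, 5, 6, 3, 8 };  // placeholder Likert scores (1-10)
        double mean = responses.Average();
        double sd = Math.Sqrt(
            responses.Sum(r => (r - mean) * (r - mean)) / (responses.Length - 1));
        Console.WriteLine($"M = {mean:F2}, SD = {sd:F2}");
    }
}
```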

5. Discussion

When analyzing the preliminary results among aggregate data (not divided among city, country, or any other identifiable parameters), we observe compelling trends.
The first finding relates to comfort when crossing around human-driven and automated vehicles. In analyzing sample data, we identify that participants trust humans less than technology and, at the same time, trust automated vehicles less than other technologies but more than humans. We also note that participants trust humans less when driving than when completing other tasks. This leads to the conclusion that humans are more trusting of automated vehicles to stop for pedestrians than of human drivers to do the same; this effect is more pronounced after exposure to Virtual Automated Vehicles (VAVs).
The second finding relates to participant comfort with VAVs. We find that, at the outset, participants are somewhat comfortable with the notion of AVs. Experience engaging with VAVs emboldens the participants to cross blindly and walk in front of VAVs more assertively than in front of humans. This effect appears to increase with exposure.
This relates to a crucial insight: participants claimed to feel safer knowing that an AV acknowledges their presence, which is borne out by increased confidence crossing in front of VAVs in Scenario 3. Based on this, we present finding three: that participants, acting as pedestrians, prefer knowing which vehicles are automated and that those automated vehicles have perceived the participant’s presence. This is borne out by participants’ reduced feelings of safety in Scenario 2 (in which VAVs coexist with virtual, human-driven vehicles, but vehicles are not clearly marked) versus Scenarios 1 and 3, where vehicles are all human operated, or where human-operated and automated vehicles intermix and are clearly indicated, respectively. It is clear that explicit communication of a vehicle’s operating mode and of its perception of pedestrians is essential to improving pedestrian comfort in and around mixed traffic. This suggests AVs require indication to build bystander comfort, which is backed up by the fact that participants generally had low confidence in their ability to detect unmarked AVs as well as a strong participant preference to know, rather than think, that a vehicle is automated.
Participant confidence decreased across sequential scenarios while safe crossing scores increased; this may suggest that heightened awareness of AVs leads to increased caution, thereby improving overall safety. It is unclear whether, with further exposure, this trend might reverse.
Finally, we noticed that the participant scores, in general, had a high variability. This may be a reflection of a “gamer divide” and/or indicative that different control options (e.g., locomotion, gamepad, keyboard, and mouse) may be desirable for future builds. This is an important area for continued study, as any feelings of loss of control reduce embodiment and make results less robust.
With regard to our specific questions from Section 2.3, we then find from the data that pedestrians and vehicles interoperate reasonably well, with improved coexistence based on the availability and clarity of vehicle type and state communication.
We note that diverging or competing goals (e.g., rapid pedestrian street crossing versus a smooth ride and fast transit for vehicle occupants) may cause adversarial play scenarios; we also observed this in simulator testing through participants’ higher scores with increased exposure to vehicles. In general, however, “coexistence” is possible for a single-pedestrian/multiple-vehicle scenario. This may change in more complicated environments, e.g., where multiple pedestrian agents may coordinate and/or collude.
In answering this question, we observed that the question itself is limited in utility, as it only considers the pedestrian perspective. This suggests a potential future area of exploration, in which some general social score taking into consideration both pedestrian and passenger perspectives might be studied.
We see also from the results that pedestrian familiarity with automated vehicles (and the tool itself) increases comfort around VAVs, and that knowledge of a vehicle’s operational mode and status does change a pedestrian’s comfort and interaction patterns.
From scores alone, we also see that increased exposure increases comfort around automated vehicles, which may be seen as a surrogate for measuring trust.
Our intention in conducting this survey was to prove the viability of the tool; now that this is proven, we will continue to capture data and share the tool as an executable and with the source code (as allowable per asset licenses) with researchers so as to grow our result data. This will allow us to better disaggregate data, e.g., to look at trends across geographies. With more data, we might be able to make bolder claims and conclusions, for example, in determining how frequently an AV “should” run a stop sign to strike the right balance of pedestrian caution and overall social welfare of the transit system. It will also allow us to further explore the limitations of such a tool, e.g., in determining the consequence on result significance of reality and virtuality not being exact “mirrors”.

6. Conclusions

We developed a series of tools addressing unmet needs in pedestrian–vehicle interaction simulation. These tools have a pedestrian-first focus and are easy to use and widely available. To prove the utility of the refined version of these tools, we conducted a survey of diverse participants from a global background and with varied levels of familiarity with games and automated vehicles.
Three scenarios were tested, in which we asked participants to cross intersections as quickly as possible: (1) all human-operated vehicles, (2) a mix of human-driven and automated vehicles, and (3) a mix of human-driven and self-announcing automated vehicles. Getting hit by a vehicle caused the game to end as a consequence, while successful crossings awarded players points, serving as motivation.
From a survey of 89 participant-players, this study provided valuable information on the user perspective on AVs. Initially, the users’ survey gave us a first indication that people are not yet ready to trust autonomous vehicles on the streets. This finding aligns with a large body of research suggesting that trust is a universal factor in the comfort with and adoption of technology and automated vehicles [64,65,66,67]. In particular, Ayoub et al. identified perceptions of benefits and risks, feelings, and knowledge of AVs as key indicators of trust [66]. AVs represent a known unknown, in that the most common concerns people have with AVs involve what has yet to be standardized by government agencies, developers, and firms: liability, control, privacy, and security [8,9,10]. We found that this distrust extends not only to being passengers in such vehicles but also to facing them as pedestrians.
Further, even though the same group of people would hesitate to cross an intersection when a self-driving car approached in real life, they behaved differently in the virtual world when using the first version of the tool. Since there were no consequences when they were virtually “hit” by a vehicle, they tended to be careless or even to abuse cars that stopped before them. Using the next version of the tool, in which a reward and a penalization system were implemented, users started behaving closer to reality. Their approach was more strategic, and they were more careful in virtuality, since being “hit” by a vehicle (self-driving or not) resulted in a game over.
A final version of the tool proved the utility of creating smarter pedestrian simulators focused on ease of use and confirmed that the tool itself is usable, realistic, and not too complex for simulation use. We see that participants trust humans and technology less as drivers than in other contexts, that pedestrians feel safer with autonomy indicators, and that they increasingly feel safe with automated vehicles after having experience engaging with such vehicles.
Significant takeaways include the notion that consequence is necessary to encourage pedestrians to consider an AV and its occupants when crossing, and that imperfect information, such as uncertainty as to whether a vehicle is automated and whether that vehicle “sees” a pedestrian, may ultimately improve social welfare by raising the perceived likelihood of a negative outcome from a pedestrian’s careless crossing. It may in fact be argued that it is “better” in some cases to hide a vehicle’s automation capability and operating status, or for an automated vehicle to drive closer to a human pedestrian than strictly necessary, in order to create a “healthy respect” or fear that ultimately improves traffic flow and encourages pedestrians to keep their wits about them.
Importantly, these preliminary results studied pedestrian–vehicle interaction broadly and only from the pedestrian perspective; in future work, studying consequence and opportunity from both the pedestrian and vehicle perspective simultaneously may yield more informative results.
AV designers must also remain cognizant that familiarity breeds in pedestrians a disregard for safety around automated vehicles, and occasional “failures” may be necessary to induce in pedestrians a higher level of awareness, as well as to serve as a reminder that AVs have a job to do in getting their occupants safely and efficiently to their destinations without interruption. Development goals, then, might better strike a healthy balance between efficiency and the mutual respect of on- and near-road agents.
While these conclusions are drawn from a small-sample preliminary study, importantly, we showed that the developed tool can be used to generate insight related to automated vehicle interaction with pedestrians, and that this area of study may become increasingly useful and important in the near future. We plan to continue the use of this and related tools to conduct broader studies of pedestrian–vehicle interaction that may yield more statistically-significant results.
We encourage readers to download the latest version of the tool and all derivative datasets (Appendix A.2) to contribute to the future of this and related studies.

7. Future Work

The present work resulted in a first-person gamified research tool for studying pedestrian–autonomous vehicle interaction. This simulator addresses unmet needs for easy-to-use, pedestrian-centric tools that are widely accessible to the research community.
In the future, we intend to create a VR version of the tool so that users can have a fully immersive experience. Moreover, we intend to add more intersections and (probably) more levels so that users have more tasks and do not have to repeat tests in the same environment. Regarding the AI pedestrian models, we will add more or customize existing ones in order to make the tool even more diverse [68], e.g., by incorporating real-world data from Internet of Things sensing systems [69]. We are releasing a free executable of the tool (Appendix A.2) and plan to deploy a web-only version to expand our audience reach.
This tool provides a unique option for studying public perceptions of and behavioral intentions surrounding AVs. Additional studies could include larger and more diverse testing; comparative studies of simulation, VR, and reality; additional scenarios covering urban versus rural environments and varying weather conditions; and longitudinal human-factors studies of trust, acceptance, perceived risk, and interaction preferences with consumer implications.

Author Contributions

Conceptualization, G.P. and J.E.S.; methodology, G.P., J.E.S. and J.R.; software, G.P.; validation, G.P., J.E.S., J.R., E.K.-N., K.P. and A.A.Z.; formal analysis, J.E.S. and E.K.-N.; investigation, G.P., J.E.S. and J.R.; resources, J.E.S.; data curation, G.P., J.E.S., K.P. and A.A.Z.; writing—original draft preparation, G.P., J.E.S. and J.R.; writing—review and editing, G.P., J.E.S., E.K.-N., K.P. and A.A.Z.; visualization, G.P. and J.E.S.; supervision, G.P. and J.E.S.; project administration, G.P. and J.E.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study (STUDY00005920) on 11 March 2021, by Michigan State University, as the research was deemed exempted from IRB approval under 45 CFR 46.104(d) 3(i)(b).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Survey response data for the game build discussed in this manuscript, as well as a compiled build of the game, are available for download at [70].

Acknowledgments

We thank Andrea Schaaf for helping to design and capture samples for the tool’s initial survey, and Professor Carrie Heeter for providing important game design insights.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Appendix A.1

This study seeks to understand more about the user experience in a computer simulated interaction with autonomous vehicles, from a pedestrian perspective. This is an exploratory study in which subjects will use computer keys to navigate through cross-walks in a simulated city with both standard and autonomous vehicles, all from a pedestrian perspective. Aims are to answer the questions of how users interact with their environment and whether or not they feel a sense of trust regarding the autonomous vehicles within the simulation. These results will inform future simulation developments and research studies.
  • BRIEF SUMMARY
You are being asked to participate in a research study that is going to provide our team with valuable insights. The results will also be used in further research on self-driving vehicles by the DeepTech Lab of Michigan State University in collaboration with the National Technical University of Athens. Researchers are required to provide a consent form to inform you about the research study, to convey that participation is voluntary, and to explain the risks and benefits of participation, including why you might or might not want to participate.
  • PURPOSE OF RESEARCH
The purpose of this research study is to create a gamified research tool (application) that will help people living in urban environments build awareness of self-driving vehicles. At the current stage, we would like to gather samples of how people behave when crossing intersections in virtuality, when left free. Identifying risky or unrealistic behaviors will help our team develop the right mechanisms and preventive features for a new version of the gamified tool. We would also like to see if we can raise public awareness of autonomous vehicles.
  • WHAT YOU WILL BE ASKED TO DO
Initially, you will take a short general survey on technology and self-driving vehicles. Then, you will be able to test three (3) different scenarios, and after each one you will be required to respond to some more very short surveys. The total number of surveys, including the introductory one, will be four (4).
  • POTENTIAL BENEFITS
This research will enlighten our team on developing the right type of mechanisms to correctly simulate realistic behaviors inside our research tool. Ultimately, this will help us build an accurate research application that will train people and raise awareness of self-driving vehicles.
  • POTENTIAL RISKS
There are no foreseeable risks to participating.
  • PRIVACY AND CONFIDENTIALITY
During the research, we will NOT know your identity. We will gather your survey responses and, in addition to them, we will automatically collect some personal data such as location (country and city of residence) and your in-game behavior (being “hit” by a car, time, and score). Your avatar will be associated with a generated key, unique for each participant, that will ensure full anonymity and will not trace back to you in any way.
  • YOUR RIGHTS TO PARTICIPATE, SAY NO, OR WITHDRAW
Participation in this study is voluntary. You may withdraw at any time without penalty.
  • CONTACT INFORMATION
If you have concerns or questions about this study, please contact the researcher:
Georgios Pappas,
If you have questions or concerns about your role and rights as a research participant, would like to obtain information or offer input, or would like to register a complaint about this study, you may contact, anonymously if you wish, the Principal Investigator of this research, Joshua Siegel, [email protected].
  • DOCUMENTATION OF INFORMED CONSENT
If you decide to participate in this study, please sign the form below with your full name. This will not be associated with your responses or performance results. If you do not agree to the above terms, please close this window and do not proceed further. You may keep a copy of this consent form if you would like.

Appendix A.2

The Pedestrian Tool and results for this survey, as well as future builds and survey results, may be downloaded from [70].

References

  1. Tirachini, A.; Antoniou, C. The economics of automated public transport: Effects on operator cost, travel time, fare and subsidy. Econ. Transp. 2020, 21, 100151. [Google Scholar] [CrossRef]
  2. Piao, J.; McDonald, M.; Hounsell, N.; Graindorge, M.; Graindorge, T.; Malhene, N. Public Views towards Implementation of Automated Vehicles in Urban Areas; Elsevier: Amsterdam, The Netherlands, 2016; Volume 14, pp. 2168–2177. [Google Scholar] [CrossRef] [Green Version]
  3. Deb, S.; Strawderman, L.J.; Carruth, D.W. Investigating Pedestrian Suggestions for External Features on Fully Autonomous Vehicles: A Virtual Reality Experiment; Elsevier: Amsterdam, The Netherlands, 2018; Volume 59, pp. 135–149. [Google Scholar] [CrossRef]
  4. Habibovic, A.; Lundgren, V.M.; Andersson, J.; Klingegård, M.; Lagström, T.; Sirkka, A.; Fagerlönn, J.; Edgren, C.; Fredriksson, R.; Krupenia, S.; et al. Communicating intent of automated vehicles to pedestrians. Front. Psychol. 2018, 9, 1336. [Google Scholar] [CrossRef] [PubMed]
  5. Voukkali, I.; Zorpas, A. Evaluation of urban metabolism assessment methods through SWOT analysis and analytical hierocracy process. Sci. Total Environ. 2021, 807, 150700. [Google Scholar] [CrossRef]
  6. Pappas, G.; Papamichael, I.; Zorpas, A.; Siegel, J.E.; Rutkowski, J.; Politopoulos, K. Modelling Key Performance Indicators in a Gamified Waste Management Tool. Modelling 2021, 3, 27–53. [Google Scholar] [CrossRef]
  7. Kassens-Noor, E.; Kotval-Karamchandani, Z.; Cai, M. Willingness to ride and perceptions of autonomous public transit. Transp. Res. Part A Policy Pract. 2020, 138, 92–104. [Google Scholar] [CrossRef]
  8. Parida, S.; Franz, M.; Abanteriba, S.; Mallavarapu, S.S.C. Autonomous Driving Cars: Future Prospects, Obstacles, User Acceptance and Public Opinion. In Advances in Human Aspects of Transportation, Proceedings of the AHFE 2018 International Conference on Human Factors in Transportation, Orlando, FL, USA, 21–25 July 2018; Springer: Berlin/Heidelberg, Germany, 2019; pp. 318–328. [Google Scholar]
  9. Kaur, K.; Rampersad, G. Trust in Driverless Cars: Investigating Key Factors Influencing the Adoption of Driverless Cars. 2018. Available online: http://www.scopus.com/inward/record.url?scp=85046728683&partnerID=8YFLogxK (accessed on 1 September 2021).
  10. Schoettle, B.; Sivak, M. A Survey of Public Opinion about Autonomous and Self-Driving Vehicles in the US, the UK, and Australia; Technical Report; University of Michigan Transportation Research Institute: Ann Arbor, MI, USA, 2014. [Google Scholar]
  11. Kassens-Noor, E.; Cai, M.; Kotval-Karamchandani, Z.; Decaminada, T. Autonomous Vehicles and Mobility for People with Special Needs. Transp. Res. Part A Policy Pract. 2021, 150, 385–397. [Google Scholar] [CrossRef]
  12. Siegel, J.; Pappas, G. Morals, ethics, and the technology capabilities and limitations of automated and self-driving vehicles. AI Soc. 2021. [Google Scholar] [CrossRef]
  13. Kassens-Noor, E.; Siegel, J.; Decaminada, T. Choosing Ethics Over Morals: A Possible Determinant to Embracing Artificial Intelligence in Future Urban Mobility. Front. Sustain. Cities 2021, 3, 723475. [Google Scholar] [CrossRef]
  14. Papamichael, I.; Pappas, G.; Siegel, J.; Zorpas, A. Unified waste metrics: A gamified tool in next-generation strategic planning. Sci. Total Environ. 2022, 833, 154835. [Google Scholar] [CrossRef]
  15. Shah, S.; Dey, D.; Lovett, C.; Kapoor, A. AirSim: High-Fidelity Visual and physical simulation for autonomous vehicles. Field Serv. Robot. 2017, 5, 621–635. [Google Scholar] [CrossRef] [Green Version]
  16. Dosovitskiy, A.; Ros, G.; Codevilla, F.; López, A.; Koltun, V. CARLA: An open urban driving simulator. In Proceedings of the 1st Annual Conference on Robot Learning, Mountain View, CA, USA, 13–15 November 2017; Volume 78, pp. 1–16. [Google Scholar]
  17. Karur, K.; Pappas, G.; Siegel, J.; Zhang, M. End-to-End Synthetic LiDAR Point Cloud Data Generation and Deep Learning Validation; SAE Technical Paper; SAE International: Warrendale, PA, USA, 2022; Available online: https://www.sae.org/publications/technical-papers/content/2022-01-0164/ (accessed on 29 March 2022).
  18. Deb, S.; Carruth, D.W.; Sween, R.; Strawderman, L.; Garrison, T.M. Efficacy of Virtual Reality in Pedestrian Safety Research; Elsevier: Amsterdam, The Netherlands, 2017; Volume 65, pp. 449–460. [Google Scholar] [CrossRef]
  19. Pappas, G.; Siegel, J.E.; Rutkowski, J.; Schaaf, A. Game and Simulation Design for Studying Pedestrian-Automated Vehicle Interactions. arXiv 2021. [Google Scholar] [CrossRef]
  20. Shabanpour, R.; Golshani, N.; Shamshiripour, A.; Mohammadian, A.K. Eliciting preferences for adoption of fully automated vehicles using best-worst analysis. Transp. Res. Part C Emerg. Technol. 2018, 93, 463–478. [Google Scholar] [CrossRef]
  21. Yurtsever, E.; Lambert, J.; Carballo, A.; Takeda, K. A Survey of Autonomous Driving: Common Practices and Emerging Technologies. IEEE Access. 2020, 8, 58443–58469. [Google Scholar] [CrossRef]
  22. Shahrdar, S.; Park, C.; Nojoumian, M. Human trust measurement using an immersive virtual reality autonomous vehicle simulator. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES ’19, Honolulu, HI, USA, 27–28 January 2019; pp. 515–520. [Google Scholar]
  23. Ackermann, C.; Beggiato, M.; Schubert, S.; Krems, J.F. An Experimental Study to Investigate Design and Assessment Criteria: What Is Important for Communication between Pedestrians and Automated Vehicles? Elsevier: Amsterdam, The Netherlands, 2019; Volume 75, pp. 272–282. [Google Scholar] [CrossRef]
  24. Keeling, G.; Evans, K.; Thornton, S.M.; Mecacci, G.; Santoni de Sio, F. Four Perspectives on What Matters for the Ethics of Automated Vehicles. Road Veh. Autom. 2019, 6, 49–60. [Google Scholar]
  25. Yigitcanlar, T.; Wilson, M.; Kamruzzaman, M. Disruptive Impacts of Automated Driving Systems on the Built Environment and Land Use: An Urban Planner’s Perspective. J. Open Innov. Technol. Mark. Complex. 2019, 5, 24. [Google Scholar] [CrossRef] [Green Version]
  26. Kassens-Noor, E.; Wilson, M.; Cai, M.; Durst, N.; Decaminada, T. Autonomous vs. Self-Driving Vehicles: The Power of Language to Shape Public Perceptions. J. Urban Technol. 2021, 28, 5–24. [Google Scholar] [CrossRef]
  27. Golbabaei, F.; Yigitcanlar, T.; Paz, A.; Bunker, J. Individual Predictors of Autonomous Vehicle Public Acceptance and Intention to Use: A Systematic Review of the Literature. J. Open Innov. Technol. Mark. Complex. 2020, 6, 106. [Google Scholar] [CrossRef]
  28. Reig, S.; Norman, S.; Morales, C.G.; Das, S.; Steinfeld, A.; Forlizzi, J. A field study of pedestrians and autonomous vehicles. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI ’18, Toronto, ON, Canada, 23–25 September 2018; pp. 198–209. [Google Scholar]
  29. Li, N.; Kolmanovsky, I.; Girard, A.; Yildiz, Y. Game Theoretic Modeling of Vehicle Interactions at Unsignalized Intersections and Application to Autonomous Vehicle Control. In Proceedings of the 2018 Annual American Control Conference (ACC), Milwaukee, WI, USA, 27–29 June 2018; Volume 2018, pp. 3215–3220. [Google Scholar]
  30. Chen, B.; Zhao, D.; Peng, H. Evaluation of automated vehicles encountering pedestrians at unsignalized crossings. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 1679–1685. [Google Scholar]
  31. Fisac, J.F.; Bronstein, E.; Stefansson, E.; Sadigh, D.; Sastry, S.S.; Dragan, A.D. Hierarchical game-theoretic planning for autonomous vehicles. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; Volume 2019, pp. 9590–9596. [Google Scholar]
  32. Song, Y.; Lehsing, C.; Fuest, T.; Bengler, K. External HMIs and Their Effect on the Interaction Between Pedestrians and Automated Vehicles BT—Intelligent Human Systems Integration; Springer International Publishing: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  33. Robert, L. The Future of Pedestrian-Automated Vehicle Interactions. XRDS Crossroads ACM Mag. Stud. 2019, 25, 30–33. [Google Scholar] [CrossRef]
  34. Ezzati Amini, R.; Katrakazas, C.; Riener, A.; Antoniou, C. Interaction of automated driving systems with pedestrians: Challenges, current solutions, and recommendations for eHMIs. Transp. Rev. 2021, 41, 788–813. [Google Scholar] [CrossRef]
  35. Thornton, S.M.; Pan, S.; Erlien, S.M.; Gerdes, J.C. Incorporating Ethical Considerations into Automated Vehicle Control. IEEE Trans. Intell. Transp. Syst. 2017, 18, 1429–1439. [Google Scholar] [CrossRef]
  36. Mirnig, A.G.; Meschtscherjakov, A. Trolled by the Trolley Problem: On What Matters for Ethical Decision Making in Automated Vehicles. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI’19, Glasgow, UK, 4–9 May 2019; pp. 1–10. [Google Scholar]
  37. Keeling, G. Why Trolley Problems Matter for the Ethics of Automated Vehicles; Springer: Dordrecht, The Netherlands, 2020; Volume 26, pp. 293–307. [Google Scholar] [CrossRef] [Green Version]
  38. Jayaraman, S.; Tilbury, D.; Pradhan, A.; Robert, L.J. Analysis and Prediction of Pedestrian Crosswalk Behavior during Automated Vehicle Interactions. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020. [Google Scholar]
  39. Juliani, A.; Berges, V.P.; Teng, E.; Cohen, A.; Harper, J.; Elion, C.; Goy, C.; Gao, Y.; Henry, H.; Mattar, M.; et al. Unity: A General Platform for Intelligent Agents. arXiv 2018. [Google Scholar] [CrossRef]
  40. Pappas, G.; Siegel, J.E.; Politopoulos, K.; Sun, Y. A Gamified Simulator and Physical Platform for Self-Driving Algorithm Training and Validation. Electronics 2021, 10, 1112. [Google Scholar] [CrossRef]
  41. Nguyen, V.T.; Dang, T. Setting up Virtual Reality and Augmented Reality Learning Environment in Unity. In Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), Nantes, France, 9–13 October 2017; pp. 315–320. [Google Scholar]
  42. Morschheuser, B.; Hassan, L.; Werder, K.; Hamari, J. How to design gamification? A method for engineering gamified software. Inf. Softw. Technol. 2018, 95, 219–237. [Google Scholar] [CrossRef] [Green Version]
  43. Fernandez, M. Augmented-Virtual Reality: How to improve education systems. High. Learn. Res. Commun. 2017, 7, 1–15. [Google Scholar] [CrossRef]
  44. Löcken, A.; Golling, C.; Riener, A. How Should Automated Vehicles Interact with Pedestrians? A Comparative Analysis of Interaction Concepts in Virtual Reality. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Utrecht, The Netherlands, 21–25 September 2019; pp. 262–274. [Google Scholar] [CrossRef]
  45. Pappas, G.; Siegel, J.; Politopoulos, K. VirtualCar: Virtual Mirroring of IoT-Enabled Avacars in AR, VR and Desktop Applications. In Proceedings of the ICAT-EGVE 2018—International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, Limassol, Cyprus, 7–9 November 2018; pp. 15–17. [Google Scholar]
  46. Riegler, A.; Riener, A.; Holzmann, C. A Research Agenda for Mixed Reality in Automated Vehicles. In Proceedings of the 19th International Conference on Mobile and Ubiquitous Multimedia, Essen, Germany, 22–25 November 2020; pp. 119–131. [Google Scholar] [CrossRef]
  47. Gee, J.P. Video games and embodiment. Games Cult. 2008, 3, 253–263. [Google Scholar] [CrossRef] [Green Version]
  48. Moghimi, M.; Stone, R.; Rotshtein, P.; Cooke, N. The Sense of embodiment in Virtual Reality. Presence Teleoperators Virtual Environ. 2016, 25, 81–107. [Google Scholar] [CrossRef] [Green Version]
  49. Alton, C. Experience, 60 Frames Per Second: Virtual Embodiment and the Player/Avatar Relationship in Digital Games. Load. J. Can. Game Stud. Assoc. 2017, 10, 214–227. [Google Scholar]
  50. Kiela, D.; Bulat, L.; Vero, A.L.; Clark, S. Virtual Embodiment: A Scalable Long-Term Strategy for Artificial Intelligence Research. arXiv 2016. [Google Scholar] [CrossRef]
  51. Gonzalez-Franco, M.; Peck, T.C. Avatar embodiment. Towards a standardized questionnaire. Front. Robot. AI 2018, 5, 1–9. [Google Scholar] [CrossRef]
  52. Kassens-Noor, E.; Dake, D.; Decaminada, T.; Kotval-K, Z.; Qu, T.; Wilson, M.; Pentland, B. Sociomobility of the 21st century: Autonomous vehicles, planning, and the future city. Transp. Policy 2020, 99, 329–335. [Google Scholar] [CrossRef]
  53. Lelièvre, E. Research-Creation Methodology for Game Research. 2018. Available online: https://hal.archives-ouvertes.fr/hal-02615671 (accessed on 1 September 2021).
  54. Salen, K.; Zimmerman, E. Rules of Play: Game Design Fundamentals; MIT Press: Cambridge, MA, USA, 2003. [Google Scholar]
  55. Ott, M.; De Gloria, A.; Arnab, S.; Bellotti, F.; Kiili, K.; de Freitas, S.; Berta, R. Designing serious games for education: From pedagogical principles to game mechanisms. In Proceedings of the 7th European Conference on Management Leadership and Governance, ECMLG 2011, Sophia-Antipolis, France, 6–7 October 2011; Volume 2011. [Google Scholar]
  56. Michael, D.R.; Chen, S. Serious Games: Games that Educate, Train, and Inform; Muska & Lipman/Premier-Trade: Boston, MA, USA, 2005. [Google Scholar]
  57. Schell, J. The Art of Game Design: A Book of Lenses; AK Peters: Natick, MA, USA; CRC Press: Boca Raton, FL, USA, 2014. [Google Scholar]
  58. Zimmerman, E. Play as Research: The Iterative Design Process. Des. Res. Methods Perspect. 2003, 2003, 176–184. [Google Scholar]
  59. Lazzaro, N. The 4 Keys 2 Fun. Available online: http://www.nicolelazzaro.com/the4-keys-to-fun/ (accessed on 1 November 2020).
  60. Bartle, R. Hearts, clubs, diamonds, spades: Players who suit MUDs. J. Mud Res. 1996, 1, 19. [Google Scholar]
  61. Schneider, M.O.; Moriya, É.T.U.; da Silva, A.V.; Néto, J.C. Analysis of Player Profiles in Electronic Games applying Bartle’s Taxonomy. In Proceedings of the SBC—Proceedings of SBGames 2016, São Paulo, Brazil, 8–10 September 2016. [Google Scholar]
  62. Brooke, J. SUS: A “quick and dirty” usability scale. In Usability Evaluation in Industry; CRC Press: London, UK, 1996; p. 189. [Google Scholar]
  63. McGloin, R.; Farrar, K.; Krcmar, M. Video games, immersion, and cognitive aggression: Does the controller matter? Media Psychol. 2013, 16, 65–87. [Google Scholar] [CrossRef]
  64. Erskine, M.A.; Brooks, S.; Greer, T.H.; Apigian, C. From driver assistance to fully-autonomous: Examining consumer acceptance of autonomous vehicle technologies. J. Consum. Mark. 2020, 37, 883–894. [Google Scholar] [CrossRef]
  65. Choi, J.K.; Ji, Y.G. Investigating the Importance of Trust on Adopting an Autonomous Vehicle. Int. J. Hum. Comput. Interact. 2015, 31, 692–702. [Google Scholar] [CrossRef]
  66. Ayoub, J.; Yang, X.J.; Zhou, F. Modeling dispositional and initial learned trust in automated vehicles with predictability and explainability. Transp. Res. Part F Traffic Psychol. Behav. 2021, 77, 102–116. [Google Scholar] [CrossRef]
  67. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward a Unified View. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef] [Green Version]
  68. Kafai, Y.B.; Richard, G.T.; Tynes, B.M. Diversifying Barbie and Mortal Kombat: Intersectional Perspectives and Inclusive Designs in Gaming; Lulu Press: Morrisville, NC, USA, 2016; Available online: http://press.etc.cmu.edu/files/Diversifying-Barbie-Mortal-Kombat_Kafai-Richard-Tynes-etal-web.pdf (accessed on 1 September 2021).
  69. Pappas, G.; Siegel, J.; Vogiatzakis, I.; Politopoulos, K. Gamification and the Internet of Things in Education. In Handbook of Intelligent Techniques in Educational Process; Springer: Berlin/Heidelberg, Germany, 2022. [Google Scholar] [CrossRef]
  70. Siegel, J.; Pappas, G.; Rutkowski, J. Game and Survey Data for “Game-Based Simulation and Study of Pedestrian-Automated Vehicle Interactions”. Harv. Dataverse 2022. [Google Scholar] [CrossRef]
Figure 1. The invisible collider system is the main mechanic that the scoring system is built upon.
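For readers implementing a similar crossing game, the collider-and-score mechanic of Figure 1 can be sketched in Unity C# roughly as follows. This is a minimal illustration only: the class, tag, and point values (CrossingGoal, ScoreManager, "Player", pointsPerCrossing) are assumptions for this sketch, not the tool’s actual code.

```csharp
using UnityEngine;

// Illustrative sketch: an invisible trigger volume on the far side of a crossing
// awards points when the pedestrian avatar passes through it.
public class CrossingGoal : MonoBehaviour
{
    [SerializeField] private int pointsPerCrossing = 100; // assumed value

    private void OnTriggerEnter(Collider other)
    {
        // Only the player avatar should trigger the score update.
        if (other.CompareTag("Player"))
        {
            ScoreManager.Instance.AddPoints(pointsPerCrossing);
        }
    }
}

// Minimal singleton keeping the running score shown in the Score/Game Over scene.
public class ScoreManager : MonoBehaviour
{
    public static ScoreManager Instance { get; private set; }
    public int Score { get; private set; }

    private void Awake() => Instance = this;

    public void AddPoints(int points) => Score += points;
}
```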
Figure 2. The Score/Game Over Scene shows the participant’s score based on their successful intersection crossings.
Figure 3. This survey appears after a user completes the first scenario.
Figure 4. This survey appears after a user completes the second scenario.
Figure 5. This survey appears after a user completes the third scenario.
Figure 6. The Intake Survey is one of the four in-game surveys. Users need to complete it before testing any of the scenarios.
Figure 7. The consent form informs the users about the terms of use of the tool.
Figure 8. The main menu helps the participants to switch between the various scenarios of the tool.
Figure 9. The urban area of the three scenarios features various buildings, vehicles, a skybox, and a weather system.
Figure 10. The guidelines explain the concept of each scenario before loading it.
Figure 11. The schematic explains unified participant data collection from multiple sources (surveys, external APIs, in-game mechanics).
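As a rough illustration of the unified data flow in Figure 11, the sketch below shows how survey answers, geolocation fields, and in-game events could be bundled under one generated participant key and posted to a collection endpoint from Unity C#. All field names and the endpoint URL are hypothetical; they do not describe the tool’s actual schema or backend.

```csharp
using System;
using UnityEngine;
using UnityEngine.Networking;

// Illustrative record combining survey, location, and in-game data under an
// anonymous, per-participant key (as described in the consent form).
[Serializable]
public class ParticipantRecord
{
    public string participantKey = Guid.NewGuid().ToString(); // generated anonymous key
    public string country;          // from an external geolocation API
    public string city;
    public int[] surveyResponses;   // Likert answers, in question order
    public int gameScore;           // from the in-game scoring mechanic
    public int collisions;          // times the avatar was "hit" by a vehicle
    public float elapsedSeconds;
}

public class DataUploader : MonoBehaviour
{
    private const string Endpoint = "https://example.org/api/records"; // placeholder URL

    public void Upload(ParticipantRecord record)
    {
        string json = JsonUtility.ToJson(record);
        var request = new UnityWebRequest(Endpoint, UnityWebRequest.kHttpVerbPOST);
        request.uploadHandler = new UploadHandlerRaw(System.Text.Encoding.UTF8.GetBytes(json));
        request.downloadHandler = new DownloadHandlerBuffer();
        request.SetRequestHeader("Content-Type", "application/json");
        request.SendWebRequest(); // fire-and-forget; a coroutine would await completion
    }
}
```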
Figure 12. In scenario 3, the AVs are identified by a green light underneath. The light appears when the pedestrian is 15 m or less away from the vehicle.
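The proximity-triggered indicator of Figure 12 can be approximated with a simple distance check in Unity C#, as in the hedged sketch below. The component and field names (AutonomyIndicator, activationDistance, pedestrian) are assumptions made for this example; only the 15 m threshold comes from the study.

```csharp
using UnityEngine;

// Illustrative sketch: a green under-body light on an AV that is enabled only
// when the pedestrian avatar is within a configurable distance.
public class AutonomyIndicator : MonoBehaviour
{
    [SerializeField] private Light indicatorLight;      // green light under the AV
    [SerializeField] private Transform pedestrian;      // player avatar transform
    [SerializeField] private float activationDistance = 15f; // meters, per Figure 12

    private void Update()
    {
        float distance = Vector3.Distance(transform.position, pedestrian.position);
        indicatorLight.enabled = distance <= activationDistance;
    }
}
```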
Table 1. Comparison among three significant pedestrian–AV interaction simulators on topics ranging from game engine and development complexity, to usability and technical audience, to visual fidelity of virtual environments, and to pedestrian point of view.
Feature | CARLA Simulator | AirSim | Mississippi State University Tool
Availability | Free and Open Source | Free and Open Source | Not on Public Repository
Unity | No Compatibility | Experimental Compatibility | Compatible
Unreal Engine | Compatible | Compatible | No Compatibility
Technical Development | C++, Python | C++, Python, C# | No Compatibility
Ease of Use | Hard to Use | Hard to Use | Easy to use
Type of Application | Desktop | Desktop | Virtual Reality
Target Audience | Mainly Programmers or Engineers | Mainly Programmers or Engineers | Academia Audience
Graphics | High | High | Average
Pedestrian Point of View | No Pedestrian POV | No Pedestrian POV | Pedestrian POV
Table 2. Summarized results from the intake survey and simulator scenarios 1–3. Question labels Q1–Q13 and the game score S1 are defined in Table 3; cells report the mean μ with standard deviation σ.
Scenario (n) | Question responses, μ (σ) | Game score S1, μ (σ)
Intake (n = 89) | Q1: 2.91 (0.90); Q2: 2.39 (0.95); Q3: 2.61 (1.02); Q4: 3.85 (0.83); Q5: 3.28 (0.84); Q6: 3.28 (0.94); Q7: 3.21 (0.79); Q8: 3.57 (0.90) | —
1 (n = 78) | 3.63 (1.19); 3.81 (1.08) | 544 (459)
2 (n = 67) | 3.29 (1.22); 3.56 (0.98); 2.01 (1.09) | 565 (419)
3 (n = 62) | 2.66 (0.96); 3.68 (0.97); 3.40 (1.17); 3.56 (1.11); 3.94 (0.96) | 642 (470)
Table 3. Questions associated with the survey, as shortened in the results table.
Question | Low (Text) | High (Text)
Q1. In general, I trust humans | 1 (Strongly Disagree) | 5 (Strongly Agree)
Q2. In general, I trust human drivers | 1 (Strongly Disagree) | 5 (Strongly Agree)
Q3. As a pedestrian, I trust human drivers to stop for me when I cross the road | 1 (Strongly Disagree) | 5 (Strongly Agree)
Q4. In general, I trust technology | 1 (Strongly Disagree) | 5 (Strongly Agree)
Q5. In general, I trust automated vehicles | 1 (Strongly Disagree) | 5 (Strongly Agree)
Q6. As a pedestrian, I trust automated vehicles to stop for me when I cross the road | 1 (Strongly Disagree) | 5 (Strongly Agree)
Q7. Knowing that a vehicle near me is automated makes me | 1 (Very Uncomfortable) | 5 (Very Comfortable)
Q8. Knowing that an automated vehicle “sees me” makes me | 1 (Feel Very Unsafe) | 5 (Feel Very Safe)
Q9. When crossing the road, I felt safe | 1 (Strongly Disagree) | 5 (Strongly Agree)
Q10. Successfully crossing the road made me feel very confident | 1 (Strongly Disagree) | 5 (Strongly Agree)
Q11. I believe I can tell which cars are automated and which are human driven | 1 (Strongly Disagree) | 5 (Strongly Agree)
Q12. I was more likely to walk in front of oncoming automated vehicles than human-operated vehicles | 1 (Strongly Disagree) | 5 (Strongly Agree)
Q13. I am more comfortable around a vehicle I KNOW is automated than one I THINK is automated | 1 (Strongly Disagree) | 5 (Strongly Agree)
S1. Game Score (points) | 0 | ≈2000
