Augmented Reality for Vehicle-Driver Communication: A Systematic Review
Abstract
1. Introduction
1.1. Relevant Past Reviews
1.2. Aim of the Study
2. Materials and Methods
Paper Selection
- Peer-reviewed, original full research
- Clearly describes the AR design and its target features
- Graphical representation of the displays
- User testing and evaluative comparison of AR display designs using statistical analysis
- Originally published in the past decade (2012–2022)
- In English
- Exclusion: Work-in-progress articles
- Exclusion: AR that communicates to actors outside the vehicle
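Read together, these criteria form a conjunctive filter over candidate records. Purely as an illustrative aid (not a tool used in this review), the sketch below encodes the screening logic as a predicate; the `Record` fields are hypothetical names for the attributes a screener would extract.

```python
# Hypothetical screening predicate; field names are illustrative only.
from dataclasses import dataclass

@dataclass
class Record:
    peer_reviewed: bool
    original_full_paper: bool
    describes_ar_design: bool
    graphical_display_shown: bool
    statistical_user_evaluation: bool
    year: int
    in_english: bool
    work_in_progress: bool              # exclusion criterion
    ar_directed_outside_vehicle: bool   # exclusion criterion

def passes_screening(r: Record) -> bool:
    """True if a record meets every inclusion criterion and no exclusion criterion."""
    inclusion = (
        r.peer_reviewed
        and r.original_full_paper
        and r.describes_ar_design
        and r.graphical_display_shown
        and r.statistical_user_evaluation
        and 2012 <= r.year <= 2022
        and r.in_english
    )
    exclusion = r.work_in_progress or r.ar_directed_outside_vehicle
    return inclusion and not exclusion

# Example: a 2015 peer-reviewed full paper meeting all criteria passes.
example = Record(True, True, True, True, True, 2015, True, False, False)
print(passes_screening(example))  # True
```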
3. Results
3.1. High-Level Overview of Reviewed Articles
3.1.1. Articles over Time
Reference | Year | Publication Type | Topic | N Participants | Mean Age | Age Range | Gender M:F (Other) | ADS Level | Study Design | Display Mode | Display Design | Displayed Information |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Colley et al. [39] | 2021 | Conference | Communicating hazard detection | 20 | 32.05 | N/A | 10:10 | HAV | Within | WSD | Warning symbols; lightband; bounding circle; bounding circle & symbol | Hazard detection; pedestrian detection; vehicle detection |
Colley et al. [52] | 2020 | Conference | Communicating pedestrian recognition and intent | 15 | 25.33 | N/A | 11:4 | HAV | Within | WSD | Symbol & icons | Pedestrian detection & intention |
Colley et al. [53] | 2021 | Conference | Communicating hazard detection | Pilot: 32 Main: 41 | Pilot: 27.06 Main: 27.63 | N/A | Pilot: 23:9 Main: 20:21 | HAV | Within | WSD | Solid highlight | Dynamic (vehicles; pedestrians); static (road signs) |
Colley et al. [54] | 2021 | Conference | Takeover performance | 255 | 31.46 | N/A | 139:86 | 2 | Between | WSD | Warning symbols; bounding boxes | Hazard detection |
Currano et al. [55] | 2021 | Conference | Situation Awareness via AR | 298 | 35.1 | N/A | 156:139 | N/A | Mixed | Simulated optical HUD | Circle; highlighting; arrow | Hazard detection; intended route |
Detjen et al. [56] | 2021 | Conference | Intended route navigation | 27 | 21.74 | N/A | 24:3 | Study 1: 3–5 Study 2: 2 | Within | WSD | Symbols and icons; arrows | Intended route; vehicle detection |
Du et al. [18] | 2021 | Conference | Acceptance of alert system | 60 | 24 | 18–45 | 35:25 | 3 | Mixed | WSD | Bounding boxes; warning symbols; arrows | Hazard detection; road signs |
Ebnali et al. [57] | 2020 | Conference | Communicating system status and reliability | 15 | 26.02 | 21–34 | 7:8 | 2 | Mixed | WSD | Lane marking | Automation reliability |
Eyraud et al. [41] | 2015 | Journal | Visual Attention Allocation | 48 | 34 | 21–53 | 27:21 | 0 | Mixed | WSD | Solid highlight | Intended route; road signs |
Faria et al. [58] | 2021 | Conference | Takeover performance | 8 | 23.56 | 18–30 | N/A | 2 | Within | WSD | Bounding boxes | Pedestrian detection; Vehicle detection |
Fu et al. [59] | 2013 | Conference | Improving safety through AR | 24 | Young: 20 Older: 65 | 18–75 | 14:10 | 0 | Mixed | Projected on screen | Colored blocks; lines | Merging traffic; braking vehicles |
Gabbard et al. [37] | 2019 | Journal | Intended route navigation | 22 | Males: 20.3 Females: 20.4 | N/A | 13:9 | 0 | Within | Optical HUD | Arrows | Intended route |
Hwang et al. [60] | 2016 | Journal | Risk perception using AR | 28 | AR: 32.27 Control: 39.08 | N/A | 28:0 | 0 | Between | Optical HUD | Lines | Pedestrian detection; Vehicle detection |
Jing et al. [42] | 2022 | Journal | Takeover performance | 36 | 22.7 | 18–31 | 24:12 | 3 | Between | WSD | Arrows; highlighting | Pedestrian detection; Vehicle detection |
Kim & Gabbard [61] | 2019 | Journal | Visual Distraction by AR interfaces | 23 | 21.4 | N/A | N/A | N/A | Within | WSD | Bounding boxes; shadow highlighting | Pedestrian detection |
Kim et al. [38] | 2018 | Journal | Communicating hazard detection | 16 | 42 | N/A | N/A | 0 | Within | Optical HUD | Shadow highlighting | Pedestrian detection |
Lindemann et al. [62] | 2018 | Journal | Situation Awareness via AR | 32 | 27 | 19–58 | 24:8 | 4–5 | Within | WSD | Symbols | Automation reliability; hazard detection; intended route |
Lindemann et al. [40] | 2019 | Conference | Takeover performance | 18 | 25 | N/A | 10:8 | 3 | Within | WSD | Arrows; highlighting | Hazard detection; intended route |
Merenda et al. [63] | 2018 | Journal | Location identification of parking spaces and pedestrians | 24 | 27.35 | 18–40 | 17:7 | 0 | Within | Optical HUD | Warning symbols | Object location |
Oliveira et al. [64] | 2020 | Journal | Communicating navigation and hazard detection | 25 | N/A | N/A | 21:4 | 4 | Within | WSD | Warning symbols; lane marking | Hazard detection; intended route |
Pfannmüller et al. [65] | 2015 | Conference | Intended route navigation | 30 | 32.9 | 22–52 | 23:7 | N/A | Within | Optical HUD | Arrows | Intended route |
Phan et al. [66] | 2016 | Conference | Communicating hazard detection | 25 | N/A | 21–35 | 21:4 | 0 | Within | Simulated optical HUD | Bounding boxes; warning symbols | Pedestrian detection |
Rusch et al. [67] | 2013 | Journal | Directing attention | 27 | 45 | N/A | 13:14 | 0 | Within | WSD | Converging bounding box | Pedestrian detection; vehicle detection; road signs |
Schall et al. [68] | 2013 | Journal | Hazard detection in elderly | 20 | 73 | 65–85 | 13:7 | 0 | Within | WSD | Converging bounding box | Pedestrian detection; vehicle detection; road signs |
Schewe & Vollrath [69] | 2021 | Journal | Visualizing vehicle’s maneuvers | 43 | Males: 37 Females: 34 | N/A | 35:8 | 2 | Within | WSD | Symbol; lane marking | Speed changes |
Schneider et al. [70] | 2021 | Conference | UX of AR transparency | 40 | 24.65 | N/A | 26:14 | 5 | Mixed | WSD | Bounding boxes; lane marking | Hazard detection; intended route |
Schwarz & Fastenmeier [71] | 2017 | Journal | AR warnings | 81 | 31 | 20–54 * | 70 *:18 * | 0 | Between | Optical HUD | Warning symbols; arrows | Hazard detection |
Schwarz & Fastenmeier [72] | 2018 | Journal | AR warnings | 80 | 31 * | 22–55 | 70:10 | 0 | Between | Optical HUD | Warning symbols | Hazard detection |
Wintersberger et al. [73] | 2017 | Conference | Trust in ADSs through AR | 26 | 23.77 | 19–35 | 15:11 | 5 | Within | WSD | Symbols & icons | Vehicle detection |
Wintersberger et al. [43] | 2018 | Journal | Trust in ADSs through AR | Study 1: 26 Study 2: 18 | Study 1: 23.77 Study 2: 24.8 | Study 1: 19–35 Study 2: 19–41 | Study 1: 15:11 Study 2: 12:6 | 5 | Within | WSD | Symbols & icons; arrows | Vehicle detection; intended route |
Wu et al. [74] | 2020 | Conference | Communicating hazard detection | 16 | N/A | N/A | 12:4 | 2 | Within | WSD | Bounding boxes | Pedestrian detection; vehicle detection |
3.1.2. Article Origin
3.1.3. Article Type and Design
3.1.4. Participants
3.1.5. Accessibility
3.1.6. Metrics
3.1.7. Experimental Procedure and Analysis
3.1.8. Automation Level
3.1.9. AR Modality, Information Displayed, and Visualization
3.2. AR Visualizations: Key Results
3.2.1. Object Detection
Bounding Shapes
Highlighting
Symbols and Icons
Lines
3.2.2. Intended Route
3.2.3. Automation Reliability
3.2.4. Speed
4. Discussion
4.1. High-Level Descriptives
4.2. AR Designs
4.3. Future Research
- User reporting and inclusive design. Most studies in this review predominantly recruited young, healthy, white males. Sampling largely from this group limits generalizability to other populations and constrains considerations of inclusive AR interface design. To ensure the mobility advantages of ADS-equipped vehicles are shared by all, research should evaluate interfaces with individuals from vulnerable populations, those who are neurodiverse, and those with greater visual accessibility needs, such as people who are colorblind.
- Outdoor studies with AR. Most of the research was conducted in safe, controlled laboratory settings. As in-vehicle AR technology advances, there is strong motivation to move research toward outdoor settings. This could be a progressive shift from laboratory studies to outdoor test tracks and then to on-road testing, in order to understand how environmental and social factors influence the interaction between AR interfaces and drivers’ behaviors and perceptions.
- Sub-optimal driving conditions. All but two articles evaluated AR displays during sunny, optimal conditions with clear visibility. The two articles that tested AR interfaces under impaired visibility (i.e., foggy weather) found effects that differed from clear-weather driving. As automated features advance and operation becomes less restricted to optimal conditions, a research avenue is to understand how behaviors, attitudes, or reliance on the AR interface change when the visibility of road elements is poorest, such as during foggy or night-time driving.
- Longitudinal impact. The reviewed articles involved one-off interactions or scenarios repeated within the same day, so the reported results could be artefacts of novel interactions. As drivers increasingly engage with these emerging systems, further examination is required to understand how drivers’ behaviors adapt over time as familiarity with the AR increases and expectations are dynamically calibrated.
- Visual complexity, relevance, and clutter. Many of the articles compared different levels of visual complexity within the same AR design (e.g., highlighting dynamic objects versus dynamic and static objects) or compared against a control group with no interface. As the visual complexity of an interface increases, so does the risk of visual clutter. For example, an interface that highlights vehicles and pedestrians risks visual occlusion and reduced object sensitivity as the number of dynamic objects in the driving scene increases. Presenting too much information may also direct drivers’ attention away from the relevant, crucial information. Research should therefore establish what information drivers find relevant in different contexts and the threshold beyond which additional information makes the AR interface detrimental (a minimal prioritization sketch follows this list).
- System reliability. The reviewed articles typically presented AR interfaces with perfect reliability. Unfortunately, the object detection techniques currently deployed in vehicles are not error-free. Although a few articles examined communicating varying degrees of system reliability, more research is required to understand drivers’ perceptions and behaviors during system failures, where the ADS either fails to detect hazardous objects or communicates inappropriate maneuvers, leading to detrimental outcomes.
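To illustrate the prioritization idea raised in the visual-clutter item above, the following sketch caps the number of rendered highlights by ranking scene objects on a simple time-to-collision estimate. It is a hypothetical example, not a design drawn from any reviewed article; `select_highlights` and its inputs are invented for illustration.

```python
# Hypothetical sketch: limit AR highlights to the most time-critical objects
# so visual clutter does not grow with scene density.

def select_highlights(objects, max_highlights=3):
    """objects: iterable of (object_id, distance_m, closing_speed_mps)."""
    def time_to_collision(obj):
        _, distance, closing_speed = obj
        if closing_speed <= 0:          # receding or stationary relative to ego vehicle
            return float("inf")
        return distance / closing_speed
    ranked = sorted(objects, key=time_to_collision)
    return [obj_id for obj_id, _, _ in ranked[:max_highlights]]

# Example: only the two most time-critical objects get highlighted.
scene = [("ped_1", 12.0, 4.0), ("car_2", 40.0, 2.0), ("car_3", 8.0, 1.0)]
print(select_highlights(scene, max_highlights=2))  # ['ped_1', 'car_3']
```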
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- SAE International. Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles (J3016_202104); SAE International: Warrendale, PA, USA, 2021.
- Drews, F.A.; Yazdani, H.; Godfrey, C.N.; Cooper, J.M.; Strayer, D. Text Messaging During Simulated Driving. Hum. Factors 2009, 51, 762–770.
- Strayer, D.L.; Cooper, J.M.; Goethe, R.M.; Mccarty, M.M.; Getty, D.J.; Biondi, F. Assessing the visual and cognitive demands of in-vehicle information systems. Cogn. Res. Princ. Implic. 2019, 4, 18.
- Turrill, J.; Coleman, J.R.; Hopman, R.; Cooper, J.M.; Strayer, D.L. The Residual Costs of Multitasking: Causing Trouble down the Road. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2016, 60, 1967–1970.
- NHTSA. Visual-Manual NHTSA Driver Distraction Guidelines for In-Vehicle Electronic Devices. Federal Register. 2012. Available online: https://www.federalregister.gov/documents/2012/02/24/2012-4017/visual-manual-nhtsa-driver-distraction-guidelines-for-in-vehicle-electronic-devices (accessed on 15 April 2022).
- Strayer, D.L.; Fisher, D.L. SPIDER: A Framework for Understanding Driver Distraction. Hum. Factors 2016, 58, 5–12.
- Petermeijer, S.; Bazilinskyy, P.; Bengler, K.; de Winter, J. Take-over again: Investigating multimodal and directional TORs to get the driver back into the loop. Appl. Ergon. 2017, 62, 204–215.
- Vlakveld, W.; van Nes, N.; de Bruin, J.; Vissers, L.; van der Kroft, M. Situation awareness increases when drivers have more time to take over the wheel in a Level 3 automated car: A simulator study. Transp. Res. Part F Traffic Psychol. Behav. 2018, 58, 917–929.
- Zeeb, K.; Buchner, A.; Schrauf, M. Is take-over time all that matters? The impact of visual-cognitive load on driver take-over quality after conditionally automated driving. Accid. Anal. Prev. 2016, 92, 230–239.
- Nordhoff, S.; de Winter, J.; Kyriakidis, M.; van Arem, B.; Happee, R. Acceptance of Driverless Vehicles: Results from a Large Cross-National Questionnaire Study. J. Adv. Transp. 2018, 2018, e5382192.
- Choi, J.K.; Ji, Y.G. Investigating the Importance of Trust on Adopting an Autonomous Vehicle. Int. J. Hum. Comput. Interact. 2015, 31, 692–702.
- Hoff, K.A.; Bashir, M. Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust. Hum. Factors 2014, 57, 407–434.
- Diels, C.; Thompson, S. Information Expectations in Highly and Fully Automated Vehicles. In Proceedings of the International Conference on Applied Human Factors and Ergonomics; Springer: Cham, Switzerland, 2018; pp. 742–748.
- Beggiato, M.; Hartwich, F.; Schleinitz, K.; Krems, J.; Othersen, I.; Petermann-Stock, I. What would drivers like to know during automated driving? Information needs at different levels of automation. In Proceedings of the 7th Conference on Driver Assistance, Munich, Germany, 25 January 2015; p. 6.
- Wintersberger, P.; Nicklas, H.; Martlbauer, T.; Hammer, S.; Riener, A. Explainable Automation: Personalized and Adaptive UIs to Foster Trust and Understanding of Driving Automation Systems. In Proceedings of the 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Washington, DC, USA, 21–22 September 2020.
- Politis, I.; Langdon, P.; Adebayo, D.; Bradley, M.; Clarkson, P.J.; Skrypchuk, L.; Mouzakitis, A.; Eriksson, A.; Brown, J.W.H.; Revell, K.; et al. An Evaluation of Inclusive Dialogue-Based Interfaces for the Takeover of Control in Autonomous Cars. In Proceedings of the 23rd International Conference on Intelligent User Interfaces, Tokyo, Japan, 7–11 March 2018; pp. 601–606.
- Large, D.R.; Burnett, G.; Clark, L. Lessons from Oz: Design guidelines for automotive conversational user interfaces. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings, Utrecht, The Netherlands, 22–25 September 2019; pp. 335–340.
- Du, N.; Zhou, F.; Tilbury, D.; Robert, L.P.; Yang, X.J. Designing Alert Systems in Takeover Transitions: The Effects of Display Information and Modality. In Proceedings of the 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Leeds, UK, 9–14 September 2021.
- Yang, Y.; Götze, M.; Laqua, A.; Dominioni, G.C.; Kawabe, K.; Bengler, K. A method to improve driver’s situation awareness in automated driving. In Proceedings of the Human Factors and Ergonomics Society Europe Chapter 2017 Annual Conference, Rome, Italy, 28–30 September 2017; p. 20.
- Wiegand, G.; Schmidmaier, M.; Weber, T.; Liu, Y.; Hussmann, H. I Drive—You Trust: Explaining Driving Behavior Of Autonomous Cars. In Proceedings of the Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 4–9 May 2019; pp. 1–6.
- Koo, J.; Kwac, J.; Ju, W.; Steinert, M.; Leifer, L.; Nass, C. Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance. Int. J. Interact. Des. Manuf. 2015, 9, 269–275.
- Wong, P.N.Y.; Brumby, D.P.; Babu, H.V.R.; Kobayashi, K. “Watch Out!”: Semi-Autonomous Vehicles Using Assertive Voices to Grab Distracted Drivers’ Attention. In Proceedings of the Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 4–9 May 2019.
- Large, D.R.; Burnett, G.; Anyasodo, B.; Skrypchuk, L. Assessing Cognitive Demand during Natural Language Interactions with a Digital Driving Assistant. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Ann Arbor, MI, USA, 24–26 October 2016; pp. 67–74.
- Waytz, A.; Heafner, J.; Epley, N. The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. J. Exp. Soc. Psychol. 2014, 52, 113–117.
- Wintersberger, P.; Dmitrenko, D.; Schartmüller, C.; Frison, A.-K.; Maggioni, E.; Obrist, M.; Riener, A. S(C)ENTINEL: Monitoring automated vehicles with olfactory reliability displays. In Proceedings of the 24th International Conference on Intelligent User Interfaces, New York, NY, USA, 16–20 March 2019; pp. 538–546.
- Ma, Z.; Liu, Y.; Ye, D.; Zhao, L. Vibrotactile Wristband for Warning and Guiding in Automated Vehicles. In Proceedings of the Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 4–9 May 2019.
- Cohen-Lazry, G.; Katzman, N.; Borowsky, A.; Oron-Gilad, T. Directional tactile alerts for take-over requests in highly-automated driving. Transp. Res. Part F Traffic Psychol. Behav. 2019, 65, 217–226.
- Petermeijer, S.; Cieler, S.; de Winter, J. Comparing spatially static and dynamic vibrotactile take-over requests in the driver seat. Accid. Anal. Prev. 2017, 99, 218–227.
- Geitner, C.; Biondi, F.; Skrypchuk, L.; Jennings, P.; Birrell, S. The comparison of auditory, tactile, and multimodal warnings for the effective communication of unexpected events during an automated driving scenario. Transp. Res. Part F Traffic Psychol. Behav. 2019, 65, 23–33.
- Dong, J.; Lawson, E.; Olsen, J.; Jeon, M. Female Voice Agents in Fully Autonomous Vehicles Are Not Only More Likeable and Comfortable, But Also More Competent. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2020, 64, 1033–1037.
- Lee, S.C.; Sanghavi, H.; Ko, S.; Jeon, M. Autonomous driving with an agent: Speech style and embodiment. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings, Utrecht, The Netherlands, 22–25 September 2019; pp. 209–214.
- Karatas, N.; Yoshikawa, S.; Tamura, S.; Otaki, S.; Funayama, R.; Okada, M. NAMIDA: Sociable driving agents to maintain driver’s attention in autonomous driving. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28–31 August 2017; pp. 143–149.
- Tamura, S.; Ohshima, N.; Hasegawa, K.; Okada, M. Design and Evaluation of Attention Guidance Through Eye Gazing of “NAMIDA” Driving Agent. J. Robot. Mechatron. 2021, 33, 24–32.
- Milgram, P.; Kishino, F. A Taxonomy of Mixed Reality Visual Displays. IEICE Trans. Inf. Syst. 1994, 77, 1321–1329. Available online: https://www.semanticscholar.org/paper/A-Taxonomy-of-Mixed-Reality-Visual-Displays-Milgram-Kishino/f78a31be8874eda176a5244c645289be9f1d4317 (accessed on 2 August 2022).
- Daponte, P.; De Vito, L.; Picariello, F.; Riccio, M. State of the art and future developments of the Augmented Reality for measurement applications. Measurement 2014, 57, 53–70.
- Boboc, R.G.; Gîrbacia, F.; Butilă, E.V. The Application of Augmented Reality in the Automotive Industry: A Systematic Literature Review. Appl. Sci. 2020, 10, 4259.
- Gabbard, J.L.; Smith, M.; Tanous, K.; Kim, H.; Jonas, B. AR DriveSim: An Immersive Driving Simulator for Augmented Reality Head-Up Display Research. Front. Robot. AI 2019, 6, 98.
- Kim, H.; Gabbard, J.L.; Anon, A.M.; Misu, T. Driver Behavior and Performance with Augmented Reality Pedestrian Collision Warning: An Outdoor User Study. IEEE Trans. Vis. Comput. Graph. 2018, 24, 1515–1524.
- Colley, M.; Krauss, S.; Lanzer, M.; Rukzio, E. How Should Automated Vehicles Communicate Critical Situations? A Comparative Analysis of Visualization Concepts. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2021, 5, 1–23.
- Lindemann, P.; Muller, N.; Rigoll, G. Exploring the Use of Augmented Reality Interfaces for Driver Assistance in Short-Notice Takeovers. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 804–809.
- Eyraud, R.; Zibetti, E.; Baccino, T. Allocation of visual attention while driving with simulated augmented reality. Transp. Res. Part F Traffic Psychol. Behav. 2015, 32, 46–55.
- Jing, C.; Shang, C.; Yu, D.; Chen, Y.; Zhi, J. The impact of different AR-HUD virtual warning interfaces on the takeover performance and visual characteristics of autonomous vehicles. Traffic Inj. Prev. 2022, 23, 277–282.
- Wintersberger, P.; Frison, A.-K.; Riener, A.; von Sawitzky, T. Fostering User Acceptance and Trust in Fully Automated Vehicles: Evaluating the Potential of Augmented Reality. Presence Virtual Augment. Real. 2018, 27, 46–62.
- Dey, A.; Billinghurst, M.; Lindeman, R.; Swan, J.E.I. A Systematic Review of 10 Years of Augmented Reality Usability Studies: 2005 to 2014. Front. Robot. AI 2018, 5.
- Frison, A.-K.; Forster, Y.; Wintersberger, P.; Geisel, V.; Riener, A. Where We Come from and Where We Are Going: A Systematic Review of Human Factors Research in Driving Automation. Appl. Sci. 2020, 10, 8914.
- Riegler, A.; Riener, A.; Holzmann, C. Augmented Reality for Future Mobility: Insights from a Literature Review and HCI Workshop. I-Com 2021, 20, 295–318.
- Riegler, A.; Riener, A.; Holzmann, C. A Systematic Review of Augmented Reality Applications for Automated Driving: 2009–2020. Presence Teleoperators Virtual Environ. 2021, 28, 87–126.
- Rouchitsas, A.; Alm, H. External Human–Machine Interfaces for Autonomous Vehicle-to-Pedestrian Communication: A Review of Empirical Work. Front. Psychol. 2019, 10, 2757. Available online: https://www.frontiersin.org/articles/10.3389/fpsyg.2019.02757 (accessed on 2 August 2022).
- Gabbard, J.L.; Fitch, G.M.; Kim, H. Behind the Glass: Driver Challenges and Opportunities for AR Automotive Applications. Proc. IEEE 2014, 102, 124–136.
- Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Med. 2009, 6, e1000097.
- Rheu, M.; Shin, J.Y.; Peng, W.; Huh-Yoo, J. Systematic Review: Trust-Building Factors and Implications for Conversational Agent Design. Int. J. Hum. Comput. Interact. 2021, 37, 81–96.
- Colley, M.; Bräuner, C.; Lanzer, M.; Walch, M.; Baumann, M.; Rukzio, E. Effect of Visualization of Pedestrian Intention Recognition on Trust and Cognitive Load. In Proceedings of the 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Washington, DC, USA, 21–22 September 2020.
- Colley, M.; Eder, B.; Rixen, J.O.; Rukzio, E. Effects of Semantic Segmentation Visualization on Trust, Situation Awareness, and Cognitive Load in Highly Automated Vehicles. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–11.
- Colley, M.; Gruler, L.; Woide, M.; Rukzio, E. Investigating the Design of Information Presentation in Take-Over Requests in Automated Vehicles. In Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction, Toulouse & Virtual, France, 27 September 2021; pp. 1–15.
- Currano, R.; Park, S.Y.; Moore, D.J.; Lyons, K.; Sirkin, D. Little Road Driving HUD: Heads-Up Display Complexity Influences Drivers’ Perceptions of Automated Vehicles. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–15.
- Detjen, H.; Salini, M.; Kronenberger, J.; Geisler, S.; Schneegass, S. Towards Transparent Behavior of Automated Vehicles: Design and Evaluation of HUD Concepts to Support System Predictability Through Motion Intent Communication. In Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction, Toulouse & Virtual, France, 27 September 2021; pp. 1–12.
- Ebnali, M.; Fathi, R.; Lamb, R.; Pourfalatoun, S.; Motamedi, S. Using Augmented Holographic UIs to Communicate Automation Reliability in Partially Automated Driving. In Proceedings of AutomationXP@CHI, Honolulu, HI, USA, 25–30 April 2020.
- Faria, N.D.O.; Merenda, C.; Greatbatch, R.; Tanous, K.; Suga, C.; Akash, K.; Misu, T.; Gabbard, J. The Effect of Augmented Reality Cues on Glance Behavior and Driver-Initiated Takeover on SAE Level 2 Automated-Driving. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2021, 65, 1342–1346.
- Fu, W.-T.; Gasper, J.; Kim, S.-W. Effects of an in-car augmented reality system on improving safety of younger and older drivers. In Proceedings of the 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Adelaide, Australia, 1–4 October 2013; pp. 59–66.
- Hwang, Y.; Park, B.-J.; Kim, K.-H. The Effects of Augmented-Reality Head-Up Display System Usage on Drivers’ Risk Perception and Psychological Change. ETRI J. 2016, 38, 757–766.
- Kim, H.; Gabbard, J.L. Assessing Distraction Potential of Augmented Reality Head-Up Displays for Vehicle Drivers. Hum. Factors 2019, 64, 852–865.
- Lindemann, P.; Lee, T.-Y.; Rigoll, G. Catch My Drift: Elevating Situation Awareness for Highly Automated Driving with an Explanatory Windshield Display User Interface. Multimodal Technol. Interact. 2018, 2, 71.
- Merenda, C.; Kim, H.; Tanous, K.; Gabbard, J.L.; Feichtl, B.; Misu, T.; Suga, C. Augmented Reality Interface Design Approaches for Goal-directed and Stimulus-driven Driving Tasks. IEEE Trans. Vis. Comput. Graph. 2018, 24, 2875–2885.
- Oliveira, L.; Burns, C.; Luton, J.; Iyer, S.; Birrell, S. The influence of system transparency on trust: Evaluating interfaces in a highly automated vehicle. Transp. Res. Part F Traffic Psychol. Behav. 2020, 72, 280–296.
- Pfannmüller, L.; Kramer, M.; Senner, B.; Bengler, K. A Comparison of Display Concepts for a Navigation System in an Automotive Contact Analog Head-up Display. Procedia Manuf. 2015, 3, 2722–2729.
- Phan, M.T.; Thouvenin, I.; Fremont, V. Enhancing the driver awareness of pedestrian using augmented reality cues. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; pp. 1298–1304.
- Rusch, M.L.; Schall, M.C.; Gavin, P.; Lee, J.D.; Dawson, J.D.; Vecera, S.; Rizzo, M. Directing driver attention with augmented reality cues. Transp. Res. Part F Traffic Psychol. Behav. 2013, 16, 127–137.
- Schall, M.C.; Rusch, M.L.; Lee, J.D.; Dawson, J.D.; Thomas, G.; Aksan, N.; Rizzo, M. Augmented reality cues and elderly driver hazard perception. Hum. Factors 2013, 55, 643–658.
- Schewe, F.; Vollrath, M. Visualizing the autonomous vehicle’s maneuvers—Does an ecological interface help to increase the hedonic quality and safety? Transp. Res. Part F Traffic Psychol. Behav. 2021, 79, 11–22.
- Schneider, T.; Hois, J.; Rosenstein, A.; Ghellal, S.; Theofanou-Fülbier, D.; Gerlicher, A.R. ExplAIn Yourself! Transparency for Positive UX in Autonomous Driving. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–12.
- Schwarz, F.; Fastenmeier, W. Augmented reality warnings in vehicles: Effects of modality and specificity on effectiveness. Accid. Anal. Prev. 2017, 101, 55–66.
- Schwarz, F.; Fastenmeier, W. Visual advisory warnings about hidden dangers: Effects of specific symbols and spatial referencing on necessary and unnecessary warnings. Appl. Ergon. 2018, 72, 25–36.
- Wintersberger, P.; von Sawitzky, T.; Frison, A.-K.; Riener, A. Traffic Augmentation as a Means to Increase Trust in Automated Driving Systems. In Proceedings of the 12th Biannual Conference on Italian SIGCHI Chapter, Cagliari, Italy, 18–20 September 2017; pp. 1–7.
- Wu, X.; Merenda, C.; Misu, T.; Tanous, K.; Suga, C.; Gabbard, J.L. Drivers’ Attitudes and Perceptions towards A Driving Automation System with Augmented Reality Human-Machine Interfaces. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 1978–1983.
- Doll, S. The top five best-equipped countries to support autonomous vehicles—Who’s leading the self-driving revolution? Electrek, 4 March 2022. Available online: https://electrek.co/2022/03/04/the-top-five-best-equipped-countries-to-support-autonomous-vehicles-whos-leading-the-self-driving-revolution/ (accessed on 3 August 2022).
- Klüver, M.; Herrigel, C.; Heinrich, C.; Schöner, H.-P.; Hecht, H. The behavioral validity of dual-task driving performance in fixed and moving base driving simulators. Transp. Res. Part F Traffic Psychol. Behav. 2016, 37, 78–96.
- Mullen, N.; Charlton, J.; Devlin, A.; Bedard, M. Simulator validity: Behaviours observed on the simulator and on the road. In Handbook of Driving Simulation for Engineering, Medicine and Psychology; CRC Press: Boca Raton, FL, USA, 2011; pp. 1–18.
Construct | Name of Subjective Measure | N (%) |
---|---|---|
Acceptance | | 11 (35) |
| Van der Laan Acceptance Scale | 4 (13) |
| Technology Acceptance Model | 4 (13) |
| AttrakDiff Questionnaire | 2 (6) |
| Autonomous Vehicle Acceptance Model | 1 (3) |
Trust * | | 7 (23) |
| Trust in Automation | 7 (23) |
| Trust on Adopting an Autonomous Vehicle | 1 (3) |
User Experience | | 7 (23) |
| System Usability Scale | 4 (13) |
| User Experience Questionnaire | 3 (10) |
Situation Awareness | | 5 (16) |
| Situation Awareness Global Assessment Technique | 2 (6) |
| Situation Awareness Rating Technique | 3 (10) |
Workload | | 5 (16) |
| NASA-TLX | 5 (16) |
Affective Driving | | 2 (6) |
| Self-Assessment Manikin | 2 (6) |
Driving Behavior | | 2 (6) |
| Driving Behavior Determinants Questionnaire | 1 (3) |
| Multidimensional Driving Style Inventory | 1 (3) |
Anxiety | | 1 (3) |
| STATE Anxiety Questionnaire | 1 (3) |
Custom Questionnaire | -- | 17 (55) |
Construct | Objective Measure | N (%) |
---|---|---|
Driving Behavior | | 16 (52) |
| Braking performance | 10 (32) |
| Takeover performance | 5 (16) |
| Headway performance | 3 (10) |
| Collisions | 3 (10) |
| Lateral performance | 3 (10) |
| Longitudinal performance | 2 (6) |
Eye-tracking | | 11 (35) |
| Gaze time | 8 (26) |
| Gaze frequency | 5 (16) |
| Gaze response | 4 (13) |
| Gaze angle | 1 (3) |
Reaction Time | | 5 (16) |
| Button pressing | 4 (13) |
| Verbal response rate | 1 (3) |
Situation Awareness | | 5 (16) |
| Question-response accuracy | 5 (16) |
Physiological | | 1 (3) |
| Heart-rate variability | 1 (3) |
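As a minimal illustration of how two of the tabulated eye-tracking metrics can be operationalized, the sketch below computes gaze time (total dwell) and gaze frequency on an area of interest (AOI) from a hypothetical list of fixation records; the data format and AOI labels are assumptions, not taken from the reviewed studies.

```python
# Illustrative gaze-metric computation over hypothetical fixation records
# of the form (aoi_label, duration_in_seconds).
from collections import Counter

fixations = [("AR_cue", 0.35), ("road", 1.20), ("AR_cue", 0.48),
             ("mirror", 0.30), ("AR_cue", 0.22)]

def gaze_time(fixations, aoi):
    """Total dwell time (s) on the AOI across all fixations."""
    return sum(duration for label, duration in fixations if label == aoi)

def gaze_frequency(fixations, aoi):
    """Number of fixations that landed on the AOI."""
    return Counter(label for label, _ in fixations)[aoi]

print(gaze_time(fixations, "AR_cue"))       # ~1.05 s total dwell
print(gaze_frequency(fixations, "AR_cue"))  # 3 fixations
```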
Experimental Procedure | N | Analysis |
---|---|---|
Control/AR Design | 5 | t-test; One-way ANOVA; Wilcoxon Signed Rank Test; Pearson Correlation; Multiple Regression |
× Age × Distraction | 1 | Mixed ANOVA |
× Reliability | 3 | t-test; Linear Mixed Effect Model |
× Information Timing (during/app usage after) | 1 | Wilcoxon Signed Rank Test; Mann–Whitney Test |
× Traffic Density × Interaction Complexity | 2 | Repeated Measures ANOVA; Cumulative Link Model |
Control/2+ AR Designs | 6 | t-test; One-way ANOVA; Two-way ANOVA; Repeated Measures ANOVA; Cochran’s Q; Friedman’s Test; Kruskal–Wallis Test; Wilcoxon Signed Rank Test; NPAV; Pearson Correlation |
× Congruence between Maneuver and Situation | 1 | Mixed ANOVA |
× Driving Scenario | 1 | Linear Mixed Effect Model |
× Distance | 1 | Two-way Repeated Measures ANOVA; Durbin’s Chi-square Test |
AR Design/AR Design | 1 | Linear Mixed Effect Model |
× Critical Situation | 1 | t-test |
× Visibility | 1 | Wilcoxon Signed Rank Test; Mann–Whitney Test |
AR Design/2+ Designs | 2 | t-test; One-way ANOVA; Two-way Repeated Measures ANOVA; Pearson’s Chi-square Test |
× Distance | 1 | Two-way Repeated Measures ANOVA |
× Urgency | 1 | Two-way Repeated Measures ANOVA |
Display Modality | | |
Control/Tablet/AR × Visual Design | 2 | Repeated Measures ANOVA; Friedman’s Test; NPAV |
HDD/AR × Driving Scenario | 1 | t-test; Two-way Repeated Measures ANOVA |
Baseline/HDD/AR | 1 | t-test; Repeated Measures ANOVA |
Alert Modality (AR/Auditory) × Information Type | 2 | Two-way ANOVA; Friedman’s Test; Linear Mixed Effect Model; Logistic Regression |
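As a concrete instance of one analysis pattern from the table, the sketch below runs a Wilcoxon signed-rank test (via `scipy.stats.wilcoxon`) on paired control vs. AR-design takeover reaction times; the values are invented for illustration and do not reproduce any reviewed study’s data.

```python
# Within-subjects control vs. AR-design comparison using the
# Wilcoxon signed-rank test on paired reaction times (illustrative data).
from scipy import stats

# Paired takeover reaction times (s) per participant: control vs. AR cue.
control = [2.8, 3.1, 2.5, 3.4, 2.9, 3.0, 2.7, 3.2]
ar_cue  = [2.3, 2.9, 2.1, 3.0, 2.6, 2.8, 2.4, 2.9]

statistic, p_value = stats.wilcoxon(control, ar_cue)
print(f"W = {statistic:.1f}, p = {p_value:.3f}")
```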