Article

From Black Box to Transparency: The Impact of Multi-Level Visualization on User Trust in Autonomous Driving

1 School of Design Art & Media, Nanjing University of Science and Technology, Nanjing 210094, China
2 Nanjing Institute of Electronic Equipment, Nanjing 210007, China
3 College of Life and Ocean Science, Shenzhen University, Shenzhen 518057, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(21), 6725; https://doi.org/10.3390/s25216725
Submission received: 28 September 2025 / Revised: 26 October 2025 / Accepted: 31 October 2025 / Published: 3 November 2025
(This article belongs to the Section Vehicular Sensing)

Abstract

Autonomous systems’ “black-box” nature impedes user trust and adoption. To investigate explainable visualizations’ impact on trust and cognitive states, we conducted a within-subjects study with 29 participants performing high-fidelity driving tasks across three transparency conditions: black-box, standard, and enhanced visualization. Multimodal data analysis revealed that enhanced visualization significantly increased perceived usefulness by 28.5% (p < 0.001), improved functional trust, and decreased average pupil diameter by 15.3% (p < 0.05), indicating lower cognitive load. The black-box condition elicited minimal visual exploration, lowest subjective ratings, and “out-of-the-loop” behaviors. Fixation duration showed no significant difference between standard and enhanced conditions. These findings demonstrate that well-designed visualizations enable balanced trust calibration and cognitive efficiency, advocating “meaningful transparency” as a core design principle for effective human–machine collaboration in autonomous vehicle interfaces. This study provides empirical evidence that transparency enhances user experience and system performance.
Keywords: autonomous driving; human–machine trust; meaningful transparency; explainable visualization; cognitive load; eye-tracking

Share and Cite

MDPI and ACS Style

Li, M.; Zhou, M.; Li, Y.; Wei, W.; Zhu, T.; Xu, X.; Ren, L.; Zhang, N.; Xu, R.; Li, J. From Black Box to Transparency: The Impact of Multi-Level Visualization on User Trust in Autonomous Driving. Sensors 2025, 25, 6725. https://doi.org/10.3390/s25216725


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
