This is an early access version; the complete PDF, HTML, and XML versions will be available soon.
Article

From Sensors to Insights: Interpretable Audio-Based Machine Learning for Real-Time Vehicle Fault and Emergency Sound Classification

1 Department of Computer Science and Information, Applied College, Taibah University, Medinah 42353, Saudi Arabia
2 King Salman Center for Disability Research, Riyadh 11614, Saudi Arabia
3 Department of Computers and Control Systems Engineering, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
4 Department of Communications and Electronics Engineering, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
5 Department of Computer Science, College of Computer Science and Engineering, Taibah University, Yanbu 46421, Saudi Arabia
6 Department of Computer Science, Faculty of Computers and Information, Assiut University, Assiut 71516, Egypt
7 Department of Computer Science, Faculty of Science, Tanta University, Tanta 31527, Egypt
8 Department of Electrical Engineering, College of Engineering, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
9 Department of Information Systems, College of Computer Science and Engineering, Taibah University, Yanbu 46421, Saudi Arabia
* Author to whom correspondence should be addressed.
Machines 2025, 13(10), 888; https://doi.org/10.3390/machines13100888
Submission received: 19 August 2025 / Revised: 22 September 2025 / Accepted: 25 September 2025 / Published: 28 September 2025
(This article belongs to the Section Vehicle Engineering)

Abstract

Unrecognized mechanical faults and emergency sounds in vehicles can compromise safety, particularly for individuals with hearing impairments and in sound-insulated or autonomous driving environments. As intelligent transportation systems (ITSs) evolve, there is a growing need for inclusive, non-intrusive, and real-time diagnostic solutions that enhance situational awareness and accessibility. This study introduces an interpretable, sound-based machine learning framework to detect vehicle faults and emergency sound events using acoustic signals as a scalable diagnostic source. Three purpose-built datasets were developed: one for vehicular fault detection, another for emergency and environmental sounds, and a third integrating both to reflect real-world ITS acoustic scenarios. Audio data were preprocessed through normalization, resampling, and segmentation and transformed into numerical vectors using Mel-Frequency Cepstral Coefficients (MFCCs), Mel spectrograms, and Chroma features. To ensure performance and interpretability, feature selection was conducted using SHAP (explainability), Boruta (relevance), and ANOVA (statistical significance). A two-phase experimental workflow was implemented: Phase 1 evaluated 15 classical models, identifying ensemble classifiers and multi-layer perceptrons (MLPs) as top performers; Phase 2 applied advanced feature selection to refine model accuracy and transparency. Ensemble models such as Extra Trees, LightGBM, and XGBoost achieved over 91% accuracy and AUC scores exceeding 0.99. SHAP provided model transparency without performance loss, while ANOVA achieved high accuracy with fewer features. The proposed framework enhances accessibility by translating auditory alarms into visual/haptic alerts for hearing-impaired drivers and can be integrated into smart city ITS platforms via roadside monitoring systems.
Keywords: emergency sound recognition; feature selection; intelligent transportation systems (ITSs); machine learning; sound classification; vehicle fault detection
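The two-phase workflow summarized in the abstract — statistical feature selection (here, the ANOVA variant) followed by an ensemble classifier such as Extra Trees — can be sketched roughly as below. The feature matrix, class labels, feature counts, and all parameter values are illustrative assumptions standing in for the paper's actual MFCC/Mel/Chroma vectors and datasets, not the authors' settings.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for extracted audio features (e.g., MFCC + Chroma +
# Mel-band statistics per clip); dimensions are illustrative only.
X = rng.normal(size=(300, 60))
y = rng.integers(0, 3, size=300)          # e.g., fault / emergency / background
X[:, :5] += y[:, None]                    # make a few features class-informative

# ANOVA F-test keeps the k features whose class-wise means differ most.
selector = SelectKBest(score_func=f_classif, k=20).fit(X, y)
X_sel = selector.transform(X)

# Train an ensemble classifier on the reduced feature set.
X_tr, X_te, y_tr, y_te = train_test_split(
    X_sel, y, random_state=0, stratify=y)
clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(X_sel.shape, clf.score(X_te, y_te))
```

The same `SelectKBest` slot could hold a Boruta or SHAP-ranked subset instead, which is how the paper compares the three selection strategies against one another.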

Share and Cite

MDPI and ACS Style

Badawy, M.; Rashed, A.; Bamaqa, A.; Sayed, H.A.; Elagamy, R.; Almaliki, M.; Farrag, T.A.; Elhosseini, M.A. From Sensors to Insights: Interpretable Audio-Based Machine Learning for Real-Time Vehicle Fault and Emergency Sound Classification. Machines 2025, 13, 888. https://doi.org/10.3390/machines13100888

AMA Style

Badawy M, Rashed A, Bamaqa A, Sayed HA, Elagamy R, Almaliki M, Farrag TA, Elhosseini MA. From Sensors to Insights: Interpretable Audio-Based Machine Learning for Real-Time Vehicle Fault and Emergency Sound Classification. Machines. 2025; 13(10):888. https://doi.org/10.3390/machines13100888

Chicago/Turabian Style

Badawy, Mahmoud, Amr Rashed, Amna Bamaqa, Hanaa A. Sayed, Rasha Elagamy, Malik Almaliki, Tamer Ahmed Farrag, and Mostafa A. Elhosseini. 2025. "From Sensors to Insights: Interpretable Audio-Based Machine Learning for Real-Time Vehicle Fault and Emergency Sound Classification" Machines 13, no. 10: 888. https://doi.org/10.3390/machines13100888

APA Style

Badawy, M., Rashed, A., Bamaqa, A., Sayed, H. A., Elagamy, R., Almaliki, M., Farrag, T. A., & Elhosseini, M. A. (2025). From Sensors to Insights: Interpretable Audio-Based Machine Learning for Real-Time Vehicle Fault and Emergency Sound Classification. Machines, 13(10), 888. https://doi.org/10.3390/machines13100888

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
