An Approach Based on Recurrent Neural Networks and Interactive Visualization to Improve Explainability in AI Systems
Abstract
1. Introduction
2. Materials and Methods
2.1. Review of Similar Works
2.2. Explainability to Predict and Understand the Performance of an AI Model
2.3. Problem and Relevance of the Study
2.4. Method
2.4.1. Data Collection
- Circuits;
- Constructor results;
- Constructor standings;
- Constructors;
- Driver standings;
- Drivers;
- Lap times;
- Pit stops;
- Qualifying;
- Races;
- Results;
- Seasons;
- Status.
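These tables follow the standard Ergast Formula 1 dataset schema. As a minimal sketch of the loading step, assuming each table is available as a CSV file under a hypothetical `data/` directory (file names and join columns mirror the Ergast schema but are assumptions here):

```python
import pandas as pd

# Hypothetical layout: one CSV per Ergast table under data/.
tables = [
    "circuits", "constructor_results", "constructor_standings",
    "constructors", "driver_standings", "drivers", "lap_times",
    "pit_stops", "qualifying", "races", "results", "seasons", "status",
]
data = {name: pd.read_csv(f"data/{name}.csv") for name in tables}

# Example join: enrich lap times with driver names and race metadata.
laps = (
    data["lap_times"]
    .merge(data["drivers"][["driverId", "forename", "surname"]], on="driverId")
    .merge(data["races"][["raceId", "year", "name"]], on="raceId")
)
print(laps.head())
```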
2.4.2. Data Preprocessing
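The preprocessing details are summarized rather than reproduced here. The sketch below shows one plausible pipeline under stated assumptions: categorical features (driver, weather) are label-encoded, numeric features are min-max scaled, and fixed-length windows over race history form the LSTM inputs. The toy frame and all column names are illustrative, not the paper's actual schema.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder, MinMaxScaler

# Illustrative frame standing in for the merged Formula 1 data.
df = pd.DataFrame({
    "driver":   ["hamilton", "alonso", "hamilton", "alonso"],
    "weather":  ["Sunny", "Rain", "Sunny", "Sunny"],
    "lap_ms":   [95678, 97012, 95321, 96544],
    "position": [1, 3, 1, 2],
})

# Encode categoricals as integers and scale numerics to [0, 1].
for col in ["driver", "weather"]:
    df[col] = LabelEncoder().fit_transform(df[col])
df[["lap_ms"]] = MinMaxScaler().fit_transform(df[["lap_ms"]])

def build_sequences(values: np.ndarray, window: int = 2):
    """Slide a fixed-length window over race history to form (X, y) pairs."""
    X = np.array([values[i:i + window, :-1] for i in range(len(values) - window)])
    y = values[window:, -1]  # position in the race that follows each window
    return X, y

X, y = build_sequences(df.to_numpy(dtype=float))
print(X.shape, y.shape)  # (samples, window, features), (samples,)
```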
2.4.3. Construction of the Prediction Model
- LSTM units: We set the LSTM layer to 128 units. The number of units determines the network's memory capacity and, with it, the model's complexity and ability to learn. By choosing an appropriate number of units, we aim to capture complex patterns in the data without overfitting the model.
- Activation function: We use the ReLU (Rectified Linear Unit) activation function in the LSTM layer. ReLU introduces nonlinearity into the network and helps capture nonlinear relationships in the data.
- input_shape parameter: We define the model input using the input_shape parameter, set from the per-sample shape of the training data (the timesteps and features dimensions of X_train). This ensures that the model correctly processes the sequential structure of the data.
- Dense layer: After the LSTM layer, we add a dense (fully connected) layer with a single unit, since the goal is to predict a single position value. The dense layer uses the default activation function, which in this case is linear. (A minimal sketch of this configuration follows the list.)
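The following is a minimal Keras sketch consistent with the configuration described above. The placeholder tensors, optimizer, and loss are assumptions for illustration; note that Keras expects the per-sample shape (timesteps, features), i.e., X_train.shape[1:], rather than the full array shape.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Placeholder tensors shaped (samples, timesteps, features); the real
# X_train/y_train come from the preprocessing step.
X_train = np.random.rand(100, 5, 3).astype("float32")
y_train = np.random.rand(100, 1).astype("float32")

model = Sequential([
    # 128 LSTM units with ReLU, as described in the list above.
    LSTM(128, activation="relu", input_shape=X_train.shape[1:]),
    # One linear output unit for the predicted position.
    Dense(1),
])
# Optimizer and loss are assumptions; the paper's list does not fix them.
model.compile(optimizer="adam", loss="mse")
model.summary()
```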
2.4.4. Model Training
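Continuing the sketch above, training reduces to a single `fit` call. The epoch count, batch size, and validation split below are illustrative assumptions, not values reported in the paper.

```python
# Illustrative hyperparameters; adjust to the actual experimental setup.
history = model.fit(
    X_train, y_train,
    epochs=50,
    batch_size=32,
    validation_split=0.2,
    verbose=1,
)
```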
2.4.5. Application of Explainability Techniques
- F is the output function we want to differentiate with respect to a variable x.
- y is an intermediate variable related to x through a function f.
- dF/dx is the derivative of F with respect to x.
- dF/dy is the derivative of F with respect to y.
- dy/dx is the derivative of y with respect to x.
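Combined, these terms give the chain rule that underlies gradient-based attribution, applied layer by layer through the network:

$$\frac{dF}{dx} = \frac{dF}{dy} \cdot \frac{dy}{dx}$$

As a concrete sketch of this idea, the snippet below computes input gradients (a saliency map) for the trained model using `tf.GradientTape`. It illustrates gradient-based explainability in general, not necessarily the paper's exact procedure.

```python
import tensorflow as tf

# dF/dx for one input sequence: how the predicted position responds to
# each timestep and feature of the input.
x = tf.convert_to_tensor(X_train[:1])
with tf.GradientTape() as tape:
    tape.watch(x)          # track the input, not just trainable weights
    prediction = model(x)  # F(x)
saliency = tape.gradient(prediction, x)  # same shape as x
print(saliency.numpy().squeeze())
```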
2.4.6. Model Evaluation and Explanations
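As a hedged sketch of this step: with a held-out split produced by the same preprocessing (X_test and y_test are assumed here, and the metric choice is ours for illustration), standard regression metrics quantify how far the predicted positions fall from the observed ones.

```python
from sklearn.metrics import mean_absolute_error, mean_squared_error

# X_test/y_test are assumed to come from the same windowing as X_train.
y_pred = model.predict(X_test).ravel()
print("MAE:", mean_absolute_error(y_test, y_pred))
print("MSE:", mean_squared_error(y_test, y_pred))
```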
2.4.7. Improvements and Refinements
3. Results
3.1. Model Construction
3.1.1. Construction of the Learning Model
3.1.2. Algorithm to Identify Pilot Performance
3.1.3. Algorithm for Model Explainability
3.2. Model Explainability
- Performance history has a prediction of 0.5848 and an importance score of 0.4702, indicating that this feature plays a significant role in the model's predictions.
- Technical characteristics have a prediction of 0.1847 and an importance score of −3.9850. The negative score suggests that this feature degrades the model's predictions.
- Climatic conditions have a prediction of 0.0530 and an importance score of −0.0088. Although the score is small, this feature may still make a marginal contribution to the final predictions. (A sketch of how such signed scores can be computed follows this list.)
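The exact attribution method behind these scores is not restated here. One common way to obtain signed importance scores of this kind is permutation importance, sketched below under that assumption: a feature's score is the increase in error after its values are shuffled, so a negative score means the model does *better* without the feature's true values, consistent with the interpretation above.

```python
import numpy as np

def permutation_importance(model, X, y):
    """Signed importance: increase in MSE when one input feature is shuffled."""
    base = np.mean((model.predict(X).ravel() - y.ravel()) ** 2)
    scores = []
    for f in range(X.shape[-1]):
        Xp = X.copy()
        np.random.shuffle(Xp[..., f])  # break the feature/target pairing
        mse = np.mean((model.predict(Xp).ravel() - y.ravel()) ** 2)
        scores.append(mse - base)      # negative => feature hurt the model
    return np.array(scores)

# Example call; scores align with the input features (e.g., performance
# history, technical characteristics, climatic conditions).
print(permutation_importance(model, X_train, y_train))
```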
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
Characteristic | Datatype | Value Example
---|---|---
Driver | Categorical | Lewis Hamilton
Circuit | Categorical | Circuit de Monaco
Weather condition | Categorical | Sunny
Classification result | Numeric | 1
Lap time | Numeric | 1:35.678
Driver age | Numeric | 32
Driver experience | Numeric | 5 years
Circuit length | Numeric | 4.381 km
Number of curves | Numeric | 19
Maximum speed | Numeric | 330 km/h
Column | Description
---|---
driverId | Driver identifier
driverRef | Driver reference
number | Number assigned to the driver for the season
code | Unique code assigned to the driver
forename | Driver's first name
surname | Driver's last name
dob | Driver's date of birth
nationality | Driver's nationality
url | URL with driver-related information
Column | Description
---|---
qualifyId | Qualifying session identifier
raceId | Race identifier
driverId | Driver identifier
constructorId | Constructor (vehicle manufacturer) identifier
number | Number assigned to the driver in the session
position | Driver's position in the qualifying classification
q1, q2, q3 | Lap times in each qualifying round
Column | Description
---|---
raceId | Race identifier
driverId | Driver identifier
lap | Lap number
position | Driver's position on the lap
time | Lap time
milliseconds | Lap time in milliseconds
Index | driverId | number
---|---|---
0 | 1 | 44
1 | 2 | \N
2 | 3 | 6
3 | 4 | 14
4 | 5 | \N
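The `\N` entries above are the dataset's convention for missing values. A small, hypothetical cleaning step (the file path is an assumption) maps them to proper NaNs so the `number` column becomes numeric:

```python
import pandas as pd

# "\N" is the dataset's missing-value marker; translate it to NaN on load.
drivers = pd.read_csv("data/drivers.csv", na_values=["\\N"])
drivers["number"] = pd.to_numeric(drivers["number"], errors="coerce")
print(drivers[["driverId", "number"]].head())
```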
Characteristic | Prediction | Importance
---|---|---
Performance history | 0.5848 | 0.4702
Technical characteristics | 0.1847 | −3.9850
Climatic conditions | 0.0530 | −0.0088