Exploring Local Explanation of Practical Industrial AI Applications: A Systematic Literature Review
Abstract
1. Introduction
- Identifying and categorizing local explanation techniques for industrial AI applications according to several criteria: XAI usage and purpose, industrial application and AI model, and data type.
- Analyzing the advantages and disadvantages of different local explanation techniques in explaining complex models in the industrial context.
- Identifying the current challenges of local explanation for industrial AI applications and suggesting recommendations for addressing these challenges, such as developing more effective and efficient local explanation techniques and exploring the ethical implications of using these techniques.
- Summarizing the study’s main findings, identifying gaps and limitations in the current literature, and suggesting future research directions that can guide subsequent research efforts and lead to more reliable and trustworthy AI systems in industry.
2. Literature Review Methodology
- Identifying the research questions: Our research questions focused on understanding the different types of local explanation techniques used in industry, their benefits and limitations, and their effectiveness in explaining complex machine learning models. Our review aims to answer the following research questions:
- (Q1) What local explanation techniques are used in industrial applications?
- (Q2) How widespread are practical industrial applications of local explanation techniques?
- (Q3) What are the benefits and limitations of local explanation techniques for industrial applications?
- (Q4) How can effective local explanation techniques be built in practical settings?
By answering these questions, we hope to provide a comprehensive overview of the current state of the art in local explanation techniques for practical industrial applications.
- Identifying relevant literature: We conducted a comprehensive search of relevant academic and industry sources, including peer-reviewed journals, conference proceedings, technical reports, and gray literature. The search covered studies published from 2020 to March 2023. We used a combination of keywords and controlled vocabulary terms related to machine learning, interpretability, local explanations, and industrial applications to identify relevant articles. We specifically examined academic resources such as the ACM Digital Library, IEEE Xplore, ScienceDirect, Google Scholar, MDPI, and others. The search terms included “local explanation techniques”, “model interpretability”, “explainable artificial intelligence”, “XAI”, “machine learning”, and “industrial applications”; a sketch of how such queries can be composed appears after this list. We also manually searched relevant journals and conference proceedings to ensure comprehensive coverage of the literature.
- Screening and selection of studies: We used a two-stage screening process to identify articles for inclusion in our review. In the first stage, we screened titles and abstracts to identify potentially relevant articles. In the second stage, we screened the full text of articles to determine their eligibility based on our inclusion and exclusion criteria. We included studies focusing on local explanation techniques and their applications in industrial settings. Studies investigating the effectiveness and limitations of different explanation techniques and their comparative analysis were also considered. We excluded studies focusing on global explanations, theoretical aspects of model interpretability, or applications in non-industrial settings. A sketch of this two-stage filter also appears after this list.
- Data extraction and quality assessment: From each selected study, we extracted the authors, publication year, research question, study design, methodology, datasets used, sample size, main findings, and limitations. The data were analyzed thematically to identify patterns, trends, and research gaps in the literature. We also performed a qualitative synthesis of the studies, highlighting the benefits and limitations of local explanation techniques and their effectiveness in industrial applications.
- Data synthesis and analysis: We synthesized the data from the selected articles using a thematic analysis approach. We identified common themes and patterns across the articles and summarized the findings in a narrative synthesis.
- Interpretation and reporting of results: We interpreted the results of our review in light of our research question and the existing literature. We reported our findings in a structured manner, highlighting the key themes and patterns that emerged from our analysis.
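For concreteness, the keyword combination described under “Identifying relevant literature” can be expressed as a small query builder. This is a minimal illustrative sketch, not part of the review protocol itself; the boolean syntax is generic and the `build_query` helper is hypothetical, to be adapted to each database’s search interface.

```python
# Hypothetical sketch: composing a boolean search string from the
# method-related and context-related terms listed in this section.
METHOD_TERMS = [
    '"local explanation techniques"',
    '"model interpretability"',
    '"explainable artificial intelligence"',
    '"XAI"',
]
CONTEXT_TERMS = ['"machine learning"', '"industrial applications"']

def build_query(method_terms, context_terms):
    """OR the synonyms within each group, then AND the two groups."""
    return f"({' OR '.join(method_terms)}) AND ({' OR '.join(context_terms)})"

print(build_query(METHOD_TERMS, CONTEXT_TERMS))
```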
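The two-stage screening can be read the same way, as a filter pipeline. A minimal sketch, assuming a hypothetical `Paper` record and simplified keyword proxies for the inclusion and exclusion criteria (the review itself applied these criteria manually):

```python
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    abstract: str
    full_text: str

def stage1_keep(p: Paper) -> bool:
    """Stage 1: screen titles and abstracts for potential relevance."""
    text = f"{p.title} {p.abstract}".lower()
    return ("local explanation" in text or "xai" in text) and "industrial" in text

def stage2_keep(p: Paper) -> bool:
    """Stage 2: apply inclusion/exclusion criteria to the full text."""
    text = p.full_text.lower()
    has_local_focus = "local explanation" in text   # inclusion criterion (proxy)
    non_industrial = "industrial" not in text       # exclusion criterion (proxy)
    return has_local_focus and not non_industrial

def screen(papers: list[Paper]) -> list[Paper]:
    return [p for p in papers if stage1_keep(p) and stage2_keep(p)]
```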
3. Survey Results on Local Explanation in Industrial AI Applications
3.1. Quantitative Analysis
3.1.1. Analysis Based on Usage and Purpose
3.1.2. Analysis Based on Industrial Applications and AI Model Implementation
3.1.3. Analysis Based on Data Types
3.2. Qualitative Analysis
3.2.1. Autonomous Systems and Robotics
3.2.2. Energy and Building Management
3.2.3. Environment
3.2.4. Finance
3.2.5. Healthcare
3.2.6. Industrial Engineering
3.2.7. Cybersecurity
3.2.8. Smart Agriculture
3.3. Current Challenges and Recommendations of Local Explanation for Industrial Applications
3.3.1. Challenges
3.3.2. Recommendations
- Data quality assurance: Ensuring high-quality datasets is essential to minimize the impact of challenges associated with local explanation techniques. Through data cleaning, normalization, and preprocessing, we can ensure that the datasets are of high quality and reduce the risk of producing unreliable and inaccurate explanations [149] (a preprocessing sketch follows this list).
- Model validation: Thorough model validation and testing should be carried out to ensure that the model is accurate and reliable [145,146]. Our study suggests involving end-users in the interpretability process, prioritizing the evaluation of the model’s interpretability according to the context of use, and providing clear explanations of the model’s limitations and assumptions to enhance transparency and trust in the model [150] (a validation sketch follows this list).
- Appropriate choice of explanation techniques: The efficacy of local explanation strategies varies with model type, dataset, and task, so choosing the technique best suited to the application is crucial. Researchers can also develop hybrid models that combine the strengths of black-box and white-box models to achieve both high performance and interpretability [148] (a selection sketch follows this list).
- Human-in-the-loop: A human-in-the-loop approach can improve the quality of local explanations and enhance trust in the model. By including human experts in the decision-making process, we can ensure that the local explanations are relevant and accurate for the intended use case [151] (a review-loop sketch follows this list).
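To make these recommendations concrete, the four sketches below illustrate one possible realization of each in Python; all column names, thresholds, and helpers are hypothetical rather than drawn from the surveyed studies. First, data quality assurance: cleaning and normalizing a dataset before any local explanation is computed.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

def prepare(df: pd.DataFrame, feature_cols: list[str]) -> pd.DataFrame:
    """Clean and normalize raw industrial data before explaining a model on it."""
    df = df.drop_duplicates()
    df = df.dropna(subset=feature_cols)                     # drop incomplete records
    df = df[(df[feature_cols] >= 0).all(axis=1)].copy()     # assumes non-negative sensor features
    df[feature_cols] = StandardScaler().fit_transform(df[feature_cols])
    return df
```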
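Second, model validation: a standard scikit-learn pattern of k-fold cross-validation plus a held-out test set, run before explanations of the model are trusted (synthetic stand-in data).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)  # stand-in data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0)
scores = cross_val_score(model, X_tr, y_tr, cv=5)   # 5-fold cross-validation
model.fit(X_tr, y_tr)
print(f"CV accuracy {scores.mean():.3f} +/- {scores.std():.3f}; "
      f"held-out accuracy {model.score(X_te, y_te):.3f}")
```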
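Third, the choice of explanation technique. A common heuristic, sketched here under the assumption that the `shap` and `lime` packages are available, is to use SHAP’s tree explainer for tree ensembles, where it is fast and exact, and to fall back to model-agnostic LIME otherwise.

```python
import shap
from lime.lime_tabular import LimeTabularExplainer

def explain_one(model, X_train, feature_names, x):
    """Pick a local explainer based on the model family (heuristic, not a rule)."""
    if hasattr(model, "estimators_"):    # crude test for fitted tree ensembles
        return shap.TreeExplainer(model).shap_values(x.reshape(1, -1))
    explainer = LimeTabularExplainer(
        X_train, feature_names=feature_names, mode="classification")
    return explainer.explain_instance(x, model.predict_proba, num_features=5)
```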
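Finally, human-in-the-loop review: every generated explanation passes through an expert’s accept/reject decision, and rejected cases feed back into model or data revision. The `verdict_fn` callback is a hypothetical stand-in for the expert interface.

```python
def review_explanations(explanations, verdict_fn):
    """Route each explanation through an expert's accept/reject decision."""
    accepted, flagged = [], []
    for exp in explanations:
        (accepted if verdict_fn(exp) else flagged).append(exp)
    return accepted, flagged   # flagged cases trigger model/data review
```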
4. Discussion of Gaps and Limitations in the Current Literature
4.1. Discussion of Survey Results and Gaps
4.2. Analysis of Potential Biases and Limitations of the Study
5. Conclusions and Future Directions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| AI | Artificial intelligence |
| XAI | Explainable artificial intelligence |
| ML | Machine learning |
| DL | Deep learning |
| DQN | Deep Q-network |
| VGG | Visual geometry group |
| SHAP | Shapley additive explanations |
| LIME | Local interpretable model-agnostic explanations |
| Grad-CAM | Gradient-weighted class activation mapping |
| RF | Random forest |
| DHS | District heating systems |
| XAI-FDD | Explainable artificial intelligence-fault detection and diagnosis |
| DNN | Deep neural network |
| DFNN | Dynamic fuzzy neural network |
| LightGBM | Light gradient-boosting machine |
| LSTM | Long short-term memory |
| DDPG | Deep deterministic policy gradient |
| A2C | Advantage actor–critic |
| LRP | Layer-wise relevance propagation |
| AF | Atrial fibrillation |
| ECG | Electrocardiogram |
| FOG | Freezing of gait |
| PD | Parkinson’s disease |
| EEG | Electroencephalogram |
| Deep CCXR | Deep COVID-19 CXR detection |
| CAM | Class activation mapping |
| RFC | Random forest classifier |
| CNN | Convolutional neural network |
| FDSs | Food delivery services |
| RAMs | Regression activation maps |
| SSPs | Small secreted peptides (plant) |
| AIDCC | Automatic and intelligent data collector and classifier |
| FCM | Fuzzy cognitive maps |
| OAK4XAI | Model towards out-of-box explainable artificial intelligence |
| AgriComO | Agriculture computer ontology |
References
- Alex, D.T.; Hao, Y.; Armin, H.A.; Arun, D.; Lide, D.; Paul, R. Patient Facial Emotion Recognition and Sentiment Analysis Using Secure Cloud With Hardware Acceleration. In Computational Intelligence for Multimedia Big Data on the Cloud with Engineering Applications; University of Texas at San Antonio: San Antonio, TX, USA, 2018; pp. 61–89.
- Lee, S.M.; Seo, J.B.; Yun, J.; Cho, Y.-H.; Vogel-Claussen, J.; Schiebler, M.L.; Gefter, W.B.; Van Beek, E.J.; Goo, J.M.; Lee, K.S.; et al. Deep Learning Applications in Chest Radiography and Computed Tomography. J. Thorac. Imaging 2019, 34, 75–85.
- Chen, R.; Yang, L.; Goodison, S.; Sun, Y. Deep-learning Approach to Identifying Cancer Subtypes Using High-dimensional Genomic Data. Bioinformatics 2020, 36, 1476–1483.
- Byanjankar, A.; Heikkila, M.; Mezei, J. Predicting Credit Risk in Peer-to-Peer Lending: A Neural Network Approach. In Proceedings of the 2015 IEEE Symposium Series on Computational Intelligence, Cape Town, South Africa, 7–10 December 2015; pp. 719–725.
- Chen, Y.-Q.; Zhang, J.; Ng, W.W.Y. Loan Default Prediction Using Diversified Sensitivity Undersampling. In Proceedings of the 2018 International Conference on Machine Learning and Cybernetics (ICMLC), Chengdu, China, 15–18 July 2018; pp. 240–245.
- Zhang, Z.; Neill, D.B. Identifying Significant Predictive Bias in Classifiers. arXiv 2016, arXiv:1611.08292. Available online: http://arxiv.org/abs/1611.08292 (accessed on 20 February 2023).
- Hester, N.; Gray, K. For Black Men, Being Tall Increases Threat Stereotyping and Police Stops. Proc. Natl. Acad. Sci. USA 2018, 115, 2711–2715.
- Parra, G.D.L.T.; Rad, P.; Choo, K.-K.R.; Beebe, N. Detecting Internet of Things Attacks Using Distributed Deep Learning. J. Netw. Comput. Appl. 2020, 163, 102662.
- Chacon, H.; Silva, S.; Rad, P. Deep Learning Poison Data Attack Detection. In Proceedings of the 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI), Portland, OR, USA, 4–6 November 2019; pp. 971–978.
- Dam, H.K.; Tran, T.; Ghose, A. Explainable Software Analytics. In Proceedings of the 40th International Conference on Software Engineering: New Ideas and Emerging Results (ICSE-NIER ’18), Gothenburg, Sweden, 27 May–3 June 2018; pp. 53–56.
- Scott, A.C.; Clancey, W.J.; Davis, R.; Shortliffe, E.H. Explanation Capabilities of Production-Based Consultation Systems; Technical Report; Stanford University: Stanford, CA, USA, 1977.
- Swartout, W.R. Explaining and Justifying Expert Consulting Programs. In Computer-Assisted Medical Decision Making; Computers and Medicine; Reggia, J.A., Tuhrim, S., Eds.; Springer: New York, NY, USA, 1985.
- Wachter, S.; Mittelstadt, B.; Russell, C. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR. Harv. J. Law Technol. 2018, 31, 842–861. Available online: https://ssrn.com/abstract=3063289 (accessed on 20 February 2023).
- Adadi, A.; Berrada, M. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access 2018, 6, 52138–52160.
- Omeiza, D.; Webb, H.; Jirotka, M.; Kunze, L. Explanations in Autonomous Driving: A Survey. IEEE Trans. Intell. Transp. Syst. 2022, 23, 10142–10162.
- Wang, S.; Atif Qureshi, M.; Miralles-Pechuán, L.; Reddy Gadekallu, T.; Liyanage, M. Explainable AI for B5G/6G: Technical Aspects, Use Cases, and Research Challenges. arXiv 2021, arXiv:2112.04698.
- Atakishiyev, S.; Salameh, M.; Yao, H.; Goebel, R. Explainable Artificial Intelligence for Autonomous Driving: A Comprehensive Overview and Field Guide for Future Research Directions. arXiv 2021, arXiv:2112.11561.
- Senevirathna, T.; Salazar, Z.; La, V.H.; Marchal, S.; Siniarski, B.; Liyanage, M.; Wang, S. A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions. arXiv 2022, arXiv:2204.12822.
- Sakai, T.; Nagai, T. Explainable Autonomous Robots: A Survey and Perspective. Adv. Robot. 2022, 36, 219–238.
- Emaminejad, N.; Akhavian, R. Trustworthy AI and Robotics: Implications for the AEC Industry. Autom. Constr. 2022, 139, 104298.
- Alimonda, N.; Guidotto, L.; Malandri, L.; Mercorio, F.; Mezzanzanica, M.; Tosi, G. A Survey on XAI for Cyber Physical Systems in Medicine. In Proceedings of the 2022 IEEE International Conference on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), Rome, Italy, 26–28 October 2022; pp. 265–270.
- Machlev, R.; Heistrene, L.; Perl, M.; Levy, K.Y.; Belikov, J.; Mannor, S.; Levron, Y. Explainable Artificial Intelligence (XAI) Techniques for Energy and Power Systems: Review, Challenges and Opportunities. Energy AI 2022, 9, 100169.
- Zhang, Z.; Al Hamadi, H.; Damiani, E.; Yeun, C.Y.; Taher, F. Explainable Artificial Intelligence Applications in Cyber Security: State-of-the-Art in Research. IEEE Access 2022, 10, 93104–93139.
- Capuano, N.; Fenza, G.; Loia, V.; Stanzione, C. Explainable Artificial Intelligence in CyberSecurity: A Survey. IEEE Access 2022, 10, 93575–93600.
- Sheu, R.-K.; Pardeshi, M.S. A Survey on Medical Explainable AI (XAI): Recent Progress, Explainability Approach, Human Interaction and Scoring System. Sensors 2022, 22, 8068.
- Owens, E.; Sheehan, B.; Mullins, M.; Cunneen, M.; Ressel, J.; Castignani, G. Explainable Artificial Intelligence (XAI) in Insurance. Risks 2022, 10, 230.
- Ahmed, I.; Jeon, G.; Piccialli, F. From Artificial Intelligence to Explainable Artificial Intelligence in Industry 4.0: A Survey on What, How, and Where. IEEE Trans. Ind. Inform. 2022, 18, 5031–5042.
- Di Martino, F.; Delmastro, F. Explainable AI for Clinical and Remote Health Applications: A Survey on Tabular and Time Series Data. Artif. Intell. Rev. 2022, 56, 5261–5315.
- Weber, P.; Carl, K.V.; Hinz, O. Applications of Explainable Artificial Intelligence in Finance—A Systematic Review of Finance, Information Systems, and Computer Science Literature. Manag. Rev. Q. 2023, 1–41.
- Chaddad, A.; Peng, J.; Xu, J.; Bouridane, A. Survey of Explainable AI Techniques in Healthcare. Sensors 2023, 23, 634.
- Nazir, S.; Dickson, D.M.; Akram, M.U. Survey of Explainable Artificial Intelligence Techniques for Biomedical Imaging with Deep Neural Networks. Comput. Biol. Med. 2023, 156, 106668.
- Das, A.; Rad, P. Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey. arXiv 2020, arXiv:2006.11371.
- Linardatos, P.; Papastefanopoulos, V.; Kotsiantis, S. Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy 2021, 23, 18.
- Islam, M.R.; Ahmed, M.U.; Barua, S.; Begum, S. A Systematic Review of Explainable Artificial Intelligence in Terms of Different Application Domains and Tasks. Appl. Sci. 2022, 12, 1353.
- Kok, I.; Okay, F.Y.; Muyanli, O.; Ozdemir, S. Explainable Artificial Intelligence (XAI) for Internet of Things: A Survey. arXiv 2022, arXiv:2206.04800.
- Molnar, C. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Chapter 6. Available online: https://christophm.github.io/interpretable-ml-book (accessed on 23 February 2023).
- Zhang, K.; Xu, P.; Zhang, J. Explainable AI in Deep Reinforcement Learning Models: A SHAP Method Applied in Power System Emergency Control. In Proceedings of the 2020 IEEE 4th Conference on Energy Internet and Energy System Integration (EI2), Wuhan, China, 30 October–1 November 2020; pp. 711–716.
- Renda, A.; Ducange, P.; Marcelloni, F.; Sabella, D.; Filippou, M.C.; Nardini, G.; Stea, G.; Virdis, A.; Micheli, D.; Rapone, D.; et al. Federated Learning of Explainable AI Models in 6G Systems: Towards Secure and Automated Vehicle Networking. Information 2022, 13, 395.
- Sequeira, P.; Gervasio, M. Interestingness Elements for Explainable Reinforcement Learning: Understanding Agents’ Capabilities and Limitations. arXiv 2019, arXiv:1912.09007.
- He, L.; Aouf, N.; Song, B. Explainable Deep Reinforcement Learning for UAV Autonomous Path Planning. Aerosp. Sci. Technol. 2021, 118, 107052.
- Zhang, Z.; Tian, R.; Sherony, R.; Domeyer, J.; Ding, Z. Attention-Based Interrelation Modeling for Explainable Automated Driving. IEEE Trans. Intell. Veh. 2022.
- Cui, Z.; Li, M.; Huang, Y.; Wang, Y.; Chen, H. An Interpretation Framework for Autonomous Vehicles Decision-making via SHAP and RF. In Proceedings of the 2022 6th CAA International Conference on Vehicular Control and Intelligence (CVCI), Nanjing, China, 28–30 October 2022; pp. 1–7.
- Nahata, R.; Omeiza, D.; Howard, R.; Kunze, L. Assessing and Explaining Collision Risk in Dynamic Environments for Autonomous Driving Safety. In Proceedings of the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Indianapolis, IN, USA, 19–22 September 2021; pp. 223–230.
- Kim, M.; Jun, J.-A.; Song, Y.; Pyo, C.S. Explanation for Building Energy Prediction. In Proceedings of the 2020 International Conference on Information and Communication Technology Convergence, Jeju, Republic of Korea, 21–23 October 2020; pp. 1168–1170.
- Arjunan, P.; Poolla, K.; Miller, C. EnergyStar++: Towards More Accurate and Explanatory Building Energy Benchmarking. Appl. Energy 2020, 276, 115413.
- Movahedi, A.; Derrible, S. Interrelated Patterns of Electricity, Gas, and Water Consumption in Large-scale Buildings. engrXiv 2020, 1–22.
- Kuzlu, M.; Cali, U.; Sharma, V.; Güler, Ö. Gaining Insight Into Solar Photovoltaic Power Generation Forecasting Utilizing Explainable Artificial Intelligence Tools. IEEE Access 2020, 8, 187814–187823.
- Chakraborty, D.; Alam, A.; Chaudhuri, S.; Başağaoğlu, H.; Sulbaran, T.; Langar, S. Scenario-based Prediction of Climate Change Impacts on Building Cooling Energy Consumption with Explainable Artificial Intelligence. Appl. Energy 2021, 291, 116807.
- Golizadeh, A.Y.; Aslansefat, K.; Zhao, X.; Sadati, S.; Badiei, A.; Xiao, X.; Shittu, S.; Fan, Y.; Ma, X. Hourly Performance Forecast of a Dew Point Cooler Using Explainable Artificial Intelligence and Evolutionary Optimisations by 2050. Appl. Energy 2021, 281, 116062.
- Lu, Y.; Murzakhanov, I.; Chatzivasileiadis, S. Neural Network Interpretability for Forecasting of Aggregated Renewable Generation. In Proceedings of the 2021 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm), Aachen, Germany, 25–28 October 2021; pp. 282–288.
- Gao, Y.; Ruan, Y. Interpretable Deep Learning Model for Building Energy Consumption Prediction Based on Attention Mechanism. Energy Build. 2021, 252, 111379.
- Zdravković, M.; Ćirić, I.; Ignjatović, M. Towards Explainable AI-assisted Operations in District Heating Systems. IFAC-PapersOnLine 2021, 54, 390–395.
- Moraliyage, H.; Dahanayake, S.; De Silva, D.; Mills, N.; Rathnayaka, P.; Nguyen, S.; Alahakoon, D.; Jennings, A. A Robust Artificial Intelligence Approach with Explainability for Measurement and Verification of Energy Efficient Infrastructure for Net Zero Carbon Emissions. Sensors 2022, 22, 9503.
- Arjunan, P.; Poolla, K.; Miller, C. BEEM: Data-driven Building Energy Benchmarking for Singapore. Energy Build. 2022, 260, 111869.
- Geyer, P.; Singh, M.M.; Chen, X. Explainable AI for Engineering Design: A Unified Approach of Systems Engineering and Component-based Deep Learning. arXiv 2022, arXiv:2108.13836.
- Grzeszczyk, T.A.; Grzeszczyk, M.K. Justifying Short-term Load Forecasts Obtained with the Use of Neural Models. Energies 2022, 15, 1852.
- Li, M.; Wang, Y. Power Load Forecasting and Interpretable Models Based on GS_XGBoost and SHAP. J. Phys. Conf. Ser. 2022, 2195, 012028.
- Moon, J.; Park, S.; Rho, S.; Hwang, E. Interpretable Short-term Electrical Load Forecasting Scheme Using Cubist. Comput. Intell. Neurosci. 2022, 2022, 1–20.
- Wenninger, S.; Kaymakci, C.; Wiethe, C. Explainable Long-term Building Energy Consumption Prediction Using QLattice. Appl. Energy 2022, 308, 118300.
- Zdravković, M.; Ćirić, I.; Ignjatović, M. Explainable Heat Demand Forecasting for the Novel Control Strategies of District Heating Systems. Annu. Rev. Control 2022, 53, 405–413.
- Srinivasan, S.; Arjunan, P.; Jin, B.; Sangiovanni-Vincentelli, A.L.; Sultan, Z.; Poolla, K. Explainable AI for Chiller Fault-detection Systems: Gaining Human Trust. Computer 2021, 54, 60–68.
- Wastensteiner, J.; Weiss, T.M.; Haag, F.; Hopf, K. Explainable AI for Tailored Electricity Consumption Feedback: An Experimental Evaluation of Visualizations. arXiv 2022, arXiv:2208.11408.
- Sim, T.; Choi, S.; Kim, Y.; Youn, S.H.; Jang, D.-J.; Lee, S.; Chun, C.-J. eXplainable AI (XAI)-Based Input Variable Selection Methodology for Forecasting Energy Consumption. Electronics 2022, 11, 2947.
- Graham, G.; Csicsery, N.; Stasiowski, E.; Thouvenin, G.; Mather, W.H.; Ferry, M.; Cookson, S.; Hasty, J. Genome-scale Transcriptional Dynamics and Environmental Biosensing. Proc. Natl. Acad. Sci. USA 2020, 117, 3301–3306.
- Gao, S.; Wang, Y. Explainable Deep Learning Powered Building Risk Assessment Model for Proactive Hurricane Response. Risk Anal. 2022, 1–13.
- Ryo, M.; Angelov, B.; Mammola, S.; Kass, J.M.; Benito, B.M.; Hartig, F. Explainable Artificial Intelligence Enhances the Ecological Interpretability of Black-box Species Distribution Models. Ecography 2020, 44, 199–205.
- Dikshit, A.; Pradhan, B. Interpretable and Explainable AI (XAI) Model for Spatial Drought Prediction. Sci. Total Environ. 2021, 801, 149797.
- Kim, M.; Kim, D.; Jin, D.; Kim, G. Application of Explainable Artificial Intelligence (XAI) in Urban Growth Modeling: A Case Study of Seoul Metropolitan Area, Korea. Land 2023, 12, 420.
- Gramegna, A.; Giudici, P. Why to Buy Insurance? An Explainable Artificial Intelligence Approach. Risks 2020, 8, 137.
- Benhamou, E.; Ohana, J.-J.; Saltiel, D.; Guez, B.; Ohana, S. Explainable AI (XAI) Models Applied to Planning in Financial Markets. Université Paris-Dauphine Research Paper No. 3862437, 2021. Available online: https://ssrn.com/abstract=3862437 (accessed on 2 February 2023).
- Gite, S.; Khatavkar, H.; Kotecha, K.; Srivastava, S.; Maheshwari, P.; Pandey, N. Explainable Stock Prices Prediction from Financial News Articles Using Sentiment Analysis. PeerJ Comput. Sci. 2021, 7, e340.
- Babaei, G.; Giudici, P. Which SME is Worth an Investment? An Explainable Machine Learning Approach. 2021. Available online: http://dx.doi.org/10.2139/ssrn.3810618 (accessed on 2 February 2023).
- de Lange, P.E.; Melsom, B.; Vennerod, C.B.; Westgaard, S. Explainable AI for Credit Assessment in Banks. J. Risk Financ. Manag. 2022, 15, 556.
- Bussmann, N.; Giudici, P.; Marinelli, D.; Papenbrock, J. Explainable AI in Fintech Risk Management. Front. Artif. Intell. 2020, 3, 26.
- Kumar, S.; Vishal, M.; Ravi, V. Explainable Reinforcement Learning on Financial Stock Trading Using SHAP. arXiv 2022, arXiv:2208.08790.
- Pawar, U.; O’Shea, D.; Rea, S.; O’Reilly, R. Incorporating Explainable Artificial Intelligence (XAI) to Aid the Understanding of Machine Learning in the Healthcare Domain. In Proceedings of the 28th Irish Conference on Artificial Intelligence and Cognitive Science, Technological University Dublin, Dublin, Ireland, 7–8 December 2020; Volume 2771, pp. 169–180.
- Dissanayake, T.; Fernando, T.; Denman, S.; Sridharan, S.; Ghaemmaghami, H.; Fookes, C. A Robust Interpretable Deep Learning Classifier for Heart Anomaly Detection without Segmentation. IEEE J. Biomed. Health Inform. 2021, 25, 2162–2171.
- Panigutti, C.; Perotti, A.; Pedreschi, D. Doctor XAI: An Ontology-based Approach to Black-box Sequential Data Classification Explanations. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* ’20), Barcelona, Spain, 27–30 January 2020; Association for Computing Machinery: New York, NY, USA; pp. 629–639.
- Naik, H.; Goradia, P.; Desai, V.; Desai, Y.; Iyyanki, M. Explainable Artificial Intelligence (XAI) for Population Health Management—An Appraisal. Eur. J. Electr. Eng. Comput. Sci. 2021, 5, 64–76.
- Beebe-Wang, N.; Okeson, A.; Althoff, T.; Lee, S.-I. Efficient and Explainable Risk Assessments for Imminent Dementia in an Aging Cohort Study. IEEE J. Biomed. Health Inform. 2021, 25, 2409–2420.
- Kim, S.-H.; Jeon, E.-T.; Yu, S.; Oh, K.; Kim, C.K.; Song, T.-J.; Kim, Y.-J.; Heo, S.H.; Park, K.-Y.; Kim, J.-M.; et al. Interpretable Machine Learning for Early Neurological Deterioration Prediction in Atrial Fibrillation-related Stroke. Sci. Rep. 2021, 11, 20610.
- Rashed-Al-Mahfuz, M.; Haque, A.; Azad, A.; Alyami, S.A.; Quinn, J.M.; Moni, M.A. Clinically Applicable Machine Learning Approaches to Identify Attributes of Chronic Kidney Disease (CKD) for Use in Low-Cost Diagnostic Screening. IEEE J. Transl. Eng. Health Med. 2021, 9, 4900511.
- Zhang, Y.; Yang, D.; Liu, Z.; Chen, C.; Ge, M.; Li, X.; Luo, T.; Wu, Z.; Shi, C.; Wang, B.; et al. An Explainable Supervised Machine Learning Predictor of Acute Kidney Injury After Adult Deceased Donor Liver Transplantation. J. Transl. Med. 2021, 19, 1–15.
- Mousavi, S.; Afghah, F.; Acharya, U.R. HAN-ECG: An Interpretable Atrial Fibrillation Detection Model Using Hierarchical Attention Networks. Comput. Biol. Med. 2020, 127, 104057.
- Ivaturi, P.; Gadaleta, M.; Pandey, A.C.; Pazzani, M.; Steinhubl, S.R.; Quer, G. A Comprehensive Explanation Framework for Biomedical Time Series Classification. IEEE J. Biomed. Health Inform. 2021, 25, 2398–2408.
- Shashikumar, S.P.; Josef, C.S.; Sharma, A.; Nemati, S. DeepAISE: An Interpretable and Recurrent Neural Survival Model for Early Prediction of Sepsis. Artif. Intell. Med. 2021, 113, 102036.
- Filtjens, B.; Ginis, P.; Nieuwboer, A.; Afzal, M.R.; Spildooren, J.; Vanrumste, B.; Slaets, P. Modelling and Identification of Characteristic Kinematic Features Preceding Freezing of Gait with Convolutional Neural Networks and Layer-wise Relevance Propagation. BMC Med. Inform. Decis. Mak. 2021, 21, 341.
- Dutt, M.; Redhu, S.; Goodwin, M.; Omlin, C.W. SleepXAI: An Explainable Deep Learning Approach for Multi-class Sleep Stage Identification. Appl. Intell. 2022, 1–14.
- Brunese, L.; Mercaldo, F.; Reginelli, A.; Santone, A. Explainable Deep Learning for Pulmonary Disease and Coronavirus COVID-19 Detection from X-rays. Comput. Methods Programs Biomed. 2020, 196, 105608.
- Yang, G.; Ye, Q.; Xia, J. Unbox the Black-box for the Medical Explainable AI via Multi-modal and Multi-centre Data Fusion: A Minireview, Two Showcases and Beyond. Inf. Fusion 2022, 77, 29–52.
- Singh, A.; Balaji, J.J.; Rasheed, M.A.; Jayakumar, V.; Raman, R.; Lakshminarayanan, V. Evaluation of Explainable Deep Learning Methods for Ophthalmic Diagnosis. Clin. Ophthalmol. 2021, 15, 2573–2581.
- Xu, F.; Jiang, L.; He, W.; Huang, G.; Hong, Y.; Tang, F.; Lv, J.; Lin, Y.; Qin, Y.; Lan, R.; et al. The Clinical Value of Explainable Deep Learning for Diagnosing Fungal Keratitis Using in Vivo Confocal Microscopy Images. Front. Med. 2021, 8, 797616.
- Chetoui, M.; Akhloufi, M.A.; Yousefi, B.; Bouattane, E.M. Explainable COVID-19 Detection on Chest X-rays Using an End-to-end Deep Convolutional Neural Network Architecture. Big Data Cogn. Comput. 2021, 5, 73.
- Barata, C.; Celebi, M.E.; Marques, J.S. Explainable Skin Lesion Diagnosis Using Taxonomies. Pattern Recognit. 2021, 110, 107413.
- Singh, R.K.; Pandey, R.; Babu, R.N. COVIDScreen: Explainable Deep Learning Framework for Differential Diagnosis of COVID-19 Using Chest X-rays. Neural Comput. Appl. 2021, 33, 8871–8892.
- Shi, W.; Tong, L.; Zhu, Y.; Wang, M.D. COVID-19 Automatic Diagnosis with Radiographic Imaging: Explainable Attention Transfer Deep Neural Networks. IEEE J. Biomed. Health Inform. 2021, 25, 2376–2387.
- Figueroa, K.C.; Song, B.; Sunny, S.; Li, S.; Gurushanth, K.; Mendonca, P.; Mukhia, N.; Patrick, S.; Gurudath, S.; Raghavan, S.; et al. Interpretable Deep Learning Approach for Oral Cancer Classification Using Guided Attention Inference Network. J. Biomed. Opt. 2022, 27, 015001.
- Malhotra, A.; Mittal, S.; Majumdar, P.; Chhabra, S.; Thakral, K.; Vatsa, M.; Singh, R.; Chaudhury, S.; Pudrod, A.; Agrawal, A.; et al. Multi-task Driven Explainable Diagnosis of COVID-19 Using Chest X-ray Images. Pattern Recognit. 2022, 122, 108243.
- Civit-Masot, J.; Bañuls-Beaterio, A.; Domínguez-Morales, M.; Rivas-Pérez, M.; Muñoz-Saavedra, L.; Corral, J.M.R. Non-small Cell Lung Cancer Diagnosis Aid with Histopathological Images Using Explainable Deep Learning Techniques. Comput. Methods Programs Biomed. 2022, 226, 107108.
- Kim, D.; Chung, J.; Choi, J.; Succi, M.D.; Conklin, J.; Longo, M.G.F.; Ackman, J.B.; Little, B.P.; Petranovic, M.; Kalra, M.K.; et al. Accurate Auto-labeling of Chest X-ray Images Based on Quantitative Similarity to an Explainable AI Model. Nat. Commun. 2022, 13, 1867.
- Aldhahi, W.; Sull, S. Uncertain-CAM: Uncertainty-Based Ensemble Machine Voting for Improved COVID-19 CXR Classification and Explainability. Diagnostics 2023, 13, 441.
- Mercaldo, F.; Belfiore, M.P.; Reginelli, A.; Brunese, L.; Santone, A. Coronavirus COVID-19 Detection by Means of Explainable Deep Learning. Sci. Rep. 2023, 13, 462.
- Oztekin, F.; Katar, O.; Sadak, F.; Yildirim, M.; Cakar, H.; Aydogan, M.; Ozpolat, Z.; Talo Yildirim, T.; Yildirim, O.; Faust, O.; et al. An Explainable Deep Learning Model to Prediction Dental Caries Using Panoramic Radiograph Images. Diagnostics 2023, 13, 226.
- Naz, Z.; Khan, M.U.G.; Saba, T.; Rehman, A.; Nobanee, H.; Bahaj, S.A. An Explainable AI-Enabled Framework for Interpreting Pulmonary Diseases from Chest Radiographs. Cancers 2023, 15, 314.
- Mukhtorov, D.; Rakhmonova, M.; Muksimova, S.; Cho, Y.-I. Endoscopic Image Classification Based on Explainable Deep Learning. Sensors 2023, 23, 3176.
- Grezmak, J.; Zhang, J.; Wang, P.; Loparo, K.A.; Gao, R.X. Interpretable Convolutional Neural Network Through Layer-wise Relevance Propagation for Machine Fault Diagnosis. IEEE Sens. J. 2020, 20, 3172–3181.
- Serradilla, O.; Zugasti, E.; Cernuda, C.; Aranburu, A.; de Okariz, J.R.; Zurutuza, U. Interpreting Remaining Useful Life Estimations Combining Explainable Artificial Intelligence and Domain Knowledge in Industrial Machinery. In Proceedings of the 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Glasgow, UK, 19–24 July 2020; pp. 1–8.
- Oh, C.; Jeong, J. VODCA: Verification of Diagnosis Using CAM-Based Approach for Explainable Process Monitoring. Sensors 2020, 20, 6858.
- Abid, F.B.; Sallem, M.; Braham, A. Robust Interpretable Deep Learning for Intelligent Fault Diagnosis of Induction Motors. IEEE Trans. Instrum. Meas. 2020, 69, 3506–3515.
- Hong, C.W.; Lee, C.; Lee, K.; Ko, M.-S.; Kim, D.E.; Hur, K. Remaining Useful Life Prognosis for Turbofan Engine Using Explainable Deep Neural Networks with Dimensionality Reduction. Sensors 2020, 20, 6626.
- Kim, M.S.; Yun, J.P.; Park, P. An Explainable Convolutional Neural Network for Fault Diagnosis in Linear Motion Guide. IEEE Trans. Ind. Inform. 2021, 17, 4036–4045.
- Darian, M.O.; Gilbert-Rainer, G. Stable and Explainable Deep Learning Damage Prediction for Prismatic Cantilever Steel Beam. Comput. Ind. 2021, 125, 103359.
- Liu, C.; Qin, C.; Shi, X.; Wang, Z.; Zhang, G.; Han, Y. TScatNet: An Interpretable Cross-Domain Intelligent Diagnosis Model with Antinoise and Few-Shot Learning Capability. IEEE Trans. Instrum. Meas. 2021, 70, 3506110.
- Brito, L.C.; Susto, G.A.; Brito, J.N.; Duarte, M.A. An Explainable Artificial Intelligence Approach for Unsupervised Fault Detection and Diagnosis in Rotating Machinery. Mech. Syst. Signal Process. 2022, 163, 108105.
- Li, T.; Zhao, Z.; Sun, C.; Cheng, L.; Chen, X.; Yan, R.; Gao, R.X. WaveletKernelNet: An Interpretable Deep Neural Network for Industrial Intelligent Diagnosis. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 2302–2312.
- Brusa, E.; Cibrario, L.; Delprete, C.; Di Maggio, L.G. Explainable AI for Machine Fault Diagnosis: Understanding Features’ Contribution in Machine Learning Models for Industrial Condition Monitoring. Appl. Sci. 2023, 13, 2038.
- Chen, H.-Y.; Lee, C.-H. Vibration Signals Analysis by Explainable Artificial Intelligence (XAI) Approach: Application on Bearing Faults Diagnosis. IEEE Access 2020, 8, 134246–134256.
- Sun, K.H.; Huh, H.; Tama, B.A.; Lee, S.Y.; Jung, J.H.; Lee, S. Vision-Based Fault Diagnostics Using Explainable Deep Learning With Class Activation Maps. IEEE Access 2020, 8, 129169–129179.
- Wang, M.; Zheng, K.; Yang, Y.; Wang, X. An Explainable Machine Learning Framework for Intrusion Detection Systems. IEEE Access 2020, 8, 73127–73141.
- Alenezi, R.; Ludwig, S.A. Explainability of Cybersecurity Threats Data Using SHAP. In Proceedings of the 2021 IEEE Symposium Series on Computational Intelligence (SSCI), Orlando, FL, USA, 5–7 December 2021; pp. 1–10.
- Roshan, K.; Zafar, A. Utilizing XAI Technique to Improve Autoencoder Based Model for Computer Network Anomaly Detection with Shapley Additive Explanation (SHAP). arXiv 2021, arXiv:2112.08442.
- Karn, R.R.; Kudva, P.; Huang, H.; Suneja, S.; Elfadel, I.M. Cryptomining Detection in Container Clouds Using System Calls and Explainable Machine Learning. IEEE Trans. Parallel Distrib. Syst. 2021, 32, 674–691.
- Le, T.-T.-H.; Kim, H.; Kang, H.; Kim, H. Classification and Explanation for Intrusion Detection System Based on Ensemble Trees and SHAP Method. Sensors 2022, 22, 1154.
- El Houda, Z.A.; Brik, B.; Senouci, S.-M. A Novel IoT-Based Explainable Deep Learning Framework for Intrusion Detection Systems. IEEE Internet Things Mag. 2022, 5, 20–23.
- Oseni, A.; Moustafa, N.; Creech, G.; Sohrabi, N.; Strelzoff, A.; Tari, Z.; Linkov, I. An Explainable Deep Learning Framework for Resilient Intrusion Detection in IoT-Enabled Transportation Networks. IEEE Trans. Intell. Transp. Syst. 2023, 24, 1000–1014.
- Zolanvari, M.; Yang, Z.; Khan, K.; Jain, R.; Meskin, N. TRUST XAI: Model-Agnostic Explanations for AI With a Case Study on IIoT Security. IEEE Internet Things J. 2023, 10, 2967–2978.
- Viana, C.M.; Santos, M.; Freire, D.; Abrantes, P.; Rocha, J. Evaluation of the Factors Explaining the Use of Agricultural Land: A Machine Learning and Model-Agnostic Approach. Ecol. Indic. 2021, 131, 108200.
- Ryo, M. Explainable Artificial Intelligence and Interpretable Machine Learning for Agricultural Data Analysis. Artif. Intell. Agric. 2022, 6, 257–265.
- Adak, A.; Pradhan, B.; Shukla, N.; Alamri, A. Unboxing Deep Learning Model of Food Delivery Service Reviews Using Explainable Artificial Intelligence (XAI) Technique. Foods 2022, 11, 2019.
- Cartolano, A.; Cuzzocrea, A.; Pilato, G.; Grasso, G.M. Explainable AI at Work! What Can It Do for Smart Agriculture? In Proceedings of the 2022 IEEE Eighth International Conference on Multimedia Big Data (BigMM), Naples, Italy, 5–7 December 2022; pp. 87–93.
- Wolanin, A.; Mateo-García, G.; Camps-Valls, G.; Gómez-Chova, L.; Meroni, M.; Duveiller, G.; Guanter, L. Estimating and Understanding Crop Yields with Explainable Deep Learning in the Indian Wheat Belt. Environ. Res. Lett. 2020, 15, 024019.
- Kawakura, S.; Hirafuji, M.; Ninomiya, S.; Shibasaki, R. Analyses of Diverse Agricultural Worker Data with Explainable Artificial Intelligence: XAI Based on SHAP, LIME, and LightGBM. Eur. J. Agric. Food Sci. 2022, 4, 11–19.
- Li, Z.; Jin, J.; Wang, Y.; Long, W.; Ding, Y.; Hu, H.; Wei, L. ExamPle: Explainable Deep Learning Framework for the Prediction of Plant Small Secreted Peptides. Bioinformatics 2023, 39, btad108.
- Kundu, N.; Rani, G.; Dhaka, V.S.; Gupta, K.; Nayak, S.C.; Verma, S.; Ijaz, M.F.; Woźniak, M. IoT and Interpretable Machine Learning Based Framework for Disease Prediction in Pearl Millet. Sensors 2021, 21, 5386.
- Kawakura, S.; Hirafuji, M.; Ninomiya, S.; Shibasaki, R. Visual Analysis of Agricultural Workers Using Explainable Artificial Intelligence (XAI) on Class Activation Map (CAM) with Characteristic Point Data Output from OpenCV-Based Analysis. Eur. J. Artif. Intell. Mach. Learn. 2022, 2, 1–8.
- Apostolopoulos, I.D.; Athanasoula, I.; Tzani, M.; Groumpos, P.P. An Explainable Deep Learning Framework for Detecting and Localising Smoke and Fire Incidents: Evaluation of Grad-CAM++ and LIME. Mach. Learn. Knowl. Extr. 2022, 4, 1124–1135.
- Ngo, Q.H.; Kechadi, T.; Le-Khac, N.A. OAK4XAI: Model Towards Out-of-Box eXplainable Artificial Intelligence for Digital Agriculture. In Artificial Intelligence XXXIX: 42nd SGAI International Conference on Artificial Intelligence, AI 2022, Cambridge, UK, 13–15 December 2022; Springer International Publishing: Cham, Switzerland, 2022; pp. 238–251.
- Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’16), San Francisco, CA, USA, 13–17 August 2016; ACM Press: New York, NY, USA, 2016; pp. 1135–1144.
- Lundberg, S.M.; Lee, S.I. A Unified Approach to Interpreting Model Predictions. In Advances in Neural Information Processing Systems; Curran Associates Inc.: Red Hook, NY, USA, 2017; pp. 4765–4774.
- Ribeiro, M.T.; Singh, S.; Guestrin, C. Anchors: High-Precision Model-Agnostic Explanations. In Proceedings of the AAAI Conference on Artificial Intelligence; AAAI Press: Palo Alto, CA, USA, 2018; Volume 32, ISBN 978-1-5108-6096-4.
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626.
- Arras, L.; Horn, F.; Montavon, G.; Müller, K.R.; Samek, W. “What is Relevant in a Text Document?”: An Interpretable Machine Learning Approach. PLoS ONE 2017, 12, e0181142.
- Arrieta, A.B.; Díaz-Rodríguez, N.; Ser, J.D.; Bennetot, A.; Tabik, S.; Barbado, A.; Garcia, S.; Gil-Lopez, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges Toward Responsible AI. Inf. Fusion 2020, 58, 82–115.
- Gunning, D. Broad Agency Announcement: Explainable Artificial Intelligence (XAI); Technical Report; Defense Advanced Research Projects Agency, Information Innovation Office: Arlington, VA, USA, 2016.
- Gunning, D.; Aha, D. DARPA’s Explainable Artificial Intelligence (XAI) Program. AI Mag. 2019, 40, 44–58.
- Gunning, D.; Vorm, E.; Wang, Y.; Turek, M. DARPA’s Explainable AI (XAI) Program: A Retrospective. Authorea 2021, 2, e61.
- Schoonderwoerd, T.A.; Jorritsma, W.; Neerincx, M.A.; Van Den Bosch, K. Human-centered XAI: Developing Design Patterns for Explanations of Clinical Decision Support Systems. Int. J. Hum.-Comput. Stud. 2021, 154, 102684.
- Rudin, C. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nat. Mach. Intell. 2019, 1, 206–215.
- Burkart, N.; Huber, M.F. A Survey on the Explainability of Supervised Machine Learning. J. Artif. Intell. Res. 2021, 70, 245–317.
- Doshi-Velez, F.; Kim, B. Towards A Rigorous Science of Interpretable Machine Learning. arXiv 2017, arXiv:1702.08608.
- Koh, P.W.; Liang, P. Understanding Black-box Predictions via Influence Functions. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; Volume 70, pp. 1885–1894.
- Goyal, A.; He, K.; Bengio, Y. Understanding and Improving Deep Learning Techniques for Image Recognition. arXiv 2021, arXiv:2104.08821.
- Holzinger, A.; Kieseberg, P.; Weippl, E.; Tjoa, A.M. Current Advances, Trends and Challenges of Machine Learning and Knowledge Extraction: From Machine Learning to Explainable AI. In Machine Learning and Knowledge Extraction (CD-MAKE); Holzinger, A., Kieseberg, P., Tjoa, A., Weippl, E., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; Volume 11015.
- Hooker, G.; Erhan, D.; Kindermans, P.J. A Benchmark for Interpretability Methods in Deep Neural Networks. In Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, BC, Canada, 8–14 December 2019.
- Wachter, S.; Mittelstadt, B.; Floridi, L. Why a Right to Explanation of Automated Decision-making Does not Exist in the General Data Protection Regulation. Int. Data Priv. Law 2018, 7, 76–99.
Data types: T = time series, I = image, Ta = tabular, Te = text. Industrial sectors: ASR = autonomous systems and robotics, EBM = energy and building management, E = environment, F = finance, H = healthcare, IE = industrial engineering, C = cybersecurity, SA = smart agriculture.

| Survey Paper | Year | T | I | Ta | Te | ASR | EBM | E | F | H | IE | C | SA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| [14] | 2020 | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✔ | ✘ | ✘ | ✘ |
| [15] | 2021 | ✘ | ✘ | ✘ | ✘ | ✔ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
| [16] | 2021 | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✔ | ✘ |
| [17] | 2021 | ✘ | ✘ | ✘ | ✔ | ✔ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
| [18] | 2022 | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✔ | ✘ |
| [19] | 2022 | ✘ | ✘ | ✘ | ✘ | ✔ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
| [20] | 2022 | ✘ | ✘ | ✘ | ✘ | ✔ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
| [21] | 2022 | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✔ | ✘ | ✘ | ✘ |
| [22] | 2022 | ✘ | ✘ | ✘ | ✘ | ✘ | ✔ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
| [23] | 2022 | ✘ | ✘ | ✘ | ✔ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✔ | ✘ |
| [24] | 2022 | ✘ | ✔ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✔ | ✘ |
| [25] | 2022 | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✔ | ✘ | ✘ | ✘ |
| [26] | 2022 | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✔ | ✘ | ✘ | ✘ | ✘ |
| [27] | 2022 | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✔ | ✘ | ✘ |
| [28] | 2022 | ✔ | ✘ | ✔ | ✘ | ✘ | ✘ | ✘ | ✘ | ✔ | ✘ | ✘ | ✘ |
| [29] | 2023 | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✔ | ✘ | ✘ | ✘ | ✘ |
| [30] | 2023 | ✘ | ✔ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✔ | ✘ | ✘ | ✘ |
| [31] | 2023 | ✘ | ✔ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✔ | ✘ | ✘ | ✘ |
| Ours | 2023 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Industrial Sector | Data Type | Ref. | Year | AI Model | Dataset | XAI Method | Usage | Purpose |
|---|---|---|---|---|---|---|---|---|
| Autonomous systems and robotics | Time series | [37] | 2020 | Deep Q-Networks | Voltage load | SHAP | A | P |
| | | [38] | 2022 | FL | QoE forecasting | FED-XAI | S | An |
| | Image | [39] | 2020 | Q-learning | Agent data | Interaction data | A | P |
| | | [40] | 2021 | DRL | Depth image, UAV states | SHAP-CAM | A | P |
| | | [41] | 2022 | Faster R-CNN, ResNet-50 | BDD-OID, PSI | t-SNE | S | An |
| | Tabular | [42] | 2022 | RF | Processed dataset (D*) | SHAP | A | P |
| | Text | [43] | 2021 | DT, RF | Lyft Level 5 | SHAP | A | P |
| Energy and building management | Time series | [44] | 2020 | Seq2seq | EnergyPlus | AM | A | P |
| | | [45] | 2020 | MLRi, GBT | CBECS | SHAP | A | P |
| | | [46] | 2020 | XGBoost | PLUTO | SHAP | A | P |
| | | [47] | 2020 | RF | GEFCOM | SHAP, LIME, ELI5 | A | P |
| | | [48] | 2021 | XGBoost | Historical climate | SHAP | A | P |
| | | [49] | 2021 | DNN | IPCC’s SRES A2 climate | SHAP | A | P |
| | | [50] | 2021 | DNN | Energy data | IG, DeepLIFT | A | P |
| | | [51] | 2021 | LSTM | Energy consumption | AM | S | An |
| | | [52] | 2021 | LSTM | Heat demand by FMEDH | LIME | A | P |
| | | [53] | 2022 | XGBoost | UNICON dataset | SHAP | A | P |
| | | [54] | 2022 | CatBoost | Energy disclosure | LIME | A | P |
| | | [55] | 2022 | DNN | Generated synthetic | LIME | A | P |
| | | [56] | 2022 | LSTM | Electricity load | LIME | A | P |
| | | [57] | 2022 | GS-XGBoost | Electricity consumption | SHAP | A | P |
| | | [58] | 2022 | Cubist regression | Electricity (two buildings) | FI | S | An |
| | | [59] | 2022 | QLattice | Residential building | FI | S | An |
| | | [60] | 2022 | Bi-LSTM, CNN-LSTM | SCADA | LIME | A | P |
| | Tabular | [61] | 2021 | XGBoost | Singapore’s building | LIME | A | P |
| | | [62] | 2022 | RF, CNN, IT | CER | LIME, SHAP | A | P |
| | | [63] | 2021 | XGBoost, SVR, LightGBM, LSTM | Energy data in Seoul | SHAP | A | P |
| Environment | Time series | [64] | 2020 | XGBoost, LSTM-RNN | Escherichia coli | SHAP | A | P |
| | | [65] | 2022 | DFNN | Building-level damage | LIME | A | P |
| | Image | [66] | 2020 | RF | Occurrence data | LIME | A | P |
| | | [67] | 2021 | LSTM, CNN-BiLSTM | Monthly rainfall data | SHAP | A | P |
| | | [68] | 2023 | XGBoost | Land-cover, topographic | LIME | A | P |
| Finance | Tabular | [69] | 2020 | XGBoost | Sports and travel | SHAP | A | P |
| | | [70] | 2021 | GBDT | Daily observations | SHAP | A | P |
| | | [71] | 2021 | LSTM | OHLC | LIME | A | P |
| | | [72] | 2021 | XGBoost | Financial indicators | SHAP | A | P |
| | | [73] | 2022 | LightGBM | Proprietary | SHAP | A | P |
| | Time series | [74] | 2020 | XGBoost | Credit risk | SHAP | A | P |
| | | [75] | 2022 | DQN | SENSEX, DJIA | SHAP | A | P |
| Healthcare | Tabular | [76] | 2020 | DT | Cervical cancer risk | SHAP | A | P |
| | | [77] | 2020 | Stacked LSTM-CNN-MLP | PhysioNet | SHAP, occlusion maps | A | P |
| | | [78] | 2020 | DT | MIMIC-III | Doctor XAI | S | An |
| | | [79] | 2021 | XGBoost | Diabetes | PDP, ICE, ALE, LIME, SHAP, Anchors | A | P |
| | | [80] | 2021 | XGBoost | ROSMAP | SHAP | A | P |
| | | [81] | 2021 | LightGBM | K-attention | SHAP | A | P |
| | | [82] | 2021 | RF, GBDT, XGBoost | UCI CKD | SHAP | A | P |
| | | [83] | 2021 | RF | Retrospective study | SHAP | A | P |
| | Time series | [84] | 2020 | Bi-LSTM ensemble | PhysioNet 2017 | Attention | S | An |
| | | [85] | 2021 | CNN | PhysioNet 2017 | LIME, guided saliency | A | P |
| | | [86] | 2021 | WCPH-RNN | Retrospective study | Saliency | A | P |
| | | [87] | 2021 | CNN | Gait dataset | LRP | A | P |
| | | [88] | 2022 | CNN-CRF | Sleep-EDF | Grad-CAM | A | P |
| | Image | [89] | 2020 | VGG-16 | Chest X-ray | Grad-CAM | S | An |
| | | [90] | 2021 | ResNet-50 | CT | LIME, SHAP | A | P |
| | | [91] | 2021 | Inception-v3 | Diagnosis of retinal images | GBP, SHAP | A | P |
| | | [92] | 2021 | CNN | IVCM | Grad-CAM, guided Grad-CAM | S | An |
| | | [93] | 2021 | EfficientNet | Chest X-ray images | Grad-CAM | S | An |
| | | [94] | 2021 | CNN, LSTM | ISIC 2017 and 2018 | Grad-CAM | S | An |
| | | [95] | 2021 | VGG, ResNet, DenseNet | COVID-19 chest X-ray | Grad-CAM | S | An |
| | | [96] | 2021 | VGG-16 | Chest CT, X-ray image | Grad-CAM, Grad-CAM++, LRP | S | An |
| | | [97] | 2022 | VGG-19 | Oral images | Grad-CAM | S | An |
| | | [98] | 2022 | COMiT-Net | ChestXray-14, CheXpert | Grad-CAM | S | An |
| | | [99] | 2022 | CNN | LC25000, NSCLC | Grad-CAM, OS | A | P |
| | | [100] | 2022 | DNN | CheXpert, MIMIC, NIH | CAM | S | An |
| | | [101] | 2023 | DNN | COVID-QU, QaTa-Cov19 | Uncertain-CAM | S | An |
| | | [102] | 2023 | VGG-16 | CTs | Grad-CAM | A | P |
| | | [103] | 2023 | EfficientNet, DenseNet, ResNet | Tooth areas | Grad-CAM | A | P |
| | | [104] | 2023 | ResNet-50 | COVIDNet | LIME | A | P |
| | | [105] | 2023 | ResNet152 | KVASIR | Grad-CAM | S | An |
| Industrial engineering | Time series | [106] | 2020 | CNN | Machinery fault | LRP | A | P |
| | | [107] | 2020 | RF | Bushings testbed | LIME | A | P |
| | | [108] | 2020 | CNN, VAE | Ford Motor | CAM | A | P |
| | | [109] | 2020 | Deep-SincNet | Motor currents | t-SNE, SincNet filters | A | P |
| | | [110] | 2020 | CNN, LSTM, Bi-LSTM | C-MAPSS | SHAP | A | P |
| | | [111] | 2021 | 1D-CNN | Normal and fault conditions | FG-CAM | S | An |
| | | [112] | 2021 | DNN | Prismatic cantilever steel beam | LIME, SHAP | A | P |
| | | [113] | 2021 | TScatNet | CWRU, DDS | t-SNE | S | An |
| | | [114] | 2022 | kNN, OCSVM, etc. | Bearing, Gearbox Fault | SHAP, Local-DIFFI | A | P |
| | | [115] | 2022 | WaveletKernelNet | Bearing, Gearbox Fault | CWConv layer | S | An |
| | | [116] | 2023 | SVM, kNN | Bearings | SHAP | A | P |
| | Image | [117] | 2020 | CNN | Bearings | Grad-CAM | A | P |
| | | [118] | 2020 | CNN | Image fault diagnosis | CAM | A | P |
| Cybersecurity | Tabular | [119] | 2020 | SVM | NSL-KDD | SHAP | A | P |
| | | [120] | 2021 | RF, XGBoost, Sequence Model | ISCX-URL2016, CICMalDroid 2020 | SHAP, LIME | A | P |
| | | [121] | 2021 | Autoencoder | CIC-IDS2017 | SHAP | A | P |
| | | [122] | 2021 | DT, ANN | Private dataset | SHAP, LIME | A | P |
| | | [123] | 2022 | DT | NF-BoT-IoT-v2, NF-ToN-IoT-v2 | SHAP | A | P |
| | | [124] | 2022 | DNN | UNSW-NB15 | SHAP | A | P |
| | | [125] | 2023 | CNN | ToN_IoT | SHAP | A | P |
| | | [126] | 2023 | ANN | WUSTL-IIoT, NSL-KDD | TRUST, LIME | A | P |
| Smart agriculture | Tabular | [127] | 2021 | RF | Wheat, maize, olive groves | LIME | A | P |
| | | [128] | 2022 | DT, RF | Maize crop yield | LIME | A | P |
| | | [129] | 2022 | LSTM, Bi-LSTM, Bi-GRU-LSTM-CNN | ProductReview | SHAP, LIME | A | P |
| | | [130] | 2022 | XGB, MLP, SVM | Crop Recommendation | SHAP, LIME | A | P |
| | Time series | [131] | 2020 | CNN | Meteorological, wheat yield data | RAM | A | P |
| | | [132] | 2022 | LightGBM | Diverse physical agricultural | SHAP, LIME | A | P |
| | | [133] | 2023 | GRU | Plant SSPs | ISM | S | An |
| | Image | [134] | 2021 | ResNet-V2, VGG-19, VGG-16, Inception-V3 | Diseased leaves of pearl millet | Grad-CAM | A | P |
| | | [135] | 2022 | LightGBM | Agri-worker motion | ELI5, PDPbox, Skater | A | P |
| | | [136] | 2022 | CNN | Fire and smoke | LIME, Grad-CAM++ | A | P |
| | Text | [137] | 2022 | OAK4XAI | Graph database | AgriComO | S | An |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).