Explainable Artificial Intelligence (XAI) in Biomedical Research and Clinical Practice

A special issue of BioMedInformatics (ISSN 2673-7426).

Deadline for manuscript submissions: closed (31 December 2021) | Viewed by 25526

Special Issue Editors


Prof. Dr. Jörn Lötsch
Guest Editor
1. Institute of Clinical Pharmacology, Goethe-University, Theodor-Stern-Kai 7, 60590 Frankfurt am Main, Germany
2. Fraunhofer Institute for Translational Medicine and Pharmacology ITMP, Theodor-Stern-Kai 7, 60596 Frankfurt am Main, Germany
Interests: pharmacological data science; applied artificial intelligence; statistical parametric mapping; nonlinear mixed-effects modeling

Prof. Alfred Ultsch
Guest Editor
Department of Mathematics and Computer Science, Philipps University of Marburg, Hans-Meerwein-Strasse, 35032 Marburg, Germany
Interests: databionics; neural networks and artificial intelligence; data science; information extraction

Special Issue Information

Dear Colleagues,

Advanced computational methods from machine learning and related artificial intelligence are increasingly entering biomedical research and clinical practice. This exchange is bidirectional: computational methods are used to solve biomedical problems, and biological systems are studied to develop and improve artificial intelligence methods. Together, these developments enable a paradigm shift from hypothesis-driven research and clinical decision-making to data-driven approaches that discover knowledge directly from biomedical data.

The shift from therapy-relevant decisions based on biomedical knowledge to black-box computer algorithms makes decision-making increasingly incomprehensible to medical staff and patients. This has been recognized in guidelines issued, for example, by the European Union and DARPA (USA), which emphasize that computer-based decisions must be transparent and communicable in an understandable form to medical personnel and patients. To address this problem, the concept of explainable artificial intelligence (XAI) is attracting growing scientific interest. XAI uses a representation of human knowledge, usually (a subset of) predicate logic, for its reasoning, deduction, and classification (diagnosis).
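To make this knowledge-representation idea concrete, the following minimal Python sketch encodes diagnostic knowledge as explicit if–then rules so that every classification can be traced back to the rule that fired. The rules, thresholds, and labels are entirely hypothetical and serve only as an illustration of the logic-based style of reasoning described above.

```python
# Minimal sketch of rule-based (logic-style) classification: each decision is
# produced by an explicit, human-readable rule, so the "explanation" is simply
# the rule that fired. All thresholds and labels are hypothetical.

from dataclasses import dataclass
from typing import Callable, Optional, Tuple


@dataclass
class Rule:
    name: str                           # human-readable identifier of the rule
    condition: Callable[[dict], bool]   # predicate over a patient record
    conclusion: str                     # diagnosis asserted when the predicate holds


RULES = [
    Rule("fever_and_high_crp",
         lambda p: p["temperature_c"] >= 38.5 and p["crp_mg_l"] > 100,
         "suspected bacterial infection"),
    Rule("low_hemoglobin",
         lambda p: p["hemoglobin_g_dl"] < 8.0,
         "severe anemia"),
]


def classify(patient: dict) -> Optional[Tuple[str, str]]:
    """Return (diagnosis, explanation) for the first matching rule, else None."""
    for rule in RULES:
        if rule.condition(patient):
            return rule.conclusion, f"rule '{rule.name}' matched"
    return None


print(classify({"temperature_c": 39.1, "crp_mg_l": 150, "hemoglobin_g_dl": 13.2}))
# -> ('suspected bacterial infection', "rule 'fever_and_high_crp' matched")
```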

In this Special Issue of BioMedInformatics, we invite contributions on the development and implementation of explainable artificial intelligence (XAI) algorithms in biomedical research and practice, focusing on, but not limited to, original research reports.

Prof. Dr. Jörn Lötsch
Prof. Alfred Ultsch
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. BioMedInformatics is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Computer-aided classification and subgroup detection
  • Personalized and precision medicine
  • Supervised and unsupervised machine learning
  • Symbolic machine learning
  • Understandable data mining
  • Explainable artificial intelligence
  • Biomedical knowledge representation
  • Biomedical knowledge discovery
  • Controlled hypothesis generation

Published Papers (8 papers)


Research


20 pages, 5450 KiB  
Article
Explainable AI-Based Identification of Contributing Factors to the Mood State Change in Children and Adolescents with Pre-Existing Psychiatric Disorders in the Context of COVID-19-Related Lockdowns in Greece
by Charis Ntakolia, Dimitrios Priftis, Konstantinos Kotsis, Konstantina Magklara, Mariana Charakopoulou-Travlou, Ioanna Rannou, Konstantina Ladopoulou, Iouliani Koullourou, Emmanouil Tsalamanios, Eleni Lazaratou, Aspasia Serdari, Aliki Grigoriadou, Neda Sadeghi, Kenny Chiu and Ioanna Giannopoulou
BioMedInformatics 2023, 3(4), 1040-1059; https://doi.org/10.3390/biomedinformatics3040062 - 07 Nov 2023
Viewed by 691
Abstract
The COVID-19 pandemic and its accompanying restrictions have significantly impacted people’s lives globally. There is increasing interest in examining the influence of this unprecedented situation on mental well-being, but the prolongation of COVID-19-related measures and its impact on children and adolescents with pre-existing psychiatric or developmental disorders have received far less attention: most studies focus on general populations such as students and adults. In addition, most of these studies adopt statistical methodologies that identify pair-wise relationships among factors, an approach that limits the ability to understand and interpret the impact of multiple interacting factors. In response, this study adopts an explainable machine learning approach to identify factors that explain the deterioration or amelioration of mood state in a youth clinical sample, and to interpret the contribution of the most influential features to the prediction output. Among the machine learning classifiers tested, the Random Forest model achieved the highest effectiveness, with a best AUC-ROC score of 76% using 13 features. The explainability analysis showed that stress and positive changes arising from the imposed restrictions and the COVID-19 pandemic were the top two factors affecting mood state.
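A rough sketch of such a pipeline, using synthetic data and standard scikit-learn/SHAP components rather than the authors' code or questionnaire features, might look as follows: a Random Forest classifier evaluated by AUC-ROC and explained with SHAP attributions.

```python
# Sketch of an explainable classification pipeline: Random Forest + AUC-ROC + SHAP.
# Data are synthetic stand-ins; the original study used questionnaire features
# from a youth clinical sample.

import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=13, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print("AUC-ROC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# SHAP values quantify each feature's contribution to individual predictions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
# Handle list vs. array return types across shap versions (positive class).
shap_pos = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
print("mean |SHAP| per feature:", np.abs(shap_pos).mean(axis=0).round(3))
```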

12 pages, 1543 KiB  
Article
Explainable Machine Learning Models for Identification of Food-Related Lifestyle Factors in Chicken Meat Consumption Case in Northern Greece
by Dimitrios Chiras, Marina Stamatopoulou, Nikolaos Paraskevis, Serafeim Moustakidis, Irini Tzimitra-Kalogianni and Christos Kokkotis
BioMedInformatics 2023, 3(3), 817-828; https://doi.org/10.3390/biomedinformatics3030051 - 19 Sep 2023
Viewed by 1094
Abstract
A consumer’s decision-making process regarding the purchase of chicken meat is multifaceted, influenced by various food-related, personal, and environmental factors that interact with one another. The mediating effect of food-related lifestyle, which bridges the gap between consumer food values and the environment, further shapes consumer behavior towards meat purchase and consumption. This research uses the Food-Related Lifestyle (FRL) concept and aims to identify and explain the factors associated with chicken meat consumption in Northern Greece using a machine learning pipeline. To achieve this, the Boruta algorithm and four widely recognized classifiers were employed, achieving a binary classification accuracy of up to 78.26%. The study primarily focuses on determining the items from the FRL tool that carry significant weight in the classification output, and interprets the significance of these selected factors in the decision-making process using SHAP. The analysis shows that the freshness, safety, and nutritional value of chicken meat are essential considerations for consumers, and that external factors such as health crises and price fluctuations can have a significant impact on consumer choices. The findings contribute to a more nuanced understanding of consumer preferences, enabling the food industry to align its offerings and marketing efforts with consumer needs, and demonstrate the potential of explainable AI to inform decision-making in the food industry.
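As a sketch of the feature-selection step described above (assumed to resemble the pipeline; synthetic data stand in for the FRL questionnaire items, and this is not the authors' code), the Boruta algorithm can be wrapped around a Random Forest as follows:

```python
# Sketch of Boruta all-relevant feature selection followed by a classifier.
# Synthetic data stand in for the FRL questionnaire items.

from boruta import BorutaPy
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                           random_state=0)

rf = RandomForestClassifier(n_jobs=-1, max_depth=5, random_state=0)
selector = BorutaPy(rf, n_estimators="auto", random_state=0)
selector.fit(X, y)                          # Boruta expects numpy arrays

X_selected = selector.transform(X)          # keep only confirmed features
print("features kept:", int(selector.support_.sum()))

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("CV accuracy on selected features:",
      cross_val_score(clf, X_selected, y, cv=5).mean().round(3))
```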

20 pages, 950 KiB  
Article
State-of-the-Art Explainability Methods with Focus on Visual Analytics Showcased by Glioma Classification
by Milot Gashi, Matej Vuković, Nikolina Jekic, Stefan Thalmann, Andreas Holzinger, Claire Jean-Quartier and Fleur Jeanquartier
BioMedInformatics 2022, 2(1), 139-158; https://doi.org/10.3390/biomedinformatics2010009 - 19 Jan 2022
Cited by 14 | Viewed by 3821
Abstract
This study reflects on a list of libraries that provide decision support for AI models. The goal is to assist readers in finding suitable libraries that support visual explainability and interpretability of their AI model’s output. Especially in sensitive application areas, such as medicine, this is crucial for understanding the decision-making process and for safe application. Therefore, we use the reasoning of a glioma classification model as the underlying case. We present a comparison of 11 identified Python libraries that complement the better-known SHAP and LIME libraries for visualizing explainability. The libraries were selected based on attributes such as being implemented in Python, supporting visual analysis, thorough documentation, and active maintenance. We showcase and compare four libraries for global interpretations (ELI5, Dalex, InterpretML, and SHAP) and three libraries for local interpretations (LIME, Dalex, and InterpretML). As a use case, we process a combination of openly available glioma data sets to study feature importance when classifying the grade II, III, and IV brain tumor subtypes glioblastoma multiforme (GBM), anaplastic astrocytoma (AASTR), and oligodendroglioma (ODG), based on 1276 samples and 252 attributes. The exemplified model confirms known variations, and studying local explainability contributes to revealing less-known variations as putative biomarkers. The full comparison spreadsheet and implementation examples can be found in the appendix.
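For readers wanting a feel for the kind of local explanation these libraries produce, a minimal LIME sketch is shown below; the tabular data are synthetic and the class labels are illustrative only, not the glioma data set or code used in the study.

```python
# Sketch of a local explanation with LIME for a single prediction.
# Synthetic tabular data stand in for the glioma feature matrix.

from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, n_informative=5,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=[f"feature_{i}" for i in range(X.shape[1])],
    class_names=["GBM", "AASTR", "ODG"],    # illustrative labels only
    mode="classification",
)
# Explain the model's prediction for one test sample.
explanation = explainer.explain_instance(X_test[0], model.predict_proba,
                                         num_features=5)
print(explanation.as_list())                # (feature condition, weight) pairs
```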

15 pages, 1007 KiB  
Article
Quantified Explainability: Convolutional Neural Network Focus Assessment in Arrhythmia Detection
by Rui Varandas, Bernardo Gonçalves, Hugo Gamboa and Pedro Vieira
BioMedInformatics 2022, 2(1), 124-138; https://doi.org/10.3390/biomedinformatics2010008 - 17 Jan 2022
Cited by 3 | Viewed by 2799
Abstract
In clinical practice, every decision should be reliable and explained to the stakeholders. The high accuracy of deep learning (DL) models poses a great advantage, but the fact that they function as black boxes hinders their clinical application. Hence, explainability methods have become important, as they provide explanations of DL model decisions. In this study, two datasets with electrocardiogram (ECG) image representations of six heartbeats were built, one labeled by the last heartbeat and the other labeled by the first heartbeat. Each dataset was used to train one neural network. Finally, we applied well-known explainability methods to the resulting networks to explain their classifications. The explainability methods produced attribution maps in which pixel intensities are proportional to their importance for the classification task. We then developed a metric to quantify how strongly the models focus on the heartbeat of interest. The classification models achieved testing accuracies of 93.66% and 91.72%. The models focused on the heartbeat of interest, with values of the focus metric ranging between 8.8% and 32.4%. Future work will investigate the importance of regions outside the region of interest, as well as the contribution of specific ECG waves to the classification.
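The focus idea can be reproduced in spirit with a gradient-based attribution map. The sketch below uses a toy CNN on a random image and an assumed rectangular region of interest (not the authors' models or ECG data) and computes the fraction of attribution mass that falls inside the region of interest.

```python
# Sketch: input-gradient saliency map for a CNN and a simple "focus" metric,
# i.e., the share of total attribution lying inside a region of interest (ROI).
# Model, data, and ROI are toy stand-ins for the ECG-image setup.

import torch
import torch.nn as nn

model = nn.Sequential(                       # tiny CNN classifier, 2 classes
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8), nn.Flatten(),
    nn.Linear(8 * 8 * 8, 2),
)
model.eval()

image = torch.rand(1, 1, 64, 64, requires_grad=True)   # stand-in ECG image
logits = model(image)
logits[0, logits.argmax()].backward()        # gradient of the predicted class

saliency = image.grad.abs().squeeze()        # attribution map, shape (64, 64)

# ROI: columns assumed to contain the heartbeat of interest (hypothetical).
roi = torch.zeros_like(saliency, dtype=torch.bool)
roi[:, 24:40] = True

focus = saliency[roi].sum() / saliency.sum()
print(f"focus inside ROI: {focus.item():.1%}")
```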

17 pages, 1586 KiB  
Article
Explainable Artificial Intelligence (XAI) in Biomedicine: Making AI Decisions Trustworthy for Physicians and Patients
by Jörn Lötsch, Dario Kringel and Alfred Ultsch
BioMedInformatics 2022, 2(1), 1-17; https://doi.org/10.3390/biomedinformatics2010001 - 22 Dec 2021
Cited by 35 | Viewed by 8517
Abstract
The use of artificial intelligence (AI) systems in biomedical and clinical settings can disrupt the traditional doctor–patient relationship, which is based on trust and transparency in medical advice and therapeutic decisions. When the diagnosis or selection of a therapy is no longer made solely by the physician, but to a significant extent by a machine using algorithms, decisions become nontransparent. Skill learning is the most common application of machine learning algorithms in clinical decision making. These are a class of very general algorithms (artificial neural networks, classifiers, etc.) that are tuned on the basis of examples to optimize the classification of new, unseen cases. For such systems, it is pointless to ask for an explanation of a decision. A detailed understanding of the mathematical details of an AI algorithm may be possible for experts in statistics or computer science; however, when it comes to the fate of human beings, this “developer’s explanation” is not sufficient. The concept of explainable AI (XAI) as a solution to this problem is attracting increasing scientific and regulatory interest. This review focuses on the requirement that XAI must be able to explain in detail the decisions made by the AI to the experts in the field.

29 pages, 11546 KiB  
Article
High Expression of Caspase-8 Associated with Improved Survival in Diffuse Large B-Cell Lymphoma: Machine Learning and Artificial Neural Networks Analyses
by Joaquim Carreras, Yara Yukie Kikuti, Giovanna Roncador, Masashi Miyaoka, Shinichiro Hiraiwa, Sakura Tomita, Haruka Ikoma, Yusuke Kondo, Atsushi Ito, Sawako Shiraiwa, Kiyoshi Ando, Naoya Nakamura and Rifat Hamoudi
BioMedInformatics 2021, 1(1), 18-46; https://doi.org/10.3390/biomedinformatics1010003 - 21 Apr 2021
Cited by 15 | Viewed by 4639
Abstract
High expression of the anti-apoptotic TNFAIP8 is associated with poor survival of patients with diffuse large B-cell lymphoma (DLBCL), and one of the functions of TNFAIP8 is to inhibit the pro-apoptotic Caspase-8. We aimed to analyze the immunohistochemical expression of Caspase-8 (active subunit p18; CASP8) in a series of 97 cases of DLBCL from Tokai University Hospital, and to correlate it with other Caspase-8 pathway-related markers, including cleaved Caspase-3, cleaved PARP, BCL2, TP53, MDM2, MYC, Ki67, E2F1, CDK6, MYB, and LMO2. After digital image quantification, correlation with several clinicopathological characteristics of the patients showed that high protein expression of Caspase-8 was associated with favorable overall and progression-free survival (hazard ratios = 0.3; p = 0.005 and 0.03, respectively). Caspase-8 also correlated positively with cCASP3, MDM2, E2F1, TNFAIP8, BCL2, and Ki67. Next, Caspase-8 protein expression was modeled using predictive analytics, and a high overall predictive accuracy (>80%) was obtained with CHAID decision tree, Bayesian network, discriminant analysis, C5 tree, logistic regression, and artificial neural network methods (both multilayer perceptron and radial basis function); the most relevant markers were cCASP3, E2F1, TP53, cPARP, MDM2, BCL2, and TNFAIP8. Finally, CASP8 gene expression was also successfully modeled in an independent DLBCL series of 414 cases from the Lymphoma/Leukemia Molecular Profiling Project (LLMPP). In conclusion, high protein expression of Caspase-8 is associated with a favorable prognosis in DLBCL. Predictive modeling is a feasible analytic strategy that results in a solution that can be understood (i.e., explainable artificial intelligence, “white-box” algorithms).
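To illustrate the “white-box” flavor of such predictive modeling, the sketch below fits a shallow CART decision tree (standing in for the CHAID/C5 trees used in the study) on synthetic data with the marker names reused purely as illustrative feature labels; the learned rules can be printed and read directly.

```python
# Sketch of white-box predictive modeling: a shallow decision tree whose
# learned rules are human-readable. CART stands in for the CHAID/C5 trees
# used in the study; data are synthetic and the marker names are labels only.

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, export_text

markers = ["cCASP3", "E2F1", "TP53", "cPARP", "MDM2", "BCL2", "TNFAIP8"]
X, y = make_classification(n_samples=97, n_features=len(markers),
                           n_informative=4, random_state=0)  # y ~ Caspase-8 high/low

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print("CV accuracy:", cross_val_score(tree, X, y, cv=5).mean().round(2))
print(export_text(tree, feature_names=markers))   # human-readable decision rules
```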

Review


14 pages, 4797 KiB  
Review
Interpretable Medical Imagery Diagnosis with Self-Attentive Transformers: A Review of Explainable AI for Health Care
by Tin Lai
BioMedInformatics 2024, 4(1), 113-126; https://doi.org/10.3390/biomedinformatics4010008 - 08 Jan 2024
Cited by 1 | Viewed by 1142
Abstract
Recent advancements in artificial intelligence (AI) have facilitated its widespread adoption in primary medical services, addressing the demand–supply imbalance in healthcare. Vision Transformers (ViT) have emerged as state-of-the-art computer vision models, benefiting from self-attention modules. However, compared to traditional machine learning approaches, deep learning models are complex and are often treated as a “black box”, which causes uncertainty regarding how they operate. Explainable artificial intelligence (XAI) refers to methods that explain and interpret machine learning models’ inner workings and how they arrive at decisions, which is especially important in the medical domain for guiding healthcare decision-making processes. This review summarizes recent ViT advancements and interpretative approaches to understanding the decision-making process of ViT, enabling transparency in medical diagnosis applications.
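One widely used family of ViT interpretation methods propagates attention through the layers (attention rollout). The minimal NumPy sketch below operates on randomly generated, head-averaged attention matrices that stand in for those a real ViT would return; it is an assumed illustration of the technique, not code from the review.

```python
# Sketch of attention rollout for a Vision Transformer: propagate head-averaged
# attention through the layers to estimate how much each patch contributes to
# the [CLS] token. Random matrices stand in for real ViT attention maps.

import numpy as np

rng = np.random.default_rng(0)
n_layers, n_tokens = 6, 17                   # e.g., 16 image patches + [CLS]

# Head-averaged attention per layer, rows normalized to sum to 1.
attentions = rng.random((n_layers, n_tokens, n_tokens))
attentions /= attentions.sum(axis=-1, keepdims=True)

rollout = np.eye(n_tokens)
for layer_attn in attentions:
    # Account for residual connections by mixing with the identity, renormalize.
    a = 0.5 * layer_attn + 0.5 * np.eye(n_tokens)
    a /= a.sum(axis=-1, keepdims=True)
    rollout = a @ rollout                    # accumulate attention across layers

cls_relevance = rollout[0, 1:]               # relevance of each patch for [CLS]
print("most attended patch:", int(cls_relevance.argmax()))
```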

Other


17 pages, 2065 KiB  
Technical Note
Explainable Machine Learning (XAI) for Survival in Bone Marrow Transplantation Trials: A Technical Report
by Roberto Passera, Sofia Zompi, Jessica Gill and Alessandro Busca
BioMedInformatics 2023, 3(3), 752-768; https://doi.org/10.3390/biomedinformatics3030048 - 01 Sep 2023
Viewed by 1260
Abstract
Artificial intelligence is gaining interest among clinicians, but its results are difficult to interpret, especially when dealing with survival outcomes and censored observations. Explainable machine learning (XAI) has recently been extended to this context to improve the explainability, interpretability, and transparency of modeling results. A cohort of 231 patients undergoing allogeneic bone marrow transplantation was analyzed with XAI for survival using two different uni- and multivariate survival models, proportional hazards regression and random survival forest, with overall survival (OS) and its main determinants as the main outcome, using the survex package for R. The performance of both models was investigated using the integrated Brier score, the integrated cumulative/dynamic AUC, and the concordance C-index. Global explanation for the whole cohort was performed using the time-dependent variable importance and the partial dependence survival plot. Local explanation for each individual patient was obtained via SurvSHAP(t) and SurvLIME plots and the ceteris paribus survival profile. The common interface of the survex package ensured good feasibility of XAI for survival, and the advanced graphical options allowed us to easily explore, explain, and compare OS results from the two survival models. Understandability, clinical relevance, and computational efficiency were the most important criteria for the modeling results to be suitable for clinical use, and these were ensured by this XAI-for-survival approach, in adherence to clinical XAI guidelines.
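survex itself is an R package; purely as an assumed Python analogue of the modeling step (synthetic survival data, scikit-survival instead of survex, not the authors' code), the two model families and the concordance index could be set up as follows:

```python
# Sketch of the two survival models compared in the report (Cox proportional
# hazards and random survival forest), evaluated with the concordance index.
# Synthetic data replace the transplantation cohort; scikit-survival stands in
# for the R survex workflow.

import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

rng = np.random.default_rng(0)
n = 231
X = rng.normal(size=(n, 5))                       # stand-in covariates
risk = X[:, 0] + 0.5 * X[:, 1]                    # true linear predictor
time = rng.exponential(scale=np.exp(-risk))       # event times
censor = rng.exponential(scale=1.5, size=n)       # censoring times
event = time <= censor                            # observed-event indicator
observed_time = np.minimum(time, censor)
y = Surv.from_arrays(event=event, time=observed_time)

cox = CoxPHSurvivalAnalysis().fit(X, y)
rsf = RandomSurvivalForest(n_estimators=200, random_state=0).fit(X, y)

for name, model in [("Cox PH", cox), ("RSF", rsf)]:
    c_index = concordance_index_censored(event, observed_time,
                                         model.predict(X))[0]
    print(f"{name} concordance index: {c_index:.3f}")
```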
