XAI: Explainable Artificial Intelligence in Healthcare, Finance and Industrial Applications

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Evolutionary Algorithms and Machine Learning".

Deadline for manuscript submissions: closed (1 November 2022) | Viewed by 3246

Special Issue Editors


Guest Editor
Department of Computational Sciences, Maharaja Ranjit Singh Punjab Technical University, Punjab 151001, India
Interests: feature extraction; image classification; artificial intelligence

Guest Editor
University Institute of Engineering & Technology, Panjab University, Chandigarh 160014, India
Interests: image processing; medical image analysis; machine learning

Guest Editor
Department of CSE, IKG Punjab Technical University Mohali Campus-1, Punjab 160055, India
Interests: wireless sensor networks; network security

Guest Editor
University Institute of Engineering & Technology, Panjab University, Chandigarh 160014, India
Interests: network security; intrusion detection; forgery detection

Special Issue Information

Dear Colleagues,

Explainable artificial intelligence (XAI) refers to methods that describe an AI model, its expected impact and its potential biases. It helps to characterize model accuracy, fairness, transparency and outcomes in AI-powered decision making. Explainability is crucial for an organization in building trust and confidence when putting AI models into production, and it helps an organization to adopt a responsible approach to AI development.

There are many advantages to understanding how an AI-enabled system arrived at a specific output. Explainability can help developers verify that the system is working as expected; it may be necessary to meet regulatory standards; and it can allow those affected by a decision to challenge or change that outcome.

An explainable AI (XAI) program aims to create a suite of machine learning techniques that:

  • Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and
  • Enable human users to understand, appropriately trust and effectively manage the emerging generation of artificially intelligent partners.

An XAI program is focused on the development of multiple systems by addressing problems in two areas: (1) machine-learning problems to classify events of interest in heterogeneous, multimedia data; and (2) machine-learning problems to construct decision policies for autonomous systems to perform a variety of simulated missions.

We invite authors from both industry and academia to submit original research and review articles that cover the success stories of XAI in enhancing data transparency and reusability, specifically for real-life problems like conversational AI, healthcare, finance, and other industrial applications. All received submissions will be sent out for peer review by at least three experts in the field and evaluated with respect to relevance to the Special Issue, level of innovation, depth of contribution and quality of presentation.

Dr. Munish Kumar
Prof. Dr. Ajay Mittal
Dr. Monika Sachdeva
Prof. Dr. Krishan Kumar Saluja
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • survey/review/theoretical foundation of XAI
  • new algorithms in XAI
  • model validation and model error estimation
  • XAI models and applications for IoT
  • XAI for data analytics and decision automation in IoT
  • XAI for healthcare and transportation systems
  • XAI for control systems
  • XAI challenges and solutions
  • application of XAI in cognitive computing

Published Papers (1 paper)


Research

19 pages, 3355 KiB  
Article
Using Explainable AI (XAI) for the Prediction of Falls in the Older Population
by Yue Ting Tang and Roman Romero-Ortuno
Algorithms 2022, 15(10), 353; https://doi.org/10.3390/a15100353 - 28 Sep 2022
Cited by 2 | Viewed by 2171
Abstract
The prevention of falls in older people requires the identification of the most important risk factors. Frailty is associated with risk of falls, but not all falls are of the same nature. In this work, we utilised data from The Irish Longitudinal Study on Ageing to implement Random Forests and Explainable Artificial Intelligence (XAI) techniques for the prediction of different types of falls and analysed their contributory factors using 46 input features that included those of a previously investigated frailty index. Data of participants aged 65 years and older were fed into four random forest models (all falls or syncope, simple fall, complex fall, and syncope). Feature importance rankings were based on mean decrease in impurity, and Shapley additive explanations values were calculated and visualised. Female sex and a previous fall were found to be of high importance in all of the models, and polypharmacy (being on five or more regular medications) was ranked high in the syncope model. The more 'accidental' (extrinsic) nature of simple falls was demonstrated in its model, where the presence of many frailty features had negative model contributions. Our results highlight that falls in older people are heterogeneous and XAI can provide new insights to help their prevention.
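The abstract's workflow (a random forest classifier with feature importances ranked by mean decrease in impurity, later complemented by SHAP values) can be sketched as follows. This is an illustrative sketch only: the feature names and synthetic cohort are invented stand-ins, not the TILDA dataset or the paper's 46-feature input.

```python
# Illustrative sketch: random forest with mean-decrease-in-impurity
# feature rankings, as described in the abstract. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["female_sex", "previous_fall", "polypharmacy", "age_over_75"]

# Synthetic cohort: 500 participants with binary risk factors.
X = rng.integers(0, 2, size=(500, len(feature_names)))
# Outcome loosely driven by a previous fall and polypharmacy (illustrative only).
logits = 1.5 * X[:, 1] + 1.0 * X[:, 2] - 1.0
y = (rng.random(500) < 1 / (1 + np.exp(-logits))).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# feature_importances_ is scikit-learn's mean-decrease-in-impurity measure.
ranking = sorted(zip(feature_names, model.feature_importances_),
                 key=lambda t: t[1], reverse=True)
for name, score in ranking:
    print(f"{name}: {score:.3f}")
```

In the paper these rankings are paired with SHAP (Shapley additive explanations) values, which additionally give per-participant, signed contributions; the `shap` library's `TreeExplainer` is the usual tool for that step.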
