Special Issue "Advances in Explainable Artificial Intelligence"
A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Artificial Intelligence".
Deadline for manuscript submissions: 28 September 2023
Special Issue Editors
Interests: machine learning; computational intelligence; game theory applications to machine learning and networking
Interests: machine learning; semantic web; information retrieval
Special Issue Information
Machine Learning (ML)-based Artificial Intelligence (AI) algorithms can learn from known examples of various abstract representations and models that, once applied to unknown examples, can perform classification, regression or forecasting tasks, to name a few.
Very often, these highly effective ML representations are difficult to understand; this is particularly true of deep learning models, which can involve millions of parameters. However, in many applications it is of utmost importance for stakeholders to understand the decisions made by the system in order to use them better. Furthermore, for decisions that affect an individual, future legislation might even mandate a “right to an explanation”. Overall, improving an algorithm’s explainability may foster trust and the social acceptance of AI.
The need to make ML algorithms more transparent and more explainable has generated several lines of research that form an area known as explainable Artificial Intelligence (XAI).
Among the goals of XAI are:
- Adding transparency to ML models by providing detailed information about why the system has reached a particular decision;
- Designing ML models that are more explainable and transparent while maintaining high performance levels;
- Finding ways to evaluate the overall explainability and transparency of models and to quantify their effectiveness for different stakeholders.
The objective of this Special Issue is to explore recent advances and techniques in the XAI area.
Research topics of interest include (but are not limited to):
- Devising machine learning models that are transparent-by-design;
- Planning for transparency, from data collection up to training, test, and production;
- Developing algorithms and user interfaces for explainability;
- Identifying and mitigating biases in data collection;
- Performing black-box model auditing and explanation;
- Detecting data bias and algorithmic bias;
- Learning causal relationships;
- Integrating social and ethical aspects of explainability;
- Integrating explainability into existing AI systems;
- Designing new explanation modalities;
- Exploring theoretical aspects of explanation and interpretability;
- Investigating the use of XAI in application sectors such as healthcare, bioinformatics, multimedia, linguistics, human–computer interaction, machine translation, autonomous vehicles, risk assessment, justice, etc.
Prof. Dr. Gabriele Gianini
Prof. Dr. Pierre-Edouard Portier
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
- machine learning
- deep learning
The list below represents planned manuscripts only. Some of these manuscripts have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.
Title: Explainable Artificial Intelligence in Healthcare and Agriculture
Authors: Shakti Kinger; Vrushali Kulkarni
Affiliation: Dr. Vishwanath Karad MIT World Peace University, Pune, India
Abstract: Deep learning methods have achieved remarkable success in Artificial Intelligence applications such as healthcare, finance, and autonomous vehicles, which strongly influence human life. However, their black-box nature, lack of transparency, and inability to explain their decisions have created challenges for their adoption in high-risk applications. Explainable AI (XAI) is a suite of tools, techniques, and algorithms that aims to generate highly interpretable decisions while maintaining a high level of accuracy. Our work summarizes recent developments in this field, and we present two classification tasks in which we use Grad-CAM++ (Gradient-weighted Class Activation Mapping) to explain the predictions of deep learning models. Our first evaluation is performed in the agriculture domain, explaining the reasoning behind the classifications made by a plant disease detection model. Our second evaluation is in the healthcare domain, where the impact of predictions made by a black-box model needs to be understood before it can be used to prescribe treatment. We apply XAI to explain the feature mappings of a bone X-ray classification model.
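To give a flavor of the class-activation-mapping family the abstract refers to, the sketch below implements the basic Grad-CAM computation in NumPy (Grad-CAM++ refines the per-channel weighting, but the overall pipeline is the same). This is an illustrative sketch only — the function name, array shapes, and random stand-in inputs are our assumptions, not the authors' code, which would operate on gradients extracted from a real trained CNN:

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Basic Grad-CAM heatmap.

    feature_maps: (K, H, W) activations A^k of the last convolutional layer
    gradients:    (K, H, W) gradients dY_c/dA^k of the class score Y_c
    """
    # Channel importance: global-average-pool the gradients per channel
    weights = gradients.mean(axis=(1, 2))                      # shape (K,)
    # Weighted combination of feature maps, kept non-negative via ReLU
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # Normalize to [0, 1] so the map can be overlaid on the input image
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy demo with random activations/gradients (stand-ins for a real CNN)
rng = np.random.default_rng(0)
heatmap = grad_cam(rng.standard_normal((8, 7, 7)),
                   rng.standard_normal((8, 7, 7)))
```

The resulting heatmap highlights the spatial regions that most increased the target class score — e.g., the diseased leaf area in the plant disease model, or the fracture region in the bone X-ray classifier.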