Special Issue "Explainable and Interpretable AI"
Deadline for manuscript submissions: 20 April 2023
Interests: Artificial Intelligence; AI ethics; machine learning; assistive technologies
Over the last few years, the European Union and its Member States have released several documents defining an ethical framework for Artificial Intelligence (AI), as well as a draft of a future regulation (the AI Act). Guidelines such as the Trustworthy AI Guidelines from the High-Level Expert Group and the national strategies of EU Member States promote ethical principles such as fairness, accountability, and transparency. Accordingly, the AI community has been moving toward the operationalization of responsible practices for AI design, development, and use. In particular, the extended use of deep neural networks in applications now classified as high-risk by the AI Act, such as facial recognition in law enforcement or healthcare, has raised several ethical and legal concerns regarding not only their design but also their social and environmental impact. Indeed, these AI-based systems are also known as black-box or opaque models due to their lack of interpretability.
Explainable and interpretable AI (XAI/IAI) is an active area of research that aims to help build a culture of best practices for responsible and trustworthy AI. It focuses on developing methods to better understand the processes behind algorithms, establishing levels of explainability adapted to different audiences, and enhancing the decision-making power of AI stakeholders by providing tools for assessing, understanding, and interacting with AI-based systems.
This Special Issue aims to collect high-quality, original, state-of-the-art papers presenting novel research on topics including, but not limited to, the following:
- Bias detection/evaluation/removal
- Ethical and legal aspects of XAI/IAI
- Evaluation metrics
- Human-understandable explanations
- Epistemic aspects of XAI
- White-box approaches
- Applications of XAI/IAI in different domains
Dr. Atia Cortés
Dr. Francisco Grimaldo
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.