Special Issue "Explainability Methods in Artificial Intelligence"
A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Evolutionary Algorithms and Machine Learning".
Deadline for manuscript submissions: 15 August 2023
Special Issue Editor
Prof. Dr. Robertas Damaševičius
Interests: sustainable software engineering; human–computer interfaces; assisted living; data mining; machine learning
Special Issues, Collections and Topics in MDPI journals
Special Issue in Information: Cloud Gamification
Special Issue in Electronics: Computational Intelligence for Physiological Sensors and Body Sensor Networks
Special Issue in Computers: Selected Papers from the 25th International Conference on Information and Software Technologies (ICIST 2019)
Special Issue in Information: Cloud Gamification 2019
Topical Collection in Electronics: Application of Advanced Computing, Control and Processing in Engineering
Special Issue in Remote Sensing: Advanced Theory, Methods, Technique and Applications for Remote Sensing Big Data
Special Issue in Sensors: Sustainable Computing Based on Internet of Things Empowered with Artificial Intelligence and Blockchain
Special Issue in Sensors: Artificial Intelligence in Medical Sensors II
Special Issue in Computers: Survey in Deep Learning for IoT Applications
Special Issue in Journal of Sensor and Actuator Networks: Machine Learning Techniques for Network Management: Foresight and Challenges
Special Issue in Algorithms: Machine Learning in Statistical Data Processing
Special Issue in Computers: Feature Papers in Computers 2023
Special Issue in Water: Empowering Future Generation of Water Industry through Microbiological Sensors
Topics: Software Engineering and Applications
Topics: AI-Enabled Sustainable Computing for Digital Infrastructures: Challenges and Innovations
Special Issue Information
As artificial intelligence (AI) systems are deployed in increasingly critical applications, there is a growing need to understand how they make decisions and to ensure that they are trustworthy. Explainable AI (XAI) is a rapidly growing area of research that aims to make AI systems more transparent and interpretable, so that their decisions can be understood and trusted by human users.
In recent years, the use of AI systems has increased significantly in domains such as healthcare, finance, and autonomous systems. However, these systems are often based on complex and opaque models, such as deep neural networks, that are difficult for humans to interpret. This lack of transparency makes it hard to understand why a particular decision was made and to ensure that the system's decisions are fair and unbiased. To address these issues, researchers have started to develop methods for making AI systems more interpretable and transparent.
The field of XAI is still in its early stages, and there is currently a lack of consensus on what exactly constitutes an explainable AI system. However, several approaches have been proposed to make AI systems more interpretable, including techniques for visualizing deep neural networks, methods for generating human-readable explanations of AI decisions, and approaches for evaluating the interpretability of AI models.
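To make the first of these approaches concrete, the snippet below is a minimal sketch of gradient-based saliency, one common way of visualizing what a deep neural network attends to. It assumes PyTorch; the toy classifier and random input are hypothetical placeholders introduced purely for illustration and are not taken from this call.

import torch
import torch.nn as nn

# Hypothetical toy classifier standing in for an opaque deep model.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
model.eval()

# Placeholder 28x28 "image"; gradients with respect to it form the saliency map.
x = torch.rand(1, 1, 28, 28, requires_grad=True)

logits = model(x)
top_class = logits.argmax(dim=1).item()

# Backpropagate the top-class score to the input pixels: large gradient
# magnitudes mark the pixels the prediction is most sensitive to.
logits[0, top_class].backward()
saliency = x.grad.abs().squeeze()  # 28x28 attribution map

print(saliency.shape)  # torch.Size([28, 28])

Gradient saliency is only the simplest member of this family; the same idea underlies refinements such as integrated gradients, SmoothGrad, and layer-wise relevance propagation.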
In addition to these technical approaches, there is also a growing body of research on the social and ethical implications of XAI, including studies on the trade-offs between model complexity and interpretability, and research on the integration of explainability methods with other AI tasks, such as fairness and robustness.
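As a toy illustration of the complexity/interpretability trade-off mentioned above, the following sketch contrasts a depth-limited decision tree, whose rules can be printed verbatim, with a boosted ensemble that is typically more accurate but opaque. It assumes scikit-learn, and the dataset and model choices are illustrative assumptions rather than prescriptions from this call.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Interpretable model: a shallow decision tree whose decision rules can be printed.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# More complex model: a boosted ensemble, usually more accurate but far harder to inspect.
ensemble = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("tree accuracy:    ", tree.score(X_test, y_test))
print("ensemble accuracy:", ensemble.score(X_test, y_test))
print(export_text(tree, feature_names=list(data.feature_names)))

Typically the ensemble scores somewhat higher while the tree remains human-readable, which is exactly the tension such trade-off studies examine.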
This Special Issue aims to provide a comprehensive overview of the latest developments and trends in XAI, and will cover a wide range of topics related to the transparency and interpretability of AI systems. We invite submissions from researchers working in areas such as machine learning, computer vision, natural language processing, and other fields that are relevant to explainable AI.
The goal of this Special Issue is to provide a forum for researchers to share their latest research findings and to promote further discussion and collaboration in this rapidly growing area of research.
Prof. Dr. Robertas Damaševičius
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
Topics of interest for this Special Issue include, but are not limited to, the following:
- techniques for visualizing and interpreting deep neural networks
- methods for generating human-readable explanations of AI decisions
- approaches for evaluating the interpretability of AI models
- research on the trade-offs between model complexity and interpretability
- integration of explainability methods with other AI tasks, such as fairness and robustness
- theoretical foundations and frameworks for explainable AI
- case studies and real-world applications of explainable AI
- human–AI interaction and explainability in human-in-the-loop systems
- natural language generation models and chatbots
- advancement in explainable AI in various domains, such as healthcare, autonomous systems, education, and finance
- surveys of explainable AI systems and applications
- future trends and open challenges in explainable AI