Topic Editors

Department of Applied Data Science, CPGE, San Jose State University, 1 Washington Sq, San Jose, CA 95192, USA
School of AI and Advanced Computing, Xi'an Jiaotong Liverpool University (XJTLU), Suzhou, China

Theories, Techniques, and Real-World Applications for Advancing Explainable AI

Abstract submission deadline
30 April 2026
Manuscript submission deadline
30 June 2026

Topic Information

Dear Colleagues,

Today, Artificial Intelligence (AI) plays an increasingly significant role in many aspects of our daily lives. As the complexity and impact of AI systems grow, it becomes ever harder to comprehend the reasoning behind their decisions. Transparency, which promotes accountability and confidence in AI systems, is therefore in high demand. As a result, emphasis has shifted notably toward Explainable AI (XAI), a field of study focused on creating tools, methods, and frameworks that help humans, researchers and end users alike, understand and interpret AI models more easily.

The goal of this thematic Issue is to compile state-of-the-art research that tackles the complex problem of improving explainability in AI. It aims to examine the theoretical underpinnings of XAI, present cutting-edge methods for enhancing model transparency, and highlight practical applications where explainability plays a critical role in decision-making. The papers in this Issue are expected to offer significant insights into the science and practice of XAI, ranging from user-centric interpretability tools to reliable models that strike a balance between explainability and performance.

This Topic aims to compile a wide range of viewpoints on how explainability can be incorporated into different AI technologies, including deep learning, unsupervised learning, and reinforcement learning, for applications such as computer vision and natural language processing. Articles may also highlight the ethical issues, legal requirements, and human factors that must be considered when deploying explainable models.

As AI continues to be embedded in the fabric of society, the need for systems that not only perform at a high level but also provide transparent and justifiable decisions has never been greater. Through this topical Issue, we aim to advance the dialogue on how explainable AI can evolve to meet these needs, ultimately fostering greater trust and wider adoption of AI technologies in socially sensitive contexts. The articles for this Topic are expected to inspire further exploration and innovation in the field of explainable AI, laying the groundwork for AI systems that are not only powerful but also comprehensible and accountable.

Some of the suggested areas of research for this call for papers include the following:

1. Theoretical Foundations of Explainable AI

  • Mechanistic interpretability.
  • Development of new models and frameworks for explainability in AI.
  • Theoretical analysis of interpretability and explainability across different AI frameworks, such as deep learning and reinforcement learning.
  • Mathematical and formal treatments of explainability, transparency, and interpretability.
  • Approaches to measuring and evaluating the effectiveness of explanations.

2. Explainable Machine Learning Models and Techniques

  • Novel methods for making black-box models interpretable.
  • Design of intrinsically interpretable models that balance performance with explainability.
  • Post hoc explanation methods, including feature importance, surrogate models, and decision trees for explaining complex models.
  • Techniques for providing counterfactual explanations or model debugging to enhance interpretability.
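This call does not prescribe any particular method, but as a concrete illustration of the post hoc surrogate approach named above, the following minimal sketch (using a synthetic dataset and scikit-learn, both chosen here purely for illustration) fits a shallow decision tree to mimic a black-box model's predictions and reports its fidelity:

```python
# Minimal sketch of a post hoc surrogate explanation: a shallow,
# inherently interpretable decision tree is trained to reproduce the
# predictions of a "black-box" random forest. All model and dataset
# choices here are illustrative assumptions, not part of this call.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X, y)
y_bb = black_box.predict(X)  # surrogate learns the model's outputs, not the true labels

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_bb)

# Fidelity: how closely the interpretable surrogate tracks the black box.
fidelity = accuracy_score(y_bb, surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
```

The depth-3 tree can then be inspected directly, trading some fidelity for human readability, which is exactly the performance-explainability balance this Topic invites contributions on.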

3. Human-Centered Explainability

  • User-centric approaches for designing explanations that are comprehensible, meaningful, and actionable for end users.
  • Evaluation of human understanding and trust in AI systems based on different types of explanations.
  • Cognitive models for understanding how humans interpret AI explanations.
  • Interaction design and visualization techniques that facilitate human–AI collaboration through explanations.

4. The Intersection of Explainable AI with Ethics, Fairness, and Transparency

  • Ethical considerations in explainable AI systems.
  • Addressing bias, fairness, and discrimination when building explainable models.
  • Regulatory and policy frameworks for ensuring transparency in AI decision-making processes.
  • XAI’s role in enhancing accountability and trust in AI-based decision systems.

5. Real-World Applications of Explainable AI

  • Case studies and applications of XAI.
  • Challenges and solutions in integrating explainable AI into existing AI applications.
  • Evaluation of explainable AI in practice, including metrics, benchmarks, and user feedback in real-world deployments.
  • Impact of XAI in high-stakes decision-making domains, such as medical diagnostics, risk assessment, and judicial decision-making.

6. Explainability in Wireless Sensor Networks

  • Improving sensor data interpretation using XAI.
  • XAI for optimizing sensor deployment.
  • XAI techniques for advancing research in smart cities and systems.
  • XAI for improving fault diagnosis in sensor networks and distributed sensor systems.
  • Enhancing environmental monitoring, precision agriculture, and industrial automation using XAI.

7. Explainability for Unsupervised and Reinforcement Learning Models

  • Explaining unsupervised machine learning models and deep generative models.
  • Interpretability and explainability in reinforcement learning.
  • Improving the transparency of self-learning systems in dynamic environments.

8. Evaluation Methods for Explainable AI

  • Metrics and benchmarks to assess the effectiveness of AI explanations.
  • Evaluating the impact of interpretability on model performance and user trust.
  • Comparison of explanation techniques and models in terms of usability, accuracy, and user comprehension.
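One family of effectiveness metrics mentioned above can be sketched as an occlusion-based faithfulness check. As an illustrative assumption (not a benchmark endorsed by this call), the example below occludes the features an explanation ranks as most important and measures the resulting accuracy drop; a larger drop suggests the explanation identified features the model actually relies on:

```python
# Sketch of a faithfulness-style evaluation metric for explanations.
# The dataset, model, and "explanation" (the model's own impurity-based
# feature importances) are all illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=400, n_features=8, n_informative=3,
                           random_state=1)
model = RandomForestClassifier(random_state=1).fit(X, y)
base_acc = accuracy_score(y, model.predict(X))

# Rank features by the explanation and take the top three.
top = np.argsort(model.feature_importances_)[::-1][:3]

# Occlude the top-ranked features by replacing them with their means.
X_occluded = X.copy()
X_occluded[:, top] = X[:, top].mean(axis=0)
drop = base_acc - accuracy_score(y, model.predict(X_occluded))
print(f"accuracy drop after occluding top features: {drop:.2f}")
```

Comparing such drops across explanation techniques gives one simple, quantitative axis along which the comparisons invited in this section could be made.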

9. Integration of Explainable AI with Other Emerging AI Technologies

  • Explainability in AI systems that incorporate multi-modal data, such as combined text, images, and sensor data.
  • Cross-disciplinary integration of explainable AI with other emerging fields like neuro-symbolic AI, quantum computing, or autonomous robotics.
  • Hybrid approaches that combine human expertise with machine learning insights for more transparent and accountable decision-making.

10. Challenges and Future Directions in Explainable AI

  • Open challenges in scaling explainable AI solutions to large, complex systems and data.
  • The future of explainability in AI: what is next for transparency and interpretability in rapidly advancing AI technologies?
  • Research on the trade-off between explainability and performance in various AI applications.

We look forward to receiving your contributions.

Dr. Vishnu S. Pendyala
Dr. Affan Yasin
Topic Editors

Keywords

  • explainable AI (XAI)
  • interpretability
  • transparency
  • human-centered explainability
  • ethical considerations

Participating Journals

Journal Name     Impact Factor  CiteScore  Launched Year  First Decision (median)  APC
AI               3.1            7.2        2020           18.9 days                CHF 1600
Algorithms       1.8            4.1        2008           18.9 days                CHF 1600
Electronics      2.6            5.3        2012           16.4 days                CHF 2400
Information      2.4            6.9        2010           16.4 days                CHF 1600
Sensors          3.4            7.3        2001           18.6 days                CHF 2600
Future Internet  2.8            7.1        2009           16.9 days                CHF 1600

Preprints.org is a multidisciplinary platform offering a preprint service designed to facilitate the early sharing of your research. It supports and empowers your research journey from the very beginning.

MDPI Topics is collaborating with Preprints.org and has established a direct connection between MDPI journals and the platform. Authors are encouraged to take advantage of this opportunity by posting their preprints at Preprints.org prior to publication:

  1. Share your research immediately: disseminate your ideas prior to publication and establish priority for your work.
  2. Safeguard your intellectual contribution: protect your ideas with a time-stamped preprint that serves as proof of your research timeline.
  3. Boost visibility and impact: increase the reach and influence of your research by making it accessible to a global audience.
  4. Gain early feedback: receive valuable input and insights from peers before submitting to a journal.
  5. Ensure broad indexing: preprints are indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit, and Europe PMC.

Published Papers

This Topic is now open for submission.