- 6.0 Impact Factor
- 9.9 CiteScore
- 28 days Time to First Decision
Explainable Artificial Intelligence: Theoretical Foundations and Methodological Advances
This Special Issue belongs to the section “Learning”.
Special Issue Information
Dear Colleagues,
The integration of explainable artificial intelligence (XAI) into industrial process modelling and optimization is an indispensable driving force in the intelligent transformation of industries. Traditional black-box AI models often encounter limitations in industrial settings due to poor interpretability. These limitations include difficulties in gaining the trust of operators, an inability to trace decision-making processes, and challenges in meeting regulatory compliance requirements. XAI, on the other hand, retains the advantages of AI, such as automatically learning industrial process characteristics, improving modelling accuracy, and reducing reliance on prior knowledge, while also enhancing the transparency and credibility of model decisions.
Specifically, XAI clarifies the internal mechanisms and logical chains of AI model reasoning, improves the interpretable reasoning theory, and strengthens the logical consistency and theoretical unity among different XAI paradigms. Furthermore, XAI addresses the core theoretical challenges of black-box models by resolving issues such as poor traceability and verifiability through structured, quantifiable explanation frameworks. This lays a solid theoretical foundation for advancing AI. This Special Issue focuses on the theoretical foundations and methodological advances of XAI, aiming to improve its theoretical system and innovate core methods while reflecting the application value of theoretical achievements in practical scenarios.
The scope covers the entire theoretical and methodological spectrum of XAI, including the refinement of symbolic explainability theories, the optimization of explainable machine learning models, the integration of domain knowledge into XAI frameworks, and the exploration of emerging XAI paradigms. Application scenarios may extend to fields such as manufacturing and cyber–physical systems, with a core focus on theoretical depth, methodological innovation, and concrete industrial implementations.
In this Special Issue, original research articles and reviews are welcome. Research areas may include (but are not limited to) the following:
- Symbolic explainability approaches
  - Granular computing
  - Fuzzy logic
  - Bayesian networks
  - Probabilistic graphical models
- Explainable machine learning models
  - Random forests
  - Gradient boosting trees
  - k-Nearest neighbors
- XAI-based modelling for complex systems
- Interpretable AI-enhanced adaptive control methodologies
- XAI-enabled intelligent optimization methodologies
- Knowledge graph-driven explainable AI methods
- Emerging explainable AI approaches
  - Neuro-symbolic explainable AI
  - Federated explainable AI
  - Large-model explainable AI
Prof. Dr. Sheng Du
Dr. Javier Del Ser Lorente
Guest Editors
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Machine Learning and Knowledge Extraction is an international peer-reviewed open access monthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
Keywords
- symbolic explainability approaches; granular computing; fuzzy logic; Bayesian networks; probabilistic graphical models
- explainable machine learning models; random forests; gradient boosting trees; k-nearest neighbors
- XAI-based modelling for complex systems; interpretable AI-enhanced adaptive control methodologies; XAI-enabled intelligent optimization methodologies; knowledge graph-driven explainable AI methods
- emerging explainable AI approaches; neuro-symbolic explainable AI; federated explainable AI; large-model explainable AI
Benefits of Publishing in a Special Issue
- Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
- Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
- Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
- External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
- Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

