Machine Learning and Reasoning for Reliable and Explainable AI

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 30 May 2025

Special Issue Editors


Guest Editor
1. Associate Professor, School of Computing, University of Portsmouth, Portsmouth PO1 2UP, UK
2. Associate Professor, Faculty of Engineering, Technical University of Sofia, 1756 Sofia, Bulgaria
Interests: future and emerging technologies; computing; computational intelligence

Guest Editor
School of Computing, University of Portsmouth, Portsmouth PO1 2UP, UK
Interests: fuzzy logic; artificial intelligence; machine learning; AI and ML applications; intelligent transportation systems

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) is currently one of the most impactful technologies across many areas, with geopolitical, social, and economic implications, among others. However, the increasing complexity of AI models, such as convolutional neural networks and deep learning architectures, raises concerns about their interpretability and explainability. As AI technologies become embedded in decision-making processes, it is crucial to understand and validate the reasoning behind AI-generated outcomes. This necessity has given rise to explainable AI (XAI), an area of research and development focused on making AI systems more transparent and interpretable.

XAI refers to the ability of AI systems to provide clear and understandable explanations for their actions and decisions, allowing human users to comprehend and trust the outputs produced by machine learning (ML) algorithms. Traditional AI models, particularly deep learning algorithms, often operate as black boxes, making it difficult for users to understand how these systems arrive at specific outcomes. XAI therefore focuses on developing new approaches that explain black-box models, achieving good explainability without sacrificing system performance.
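
As a simple illustration of post hoc explanation, the following minimal sketch treats a trained classifier as a black box and ranks features with permutation importance, a model-agnostic technique that measures how much performance drops when a feature's values are shuffled. The dataset and model are illustrative choices only, not methods prescribed by this Special Issue.

    # Minimal sketch: model-agnostic explanation of a black-box classifier
    # via permutation importance (dataset and model are illustrative).
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Treat the fitted forest as a black box: we only query its predictions.
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature on held-out data and record the accuracy drop.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
    for name, score in ranked[:5]:
        print(f"{name}: {score:.3f}")  # the five most influential features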

Complementing ML with machine reasoning can make AI more sophisticated. Machine reasoning solves problems by applying human-like common sense to learned data, and it is a crucial complement to ML because its recommendations are explainable: humans can trace any decision back through the reasoning process, which increases the auditability and explainability of the system. Such explainability helps build trust in ML models and enables their adoption at a larger scale. A toy example of this pairing is sketched below.
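
The hypothetical sketch below pairs an ML-produced risk score with explicit rules so that every recommendation carries a trace of the rules that fired; all names and thresholds are invented for this example.

    # Hypothetical sketch: rule-based reasoning layered on an ML score,
    # returning a decision together with an auditable reasoning trace.
    def recommend(risk_score: float, has_prior_incident: bool) -> tuple[str, list[str]]:
        trace = []
        decision = "approve"
        if risk_score > 0.8:
            trace.append("R1: risk_score > 0.8 -> escalate to human review")
            decision = "review"
        if has_prior_incident and risk_score > 0.5:
            trace.append("R2: prior incident and risk_score > 0.5 -> reject")
            decision = "reject"
        if not trace:
            trace.append("R0: no rule fired -> default approve")
        return decision, trace

    decision, trace = recommend(risk_score=0.87, has_prior_incident=False)
    print(decision)           # review
    print("\n".join(trace))   # the decision can be traced back through R1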

In this Special Issue, we welcome research articles and reviews on explainable and interpretable ML techniques for a wide range of applications. Research topics of interest include (but are not limited to) the following:

  • Human–computer interaction for designing user interfaces for explainability;
  • Causal thinking, reasoning, and modelling;
  • Ethical ML;
  • Causal learning for explainable ML;
  • Transparent, comprehensible, and explainable ML;
  • Reliability analysis of ML models;
  • Interpretability in complex machine learning modelling;
  • Responsible generative AI;
  • Explainable and interpretable AI for classification and non-classification problems (e.g., regression, segmentation, and reinforcement learning);
  • Explainable/interpretable AI for fairness, privacy, and trustworthy models;
  • Novel criteria to evaluate explanation and interpretability;
  • Theoretical foundations of explainable/interpretable AI;
  • Planning under uncertainty;
  • Explainable conversational agents;
  • Explaining black-box models;
  • Hybrid approaches (e.g., neuro-fuzzy systems) for XAI;
  • Role of fuzzy knowledge representation in XAI;
  • Role of natural language generation in XAI.

Dr. Raheleh Jafari
Dr. Alexander Gegov
Dr. Farzad Arabikhan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainable AI
  • machine learning
  • artificial intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

18 pages, 2622 KiB  
Article
Transformer-Based Explainable Model for Breast Cancer Lesion Segmentation
by Huina Wang, Lan Wei, Bo Liu, Jianqiang Li, Jinshu Li, Juan Fang and Catherine Mooney
Appl. Sci. 2025, 15(3), 1295; https://doi.org/10.3390/app15031295 - 27 Jan 2025
Abstract
Breast cancer is one of the most prevalent cancers among women, with early detection playing a critical role in improving survival rates. This study introduces a novel transformer-based explainable model for breast cancer lesion segmentation (TEBLS), aimed at enhancing the accuracy and interpretability of breast cancer lesion segmentation in medical imaging. TEBLS integrates a multi-scale information fusion approach with a hierarchical vision transformer, capturing both local and global features by leveraging the self-attention mechanism. This model addresses the limitations of existing segmentation methods, such as the inability to effectively capture long-range dependencies and fine-grained semantic information. Additionally, TEBLS incorporates visualization techniques to provide insights into the segmentation process, enhancing the model’s interpretability for clinical use. Experiments demonstrate that TEBLS outperforms traditional and existing deep learning-based methods in segmenting complex breast cancer lesions with variations in size, shape, and texture, achieving a mean DSC of 81.86% and a mean AUC of 97.72% on the CBIS-DDSM test set. Our model not only improves segmentation accuracy but also offers a more explainable framework, which has the potential to be used in clinical settings.
(This article belongs to the Special Issue Machine Learning and Reasoning for Reliable and Explainable AI)
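
For readers unfamiliar with the mechanism the abstract leverages, the following is a generic sketch of scaled dot-product self-attention over image patch embeddings, showing how every patch attends to every other patch in a single layer; it is not the TEBLS architecture itself, and all sizes are illustrative.

    # Generic sketch of self-attention (not the TEBLS model itself).
    import torch
    import torch.nn as nn

    class SelfAttention(nn.Module):
        def __init__(self, dim: int):
            super().__init__()
            self.qkv = nn.Linear(dim, 3 * dim)  # joint query/key/value projection
            self.proj = nn.Linear(dim, dim)
            self.scale = dim ** -0.5

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, num_patches, dim), a sequence of patch embeddings
            q, k, v = self.qkv(x).chunk(3, dim=-1)
            # Every patch attends to every other patch, which is how
            # transformers capture long-range dependencies directly.
            attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
            return self.proj(attn @ v)

    x = torch.randn(2, 196, 64)        # e.g., 14x14 patches, 64-dim embeddings
    print(SelfAttention(64)(x).shape)  # torch.Size([2, 196, 64])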

16 pages, 3002 KiB  
Article
Advancing Model Explainability: Visual Concept Knowledge Distillation for Concept Bottleneck Models
by Ju-Hwan Lee, Dang Thanh Vu, Nam-Kyung Lee, Il-Hong Shin and Jin-Young Kim
Appl. Sci. 2025, 15(2), 493; https://doi.org/10.3390/app15020493 - 7 Jan 2025
Abstract
This study explores the integration of concept bottleneck models (CBMs) with knowledge distillation (KD) while preserving the locality characteristics of the CBM. Although KD proves effective in model compression, compressed models often lack interpretability in their decision-making process. We enhance comprehensive explainability by maintaining CBMs’ inherent interpretability through our novel approach to knowledge distillation. We introduce visual concept knowledge distillation (VICO-KD), which transfers both explicit and implicit visual concepts from the teacher to the student model while preserving the local interpretability of the CBM, enabling accurate classification and clear visualization of evidence. VICO-KD demonstrates superior performance on benchmark datasets compared to Vanilla-KD, ensuring the student model learns visual concepts while maintaining the local interpretation capabilities of the teacher CBM. Our methodology shows competitive performance against existing concept models, and the student model, trained via VICO-KD, exhibits enhanced performance compared to the teacher during interventions. This study highlights the effectiveness of combining a CBM with KD to improve both interpretability and explainability in compressed models while maintaining locality properties.
(This article belongs to the Special Issue Machine Learning and Reasoning for Reliable and Explainable AI)
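
As background for the Vanilla-KD baseline mentioned in the abstract, the following is a generic sketch of the classic knowledge distillation loss, combining softened teacher targets with hard labels; VICO-KD's transfer of explicit and implicit visual concepts is not reproduced here, and the temperature and weighting values are illustrative.

    # Generic sketch of vanilla knowledge distillation (not VICO-KD itself).
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
        # Soft targets: match the teacher's temperature-softened distribution.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)  # rescale gradients, following Hinton et al. (2015)
        # Hard targets: standard cross-entropy on the ground-truth labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    student = torch.randn(8, 10, requires_grad=True)  # 8 samples, 10 classes
    teacher = torch.randn(8, 10)
    labels = torch.randint(0, 10, (8,))
    print(distillation_loss(student, teacher, labels))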
