

Explainable Artificial Intelligence Technology and Its Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 December 2025 | Viewed by 2819

Special Issue Editors


Guest Editor
School of Integrated Circuits, Shandong University, Jinan 250101, China
Interests: deep learning; explainable artificial intelligence (XAI); EEG analysis; brain–computer interface (BCI); seizure detection; iris recognition; FPGA-based deep learning hardware accelerator design; cognitive science

Guest Editor
School of Integrated Circuits, Shandong University, Jinan 250101, China
Interests: deep learning; explainable artificial intelligence (XAI); seizure detection; EEG analysis; brain–computer interface (BCI); FPGA-based deep learning hardware accelerator design

Guest Editor
School of Microelectronics, Shandong University, Jinan 250100, China
Interests: deep learning; BCI; biomedical signal processing; neural photostimulation; cognitive computing

Special Issue Information

Dear Colleagues,

Explainable artificial intelligence (XAI) aims to address one of the most critical challenges in artificial intelligence: making AI systems transparent, interpretable, and trustworthy for researchers, developers, and end-users. By elucidating the internal reasoning processes of AI models, XAI can enhance user trust and improve the reliability of automated decision-making systems.

This Special Issue welcomes the submission of high-quality original research and review articles exploring the application and emerging frontiers of XAI. In addition to the application of conventional XAI methods, we also encourage submissions exploring biologically inspired, cognitively motivated, and neuroscience-related XAI methods.

We invite studies covering theoretical foundations, algorithmic innovations, practical applications in various domains, and empirical evaluations of XAI methods. Relevant application areas include, but are not limited to, biomedical signal processing, computer vision, natural language processing, robotics, autonomous vehicles, recommender systems, trustworthy big data analytics, edge/IoT devices, finance, and cognitive sciences.

 Topics of interest include, but are not limited to, the following:

  1. Domain‑specific applications of XAI in health care, autonomous systems, smart manufacturing, finance, cybersecurity, and environmental science;
  2. Cognitive science- and biologically inspired XAI methods;
  3. Hardware‑efficient and real‑time XAI on edge, mobile, FPGA, or neuromorphic platforms;
  4. Causal, counterfactual, and contrastive explanation frameworks;
  5. XAI for privacy preservation, fairness auditing, and responsible AI governance;
  6. Benchmarks, evaluation metrics, and open‑source toolkits for XAI;
  7. Human–AI interaction studies evaluating explanatory effectiveness.

Dr. Guoyang Liu
Prof. Dr. Weidong Zhou
Prof. Dr. Lan Tian
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainable artificial intelligence (XAI)
  • post hoc explainability
  • ante‑hoc (intrinsic) interpretability
  • causal and counterfactual explanations
  • cognitive science‑inspired XAI
  • biologically inspired XAI
  • interpretable deep learning
  • trustworthy and responsible AI
  • human–AI interaction and usability studies
  • explainable biomedical signal processing
  • explainable computer vision
  • explainable natural language processing
  • edge and hardware‑efficient XAI

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (6 papers)


Research

23 pages, 3099 KiB  
Article
Explainable Multi-Scale CAM Attention for Interpretable Cloud Segmentation in Astro-Meteorological Applications
by Qing Xu, Zichen Zhang, Guanfang Wang and Yunjie Chen
Appl. Sci. 2025, 15(15), 8555; https://doi.org/10.3390/app15158555 - 1 Aug 2025
Abstract
Accurate cloud segmentation is critical for astronomical observations and solar forecasting. However, traditional threshold- and texture-based methods suffer from limited accuracy (65–80%) under complex conditions such as thin cirrus or twilight transitions. Although U-Net-based deep-learning segmentation methods effectively capture both low-level and high-level features and have markedly improved accuracy, current methods still lack interpretability and multi-scale feature integration, and they often produce fuzzy boundaries or fragmented predictions. In this paper, we propose multi-scale CAM, an explainable AI (XAI) framework that integrates class activation mapping (CAM) with hierarchical feature fusion to quantify pixel-level attention across hierarchical features, thereby enhancing the model's discriminative capability. To achieve precise segmentation, we integrate CAM into an improved U-Net architecture, incorporating multi-scale CAM attention for adaptive feature fusion and dilated residual modules for large-scale context extraction. Experimental results on the SWINSEG dataset demonstrate that our method outperforms existing state-of-the-art methods, improving recall by 3.06%, F1 score by 1.49%, and MIoU by 2.21% over the best baseline. The proposed framework balances accuracy, interpretability, and computational efficiency, offering a trustworthy solution for cloud detection systems in operational settings.
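
The abstract uses class activation mapping (CAM) as an attention signal inside an improved U-Net. As a rough illustration of that general idea only (this is not the authors' code; the module names, the 1x1 classifier head, the sigmoid gating, and the fusion layout are all assumptions), a multi-scale CAM-style attention gate could be sketched in PyTorch as follows:

```python
# Hypothetical sketch of multi-scale CAM-style attention for a U-Net encoder;
# not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CAMAttentionGate(nn.Module):
    """A 1x1 'classifier' head produces CAM-like maps that re-weight the features."""
    def __init__(self, channels, num_classes=2):
        super().__init__()
        self.head = nn.Conv2d(channels, num_classes, kernel_size=1)

    def forward(self, feats):                               # feats: (B, C, H, W)
        cam = self.head(feats)                              # per-class activation maps
        attn = torch.sigmoid(cam.max(dim=1, keepdim=True).values)
        return feats * attn                                 # attention-weighted features

class MultiScaleCAMFusion(nn.Module):
    """Gate several encoder scales with CAM attention, upsample, and fuse them."""
    def __init__(self, channel_list, out_channels):
        super().__init__()
        self.gates = nn.ModuleList(CAMAttentionGate(c) for c in channel_list)
        self.proj = nn.Conv2d(sum(channel_list), out_channels, kernel_size=1)

    def forward(self, pyramid):                             # list of (B, C_i, H_i, W_i), finest scale first
        target = pyramid[0].shape[-2:]
        gated = [F.interpolate(g(f), size=target, mode="bilinear", align_corners=False)
                 for g, f in zip(self.gates, pyramid)]
        return self.proj(torch.cat(gated, dim=1))
```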

35 pages, 1458 KiB  
Article
User Comment-Guided Cross-Modal Attention for Interpretable Multimodal Fake News Detection
by Zepu Yi, Chenxu Tang and Songfeng Lu
Appl. Sci. 2025, 15(14), 7904; https://doi.org/10.3390/app15147904 - 15 Jul 2025
Viewed by 373
Abstract
The proliferation of fake news in the digital age poses a pressing challenge, with profound and harmful effects on societal structures, including the misguidance of public opinion, the erosion of social trust, and the exacerbation of social polarization. Current fake news detection methods are largely limited to superficial text analysis or basic text–image integration, which face significant limitations in accurately identifying deceptive information. To bridge this gap, we propose the UC-CMAF framework, which comprehensively integrates news text, images, and user comments through an adaptive co-attention fusion mechanism. The UC-CMAF workflow consists of four key subprocesses: multimodal feature extraction, cross-modal adaptive collaborative attention fusion of news text and images, cross-modal attention fusion of user comments with news text and images, and finally, input of the fused features into a fake news detector. Specifically, we introduce multi-head cross-modal attention heatmaps and comment-importance visualizations to provide interpretability support for the model's predictions, revealing the key semantic areas and user perspectives that influence its judgments. Through the cross-modal adaptive collaborative attention mechanism, UC-CMAF achieves deep semantic alignment between news text and images and uses social signals from user comments to build an enhanced credibility evaluation path, offering a new paradigm for interpretable fake news detection. Experimental results demonstrate that UC-CMAF consistently outperforms 15 baseline models across two benchmark datasets, achieving F1 scores of 0.894 and 0.909. These results validate the effectiveness of its adaptive cross-modal attention mechanism and the incorporation of user comments in enhancing both detection accuracy and interpretability.
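
As a minimal, hypothetical sketch of the comment-as-query idea (not the UC-CMAF implementation; the dimensions, mean pooling, and two-class head are assumptions), user-comment token embeddings can query the news text and image token features through multi-head cross-modal attention, with the returned attention weights serving as the heatmaps and comment-importance scores used for interpretability:

```python
# Hypothetical comment-guided cross-modal attention fusion; not the authors' code.
import torch
import torch.nn as nn

class CommentGuidedFusion(nn.Module):
    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        self.text_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.image_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.classifier = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 2))

    def forward(self, comment_tokens, text_tokens, image_tokens):
        # Comments act as queries; the weights show which text tokens / image regions they attend to.
        t_ctx, t_weights = self.text_attn(comment_tokens, text_tokens, text_tokens)
        i_ctx, i_weights = self.image_attn(comment_tokens, image_tokens, image_tokens)
        fused = torch.cat([t_ctx.mean(dim=1), i_ctx.mean(dim=1)], dim=-1)
        logits = self.classifier(fused)                      # real vs. fake
        return logits, (t_weights, i_weights)                # weights double as interpretability heatmaps
```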

27 pages, 33532 KiB  
Article
Seg-Eigen-CAM: Eigen-Value-Based Visual Explanations for Semantic Segmentation Models
by Ching-Ting Chung and Josh Jia-Ching Ying
Appl. Sci. 2025, 15(13), 7562; https://doi.org/10.3390/app15137562 - 5 Jul 2025
Viewed by 376
Abstract
In recent years, most explainable artificial intelligence methods have focused primarily on image classification. Although research on interpretability in image segmentation has been increasing, it remains relatively limited. Several methods extending Grad-CAM have been proposed and applied to image segmentation with the aim of enhancing existing techniques and adapting their properties to this task. However, in this study we highlight a common issue with gradient-based methods when generating visual explanations: they tend to emphasize background information, resulting in significant noise, especially in image segmentation tasks involving complex or cluttered backgrounds. Inspired by the widely used Eigen-CAM method, this study proposes a novel explainability approach tailored for semantic segmentation. By integrating gradient information and introducing a sign-correction strategy, our method enhances spatial localization and reduces background noise, particularly in complex scenes. Through empirical studies, we compare our method with several representative methods, employing multiple evaluation metrics to quantify explainability and validate the advantages of our method. Overall, this study advances explainability methods for convolutional neural networks in semantic segmentation. Our approach not only preserves localized attention but also offers a simpler and more intuitive CAM, which has the potential to play a crucial role in sensitive application scenarios, fostering the development of trustworthy AI models.
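
Eigen-CAM derives a saliency map from the principal component of a layer's activations; the paper combines this with gradient information and a sign correction. A minimal sketch of that combination under stated assumptions (the exact gradient weighting and correction rule may differ from the paper's):

```python
# Hypothetical Eigen-CAM-style map with gradient weighting and sign correction;
# activations/gradients come from a chosen layer of a segmentation CNN.
import torch

def seg_eigen_cam(activations, gradients=None):
    """activations, gradients: (C, H, W) tensors captured by forward/backward hooks."""
    feats = activations if gradients is None else activations * gradients
    C, H, W = feats.shape
    flat = feats.reshape(C, -1)                              # (C, H*W)
    # First right-singular vector = principal spatial pattern of the activations.
    _, S, Vh = torch.linalg.svd(flat, full_matrices=False)
    cam = Vh[0].reshape(H, W) * S[0]
    # Simple sign correction: flip if the map anti-correlates with the mean activation.
    if (cam * feats.mean(dim=0)).sum() < 0:
        cam = -cam
    cam = torch.relu(cam)
    return cam / (cam.max() + 1e-8)                          # normalized saliency map
```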

22 pages, 1345 KiB  
Article
Integrating Financial Knowledge for Explainable Stock Market Sentiment Analysis via Query-Guided Attention
by Chuanyang Hong and Qingyun He
Appl. Sci. 2025, 15(12), 6893; https://doi.org/10.3390/app15126893 - 18 Jun 2025
Viewed by 459
Abstract
Sentiment analysis is widely applied in the financial domain. However, financial documents, particularly those concerning the stock market, often contain complex and ambiguous information, and their conclusions frequently deviate from actual market fluctuations. Thus, compared with sentiment polarity alone, financial analysts are primarily concerned with understanding the underlying rationale behind an article's judgment. Providing an explainable foundation in a document classification model has therefore become a critical focus in financial sentiment analysis. In this study, we propose a novel approach that integrates financial domain knowledge into a hierarchical BERT-GRU model via a Query-Guided Dual Attention (QGDA) mechanism. Driven by domain-specific queries derived from securities knowledge, QGDA directs attention to text segments relevant to financial concepts, offering interpretable concept-level explanations for sentiment predictions and revealing the ‘why’ behind a judgment. Crucially, this explainability is validated by designing diverse query categories. Using attention weights to identify the dominant query categories for each document, a case study demonstrates that predictions guided by these dominant categories exhibit statistically significantly higher consistency with actual stock market fluctuations (p-value = 0.0368). This approach not only confirms the utility of the provided explanations but also identifies which conceptual drivers are more indicative of market movements. While prioritizing interpretability, the proposed model also achieves a 2.3% F1 score improvement over baselines, uniquely offering both competitive performance and structured, domain-specific explainability. This provides a valuable tool for analysts seeking deeper and more transparent insights into market-related texts.
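
A hypothetical sketch of query-guided attention over sentence representations (not the authors' hierarchical BERT-GRU model; the number of concept queries, the three-class head, and the pooling are assumptions): one learned query vector per knowledge category attends over GRU sentence states, and the per-query attention weights indicate which concept category dominates a document's prediction.

```python
# Hypothetical query-guided attention layer for concept-level explanations.
import torch
import torch.nn as nn

class QueryGuidedAttention(nn.Module):
    def __init__(self, hidden_dim, num_queries, num_classes=3):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, hidden_dim))  # one per concept category
        self.out = nn.Linear(num_queries * hidden_dim, num_classes)

    def forward(self, sentence_states):                      # (B, S, H) from a GRU over sentence vectors
        scores = torch.einsum("qh,bsh->bqs", self.queries, sentence_states)
        weights = scores.softmax(dim=-1)                      # which sentences each concept query reads
        context = torch.einsum("bqs,bsh->bqh", weights, sentence_states)
        logits = self.out(context.flatten(1))                 # sentiment prediction
        return logits, weights                                # weights support concept-level explanations
```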

20 pages, 4172 KiB  
Article
Multi-Level Feature Fusion Attention Generative Adversarial Network for Retinal Optical Coherence Tomography Image Denoising
by Yiming Qian and Yichao Meng
Appl. Sci. 2025, 15(12), 6697; https://doi.org/10.3390/app15126697 - 14 Jun 2025
Viewed by 457
Abstract
Background: Optical coherence tomography (OCT) is limited by inherent speckle noise, which degrades retinal microarchitecture visualization and pathological analysis. Existing denoising methods inadequately balance noise suppression and structural preservation, necessitating advanced solutions for clinical OCT reconstruction. Methods: We propose MFFA-GAN, a generative adversarial network integrating multilevel feature fusion and an efficient local attention (ELA) mechanism. It optimizes cross-feature interactions and channel-wise information flow. Evaluations on three public OCT datasets compared traditional methods and deep learning models using PSNR, SSIM, CNR, and ENL metrics. Results: MFFA-GAN achieved strong performance (PSNR: 30.107 dB, SSIM: 0.727, CNR: 3.927, ENL: 529.161) on smaller datasets, outperforming the benchmarks, and further enhanced interpretability through pixel error maps. It preserved retinal layers and textures while suppressing noise. Ablation studies confirmed the synergy of multilevel features and ELA, improving PSNR by 1.8 dB and SSIM by 0.12 versus baselines. Conclusions: MFFA-GAN offers a reliable OCT denoising solution by harmonizing noise reduction and structural fidelity. Its hybrid attention mechanism enhances clinical image quality, aiding retinal analysis and diagnosis.
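
As a minimal sketch of the two ingredients named above, multilevel feature fusion and a lightweight local attention gate (a stand-in for the paper's ELA module, not the MFFA-GAN code; kernel sizes, pooling, and the fusion layout are assumptions):

```python
# Hypothetical fusion block with an efficient-local-attention-style gate.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalAttention(nn.Module):
    """Channel-wise gates from pooled height/width profiles (ELA-like stand-in)."""
    def __init__(self, channels, kernel_size=7):
        super().__init__()
        pad = kernel_size // 2
        self.conv_h = nn.Conv1d(channels, channels, kernel_size, padding=pad, groups=channels)
        self.conv_w = nn.Conv1d(channels, channels, kernel_size, padding=pad, groups=channels)

    def forward(self, x):                                     # (B, C, H, W)
        gate_h = torch.sigmoid(self.conv_h(x.mean(dim=3)))    # (B, C, H)
        gate_w = torch.sigmoid(self.conv_w(x.mean(dim=2)))    # (B, C, W)
        return x * gate_h.unsqueeze(3) * gate_w.unsqueeze(2)

class FusionBlock(nn.Module):
    """Fuse a shallow and a deep feature map, then apply the attention gate."""
    def __init__(self, c_low, c_high, c_out):
        super().__init__()
        self.proj = nn.Conv2d(c_low + c_high, c_out, kernel_size=3, padding=1)
        self.attn = LocalAttention(c_out)

    def forward(self, low, high):
        high = F.interpolate(high, size=low.shape[-2:], mode="bilinear", align_corners=False)
        return self.attn(torch.relu(self.proj(torch.cat([low, high], dim=1))))
```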

32 pages, 6964 KiB  
Article
MDFT-GAN: A Multi-Domain Feature Transformer GAN for Bearing Fault Diagnosis Under Limited and Imbalanced Data Conditions
by Chenxi Guo, Vyacheslav V. Potekhin, Peng Li, Elena A. Kovalchuk and Jing Lian
Appl. Sci. 2025, 15(11), 6225; https://doi.org/10.3390/app15116225 - 31 May 2025
Viewed by 630
Abstract
In industrial scenarios, bearing fault diagnosis often suffers from data scarcity and class imbalance, which significantly hinder the generalization performance of data-driven models. While generative adversarial networks (GANs) have shown promise for data augmentation, their efficacy deteriorates in the presence of multi-category and structurally complex fault distributions. To address these challenges, this paper proposes a novel fault diagnosis framework based on a Multi-Domain Feature Transformer GAN (MDFT-GAN). Specifically, raw vibration signals are transformed into 2D RGB representations via joint time-domain, frequency-domain, and time–frequency-domain mappings, effectively encoding multi-perspective fault signatures. A Transformer-based feature extractor, integrated with Efficient Channel Attention (ECA), is embedded into both the generator and the discriminator to capture global dependencies and channel-wise interactions, thereby enhancing the representation quality of synthetic samples. Furthermore, a gradient penalty (GP) term is introduced to stabilize adversarial training and suppress mode collapse. To improve classification performance, an Enhanced Hybrid Visual Transformer (EH-ViT) is constructed by coupling a lightweight convolutional stem with a ViT encoder, enabling robust and discriminative fault identification. Beyond performance metrics, this work also incorporates a Grad-CAM-based interpretability scheme to visualize hierarchical feature activation patterns within the discriminator, providing transparent insight into the model's decision-making rationale across different fault types. Extensive experiments on the CWRU and Jiangnan University (JNU) bearing datasets validate that the proposed method achieves superior diagnostic accuracy, robustness under limited and imbalanced conditions, and enhanced interpretability compared with existing state-of-the-art approaches.
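
The first step described above maps a raw vibration segment to a 2D RGB image built from time-, frequency-, and time-frequency-domain views. A rough sketch of one way such an encoding could look (the specific transforms, image size, and normalization here are assumptions, not the paper's recipe):

```python
# Hypothetical multi-domain signal-to-image encoding for a 1-D vibration segment.
import numpy as np
from scipy import signal

def vibration_to_rgb(x, size=64):
    x = (x - x.mean()) / (x.std() + 1e-8)
    # Time domain: tile the raw segment into a square grid.
    t_img = np.resize(x, (size, size))
    # Frequency domain: magnitude spectrum, tiled to the same grid.
    f_img = np.resize(np.abs(np.fft.rfft(x)), (size, size))
    # Time-frequency domain: spectrogram, resampled to the grid.
    _, _, spec = signal.spectrogram(x, nperseg=min(256, len(x)))
    s_img = signal.resample(signal.resample(spec, size, axis=0), size, axis=1)

    def norm(img):
        img = img - img.min()
        return img / (img.max() + 1e-8)

    # Stack the three views as the R, G, B channels of one image.
    return np.stack([norm(t_img), norm(f_img), norm(np.abs(s_img))], axis=-1)
```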
