Explainable AI in Computer Vision

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "AI in Imaging".

Deadline for manuscript submissions: closed (30 November 2025) | Viewed by 3309

Special Issue Editor


Dr. Bas Van der Velden
Guest Editor
Wageningen Food Safety Research, Wageningen University & Research, 6708 WB Wageningen, The Netherlands
Interests: explainable AI; image analysis; deep learning; computer vision

Special Issue Information

Dear Colleagues,

As AI continues to revolutionize computer vision, the demand for explainability has never been greater. This is especially the case when AI is used for high-stakes decision making. As a result, methods that open up the black-box nature of AI have proliferated. These methods are commonly referred to as explainable artificial intelligence (XAI).

This Special Issue aims to highlight the latest advancements and challenges in XAI for computer vision. We welcome theoretical and practical submissions including, but not limited to, the following:

  • Novel XAI techniques in computer vision;
  • Visualization of XAI model decision processes;
  • Model-agnostic and model-specific XAI;
  • Case studies on XAI in real-world applications (e.g., agrifood, medical, autonomous driving, surveillance, etc.);
  • Benchmarking of XAI;
  • Evaluation of XAI;
  • Human-centered approaches to XAI;
  • Boosting AI performance using XAI.

We look forward to receiving your submissions.

Kind regards,

Dr. Bas Van der Velden
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainable AI
  • computer vision
  • image analysis
  • artificial intelligence
  • machine learning
  • deep learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (3 papers)


Research

20 pages, 1142 KB  
Article
A Cross-Domain Benchmark of Intrinsic and Post Hoc Explainability for 3D Deep Learning Models
by Asmita Chakraborty, Gizem Karagoz and Nirvana Meratnia
J. Imaging 2026, 12(2), 63; https://doi.org/10.3390/jimaging12020063 - 30 Jan 2026
Viewed by 886
Abstract
Deep learning models for three-dimensional (3D) data are increasingly used in domains such as medical imaging, object recognition, and robotics. Because these models operate as black boxes, the need for explainability has grown alongside their adoption. However, the lack of standardized and quantitative benchmarks for explainable artificial intelligence (XAI) on 3D data limits the reliable comparison of explanation quality. In this paper, we present a unified benchmarking framework to evaluate both intrinsic and post hoc XAI methods across three representative 3D datasets: volumetric CT scans (MosMed), voxelized CAD models (ModelNet40), and real-world point clouds (ScanObjectNN). The evaluated methods include Grad-CAM, Integrated Gradients, Saliency, Occlusion, and the intrinsic ResAttNet-3D model. We quantitatively assess explanations using the Correctness (AOPC), Completeness (AUPC), and Compactness metrics, applied consistently across all datasets. Our results show that explanation quality varies significantly across methods and domains: Grad-CAM and intrinsic attention performed best on medical CT scans, while gradient-based methods excelled on voxelized and point-based data. Statistical tests (Kruskal–Wallis and Mann–Whitney U) confirmed significant performance differences between methods. No single approach achieved superior results across all domains, highlighting the importance of multi-metric evaluation. This work provides a reproducible framework for the standardized assessment of 3D explainability and comparative insights to guide future XAI method selection.
(This article belongs to the Special Issue Explainable AI in Computer Vision)
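The Correctness (AOPC) metric referenced in this abstract follows the standard perturbation recipe from the XAI literature: occlude the features an explanation ranks as most relevant, most relevant first, and average the resulting drop in the target-class score. The PyTorch sketch below is a minimal, generic formulation of that idea, not the paper's code; the zero baseline, step count, and per-step fraction are assumptions made for illustration.

```python
# Minimal AOPC (Area Over the Perturbation Curve) sketch in PyTorch.
# Assumptions (not from the paper): the model returns class logits,
# the attribution map has the same shape as the input, and perturbing
# an element means setting it to a zero baseline.
import torch

def aopc(model, x, attribution, target_class, steps=10, frac_per_step=0.02):
    """Most-relevant-first (MoRF) perturbation: average the drop in the
    target-class score as the highest-attribution elements are removed."""
    model.eval()
    with torch.no_grad():
        base = model(x.unsqueeze(0))[0, target_class].item()

        # Rank input elements by attribution, most relevant first.
        order = attribution.flatten().argsort(descending=True)
        n_per_step = max(1, int(frac_per_step * order.numel()))

        x_pert = x.clone().flatten()
        drops = []
        for k in range(steps):
            idx = order[k * n_per_step : (k + 1) * n_per_step]
            x_pert[idx] = 0.0  # assumed zero baseline
            score = model(x_pert.view_as(x).unsqueeze(0))[0, target_class].item()
            drops.append(base - score)

        # Higher AOPC means the explanation identified features the
        # model actually relies on.
        return sum(drops) / (len(drops) + 1)
```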

17 pages, 3706 KB  
Article
Dual-Path Convolutional Neural Network with Squeeze-and-Excitation Attention for Lung and Colon Histopathology Classification
by Helala AlShehri
J. Imaging 2025, 11(12), 448; https://doi.org/10.3390/jimaging11120448 - 14 Dec 2025
Cited by 1 | Viewed by 758
Abstract
Lung and colon cancers remain among the leading causes of cancer-related mortality worldwide, underscoring the need for rapid and accurate histopathological diagnosis. Manual examination of biopsy slides is often time-consuming and prone to inter-observer variability, which highlights the importance of developing reliable and explainable automated diagnostic systems. This study presents DPCSE-Net, a lightweight dual-path convolutional neural network enhanced with a squeeze-and-excitation (SE) attention mechanism for lung and colon cancer classification. The dual-path structure captures both fine-grained cellular textures and global contextual information through multiscale feature extraction, while the SE attention module adaptively recalibrates channel responses to emphasize discriminative features. To enhance transparency and interpretability, Gradient-weighted Class Activation Mapping (Grad-CAM), attention heatmaps, and Integrated Gradients are employed to visualize class-specific activation patterns and verify that the model’s focus aligns with diagnostically relevant tissue regions. Evaluated on the publicly available LC25000 dataset, DPCSE-Net achieved state-of-the-art performance with 99.88% accuracy and F1-score, while maintaining low computational complexity. Ablation experiments confirmed the contribution of the dual-path design and SE module, and qualitative analyses demonstrated the model’s strong interpretability. These results establish DPCSE-Net as an accurate, efficient, and explainable framework for computer-aided histopathological diagnosis, supporting the broader goals of explainable AI in computer vision.
(This article belongs to the Special Issue Explainable AI in Computer Vision)
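The squeeze-and-excitation attention described in this abstract recalibrates channel responses in two steps: global average pooling compresses each feature map to a single descriptor ("squeeze"), and a small bottleneck network turns those descriptors into per-channel gates that rescale the feature maps ("excitation"). The sketch below is a generic 2D SE block after Hu et al. (2018), not DPCSE-Net's exact module; the reduction ratio of 16 is the conventional default rather than a value taken from the paper.

```python
# Generic squeeze-and-excitation (SE) block in PyTorch.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)       # squeeze: global context per channel
        self.fc = nn.Sequential(                  # excitation: bottleneck -> gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)               # (B, C) channel descriptors
        w = self.fc(w).view(b, c, 1, 1)           # per-channel weights in (0, 1)
        return x * w                              # recalibrated feature maps

# Usage: drop the block after any convolutional stage, e.g.
# out = SEBlock(channels=64)(conv_features)
```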

64 pages, 45605 KB  
Article
SegClarity: An Attribution-Based XAI Workflow for Evaluating Historical Document Layout Models
by Iheb Brini, Najoua Rahal, Maroua Mehri, Rolf Ingold and Najoua Essoukri Ben Amara
J. Imaging 2025, 11(12), 424; https://doi.org/10.3390/jimaging11120424 - 28 Nov 2025
Viewed by 771
Abstract
In recent years, deep learning networks have demonstrated remarkable progress in the semantic segmentation of historical documents. Nonetheless, their limited explainability remains a critical concern, as these models frequently operate as black boxes, thereby constraining confidence in the trustworthiness of their outputs. To enhance transparency and reliability in their deployment, increasing attention has been directed toward explainable artificial intelligence (XAI) techniques. These techniques typically produce fine-grained attribution maps in the form of heatmaps, illustrating feature contributions from different blocks and layers within a deep neural network (DNN). However, such maps often closely resemble the segmentation outputs themselves, and there is currently no consensus regarding appropriate explainability metrics for semantic segmentation. To overcome these challenges, we present SegClarity, a novel workflow designed to integrate explainability into the analysis of historical documents. The workflow combines visual and quantitative evaluations specifically tailored to segmentation-based applications. Furthermore, we introduce the Attribution Concordance Score (ACS), a new explainability metric that provides quantitative insights into the consistency and reliability of attribution maps. To evaluate the effectiveness of our approach, we conducted extensive qualitative and quantitative experiments using two datasets of historical document images, two U-Net model variants, and four attribution-based XAI methods. The qualitative assessment involved four XAI methods across multiple U-Net layers, including input-level comparisons with the state-of-the-art perturbation methods RISE and MiSuRe. Quantitatively, five XAI evaluation metrics were employed to benchmark these approaches comprehensively. Beyond historical document analysis, we further validated the workflow’s generalization by demonstrating its transferability to the Cityscapes dataset, a challenging benchmark for urban scene segmentation. The results demonstrate that the proposed workflow substantially improves the interpretability and reliability of deep learning models applied to the semantic segmentation of historical documents. To enhance reproducibility, we have released SegClarity’s source code along with interactive examples of the proposed workflow.
(This article belongs to the Special Issue Explainable AI in Computer Vision)
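The Attribution Concordance Score (ACS) is defined in the article itself, and the sketch below does not reproduce it. It only illustrates one generic way to quantify how consistently two attribution heatmaps for the same prediction rank pixel importance: Spearman rank correlation over the flattened maps. The function name and example data are hypothetical.

```python
# Illustrative agreement measure between two attribution heatmaps.
# This is NOT the paper's ACS metric, only a common generic baseline.
import numpy as np
from scipy.stats import spearmanr

def heatmap_agreement(attr_a: np.ndarray, attr_b: np.ndarray) -> float:
    """Rank-correlate two same-shape attribution maps; 1.0 means the two
    XAI methods order pixel importance identically, 0.0 means no
    monotone relationship."""
    assert attr_a.shape == attr_b.shape
    rho, _ = spearmanr(attr_a.ravel(), attr_b.ravel())
    return float(rho)

# Example with synthetic maps standing in for, e.g., Grad-CAM vs.
# Integrated Gradients on one document image:
rng = np.random.default_rng(0)
a = rng.random((256, 256))
print(heatmap_agreement(a, a + 0.05 * rng.random((256, 256))))  # close to 1
```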
