Deep Learning and Explainable Artificial Intelligence (2nd Edition)

A special issue of Computers (ISSN 2073-431X). This special issue belongs to the section "AI-Driven Innovations".

Deadline for manuscript submissions: 31 July 2026 | Viewed by 3582

Special Issue Editor


Dr. Kartik B. Ariyur
Guest Editor
School of Mechanical Engineering, Purdue University, 585 Purdue Mall, West Lafayette, IN 47907, USA
Interests: predictive maintenance; health monitoring for ground and aerial vehicles; data analytics; AI; innovation; nonlinear systems analysis and synthesis; adaptation; estimation; filtering; control; general artificial intelligence

Special Issue Information

Dear Colleagues,

Breakthroughs in 'deep learning' through the use of intermediate features in multilayer 'neural networks', and in generative adversarial networks that pit a generative model against a discriminative one, combined with the massive increase in GPU computing power, have made 'artificial intelligence' widely popular and widely used over the past decade. The quotation marks in the previous sentence are intentional: they remind the reader that learning in the biological sense, which improves survival outcomes via biological nervous systems or intelligent decisions that enhance energy and resource availability, remains far beyond what current software can hope to achieve. The first phase of this Special Issue succeeded in making AI/ML methods explainable in a variety of applications, ranging from agriculture to fluid mechanics to law enforcement. This second phase aims to go one step further towards the ultimate objective of making the human decision makers who use AI/ML methods accountable for their decisions. This is possible when AI/ML methods generate outcomes with predictable properties when fed data satisfying certain conditions. We do not want automatic machines massacring humans, releasing dam waters, or shutting down power grids or markets with no human being held responsible for negligence, damages, or even chargeable offences.

To this end, this issue calls for articles that lay out the limits of where and how AI in its present form can be used, besides continuing to make it explainable and transparent.

Thus, it is hoped that this Special Issue will stimulate AI that increases efficiency without compromising safety, trust, fairness, predictability, or reliability when applied to systems with large energy use, such as power, water, transport, and financial grids, as well as to law and government policy. As a first step towards this goal of making AI algorithms transparent, we seek papers that document methods meeting the following criteria:

  1. The results are reproducible, at least in the statistical sense;
  2. Algorithms are provided in a common language of sequences of vector-matrix algebra operations, which also underlies much of deep learning;
  3. Conditions satisfied by the data inputs, objective functions of optimization, or curve fits are explicitly listed;
  4. The propagation of data uncertainty to algorithmic outcomes is documented through sensitivity analysis or Monte Carlo simulation;
  5. The suitability of AI/ML methods for the application is justified from the perspective of decision-maker accountability.
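Criterion 4 can be illustrated concretely. The following is a minimal sketch of Monte Carlo uncertainty propagation; the model, nominal input, and Gaussian uncertainty levels are all illustrative assumptions, not taken from any submission:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained model: a fixed nonlinear map standing in
# for any AI/ML predictor whose robustness is being documented.
def model(x):
    return np.tanh(x @ np.array([0.8, -0.5])) + 0.1 * x[:, 0] ** 2

# Nominal input and its (assumed Gaussian) measurement uncertainty.
x_nominal = np.array([1.0, 2.0])
sigma = np.array([0.05, 0.10])

# Monte Carlo propagation: sample perturbed inputs, record the output spread.
samples = rng.normal(x_nominal, sigma, size=(10_000, 2))
outputs = model(samples)

print(f"nominal output : {model(x_nominal[None, :])[0]:.4f}")
print(f"output mean    : {outputs.mean():.4f}")
print(f"output std dev : {outputs.std():.4f}  (propagated uncertainty)")
```

The reported output standard deviation is exactly the kind of quantity a submission could tabulate alongside its accuracy figures.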

Potential issues of interest include the following: while there is, in general, no repeatability in the training of weights in deep learning or most neural networks, there is repeatability in the approximated functions or decision boundaries for similar sets of input data. Analogous results exist in adaptive control, where asymptotic tracking is achieved without convergence of the parameter estimates. Similarly, a ChatGPT-like AI needs to maintain the consistency of its conclusions provided its inputs remain consistent. The use of AI in law can have quantifiable goals, such as prompt compensation of the victim and long-term reformation of the offender to higher levels of productivity, rather than the classical legal outcomes of punishment or retribution, which are subjective. Can a chess- or Go-playing GAN handle some level of randomness in the rules of the game? Is the repeatability or predictability of AI/ML high enough in a given application to permit its use in critical areas?
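The repeatability point above, that different training runs yield different weights yet similar decision boundaries, can be sketched with a toy experiment; the tiny network, synthetic data, and seeds are all illustrative assumptions:

```python
import numpy as np

def train_mlp(seed, X, y, hidden=8, steps=2000, lr=0.5):
    """Train a tiny one-hidden-layer network from a seed-dependent initialization."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 1, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, hidden); b2 = 0.0
    for _ in range(steps):
        H = np.tanh(X @ W1 + b1)                 # hidden activations
        p = 1 / (1 + np.exp(-(H @ W2 + b2)))     # sigmoid output
        g = (p - y) / len(y)                     # cross-entropy gradient at the output
        W2 -= lr * H.T @ g; b2 -= lr * g.sum()
        gH = np.outer(g, W2) * (1 - H ** 2)      # backprop through tanh
        W1 -= lr * X.T @ gH; b1 -= lr * gH.sum(axis=0)
    return W1, W2, b1, b2

def predict(params, X):
    W1, W2, b1, b2 = params
    return (np.tanh(X @ W1 + b1) @ W2 + b2 > 0).astype(int)

rng = np.random.default_rng(42)
X = rng.normal(0, 1, (400, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # simple ground-truth boundary

params_a = train_mlp(seed=0, X=X, y=y)           # two runs, different seeds
params_b = train_mlp(seed=1, X=X, y=y)

X_test = rng.normal(0, 1, (1000, 2))
agree = (predict(params_a, X_test) == predict(params_b, X_test)).mean()
weight_gap = np.abs(params_a[0] - params_b[0]).mean()
print(f"prediction agreement: {agree:.3f}, mean |dW1|: {weight_gap:.3f}")
```

The two runs end with clearly different weight matrices, yet their predicted labels agree on almost all test points: repeatability of the decision boundary without repeatability of the weights.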

Dr. Kartik B. Ariyur
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • model interpretability
  • model explainability
  • transparency in AI
  • trustworthy AI
  • human-in-the-loop AI
  • human/decision maker accountability
  • algorithmic accountability
  • AI ethics
  • causality/causal inference
  • fairness, accountability, and transparency (FAT/ML)
  • responsible AI

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

33 pages, 17932 KB  
Article
Early Detection of Aggressive Human Behavior in Video Streams Using Deep Spatiotemporal Models
by Aida Issembayeva, Anargul Shaushenova, Ardak Nurpeisova, Aidar Ispussinov, Buldyryk Suleimenova, Anargul Bekenova, Aliya Satybaldieva, Aigul Zholmukhanova and Galiya Mauina
Computers 2026, 15(5), 267; https://doi.org/10.3390/computers15050267 - 23 Apr 2026
Viewed by 277
Abstract
In this paper, we propose a spatiotemporal approach for binary classification of violent and non-violent behavior in real-world settings. The experimental pipeline includes video preprocessing, stratified data splitting, generation of temporally structured clips, and comparative evaluation of baseline models, including a convolutional neural network. We also developed a Residual Adaptive Motion Temporal Binary Heat Network model that combines frame color characteristics, residual motion descriptions, temporal feature fusion, an early risk assessment mechanism, and interpretable localization maps. Experiments were conducted on a balanced dataset of 2000 video clips. The proposed model demonstrated the best early warning performance: a supervision rate of 0.6, an F1 score of 0.9527, and a balanced accuracy of 0.9533. With full supervision, the F1 score was 0.9342, and the area under the receiver operating characteristic curve (AUC) was 0.9871. The practical significance of the work is that the proposed approach can be used as a decision support tool for the preliminary identification of potentially dangerous video fragments with subsequent manual verification, without the assumption of autonomous use in high-risk scenarios. Full article
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence (2nd Edition))

29 pages, 5152 KB  
Article
Impact of Neural Network Initialisation Seed and Architecture on Accuracy, Generalisation and Generative Consistency in Data-Driven Internal Combustion Engine Modelling
by Arturas Gulevskis, Redha Benhadj-Djilali and Konstantin Volkov
Computers 2026, 15(3), 194; https://doi.org/10.3390/computers15030194 - 17 Mar 2026
Viewed by 420
Abstract
Artificial neural networks (ANNs) are widely used to approximate nonlinear mappings, yet their ability to capture thermodynamic behaviour in dynamic physical systems remains insufficiently characterised. This study investigates how representational capacity influences surrogate modelling accuracy for a crank-angle-resolved internal combustion engine (ICE) simulation with a maximum dynamic state dimension of six. Two feedforward ANN configurations are evaluated: a low-capacity 5–5 architecture containing 84 trainable parameters and a high-capacity 25–25–25 architecture containing 1554 parameters (18.5× larger). Both networks approximate the nonlinear mapping from five embedded operating parameters to four peak thermodynamic outputs (maximum pressure, pressure phasing, maximum temperature, and temperature phasing). Evaluation across 53,178 operating points demonstrates that the high-capacity configuration reduces root mean squared error by factors of 30–50× relative to the low-capacity network, decreasing peak temperature error from 17.68 K to 0.36 K and peak pressure error from 0.116 MPa to 0.0025 MPa. Although both models achieve coefficients of determination exceeding 0.99, the low-capacity network exhibits heavy-tailed residual distributions and regime-dependent error amplification, whereas the high-capacity model reduces both central dispersion and extreme-case error. These results demonstrate that high correlation alone does not guarantee engineering reliability in nonlinear thermodynamic systems. Distribution-level analysis, including percentile and extreme-case characterisation, is required to evaluate engineering robustness. The findings provide a quantitative framework linking ANN capacity, nonlinear dynamic system representation, and predictive robustness. Full article
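As a quick consistency check on the figures quoted in this abstract, the parameter counts of the two fully connected architectures (five inputs, four outputs) can be recomputed; the helper below is ours, not the authors':

```python
def param_count(layers):
    """Trainable parameters (weights + biases) of a fully connected network."""
    return sum(n_in * n_out + n_out for n_in, n_out in zip(layers, layers[1:]))

# Five embedded operating parameters in, four peak thermodynamic outputs out.
low  = param_count([5, 5, 5, 4])         # 5-5 hidden architecture
high = param_count([5, 25, 25, 25, 4])   # 25-25-25 hidden architecture
print(low, high, round(high / low, 1))   # 84 1554 18.5
```

This reproduces the 84 and 1554 parameter counts and the 18.5x capacity ratio stated in the abstract.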
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence (2nd Edition))

26 pages, 5226 KB  
Article
Adaptive K-Fold Siamese Neural Network Classifier for Automatic Seatbelt Monitoring
by Ahmed M. Hasan, Farah F. Alkhalid, Safanah M. Rafaat and Amjad J. Humaidi
Computers 2026, 15(3), 157; https://doi.org/10.3390/computers15030157 - 3 Mar 2026
Viewed by 465
Abstract
A seatbelt is an essential aspect of safety in road traffic accidents. Although most traffic regulations require drivers and passengers to wear and fasten seatbelts manually, AI-based techniques have been introduced for monitoring to improve safety standards. In this study, a new approach is proposed to address the seatbelt monitoring problem. A deep learning (DL) classifier based on an adaptive Siamese Neural Network (SNN) has been developed utilizing the K-fold method for feature verification. The proposed adaptive K-fold-based SNN approach utilizes a binary seatbelt dataset, with positive and negative classes, to verify the status of the seatbelt. The network involves a shared convolutional feature extractor followed by a distance-based similarity function. To enhance model reliability, 5-fold cross-validation is applied (k = 5), splitting the dataset into 5 subsets, where the model is trained on four sets and validated on the fifth. The model was trained using binary cross-entropy loss and Adam optimization, and evaluated with performance metrics such as accuracy, precision, recall, and F1 score. The seatbelt dataset was originally designed for object detection models; in this work, we used it in a verification model and achieved high performance metrics. The model is implemented using a Python-based Jupyter Notebook 7.5.1. It achieved a high performance in seatbelt verification with an average Accuracy = 0.9989, average Precision = 0.9988, average Recall = 0.9990, and average F1 Score = 0.9989. The proposed adaptive K-fold SNN model can ensure reliability and reduce the risk of overfitting. Full article
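The 5-fold protocol described in this abstract can be sketched generically; the placeholder data and nearest-centroid "model" below stand in for the seatbelt images and the Siamese network, which are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(7)

# Placeholder data standing in for seatbelt feature vectors (binary classes).
X = rng.normal(0, 1, (200, 16))
y = (X[:, 0] > 0).astype(int)

def fit_and_score(X_tr, y_tr, X_va, y_va):
    """Placeholder 'model': a nearest-centroid rule; returns validation accuracy."""
    c0, c1 = X_tr[y_tr == 0].mean(axis=0), X_tr[y_tr == 1].mean(axis=0)
    pred = (np.linalg.norm(X_va - c1, axis=1)
            < np.linalg.norm(X_va - c0, axis=1)).astype(int)
    return (pred == y_va).mean()

# 5-fold split: shuffle once, then train on four folds and validate on the fifth.
k = 5
idx = rng.permutation(len(X))
folds = np.array_split(idx, k)
scores = []
for i in range(k):
    va = folds[i]
    tr = np.concatenate([folds[j] for j in range(k) if j != i])
    scores.append(fit_and_score(X[tr], y[tr], X[va], y[va]))

print(f"per-fold accuracy: {np.round(scores, 3)}, mean: {np.mean(scores):.3f}")
```

Averaging the per-fold metrics, as done above, is what yields the "average Accuracy/Precision/Recall/F1" figures the abstract reports.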
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence (2nd Edition))

22 pages, 4731 KB  
Article
Proposal of a Modified Loss Function with the Gaussian Copula Density Function to Improve LSTM Predictions of PM10 and PM2.5 Concentrations
by Alejandro Mendoza-Ibarra, Marco Antonio Aceves-Fernandez, Juan Manuel Ramos-Arreguín and Artemio Sotomayor-Olmedo
Computers 2026, 15(2), 91; https://doi.org/10.3390/computers15020091 - 1 Feb 2026
Cited by 1 | Viewed by 544
Abstract
Forecasting air pollution from particulate matter (PM10 and PM2.5) is a challenge with direct consequences for human health and quality of life around the world. This research focuses on evaluating a Long Short-Term Memory (LSTM) neural network model with an improvement in the loss function using the Gaussian Copula Density function to predict PM10 and PM2.5 levels in four stations (AJM, CAM, MER and PED) in Mexico City. The model is compared with a plain LSTM neural network model for forecasting 12, 24, 48 and 72 h using error metrics root mean square error (RMSE), mean absolute error (MAE) and mean absolute percentage error (MAPE). The results demonstrate a superior performance of the modified loss function model, achieving the lowest error values across multiple stations and forecast horizons. Full article
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence (2nd Edition))

25 pages, 9162 KB  
Article
Image-Based Threat Detection and Explainability Investigation Using Incremental Learning and Grad-CAM with YOLOv8
by Zeynel Kutlu and Bülent Gürsel Emiroğlu
Computers 2025, 14(12), 511; https://doi.org/10.3390/computers14120511 - 24 Nov 2025
Viewed by 1439
Abstract
Real-world threat detection systems face critical challenges in adapting to evolving operational conditions while providing transparent decision making. Traditional deep learning models suffer from catastrophic forgetting during continual learning and lack interpretability in security-critical deployments. This study proposes a distributed edge–cloud framework integrating YOLOv8 object detection with incremental learning and Gradient-weighted Class Activation Mapping (Grad-CAM) for adaptive, interpretable threat detection. The framework employs distributed edge agents for inference on unlabeled surveillance data, with a central server validating detections through class verification and localization quality assessment (IoU ≥ 0.5). A lightweight YOLOv8-nano model (3.2 M parameters) was incrementally trained over five rounds using sequential fine tuning with weight inheritance, progressively incorporating verified samples from an unlabeled pool. Experiments on a 5064 image weapon detection dataset (pistol and knife classes) demonstrated substantial improvements: F1-score increased from 0.45 to 0.83, mAP@0.5 improved from 0.518 to 0.886 and minority class F1-score rose 196% without explicit resampling. Incremental learning achieved a 74% training time reduction compared to one-shot training while maintaining competitive accuracy. Grad-CAM analysis revealed progressive attention refinement quantified through the proposed Heatmap Focus Score, reaching 92.5% and exceeding one-shot-trained models. The framework provides a scalable, memory-efficient solution for continual threat detection with superior interpretability in dynamic security environments. The integration of Grad-CAM visualizations with detection outputs enables operator accountability by establishing auditable decision records in deployed systems. Full article
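The localization check described in this abstract, accepting an edge-agent detection only when its IoU against a verified box reaches 0.5, can be sketched as follows; the box format and dictionary fields are our illustrative assumptions, not the authors' code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def verify_detections(detections, references, threshold=0.5):
    """Keep only detections whose localization matches some reference box."""
    return [d for d in detections
            if any(iou(d["box"], r) >= threshold for r in references)]

# Hypothetical validation round before the next incremental fine-tuning pass.
detections = [{"box": [10, 10, 50, 50], "cls": "pistol"},
              {"box": [200, 200, 220, 230], "cls": "knife"}]
references = [[12, 8, 48, 52]]
print(verify_detections(detections, references))   # only the first box passes
```

Only detections that pass this gate would be added to the verified pool used for the next round of fine tuning.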
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence (2nd Edition))
