Search Results (589)

Search Parameters:
Keywords = Explainable AI (xAI)

69 pages, 30976 KB  
Review
Next-Gen Explainable AI (XAI) for Federated and Distributed Internet of Things Systems: A State-of-the-Art Survey
by Aristeidis Karras, Anastasios Giannaros, Natalia Amasiadi and Christos Karras
Future Internet 2026, 18(2), 83; https://doi.org/10.3390/fi18020083 - 4 Feb 2026
Abstract
Background: Explainable Artificial Intelligence (XAI) is deployed in Internet of Things (IoT) ecosystems for smart cities and precision agriculture, where opaque models can compromise trust, accountability, and regulatory compliance. Objective: This survey investigates how XAI is currently integrated into distributed and federated IoT architectures and identifies systematic gaps in evaluation under real-world resource constraints. Methods: A structured search across IEEE Xplore, ACM Digital Library, ScienceDirect, SpringerLink, and Google Scholar targeted publications related to XAI, IoT, edge/fog computing, smart cities, smart agriculture, and federated learning. Relevant peer-reviewed works were synthesized along three dimensions: deployment tier (device, edge/fog, cloud), explanation scope (local vs. global), and validation methodology. Results: The analysis reveals a persistent resource–interpretability gap: computationally intensive explainers are frequently applied on constrained edge and federated platforms without explicitly accounting for latency, memory footprint, or energy consumption. Only a minority of studies quantify privacy–utility effects or address causal attribution in sensor-rich environments, limiting the reliability of explanations in safety- and mission-critical IoT applications. Contribution: To address these shortcomings, the survey introduces a hardware-centric evaluation framework with the Computational Complexity Score (CCS), Memory Footprint Ratio (MFR), and Privacy–Utility Trade-off (PUT) metrics and proposes a hierarchical IoT–XAI reference architecture, together with the conceptual Internet of Things Interpretability Evaluation Standard (IOTIES) for cross-domain assessment. Conclusions: The findings indicate that IoT–XAI research must shift from accuracy-only reporting to lightweight, model-agnostic, and privacy-aware explanation pipelines that are explicitly budgeted for edge resources and aligned with the needs of heterogeneous stakeholders in smart city and agricultural deployments.
(This article belongs to the Special Issue Human-Centric Explainability in Large-Scale IoT and AI Systems)
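The CCS, MFR, and PUT metrics are defined in the article itself; as a minimal illustration of the kind of per-explanation resource accounting the survey calls for, the sketch below times a single explanation call and records its peak memory using only the Python standard library. The dummy explainer is a placeholder, not any of the authors' metrics or methods.

```python
# Minimal sketch: latency and peak-memory cost of one post hoc
# explanation call on a constrained device (illustrative only;
# not the survey's CCS/MFR/PUT definitions).
import time
import tracemalloc

def profile_explainer(explain_fn, sample):
    """Return (latency_s, peak_mem_bytes) for one explanation call."""
    tracemalloc.start()
    t0 = time.perf_counter()
    explain_fn(sample)                      # e.g., a SHAP or LIME call
    latency = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return latency, peak

if __name__ == "__main__":
    dummy = lambda xs: [v * 2 for v in xs]  # placeholder "explainer"
    lat, mem = profile_explainer(dummy, list(range(10_000)))
    print(f"latency={lat * 1e3:.2f} ms, peak_mem={mem / 1024:.1f} KiB")
```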
23 pages, 5043 KB  
Article
A Hybrid of ResNext101_32x8d and Swin Transformer Networks with XAI for Alzheimer’s Disease Detection
by Saeed Mohsen, Amr Yousef and M. Abdel-Aziz
Computers 2026, 15(2), 95; https://doi.org/10.3390/computers15020095 - 2 Feb 2026
Abstract
Medical images obtained from advanced imaging devices play a crucial role in supporting disease diagnosis and detection. Nevertheless, acquiring such images is often costly and storage-intensive, and diagnosing individuals is time-consuming. The use of artificial intelligence (AI)-based automated diagnostic systems provides potential solutions to address the limitations of cost and diagnostic time. In particular, deep learning and explainable AI (XAI) techniques provide a reliable and robust approach to classifying medical images. This paper presents a hybrid model comprising two networks, ResNext101_32x8d and Swin Transformer, to differentiate four categories of Alzheimer’s disease: no dementia, very mild dementia, mild dementia, and moderate dementia. The combination of the two networks is applied to imbalanced data, trained on 5120 MRI images, validated on 768 images, and tested on 512 further images. Grad-CAM and LIME techniques with a saliency map are employed to interpret the predictions of the model, providing transparent and clinically interpretable decision support. The proposed combination is realized in a TensorFlow framework, incorporating hyperparameter optimization and various data augmentation methods. The performance of the proposed model is evaluated through several metrics, including the error matrix, precision–recall (PR) and receiver operating characteristic (ROC) curves, and accuracy and loss curves. Experimental results reveal that the hybrid of ResNext101_32x8d and Swin Transformer achieved a testing accuracy of 98.83% with a corresponding loss of 0.1019. Furthermore, for the combination “ResNext101_32x8d + Swin Transformer”, the precision, F1-score, and recall were 99.39%, 99.15%, and 98.91%, respectively, while the area under the ROC curve (AUC) was 1.00 (100%). The combination of the proposed networks with XAI techniques constitutes a distinctive contribution toward advancing medical AI systems and assisting radiologists during Alzheimer’s disease screening.
(This article belongs to the Section AI-Driven Innovations)
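Grad-CAM, used above for interpretation, is a standard technique; a minimal PyTorch sketch follows, using a stock ResNet-18 as a stand-in backbone. The paper's hybrid ResNext101_32x8d + Swin model, data, and TensorFlow setup are not reproduced here.

```python
# Minimal Grad-CAM sketch: pool the gradients of the class score over a
# conv layer's activations, weight the activations, and ReLU the sum.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18  # stand-in backbone

def grad_cam(model, target_layer, x, class_idx):
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(
        lambda m, i, o: acts.update(a=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))
    score = model(x)[0, class_idx]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    w = grads["g"].mean(dim=(2, 3), keepdim=True)   # GAP of gradients
    cam = F.relu((w * acts["a"]).sum(dim=1))        # weighted activations
    cam = F.interpolate(cam[None], size=x.shape[-2:], mode="bilinear")[0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

model = resnet18(weights=None).eval()
heatmap = grad_cam(model, model.layer4[-1], torch.randn(1, 3, 224, 224), 0)
print(heatmap.shape)  # (1, 224, 224), values in [0, 1]
```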

23 pages, 744 KB  
Article
Integrating Explainable AI (XAI) and NCA-Validated Clustering for an Interpretable Multi-Layered Recruitment Model
by Marcin Nowak and Marta Pawłowska-Nowak
AI 2026, 7(2), 53; https://doi.org/10.3390/ai7020053 - 2 Feb 2026
Abstract
The growing use of AI-supported recruitment systems raises concerns related to model opacity, auditability, and ethically sensitive decision-making, despite their predictive potential. In human resource management, there is a clear need for recruitment solutions that combine analytical effectiveness with transparent and explainable decision support. Existing approaches often lack coherent, multi-layered architectures integrating expert knowledge, machine learning, and interpretability within a single framework. This article proposes an interpretable, multi-layered recruitment model designed to balance predictive performance with decision transparency. The framework integrates an expert rule-based screening layer, an unsupervised clustering layer for structuring candidate profiles and generating pseudo-labels, and a supervised classification layer trained using repeated k-fold cross-validation. Model behavior is explained using SHAP, while Necessary Condition Analysis (NCA) is applied to diagnose minimum competency thresholds required to achieve a target quality level. The approach is demonstrated in a Data Scientist recruitment case study. Results show the predominance of centroid-based clustering and the high stability of linear classifiers, particularly logistic regression. The proposed framework is replicable and supports transparent, auditable recruitment decisions.
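As a rough illustration of the supervised layer described above, the sketch below evaluates a logistic-regression classifier with repeated k-fold cross-validation and computes SHAP attributions. The synthetic data and feature count are assumptions, not the paper's recruitment dataset.

```python
# Repeated k-fold evaluation of a linear classifier, plus exact SHAP
# attributions for linear models (requires `pip install shap`).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedKFold, cross_val_score
import shap

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
clf = LogisticRegression(max_iter=1000)
cv = RepeatedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

clf.fit(X, y)
explainer = shap.LinearExplainer(clf, X)   # exact for linear models
shap_values = explainer.shap_values(X)
print(shap_values.shape)                   # (300, 8): per-feature attributions
```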

34 pages, 5749 KB  
Systematic Review
Remote Sensing and Machine Learning Approaches for Hydrological Drought Detection: A PRISMA Review
by Odwa August, Malusi Sibiya, Masengo Ilunga and Mbuyu Sumbwanyambe
Water 2026, 18(3), 369; https://doi.org/10.3390/w18030369 - 31 Jan 2026
Abstract
Hydrological drought poses a significant threat to water security and ecosystems globally. While remote sensing offers vast spatial data, advanced analytical methods are required to translate this data into actionable insights. This review addresses this need by systematically synthesizing the state-of-the-art in using convolutional neural networks (CNNs) and satellite-derived vegetation indices for hydrological drought detection. Following PRISMA guidelines, a systematic search of studies published between 1 January 2018 and August 2025 was conducted, resulting in 137 studies for inclusion. A narrative synthesis approach was adopted. Among the 137 studies included, 58% focused on hybrid CNN-LSTM models, with a marked increase in publications observed after 2020. The analysis reveals that hybrid spatiotemporal models are the most effective, demonstrating superior forecasting skill and in some cases achieving 10–20% higher accuracy than standalone CNNs. The most robust models employ multi-modal data fusion, integrating vegetation indices (VIs) with complementary data like Land Surface Temperature (LST). Future research should focus on enhancing model transferability and incorporating explainable AI (XAI) to strengthen the operational utility of drought early warning systems.
(This article belongs to the Section Hydrology)
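The hybrid CNN-LSTM pattern the review identifies as most effective can be sketched in a few lines of Keras: a CNN encodes each vegetation-index grid, and an LSTM models the temporal sequence. The input shape (12 monthly 64×64 grids) and layer sizes below are illustrative assumptions.

```python
# Hybrid CNN-LSTM sketch: per-timestep spatial encoding via
# TimeDistributed Conv2D, then temporal modelling via LSTM.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(12, 64, 64, 1)),          # (time, H, W, channels)
    layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu")),
    layers.TimeDistributed(layers.MaxPooling2D()),
    layers.TimeDistributed(layers.Flatten()),
    layers.LSTM(64),                              # temporal dynamics
    layers.Dense(1),                              # drought-index forecast
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```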

36 pages, 5431 KB  
Article
Explainable AI-Driven Quality and Condition Monitoring in Smart Manufacturing
by M. Nadeem Ahangar, Z. A. Farhat, Aparajithan Sivanathan, N. Ketheesram and S. Kaur
Sensors 2026, 26(3), 911; https://doi.org/10.3390/s26030911 - 30 Jan 2026
Abstract
Artificial intelligence (AI) is increasingly adopted in manufacturing for tasks such as automated inspection, predictive maintenance, and condition monitoring. However, the opaque, black-box nature of many AI models remains a major barrier to industrial trust, acceptance, and regulatory compliance. This study investigates how explainable artificial intelligence (XAI) techniques can be used to systematically open and interpret the internal reasoning of AI systems commonly deployed in manufacturing, rather than to optimise or compare model performance. A unified explainability-centred framework is proposed and applied across three representative manufacturing use cases encompassing heterogeneous data modalities and learning paradigms: vision-based classification of casting defects, vision-based localisation of metal surface defects, and unsupervised acoustic anomaly detection for machine condition monitoring. Diverse models are intentionally employed as representative black-box decision-makers to evaluate whether XAI methods can provide consistent, physically meaningful explanations independent of model architecture, task formulation, or supervision strategy. A range of established XAI techniques, including Grad-CAM, Integrated Gradients, Saliency Maps, Occlusion Sensitivity, and SHAP, are applied to expose model attention, feature relevance, and decision drivers across visual and acoustic domains. The results demonstrate that XAI enables alignment between model behaviour and physically interpretable defect and fault mechanisms, supporting transparent, auditable, and human-interpretable decision-making. By positioning explainability as a core operational requirement rather than a post hoc visual aid, this work contributes a cross-modal framework for trustworthy AI in manufacturing, aligned with Industry 5.0 principles, human-in-the-loop oversight, and emerging expectations for transparent and accountable industrial AI systems.
(This article belongs to the Section Intelligent Sensors)
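Several of the attribution methods named above are available off the shelf in Captum; a minimal sketch follows, using an untrained ResNet-18 as a generic black box. The paper's models, data, and acoustic pipeline are not reproduced here.

```python
# Integrated Gradients, Saliency, and Occlusion attributions via Captum
# (pip install captum) on a stand-in image classifier.
import torch
from torchvision.models import resnet18
from captum.attr import IntegratedGradients, Saliency, Occlusion

model = resnet18(weights=None).eval()
x = torch.randn(1, 3, 224, 224, requires_grad=True)
target = 0  # class index under explanation

ig_attr = IntegratedGradients(model).attribute(x, target=target)
sal_attr = Saliency(model).attribute(x, target=target)
occ_attr = Occlusion(model).attribute(
    x, target=target, sliding_window_shapes=(3, 16, 16), strides=(3, 8, 8))
print(ig_attr.shape, sal_attr.shape, occ_attr.shape)  # all (1, 3, 224, 224)
```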

20 pages, 1142 KB  
Article
A Cross-Domain Benchmark of Intrinsic and Post Hoc Explainability for 3D Deep Learning Models
by Asmita Chakraborty, Gizem Karagoz and Nirvana Meratnia
J. Imaging 2026, 12(2), 63; https://doi.org/10.3390/jimaging12020063 - 30 Jan 2026
Abstract
Deep learning models for three-dimensional (3D) data are increasingly used in domains such as medical imaging, object recognition, and robotics. As their adoption grows, their black-box nature makes explainability increasingly important. However, the lack of standardized and quantitative benchmarks for explainable artificial intelligence (XAI) on 3D data limits the reliable comparison of explanation quality. In this paper, we present a unified benchmarking framework to evaluate both intrinsic and post hoc XAI methods across three representative 3D datasets: volumetric CT scans (MosMed), voxelized CAD models (ModelNet40), and real-world point clouds (ScanObjectNN). The evaluated methods include Grad-CAM, Integrated Gradients, Saliency, Occlusion, and the intrinsic ResAttNet-3D model. We quantitatively assess explanations using the Correctness (AOPC), Completeness (AUPC), and Compactness metrics, applied consistently across all datasets. Our results show that explanation quality varies significantly across methods and domains: Grad-CAM and intrinsic attention performed best on medical CT scans, while gradient-based methods excelled on voxelized and point-based data. Statistical tests (Kruskal–Wallis and Mann–Whitney U) confirmed significant performance differences between methods. No single approach achieved superior results across all domains, highlighting the importance of multi-metric evaluation. This work provides a reproducible framework for standardized assessment of 3D explainability and comparative insights to guide future XAI method selection.
(This article belongs to the Special Issue Explainable AI in Computer Vision)
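The Correctness (AOPC) metric used above follows the standard area-over-the-perturbation-curve idea: remove features in order of decreasing attributed relevance (MoRF) and average the resulting drop in model confidence. A minimal NumPy sketch, with an invented linear "model" standing in for a 3D network and zeroing as the assumed perturbation:

```python
# AOPC sketch: perturb most-relevant features first and accumulate the
# average confidence drop; higher AOPC = more faithful explanation.
import numpy as np

def aopc(predict, x, relevance, steps=20):
    """Area over the MoRF perturbation curve for one flattened input."""
    order = np.argsort(relevance.ravel())[::-1]     # most relevant first
    base = predict(x)
    drops, xp = [], x.copy().ravel()
    chunk = max(1, order.size // steps)
    for k in range(steps):
        xp[order[k * chunk:(k + 1) * chunk]] = 0.0  # zero next feature chunk
        drops.append(base - predict(xp.reshape(x.shape)))
    return float(np.mean(drops))

# Toy check: a linear model whose true contributions are the relevance map.
w = np.random.rand(64)
x = np.random.rand(64)
print(aopc(lambda v: float(v.ravel() @ w), x, relevance=w * x))
```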

37 pages, 9386 KB  
Article
Toward AI-Assisted Sickle Cell Screening: A Controlled Comparison of CNN, Transformer, and Hybrid Architectures Using Public Blood-Smear Images
by Linah Tasji, Hanan S. Alghamdi and Abdullah S Almalaise Al-Ghamdi
Diagnostics 2026, 16(3), 414; https://doi.org/10.3390/diagnostics16030414 - 29 Jan 2026
Abstract
Background: Sickle cell disease (SCD) is a prevalent hereditary hemoglobinopathy associated with substantial morbidity, particularly in regions with limited access to advanced laboratory diagnostics. Conventional diagnostic workflows, including manual peripheral blood smear examination and biochemical or molecular assays, are resource-intensive, time-consuming, and subject to observer variability. Recent advances in artificial intelligence (AI) enable automated analysis of blood smear images and offer a scalable alternative for SCD screening. Methods: This study presents a controlled benchmark of CNNs, Vision Transformers, hierarchical Transformers, and hybrid CNN–Transformer architectures for image-level SCD classification using a publicly available peripheral blood smear dataset. Eleven ImageNet-pretrained models were fine-tuned under identical conditions using an explicit leakage-safe evaluation protocol, incorporating duplicate-aware, group-based data splitting and repeated splits to assess robustness. Performance was evaluated using accuracy and macro-averaged precision, recall, and F1-score, complemented by bootstrap confidence intervals, paired statistical testing, error-type analysis, and explainable AI (XAI). Results: Across repeated group-aware splits, CNN-based and hybrid architectures demonstrated more stable and consistently higher performance than transformer-only models. MaxViT-Tiny and DenseNet121 ranked highest overall, while pure ViTs showed reduced effectiveness under data-constrained conditions. Error analysis revealed a dominance of false-positive predictions, reflecting intrinsic morphological ambiguity in challenging samples. XAI visualizations suggest that CNNs focus on localized red blood cell morphology, whereas hybrid models integrate both local and contextual cues. Conclusions: Under limited-data conditions, convolutional inductive bias remains critical for robust blood-smear-based SCD classification. CNN and hybrid CNN–Transformer models offer interpretable and reliable performance, supporting their potential role as decision-support tools in screening-oriented research settings.
(This article belongs to the Special Issue Artificial Intelligence in Pathological Image Analysis—2nd Edition)
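The bootstrap confidence intervals mentioned above can be sketched by resampling held-out predictions with replacement and recomputing macro-F1 each time; the labels and error rate below are synthetic stand-ins for the blood-smear results.

```python
# Percentile-bootstrap 95% CI for macro-F1 on a held-out test set.
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)
y_pred = np.where(rng.random(500) < 0.9, y_true, 1 - y_true)  # ~90% accurate

stats = []
for _ in range(2000):                       # resample test cases with replacement
    idx = rng.integers(0, len(y_true), len(y_true))
    stats.append(f1_score(y_true[idx], y_pred[idx], average="macro"))
lo, hi = np.percentile(stats, [2.5, 97.5])
print(f"macro-F1 95% CI: [{lo:.3f}, {hi:.3f}]")
```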

36 pages, 6008 KB  
Article
Continuous Authentication Through Touch Stroke Analysis with Explainable AI (xAI)
by Muhammad Nadzmi Mohd Nizam, Shih Yin Ooi, Soodamani Ramalingam and Ying Han Pang
Electronics 2026, 15(3), 542; https://doi.org/10.3390/electronics15030542 - 27 Jan 2026
Abstract
Mobile authentication is crucial for device security; however, conventional techniques such as PINs and swipe patterns are susceptible to social engineering attacks. This work explores the integration of touch stroke analysis and Explainable AI (xAI) to address these vulnerabilities. Unlike static methods that require intervention at specific intervals, continuous authentication offers dynamic security by utilizing distinct user touch dynamics. This study aggregates touch stroke data from 150 participants to create comprehensive user profiles, incorporating novel biometric features such as mid-stroke pressure and mid-stroke area. These profiles are analyzed using machine learning methods, where the Random Tree classifier achieved the highest accuracy of 97.07%. To enhance interpretability and user trust, xAI methods such as SHAP and LIME are employed to provide transparency into the models’ decision-making processes, demonstrating how integrating touch stroke dynamics with xAI produces a visible, trustworthy, and continuous authentication system.

31 pages, 2800 KB  
Article
Intelligent Fusion: A Resilient Anomaly Detection Framework for IoMT Health Devices
by Flavio Pastore, Raja Waseem Anwar, Nafaa Hadi Jabeur and Saqib Ali
Information 2026, 17(2), 117; https://doi.org/10.3390/info17020117 - 26 Jan 2026
Abstract
Modern healthcare systems increasingly depend on wearable Internet of Medical Things (IoMT) devices for the continuous monitoring of patients’ physiological parameters. It remains challenging to differentiate between genuine physiological anomalies, sensor faults, and malicious cyber interference. In this work, we propose a hybrid fusion framework designed to attribute the most plausible source of an anomaly, thereby supporting more reliable clinical decisions. The proposed framework is developed and evaluated using two complementary datasets: CICIoMT2024 for modelling security threats and a large-scale intensive care cohort from MIMIC-IV for analysing key vital signs and bedside interventions. The core of the system combines a supervised XGBoost classifier for attack detection with an unsupervised LSTM autoencoder for identifying physiological and technical deviations. To improve clinical realism and avoid artefacts introduced by quantised or placeholder measurements, the physiological module incorporates quality-aware preprocessing and missingness indicators. The fusion decision policy is calibrated under prudent, safety-oriented constraints to limit false escalation. Rather than relying on fixed fusion weights, we train a lightweight fusion classifier that combines complementary evidence from the security and clinical modules, and we select class-specific probability thresholds on a dedicated calibration split. The security module achieves high cross-validated performance, while the clinical model captures abnormal physiological patterns at scale, including deviations consistent with both acute deterioration and data-quality faults. Explainability is provided through SHAP analysis for the security module and reconstruction-error attribution for physiological anomalies. The integrated fusion framework achieves a final accuracy of 99.76% under prudent calibration and a Matthews Correlation Coefficient (MCC) of 0.995, with an average end-to-end inference latency of 84.69 ms (p95 upper bound of 107.30 ms), supporting near real-time execution in edge-oriented settings. While performance is strong, clinical severity labels are operationalised through rule-based proxies, and cross-domain fusion relies on harmonised alignment assumptions. These aspects should be further evaluated using realistic fault traces and prospective IoMT data. Despite these limitations, the proposed framework offers a practical and explainable approach for IoMT-based patient monitoring.
(This article belongs to the Special Issue Intrusion Detection Systems in IoT Networks)
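A hedged sketch of the fusion idea described above: a lightweight classifier combines the two modules' output scores, and a decision threshold is selected on a calibration split. The synthetic scores and the MCC-maximizing threshold rule are illustrative assumptions, not the paper's exact calibration policy.

```python
# Lightweight fusion over module scores with calibration-split thresholding.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(1)
n = 1000
y = rng.integers(0, 2, n)                                      # 1 = anomaly
sec_score = np.clip(y * 0.7 + rng.normal(0.2, 0.2, n), 0, 1)   # XGBoost proxy
clin_err = np.clip(y * 0.5 + rng.normal(0.3, 0.2, n), 0, 1)    # AE recon.-error proxy

X = np.column_stack([sec_score, clin_err])
fit, cal = slice(0, 700), slice(700, None)
fusion = LogisticRegression().fit(X[fit], y[fit])
proba = fusion.predict_proba(X[cal])[:, 1]

# Prudent thresholding: pick the cut that maximizes MCC on the
# calibration split rather than defaulting to 0.5.
grid = np.linspace(0.05, 0.95, 19)
best = max(grid, key=lambda t: matthews_corrcoef(y[cal], (proba >= t).astype(int)))
mcc = matthews_corrcoef(y[cal], (proba >= best).astype(int))
print(f"threshold={best:.2f}, MCC={mcc:.3f}")
```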

24 pages, 6152 KB  
Article
Adaptive Realities: Human-in-the-Loop AI for Trustworthy XR Training in Safety-Critical Domains
by Daniele Pretolesi, Georg Regal, Helmut Schrom-Feiertag and Manfred Tscheligi
Multimodal Technol. Interact. 2026, 10(1), 11; https://doi.org/10.3390/mti10010011 - 22 Jan 2026
Abstract
Extended Reality (XR) technologies have matured into powerful tools for training in high-stakes domains, from emergency response to search and rescue. Yet current systems often struggle to balance real-time AI-driven personalisation with the need for human oversight and calibrated trust. This article synthesizes the programmatic contributions of a multi-study doctoral project to advance a design-and-evaluation framework for trustworthy adaptive XR training. Across six studies, we explored (i) recommender-driven scenario adaptation based on multimodal performance and physiological signals, (ii) persuasive dashboards for trainers, (iii) architectures for AI-supported XR training in medical mass-casualty contexts, (iv) theoretical and practical integration of Human-in-the-Loop (HITL) supervision, (v) user trust and over-reliance in the face of misleading AI suggestions, and (vi) the role of interaction modality in shaping workload, explainability, and trust in human–robot collaboration. Together, these investigations show how adaptive policies, transparent explanation, and adjustable autonomy can be orchestrated into a single adaptation loop that maintains trainee engagement, improves learning outcomes, and preserves trainer agency. We conclude with design guidelines and a research agenda for extending trustworthy XR training into safety-critical environments.

30 pages, 4189 KB  
Systematic Review
Automated Fingerprint Identification: The Role of Artificial Intelligence in Crime Scene Investigation
by Csongor Herke
Forensic Sci. 2026, 6(1), 6; https://doi.org/10.3390/forensicsci6010006 - 22 Jan 2026
Abstract
Background/Objectives: This systematic review examines how artificial intelligence (AI) is transforming fingerprint and latent print identification in criminal investigations, tracing the evolution from traditional dactyloscopy to Automated Fingerprint Identification Systems (AFISs) and AI-enhanced biometric pipelines. Methods: Following PRISMA 2020 guidelines, we conducted a literature search in the Scopus, Web of Science, PubMed/MEDLINE, and legal databases for the period 2000–2025, using multi-step Boolean search strings targeting AI-based fingerprint identification; 68,195 records were identified, of which 61 peer-reviewed studies met predefined inclusion criteria and were included in the qualitative synthesis (no meta-analysis). Results: Across the included studies, AI-enhanced AFIS solutions frequently demonstrated improvements in speed and scalability and, in several controlled benchmarks, improved matching performance on low-quality or partial fingerprints, although the results varied depending on datasets, evaluation protocols, and operational contexts. They also showed a potential to reduce certain forms of examiner-related contextual bias, while remaining susceptible to dataset- and model-induced biases. Conclusions: The evidence indicates that hybrid human–AI workflows, in which expert examiners retain decision-making authority but use AI for candidate filtering, image enhancement, and data structuring, currently offer the most reliable model. Emerging developments such as multimodal biometric fusion, edge computing, and quantum machine learning may make AI-based fingerprint identification an increasingly important component of law enforcement practice, provided that robust regulation, continuous validation, and transparent governance are ensured.

36 pages, 4575 KB  
Article
A PI-Dual-STGCN Fault Diagnosis Model Based on the SHAP-LLM Joint Explanation Framework
by Zheng Zhao, Shuxia Ye, Liang Qi, Hao Ni, Siyu Fei and Zhe Tong
Sensors 2026, 26(2), 723; https://doi.org/10.3390/s26020723 - 21 Jan 2026
Abstract
This paper proposes a PI-Dual-STGCN fault diagnosis model based on a SHAP-LLM joint explanation framework to address the lack of transparency in the diagnostic process of deep learning models and the weak interpretability of diagnostic results. First, PI-Dual-STGCN enhances the interpretability of graph data by introducing physical constraints and constructs a dual-graph architecture based on physical topology graphs and signal similarity graphs; experimental results show that this dual-graph complementary architecture raises diagnostic accuracy to 99.22%. Second, a general-purpose SHAP-LLM explanation framework is designed: Explainable AI (XAI) technology is used to analyze the decision logic of the diagnostic model and generate visual explanations, establishing a hierarchical knowledge base that includes performance metrics, explanation reliability, and fault experience. Retrieval-Augmented Generation (RAG) is combined with this framework to integrate model performance and Shapley Additive Explanations (SHAP) reliability assessment through a main report prompt, while a sub-report prompt enables detailed fault analysis and repair decision generation. Finally, experiments demonstrate that this approach avoids the uncertainty of directly using large models for fault diagnosis: all fault diagnosis tasks and core explainability tasks are delegated to more mature deep learning algorithms and XAI techniques, and the large model's textual reasoning is applied only to pre-quantified, fact-based information (e.g., model performance metrics and SHAP explanation results). This method enhances diagnostic transparency through XAI-generated visual and quantitative explanations of model decision logic, reduces the risk of hallucination by restricting the large model to reasoning over grounded, fact-based text rather than direct fault diagnosis, and provides verifiable intelligent decision support for industrial fault diagnosis.
(This article belongs to the Section Fault Diagnosis & Sensors)
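The restriction of the LLM to grounded, fact-based text can be illustrated by prompt construction alone: conventional models produce the diagnosis and SHAP values, which are then serialized into a factual prompt. The template and field names below are hypothetical, not the paper's main/sub-report prompts.

```python
# Hypothetical sketch: serialize pre-computed diagnostic facts and SHAP
# attributions into a grounded prompt for an LLM to summarize.
def build_fault_report_prompt(diagnosis, accuracy, shap_top):
    lines = [
        "You are summarizing a completed fault diagnosis. Use ONLY the facts below.",
        f"Diagnosed fault class: {diagnosis}",
        f"Model validation accuracy: {accuracy:.2%}",
        "Top SHAP feature attributions:",
    ]
    lines += [f"  - {name}: {value:+.3f}" for name, value in shap_top]
    lines.append("Write a brief maintenance recommendation grounded in these facts.")
    return "\n".join(lines)

prompt = build_fault_report_prompt(
    diagnosis="bearing outer-race fault",          # hypothetical example
    accuracy=0.9922,
    shap_top=[("vibration_rms_axial", 0.412), ("temp_gradient", 0.168)],
)
print(prompt)
```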

15 pages, 801 KB  
Systematic Review
Artificial Intelligence in Pediatric Dentistry: A Systematic Review and Meta-Analysis
by Nevra Karamüftüoğlu, Büşra Yavuz Üçpunar, İrem Birben, Asya Eda Altundağ, Kübra Örnek Mullaoğlu and Cenkhan Bal
Children 2026, 13(1), 152; https://doi.org/10.3390/children13010152 - 21 Jan 2026
Abstract
Background/Objectives: Artificial intelligence (AI) has gained substantial prominence in pediatric dentistry, offering new opportunities to enhance diagnostic precision and clinical decision-making. AI-based systems are increasingly applied in caries detection, early childhood caries (ECC) risk prediction, tooth development assessment, mesiodens identification, and other key diagnostic tasks. This systematic review and meta-analysis aimed to synthesize evidence on the diagnostic performance of AI models developed specifically for pediatric dental applications. Methods: A systematic search was conducted in PubMed, Scopus, Web of Science, and Embase following PRISMA-DTA guidelines. Studies evaluating AI-based diagnostic or predictive models in pediatric populations (≤18 years) were included. Reference screening, data extraction, and quality assessment were performed independently by two reviewers. Pooled sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated using random-effects models. Sources of heterogeneity related to imaging modality, annotation strategy, and dataset characteristics were examined. Results: Thirty-two studies met the inclusion criteria for qualitative synthesis, and fifteen were eligible for quantitative analysis. For radiographic caries detection, pooled sensitivity, specificity, and AUC were 0.91, 0.97, and 0.98, respectively. Prediction models demonstrated good diagnostic performance, with pooled sensitivity of 0.86, specificity of 0.82, and AUC of 0.89. Deep learning architectures, particularly convolutional neural networks, consistently outperformed traditional machine learning approaches. Considerable heterogeneity was identified across studies, primarily driven by differences in imaging protocols, dataset balance, and annotation procedures. Beyond quantitative accuracy estimates, this review critically evaluates whether current evidence supports meaningful clinical translation and identifies pediatric domains that remain underrepresented in AI-driven diagnostic innovation. Conclusions: AI technologies exhibit strong potential to improve diagnostic accuracy in pediatric dentistry. However, limited external validation, methodological variability, and the scarcity of prospective real-world studies restrict immediate clinical implementation. Future research should prioritize the development of multicenter pediatric datasets, harmonized annotation workflows, and transparent, explainable AI (XAI) models to support safe and effective clinical translation.
(This article belongs to the Section Pediatric Dentistry & Oral Medicine)
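Random-effects pooling of the kind used for these sensitivity and specificity estimates is commonly done with the DerSimonian–Laird estimator; a NumPy sketch follows, with invented study effects on the logit scale (the review's actual model and data are not reproduced).

```python
# DerSimonian-Laird random-effects pooling of per-study effect estimates.
import numpy as np

def dersimonian_laird(effects, variances):
    """Pooled effect, tau^2, and a 95% CI under a random-effects model."""
    w = 1.0 / variances                         # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)      # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)               # between-study variance
    w_star = 1.0 / (variances + tau2)           # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, tau2, (pooled - 1.96 * se, pooled + 1.96 * se)

# Toy example: per-study logit(sensitivity) with within-study variances.
eff = np.array([2.1, 2.6, 2.3, 1.9])
var = np.array([0.04, 0.09, 0.05, 0.07])
pooled, tau2, ci = dersimonian_laird(eff, var)
print(f"pooled logit={pooled:.2f}, tau^2={tau2:.3f}, 95% CI={ci}")
```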

26 pages, 1051 KB  
Article
Neural Signatures of Speed and Regular Reading: A Machine Learning and Explainable AI (XAI) Study of Sinhalese and Japanese
by Thishuli Walpola, Namal Rathnayake, Hoang Ngoc Thanh, Niluka Dilhani and Atsushi Senoo
Information 2026, 17(1), 108; https://doi.org/10.3390/info17010108 - 21 Jan 2026
Abstract
Reading speed is hypothesized to have distinct neural signatures across orthographically diverse languages, yet cross-linguistic evidence remains limited. We investigated this by classifying speed readers versus regular readers among Sinhalese and Japanese adults (n = 142) using task-based fMRI and 35 supervised machine learning classifiers. Functional activation was extracted from 12 reading-related cortical regions. We introduced Fuzzy C-Means (FCM) clustering for data augmentation and Shapley additive explanations (SHAP) for model interpretability, enabling evaluation of region-wise contributions to reading speed classification. The best model, an FT-TABPFN network with FCM augmentation, achieved 81.1% test accuracy in the combined cohort. In the Japanese-only cohort, Quadratic SVM and Subspace KNN each reached 85.7% accuracy. SHAP analysis revealed that the angular gyrus (AG) and the inferior frontal gyrus (triangularis) were the strongest contributors across cohorts. Additionally, the anterior supramarginal gyrus (ASMG) emerged as a higher contributor in the Japanese-only cohort, while the posterior superior temporal gyrus (PSTG) contributed strongly in both cohorts. In contrast, the posterior middle temporal gyrus (PMTG) showed little or no contribution to model classification in either cohort. These findings demonstrate the effectiveness of interpretable machine learning for decoding reading speed, highlighting both universal neural predictors and language-specific differences. Our study provides a novel, generalizable framework for cross-linguistic neuroimaging analysis of reading proficiency.
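Fuzzy C-Means, used above for augmentation, assigns soft cluster memberships by alternating membership and centroid updates; a self-contained NumPy sketch on synthetic 2-D data (standing in for the fMRI region features) follows.

```python
# Fuzzy C-Means sketch: soft memberships U and fuzzy centroids, updated
# iteratively with fuzzifier m.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))            # standard FCM update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

X = np.vstack([np.random.randn(50, 2) + mu for mu in ([0, 0], [4, 4], [0, 4])])
centers, U = fuzzy_c_means(X)
print(centers)                                   # one row per fuzzy cluster
```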

30 pages, 6863 KB  
Article
Explainable Deep Learning and Edge Inference for Chilli Thrips Severity Classification in Strawberry Canopies
by Uchechukwu Ilodibe, Daeun Choi, Sriyanka Lahiri, Changying Li, Daniel Hofstetter and Yiannis Ampatzidis
Agriculture 2026, 16(2), 252; https://doi.org/10.3390/agriculture16020252 - 19 Jan 2026
Abstract
Traditional plant scouting is often a costly and labor-intensive task that requires experienced specialists to diagnose and manage plant stresses. Artificial intelligence (AI), particularly deep learning and computer vision, offers the potential to transform scouting by enabling rapid, non-intrusive detection and classification of early stress symptoms from plant images. However, deep learning models are often opaque, relying on millions of parameters to extract complex nonlinear features that are not interpretable by growers. Recently, eXplainable AI (XAI) techniques have been used to identify key spatial regions that contribute to model predictions. This project explored the potential of convolutional neural networks (CNNs) for classifying the severity of chilli thrips damage in strawberry plants in Florida and employed XAI techniques to interpret model decisions and identify symptom-relevant canopy features. Four CNN architectures, YOLOv11, EfficientNetV2, Xception, and MobileNetV3, were trained and evaluated using 2353 square RGB canopy images of different sizes (256, 480, 640 and 1024 pixels) to classify symptoms as healthy, moderate, or severe. Trade-offs between image size, model parameter count, inference speed, and accuracy were examined in determining the best-performing model. The models achieved accuracies ranging from 77% to 85% with inference times of 5.7 to 262.3 ms, demonstrating strong potential for real-time pest severity estimation. Gradient-Weighted Class Activation Mapping (Grad-CAM) visualization revealed that model attention focused on biologically relevant regions such as fruits, stems, leaf edges, leaf surfaces, and dying leaves, areas commonly affected by chilli thrips. Subsequent analysis showed that model attention spread from localized regions in healthy plants to wide diffuse regions in severe plants. This alignment between model attention and expert scouting logic suggests that CNNs internalize symptom-specific visual cues and can reliably classify pest-induced plant stress.
