Search Results (749)

Search Parameters:
Keywords = explainable AI methods

16 pages, 1623 KB  
Article
Wearable Biomechanics and Video-Based Trajectory Analysis for Improving Performance in Alpine Skiing
by Denisa-Iulia Brus and Dorin-Ioan Cătană
Sensors 2026, 26(3), 1010; https://doi.org/10.3390/s26031010 - 4 Feb 2026
Abstract
Performance diagnostics in alpine skiing increasingly rely on integrated biomechanical and kinematic assessments to support technique optimization under real training conditions; however, many existing approaches address trajectory geometry or biomechanical variables separately, limiting their explanatory power. This study evaluates an integrated analysis framework combining OptiPath, an AI-assisted video-based trajectory analysis tool, with XSensDOT wearable inertial sensors to identify technical inefficiencies during giant slalom skiing. Thirty competitive youth athletes (n = 30; 14–16 years) performed controlled runs with predefined lateral offsets from the gates, enabling systematic examination of the relationship between spatial trajectory deviations, biomechanical execution, and performance outcomes. Skier trajectories were extracted using computer vision-based methods, while lower-limb kinematics, trunk motion, and tri-axial acceleration were recorded using inertial measurement units. Deviations from mathematically defined ideal trajectories were quantified through regression-based calibration and arc-based modeling. The results show that although OptiPath reliably detected trajectory variations, shorter skiing paths did not consistently produce faster run times. Instead, superior performance was associated with more efficient biomechanical execution, reflected by coordinated trunk–lower limb motion, controlled vertical loading, reduced lateral corrections, and higher forward acceleration, even when longer trajectories were followed. These findings indicate that trajectory geometry alone is insufficient to explain performance outcomes and support the integration of wearable biomechanics with trajectory modeling as a practical, low-cost, and field-deployable tool for alpine skiing performance diagnostics. Full article
(This article belongs to the Special Issue Wearable Sensors for Optimising Rehabilitation and Sport Training)
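The abstract above quantifies deviations from "mathematically defined ideal trajectories" via arc-based modeling. A minimal sketch of that idea, assuming the ideal turn is a circular arc: the lateral deviation of a tracked position is the absolute difference between its distance to the arc center and the arc radius. All coordinates, the center, and the radius below are invented for illustration, not taken from the paper.

```python
import math

def arc_deviation(point, center, radius):
    """Lateral deviation of a tracked position from an ideal circular-arc trajectory."""
    dx = point[0] - center[0]
    dy = point[1] - center[1]
    return abs(math.hypot(dx, dy) - radius)

# Hypothetical trajectory samples (metres); center and radius are made up.
track = [(10.0, 0.0), (9.8, 2.0), (9.1, 4.1)]
deviations = [arc_deviation(p, center=(0.0, 0.0), radius=10.0) for p in track]
mean_dev = sum(deviations) / len(deviations)
```

A point exactly on the arc yields zero deviation; aggregating per-run means is one plausible way to compare runs, though the paper's actual calibration is regression-based and more involved.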
26 pages, 1858 KB  
Review
Artificial Intelligence in Lubricant Research—Advances in Monitoring and Predictive Maintenance
by Raj Shah, Kate Marussich, Vikram Mittal and Andreas Rosenkranz
Lubricants 2026, 14(2), 72; https://doi.org/10.3390/lubricants14020072 - 3 Feb 2026
Abstract
Artificial intelligence is transforming lubricant research by linking molecular modeling, diagnostics, and industrial operations into predictive systems. In this regard, machine learning methods such as Bayesian optimization and neural-based Quantitative Structure–Property/Tribological Relationship (QSPR/QSTR) modeling help to accelerate additive design and formulation development. Moreover, deep learning and hybrid physics–AI frameworks are now capable of predicting key lubricant properties such as viscosity, oxidation stability, and wear resistance directly from molecular or spectral data, reducing the need for long-duration field trials like fleet or engine endurance tests. With respect to condition monitoring, convolutional neural networks automate wear debris classification, multimodal sensor fusion enables real-time oil health tracking, and digital twins provide predictive maintenance by forecasting lubricant degradation and optimizing drain intervals. AI-assisted blending and process control platforms extend these advantages into manufacturing, reducing waste and improving reproducibility. This article sheds light on recent progress in AI-driven formulation, monitoring, and maintenance, and identifies major barriers to adoption such as fragmented datasets, limited model transferability, and low explainability. Moreover, it discusses how standardized data infrastructures, physics-informed learning, and secure federated approaches can advance the industry toward adaptive, sustainable lubricant development under the principles of Industry 5.0. Full article
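The QSPR/QSTR modeling the abstract mentions maps molecular descriptors to lubricant properties. At its simplest this is a regression; the one-descriptor least-squares sketch below uses an invented chain-length vs. log-viscosity toy dataset (the descriptor, values, and linear form are assumptions for illustration, not from the review).

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b for a single molecular descriptor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Toy descriptor/property pairs (hypothetical chain length vs. log-viscosity).
chain_len = [8, 10, 12, 14]
log_visc = [0.9, 1.1, 1.3, 1.5]
slope, intercept = fit_line(chain_len, log_visc)
```

Real QSPR/QSTR models use many descriptors and nonlinear learners, but the fitted-mapping principle is the same.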
11 pages, 194 KB  
Article
Transforming Relational Care Values in AI-Mediated Healthcare: A Text Mining Analysis of Patient Narrative
by So Young Lee
Healthcare 2026, 14(3), 371; https://doi.org/10.3390/healthcare14030371 - 2 Feb 2026
Abstract
Background: This study examined how patients and caregivers perceive and experience AI-based care technologies through text mining analysis. The goal was to identify major themes, sentiments, and value-oriented interpretations embedded in their narratives and to understand how these perceptions align with key dimensions of patient-centered care. Methods: A corpus of publicly available narratives describing experiences with AI-based care was compiled from online communities. Natural language processing techniques were applied, including descriptive term analysis, topic modeling using Latent Dirichlet Allocation, and sentiment profiling based on a Korean lexicon. Emergent topics and emotional patterns were mapped onto domains of patient-centered care such as information quality, emotional support, autonomy, and continuity. Results: The analysis revealed a three-phase evolution of care values over time. In the early phase of AI-mediated care, patient narratives emphasized disruption of relational care, with negative themes such as reduced human connection, privacy concerns, safety uncertainties, and usability challenges, accompanied by emotions of fear and frustration. During the transitional phase, positive themes including convenience, improved access, and reassurance from diagnostic accuracy emerged alongside persistent emotional ambivalence, reflecting uncertainty regarding responsibility and control. In the final phase, care values were restored and strengthened, with sentiment patterns shifting toward trust and relief as AI functions became supportive of clinical care, while concerns related to depersonalization and surveillance diminished. Conclusions: Patients and caregivers experience AI-based care as both beneficial and unsettling. Perceptions improve when AI enhances efficiency and information flow without compromising relational aspects of care. 
Ensuring transparency, explainability, opportunities for human contact, and strong data protections is essential for aligning AI with principles of patient-centered care. Based on a small-scale qualitative dataset of patient narratives, this study offers an exploratory, value-oriented interpretation of how relational care evolves in AI-mediated healthcare contexts. In this study, care-ethics values are used as an analytical lens to operationalize key principles of patient-centered care within AI-mediated healthcare contexts. Full article
(This article belongs to the Section Digital Health Technologies)
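The study's sentiment profiling was lexicon-based (using a Korean lexicon). A minimal sketch of the mechanism, with an invented English lexicon and invented weights purely for illustration: each narrative is scored by the mean weight of its matched tokens.

```python
# Tiny illustrative sentiment lexicon; the study used a Korean lexicon,
# and these terms and weights are invented for this sketch.
LEXICON = {"trust": 1.0, "relief": 1.0, "convenient": 0.5,
           "fear": -1.0, "frustration": -1.0, "privacy": -0.5}

def sentiment_score(text):
    """Mean lexicon weight over matched tokens; 0.0 if nothing matches."""
    tokens = text.lower().split()
    hits = [LEXICON[t] for t in tokens if t in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

early = sentiment_score("fear and frustration over privacy")
late = sentiment_score("trust and relief it was convenient")
```

Scoring narratives from different time periods this way is one simple route to the phase-wise sentiment shift (negative early, positive late) the study reports.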
23 pages, 5043 KB  
Article
A Hybrid of ResNext101_32x8d and Swin Transformer Networks with XAI for Alzheimer’s Disease Detection
by Saeed Mohsen, Amr Yousef and M. Abdel-Aziz
Computers 2026, 15(2), 95; https://doi.org/10.3390/computers15020095 - 2 Feb 2026
Abstract
Medical images obtained from advanced imaging devices play a crucial role in supporting disease diagnosis and detection. Nevertheless, acquiring such images is often costly and storage-intensive, and diagnosing individuals is time-consuming. The use of artificial intelligence (AI)-based automated diagnostic systems provides potential solutions to address the limitations of cost and diagnostic time. In particular, deep learning and explainable AI (XAI) techniques provide a reliable and robust approach to classifying medical images. This paper presents a hybrid model comprising two networks, ResNext101_32x8d and Swin Transformer, to differentiate four categories of Alzheimer’s disease: no dementia, very mild dementia, mild dementia, and moderate dementia. The combination of the two networks is applied to imbalanced data, trained on 5120 MRI images, validated on 768 images, and tested on 512 other images. Grad-CAM and LIME techniques with a saliency map are employed to interpret the predictions of the model, providing transparent and clinically interpretable decision support. The proposed combination is realized through a TensorFlow framework, incorporating hyperparameter optimization and various data augmentation methods. The performance evaluation of the proposed model is conducted through several metrics, including the error matrix, precision–recall (PR), receiver operating characteristic (ROC), accuracy, and loss curves. Experimental results reveal that the hybrid of ResNext101_32x8d and Swin Transformer achieved a testing accuracy of 98.83% with a corresponding loss rate of 0.1019. Furthermore, for the combination “ResNext101_32x8d + Swin Transformer”, the precision, F1-score, and recall were 99.39%, 99.15%, and 98.91%, respectively, while the area under the ROC curve (AUC) was 1.00 (100%). 
The combination of proposed networks with XAI techniques establishes a unique contribution to advance medical AI systems and assist radiologists during Alzheimer’s disease screening of patients. Full article
(This article belongs to the Section AI-Driven Innovations)
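Grad-CAM, used here to interpret the hybrid model, weights each feature-map channel by its mean gradient and takes a ReLU of the weighted sum. A dependency-free sketch of that computation on toy 2x2 activations (all activation and gradient values below are invented; a real implementation pulls these tensors from the trained network):

```python
def grad_cam(activations, gradients):
    """Grad-CAM: channel weights = mean gradient per channel;
    CAM = ReLU(weighted sum of activation channels).
    activations/gradients are [channel][row][col] nested lists (toy sizes)."""
    h, w = len(activations[0]), len(activations[0][0])
    weights = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    cam = [[max(0.0, sum(wt * a[i][j] for wt, a in zip(weights, activations)))
            for j in range(w)] for i in range(h)]
    return cam

# Two 2x2 channels with made-up activations and gradients.
acts = [[[1.0, 0.0], [0.0, 1.0]], [[0.0, 2.0], [2.0, 0.0]]]
grads = [[[0.4, 0.4], [0.4, 0.4]], [[-0.1, -0.1], [-0.1, -0.1]]]
heatmap = grad_cam(acts, grads)
```

The ReLU zeroes regions whose weighted activation opposes the target class, which is why Grad-CAM heatmaps highlight only class-supporting evidence.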
55 pages, 2886 KB  
Article
Hybrid AI and LLM-Enabled Agent-Based Real-Time Decision Support Architecture for Industrial Batch Processes: A Clean-in-Place Case Study
by Apolinar González-Potes, Diego Martínez-Castro, Carlos M. Paredes, Alberto Ochoa-Brust, Luis J. Mena, Rafael Martínez-Peláez, Vanessa G. Félix and Ramón A. Félix-Cuadras
AI 2026, 7(2), 51; https://doi.org/10.3390/ai7020051 - 1 Feb 2026
Abstract
A hybrid AI and LLM-enabled architecture is presented for real-time decision support in industrial batch processes, where supervision still relies heavily on human operators and ad hoc SCADA logic. Unlike algorithmic contributions proposing novel AI methods, this work addresses the practical integration and deployment challenges arising when applying existing AI techniques to safety-critical industrial environments with legacy PLC/SCADA infrastructure and real-time constraints. The framework combines deterministic rule-based agents, fuzzy and statistical enrichment, and large language models (LLMs) to support monitoring, diagnostic interpretation, preventive maintenance planning, and operator interaction with minimal manual intervention. High-frequency sensor streams are collected into rolling buffers per active process instance; deterministic agents compute enriched variables, discrete supervisory states, and rule-based alarms, while an LLM-driven analytics agent answers free-form operator queries over the same enriched datasets through a conversational interface. The architecture is instantiated and deployed in the Clean-in-Place (CIP) system of an industrial beverage plant and evaluated following a case study design aimed at demonstrating architectural feasibility and diagnostic behavior under realistic operating regimes rather than statistical generalization. Three representative multi-stage CIP executions—purposively selected from 24 runs monitored during a six-month deployment—span nominal baseline, preventive-warning, and diagnostic-alert conditions. The study quantifies stage-specification compliance, state-to-specification consistency, and temporal stability of supervisory states, and performs spot-check audits of numerical consistency between language-based summaries and enriched logs. 
Results in the evaluated CIP deployment show high time within specification in sanitizing stages (100% compliance across the evaluated runs), coherent and mostly stable supervisory states in variable alkaline conditions (state–specification consistency Γs ≥ 0.98), and data-grounded conversational diagnostics in real time (median numerical error below 3% in audited samples), without altering the existing CIP control logic. These findings suggest that the architecture can be transferred to other industrial cleaning and batch operations by reconfiguring process-specific rules and ontologies, though empirical validation in other process types remains future work. The contribution lies in demonstrating how to bridge the gap between AI theory and industrial practice through careful system architecture, data transformation pipelines, and integration patterns that enable reliable AI-enhanced decision support in production environments, offering a practical path toward AI-assisted process supervision with explainable conversational interfaces that support preventive maintenance decision-making and equipment health monitoring. Full article
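The architecture's rolling buffers, which deterministic agents read to compute enriched variables per active process instance, can be sketched with a fixed-size deque. The window size, variable name, and temperature values below are assumptions for illustration, not details from the paper.

```python
from collections import deque

class RollingBuffer:
    """Fixed-size rolling buffer per process instance; deterministic
    agents read enriched statistics from it (window size is assumed)."""
    def __init__(self, maxlen=5):
        self.samples = deque(maxlen=maxlen)

    def push(self, value):
        self.samples.append(value)  # oldest sample drops out automatically

    def enriched(self):
        vals = list(self.samples)
        return {"mean": sum(vals) / len(vals), "min": min(vals), "max": max(vals)}

buf = RollingBuffer(maxlen=3)
for temperature in [71.0, 72.0, 76.0, 75.0]:  # hypothetical CIP temperatures (°C)
    buf.push(temperature)
stats = buf.enriched()  # window now holds only the last 3 samples
```

Rule-based agents would then compare such enriched statistics against stage specifications to derive supervisory states and alarms.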
34 pages, 5749 KB  
Systematic Review
Remote Sensing and Machine Learning Approaches for Hydrological Drought Detection: A PRISMA Review
by Odwa August, Malusi Sibiya, Masengo Ilunga and Mbuyu Sumbwanyambe
Water 2026, 18(3), 369; https://doi.org/10.3390/w18030369 - 31 Jan 2026
Abstract
Hydrological drought poses a significant threat to water security and ecosystems globally. While remote sensing offers vast spatial data, advanced analytical methods are required to translate this data into actionable insights. This review addresses this need by systematically synthesizing the state-of-the-art in using convolutional neural networks (CNNs) and satellite-derived vegetation indices for hydrological drought detection. Following PRISMA guidelines, a systematic search of studies published between 1 January 2018 and August 2025 was conducted, resulting in 137 studies for inclusion. A narrative synthesis approach was adopted. Among the 137 studies included, 58% focused on hybrid CNN-LSTM models, with a marked increase in publications observed after 2020. The analysis reveals that hybrid spatiotemporal models are the most effective, demonstrating superior forecasting skill and in some cases achieving 10–20% higher accuracy than standalone CNNs. The most robust models employ multi-modal data fusion, integrating vegetation indices (VIs) with complementary data like Land Surface Temperature (LST). Future research should focus on enhancing model transferability and incorporating explainable AI (XAI) to strengthen the operational utility of drought early warning systems. Full article
(This article belongs to the Section Hydrology)
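The satellite-derived vegetation indices (VIs) central to the reviewed drought-detection models are simple band ratios. NDVI, the most common, is sketched below; the reflectance values are invented toy inputs, not data from any reviewed study.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

# Toy reflectances: a healthy canopy pixel vs. a drought-stressed one (values invented).
healthy = ndvi(nir=0.50, red=0.08)
stressed = ndvi(nir=0.30, red=0.20)
```

NDVI falls in [-1, 1]; drought stress lowers NIR reflectance and raises red reflectance, pushing the index down, which is the signal CNN-based models learn to associate with hydrological drought conditions.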
36 pages, 5431 KB  
Article
Explainable AI-Driven Quality and Condition Monitoring in Smart Manufacturing
by M. Nadeem Ahangar, Z. A. Farhat, Aparajithan Sivanathan, N. Ketheesram and S. Kaur
Sensors 2026, 26(3), 911; https://doi.org/10.3390/s26030911 - 30 Jan 2026
Abstract
Artificial intelligence (AI) is increasingly adopted in manufacturing for tasks such as automated inspection, predictive maintenance, and condition monitoring. However, the opaque, black-box nature of many AI models remains a major barrier to industrial trust, acceptance, and regulatory compliance. This study investigates how explainable artificial intelligence (XAI) techniques can be used to systematically open and interpret the internal reasoning of AI systems commonly deployed in manufacturing, rather than to optimise or compare model performance. A unified explainability-centred framework is proposed and applied across three representative manufacturing use cases encompassing heterogeneous data modalities and learning paradigms: vision-based classification of casting defects, vision-based localisation of metal surface defects, and unsupervised acoustic anomaly detection for machine condition monitoring. Diverse models are intentionally employed as representative black-box decision-makers to evaluate whether XAI methods can provide consistent, physically meaningful explanations independent of model architecture, task formulation, or supervision strategy. A range of established XAI techniques, including Grad-CAM, Integrated Gradients, Saliency Maps, Occlusion Sensitivity, and SHAP, are applied to expose model attention, feature relevance, and decision drivers across visual and acoustic domains. The results demonstrate that XAI enables alignment between model behaviour and physically interpretable defect and fault mechanisms, supporting transparent, auditable, and human-interpretable decision-making. By positioning explainability as a core operational requirement rather than a post hoc visual aid, this work contributes a cross-modal framework for trustworthy AI in manufacturing, aligned with Industry 5.0 principles, human-in-the-loop oversight, and emerging expectations for transparent and accountable industrial AI systems. Full article
(This article belongs to the Section Intelligent Sensors)
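Among the XAI techniques applied in this framework, occlusion sensitivity is the most model-agnostic: mask a region, re-score, and record the drop. A minimal sketch with a dummy stand-in "model" (the mean-intensity callable and the toy 2x2 image are assumptions; a real application scores with the trained network):

```python
def occlusion_map(image, model, patch=1, fill=0.0):
    """Score drop when each patch is occluded; a larger drop marks a
    more important region. `model` is any callable image -> scalar score."""
    base = model(image)
    h, w = len(image), len(image[0])
    heat = [[0.0] * w for _ in range(h)]
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = [row[:] for row in image]  # copy, then mask one patch
            for di in range(patch):
                for dj in range(patch):
                    if i + di < h and j + dj < w:
                        occluded[i + di][j + dj] = fill
            heat[i][j] = base - model(occluded)
    return heat

# Dummy "model": mean pixel intensity; 2x2 toy image with one bright pixel.
model = lambda img: sum(map(sum, img)) / 4.0
heat = occlusion_map([[1.0, 0.0], [0.0, 0.0]], model)
```

Because it needs only forward passes, the same routine works unchanged across the paper's vision and (spectrogram-based) acoustic use cases.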
38 pages, 783 KB  
Article
A Review on Protection and Cybersecurity in Hybrid AC/DC Microgrids: Conventional Challenges and AI/ML Approaches
by Farzaneh Eslami, Manaswini Gangineni, Ali Ebrahimi, Menaka Rathnayake, Mihirkumar Patel and Olga Lavrova
Energies 2026, 19(3), 744; https://doi.org/10.3390/en19030744 - 30 Jan 2026
Abstract
Hybrid AC/DC microgrids (HMGs) are increasingly recognized as a solution for the transition toward future energy systems because they can combine the efficiency of DC networks with an AC system. Despite these advantages, HMGs still face challenges in protection, cybersecurity, and reliability. Conventional protection schemes often fail due to reduced fault currents and the dominance of power electronic converters in islanded or dynamically reconfigured topologies. At the same time, IEC 61850 protocols remain vulnerable to advanced cyberattacks such as Denial of Service (DoS), false data injection (FDIA), and man-in-the-middle (MITM), posing serious threats to the stability and operational security of intelligent power networks. Previous surveys have typically examined these challenges in isolation; however, this paper provides the first integrated review of HMG protection across three complementary dimensions: traditional protection schemes, cybersecurity threats, and artificial intelligence/machine learning (AI/ML)-based approaches. By analyzing more than 100 studies published between 2012 and 2024, we show that AI/ML methods in simulation environments can achieve detection accuracies of 95–98% with response times under 10 ms, although these values are case-specific and depend on the evaluation setting, such as network scale, sampling configuration, noise levels, inverter control mode, and whether results are obtained in simulation, hardware-in-the-loop (HIL)/real-time digital simulator (RTDS), or field conditions. Nevertheless, the absence of standardized datasets and limited field validation remain key barriers to industrial adoption. Likewise, existing cybersecurity frameworks provide acceptable protection timing but lack resilience against emerging threats, while conventional methods underperform in clustered and islanded scenarios. 
Therefore, the future of HMG protection requires the integration of traditional schemes, resilient cybersecurity architectures, and explainable AI models, along with the development of benchmark datasets, hardware-in-the-loop validation, and implementation on platforms such as field-programmable gate array (FPGA) and μPMU. Full article
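One common detection pattern for the false data injection (FDIA) attacks discussed above is residual-based checking: compare each measurement against a state-estimator prediction and flag large deviations. The sketch below is a generic illustration; the threshold, units, and sample values are all invented, not taken from the review.

```python
def flag_injections(measurements, predictions, threshold=3.0):
    """Residual-based false-data-injection check: flag samples whose
    measurement deviates from the model prediction by more than `threshold`.
    (Threshold and units are illustrative only.)"""
    return [abs(m - p) > threshold for m, p in zip(measurements, predictions)]

measured  = [230.1, 229.8, 245.0, 230.2]   # toy bus-voltage samples (V)
predicted = [230.0, 230.0, 230.0, 230.0]   # toy state-estimator output
alarms = flag_injections(measured, predicted)
```

The review's point stands even for this toy: a stealthy attacker who keeps residuals under the threshold evades such checks, which is why ML-based and explainable detectors are being explored.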
14 pages, 541 KB  
Article
Discrepancies Between MDT Recommendations and AI-Generated Decisions in Gynecologic Oncology: A Retrospective Comparative Cohort Study
by Vasilios Pergialiotis, Nikolaos Thomakos, Vasilios Lygizos, Maria Fanaki, Antonia Varthaliti, Dimitrios Efthymios Vlachos and Dimitrios Haidopoulos
Cancers 2026, 18(3), 452; https://doi.org/10.3390/cancers18030452 - 30 Jan 2026
Abstract
Background: Multidisciplinary tumor boards (MDTs) remain the foundation of gynecologic cancer management, yet increasing diagnostic complexity and rapidly evolving molecular classifications have intensified interest in artificial intelligence (AI) as a potential decision-support tool. This study aimed to evaluate the concordance between MDT-derived recommendations and those generated by ChatGPT 5.0 across a large, real-world cohort of gynecologic oncology cases. Methods: This single-center retrospective analysis included 599 consecutive patients with cervical, endometrial, ovarian, or vulvar cancer evaluated during MDT meetings over a 2-month period. Standardized anonymized case summaries were entered into ChatGPT 5.0, which was instructed to follow current ESGO guidelines. AI-generated staging and treatment recommendations were compared with MDT decisions. Discrepancies were independently assessed by two reviewers and stratified by malignancy type, disease stage, and treatment domain. Results: Overall concordance for FIGO staging was 77.0%, while treatment-related decisions demonstrated lower discordance, particularly in chemotherapy (8.2%) and targeted therapy (6.8%). The highest staging disagreement occurred in early-stage endometrial cancer (32.6%), reflecting the complexity of newly revised molecular classifications. In recurrent ovarian and cervical cancer, discrepancies were more pronounced in surgical and systemic therapy recommendations, suggesting limited AI capacity to integrate multimodal imaging, prior treatments, and individualized considerations. Vulvar cancer cases showed the highest overall agreement. Conclusions: ChatGPT 5.0 aligns with MDT decisions in many straightforward scenarios but falls short in complex or nuanced cases requiring contextual, multimodal, and patient-specific reasoning. 
These findings underscore the need for prospective, real-time evaluation, multimodal data integration, external validation, and explainable AI frameworks before LLMs can be safely incorporated into routine gynecologic oncology decision-making. Full article
(This article belongs to the Special Issue Advances in Ovarian Cancer Treatment: Past, Present and Future)
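The concordance figures reported above (e.g. 77.0% for FIGO staging) are simple agreement rates between paired decision lists. A minimal sketch; the stage labels below are invented toy data, not cases from the study.

```python
def concordance(mdt, ai):
    """Percentage of cases where the AI recommendation matches the MDT decision."""
    matches = sum(1 for a, b in zip(mdt, ai) if a == b)
    return 100.0 * matches / len(mdt)

# Toy paired staging decisions (labels invented for the sketch).
mdt_stage = ["IA", "IB", "II", "IIIA", "IA"]
ai_stage  = ["IA", "II", "II", "IIIA", "IB"]
rate = concordance(mdt_stage, ai_stage)
```

Stratifying such rates by malignancy type and treatment domain, as the study does, then localizes where the AI diverges from the tumor board.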
20 pages, 1142 KB  
Article
A Cross-Domain Benchmark of Intrinsic and Post Hoc Explainability for 3D Deep Learning Models
by Asmita Chakraborty, Gizem Karagoz and Nirvana Meratnia
J. Imaging 2026, 12(2), 63; https://doi.org/10.3390/jimaging12020063 - 30 Jan 2026
Abstract
Deep learning models for three-dimensional (3D) data are increasingly used in domains such as medical imaging, object recognition, and robotics. As AI adoption in these domains grows, the black-box nature of these models makes explainability an increasingly pressing requirement. However, the lack of standardized and quantitative benchmarks for explainable artificial intelligence (XAI) in 3D data limits the reliable comparison of explanation quality. In this paper, we present a unified benchmarking framework to evaluate both intrinsic and post hoc XAI methods across three representative 3D datasets: volumetric CT scans (MosMed), voxelized CAD models (ModelNet40), and real-world point clouds (ScanObjectNN). The evaluated methods include Grad-CAM, Integrated Gradients, Saliency, Occlusion, and the intrinsic ResAttNet-3D model. We quantitatively assess explanations using the Correctness (AOPC), Completeness (AUPC), and Compactness metrics, consistently applied across all datasets. Our results show that explanation quality varies significantly across methods and domains: Grad-CAM and intrinsic attention performed best on medical CT scans, while gradient-based methods excelled on voxelized and point-based data. Statistical tests (Kruskal–Wallis and Mann–Whitney U) confirmed significant performance differences between methods. No single approach achieved superior results across all domains, highlighting the importance of multi-metric evaluation. This work provides a reproducible framework for standardized assessment of 3D explainability and comparative insights to guide future XAI method selection. Full article
(This article belongs to the Special Issue Explainable AI in Computer Vision)
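The Correctness metric used in this benchmark, AOPC (Area Over the Perturbation Curve), averages the score drop as the regions an explanation ranks most relevant are removed one by one. A dependency-free sketch; the two perturbation curves below are invented toy values, not benchmark results.

```python
def aopc(scores):
    """Area Over the Perturbation Curve: mean drop from the unperturbed score.
    scores[0] is f(x); scores[k] is f after removing the k most relevant regions."""
    base = scores[0]
    return sum(base - s for s in scores) / len(scores)

# Toy perturbation curves: a faithful explanation makes the score fall quickly.
faithful = aopc([0.9, 0.5, 0.2, 0.1])
random_order = aopc([0.9, 0.85, 0.8, 0.75])
```

A higher AOPC means the explanation's relevance ranking genuinely drives the model's prediction, which is why the benchmark treats it as a correctness measure.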
37 pages, 9386 KB  
Article
Toward AI-Assisted Sickle Cell Screening: A Controlled Comparison of CNN, Transformer, and Hybrid Architectures Using Public Blood-Smear Images
by Linah Tasji, Hanan S. Alghamdi and Abdullah S. Almalaise Al-Ghamdi
Diagnostics 2026, 16(3), 414; https://doi.org/10.3390/diagnostics16030414 - 29 Jan 2026
Abstract
Background: Sickle cell disease (SCD) is a prevalent hereditary hemoglobinopathy associated with substantial morbidity, particularly in regions with limited access to advanced laboratory diagnostics. Conventional diagnostic workflows, including manual peripheral blood smear examination and biochemical or molecular assays, are resource-intensive, time-consuming, and subject to observer variability. Recent advances in artificial intelligence (AI) enable automated analysis of blood smear images and offer a scalable alternative for SCD screening. Methods: This study presents a controlled benchmark of CNNs, Vision Transformers, hierarchical Transformers, and hybrid CNN–Transformer architectures for image-level SCD classification using a publicly available peripheral blood smear dataset. Eleven ImageNet-pretrained models were fine-tuned under identical conditions using an explicit leakage-safe evaluation protocol, incorporating duplicate-aware, group-based data splitting and repeated splits to assess robustness. Performance was evaluated using accuracy and macro-averaged precision, recall, and F1-score, complemented by bootstrap confidence intervals, paired statistical testing, error-type analysis, and explainable AI (XAI). Results: Across repeated group-aware splits, CNN-based and hybrid architectures demonstrated more stable and consistently higher performance than transformer-only models. MaxViT-Tiny and DenseNet121 ranked highest overall, while pure ViTs showed reduced effectiveness under data-constrained conditions. Error analysis revealed a dominance of false-positive predictions, reflecting intrinsic morphological ambiguity in challenging samples. XAI visualizations suggest that CNNs focus on localized red blood cell morphology, whereas hybrid models integrate both local and contextual cues. Conclusions: Under limited-data conditions, convolutional inductive bias remains critical for robust blood-smear-based SCD classification. 
CNN and hybrid CNN–Transformer models offer interpretable and reliable performance, supporting their potential role as decision-support tools in screening-oriented research settings. Full article
(This article belongs to the Special Issue Artificial Intelligence in Pathological Image Analysis—2nd Edition)
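The leakage-safe, group-based splitting this benchmark emphasizes ensures that all images from the same source group end up on the same side of the train/test boundary. A minimal sketch; the image IDs, groups, and split fraction below are invented for illustration.

```python
import random

def group_split(samples, groups, test_frac=0.3, seed=0):
    """Leakage-safe split: all samples sharing a group (e.g. the same
    smear or patient) land on the same side of the train/test boundary."""
    rng = random.Random(seed)
    unique = sorted(set(groups))
    rng.shuffle(unique)
    n_test = max(1, int(len(unique) * test_frac))
    held_out = set(unique[:n_test])
    train = [s for s, g in zip(samples, groups) if g not in held_out]
    test = [s for s, g in zip(samples, groups) if g in held_out]
    return train, test

imgs   = ["a1", "a2", "b1", "c1", "c2", "d1"]   # toy image ids
origin = ["A",  "A",  "B",  "C",  "C",  "D"]    # toy source groups
train, test = group_split(imgs, origin)
```

Without this grouping, near-duplicate images from one smear can appear in both splits and inflate test accuracy, the leakage the paper's protocol is designed to prevent.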
18 pages, 2686 KB  
Article
MRI-Based Bladder Cancer Staging via YOLOv11 Segmentation and Deep Learning Classification
by Phisit Katongtung, Kanokwatt Shiangjen, Watcharaporn Cholamjiak and Krittin Naravejsakul
Diseases 2026, 14(2), 45; https://doi.org/10.3390/diseases14020045 - 28 Jan 2026
Abstract
Background: Accurate staging of bladder cancer is critical for guiding clinical management, particularly the distinction between non–muscle-invasive (T1) and muscle-invasive (T2–T4) disease. Although MRI offers superior soft-tissue contrast, image interpretation remains operator-dependent and subject to inter-observer variability. This study proposes an automated deep learning framework for MRI-based bladder cancer staging to support standardized radiological interpretation. Methods: A sequential AI-based pipeline was developed, integrating hybrid tumor segmentation using YOLOv11 for lesion detection and DeepLabV3 for boundary refinement, followed by three deep learning classifiers (VGG19, ResNet50, and Vision Transformer) for MRI-based stage prediction. A total of 416 T2-weighted MRI images with radiology-derived stage labels (T1–T4) were included, with data augmentation applied during training. Model performance was evaluated using accuracy, precision, recall, F1-score, and multi-class AUC. Performance uncertainty was characterized using patient-level bootstrap confidence intervals under a fixed training and evaluation pipeline. Results: All evaluated models demonstrated high and broadly comparable discriminative performance for MRI-based bladder cancer staging within the present dataset, with high point estimates of accuracy and AUC, particularly for differentiating non–muscle-invasive from muscle-invasive disease. Calibration analysis characterized the probabilistic behavior of predicted stage probabilities under the current experimental setting. Conclusions: The proposed framework demonstrates the feasibility of automated MRI-based bladder cancer staging derived from radiological reference labels and supports the potential of deep learning for standardizing and reproducing MRI-based staging procedures. 
Rather than serving as an independent clinical decision-support system, the framework is intended as a methodological and workflow-oriented tool for automated staging consistency. Further validation using multi-center datasets, patient-level data splitting prior to augmentation, pathology-confirmed reference standards, and explainable AI techniques is required to establish generalizability and clinical relevance. Full article
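The patient-level bootstrap mentioned in this abstract resamples patients, not images, so that correlated images from the same patient enter or leave each bootstrap replicate together. A small self-contained sketch of that procedure, under the assumption of a simple percentile interval; the function name and toy data are illustrative, not the authors' code:

```python
# Illustrative patient-level bootstrap confidence interval for accuracy.
# Resampling units are patients, so within-patient correlation is preserved.
import random
from collections import defaultdict

def patient_bootstrap_ci(y_true, y_pred, patient_ids, n_boot=2000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    # Group per-image correctness flags by patient.
    by_patient = defaultdict(list)
    for yt, yp, pid in zip(y_true, y_pred, patient_ids):
        by_patient[pid].append(yt == yp)
    patients = sorted(by_patient)
    accs = []
    for _ in range(n_boot):
        # Draw patients with replacement; each brings all of their images.
        sample = [rng.choice(patients) for _ in patients]
        hits = [h for p in sample for h in by_patient[p]]
        accs.append(sum(hits) / len(hits))
    accs.sort()
    # Simple percentile interval.
    lo = accs[int((alpha / 2) * (n_boot - 1))]
    hi = accs[int((1 - alpha / 2) * (n_boot - 1))]
    return lo, hi

# Perfect predictions give a degenerate interval of (1.0, 1.0).
lo, hi = patient_bootstrap_ci(
    y_true=[1, 1, 0, 2, 2, 3],
    y_pred=[1, 1, 0, 2, 2, 3],
    patient_ids=["a", "a", "b", "b", "c", "c"],
)
```

An image-level bootstrap on the same data would understate uncertainty whenever several images come from one patient, which is exactly what patient-level resampling avoids.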

14 pages, 1488 KB  
Article
A Framework for Interpreting Machine Learning Models in Bond Default Risk Prediction Using LIME and SHAP
by Yan Zhang, Lin Chen and Yixiang Tian
Risks 2026, 14(2), 23; https://doi.org/10.3390/risks14020023 - 28 Jan 2026
Abstract
Interpretability analysis methods, such as LIME and SHAP, are widely employed to explain the predictions of artificial intelligence models; however, they primarily function as post hoc tools and do not directly quantify the intrinsic interpretability of the models. Although it is commonly assumed that model transparency decreases with increasing complexity, there is currently no standardized framework for evaluating interpretability as an inherent property of AI models. In this study, we examine the prediction of bond defaults using several widely used machine learning algorithms. The classification performance of each algorithm is first evaluated, followed by the application of LIME and SHAP to assess the influence of input features on model outputs. Based on these analyses, we propose a novel approach for quantifying intrinsic model interpretability. The results align with theoretical expectations and provide insights into the trade-off between model complexity and interpretability. Full article
(This article belongs to the Special Issue Artificial Intelligence Risk Management)
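The SHAP attributions this article applies are grounded in the Shapley value from cooperative game theory: a feature's attribution is its average marginal contribution to the model output over all coalitions of the other features. The exact computation is feasible only for a handful of features (real SHAP implementations approximate it); the sketch below uses a baseline-substitution convention for "removing" a feature, which is one common choice, not the paper's specific setup.

```python
# Exact Shapley values for a tiny model, enumerating all coalitions.
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    n = len(x)

    def value(coalition):
        # Features in the coalition keep their observed value; the rest are
        # replaced by the baseline (a simple "feature removal" convention).
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return predict(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley kernel weight for a coalition of size k.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy linear model: attributions should equal w_j * (x_j - baseline_j).
predict = lambda z: 2.0 * z[0] + 1.0 * z[1] - 3.0 * z[2]
phi = shapley_values(predict, x=[1.0, 2.0, 0.5], baseline=[0.0, 0.0, 0.0])
# For this linear model, phi == [2.0, 2.0, -1.5]
```

The linear case is a useful sanity check: Shapley attributions recover the coefficient-times-deviation decomposition exactly, and they always sum to the gap between the prediction at x and at the baseline (the efficiency property).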

24 pages, 1289 KB  
Article
Designing Understandable and Fair AI for Learning: The PEARL Framework for Human-Centered Educational AI
by Sagnik Dakshit, Kouider Mokhtari and Ayesha Khalid
Educ. Sci. 2026, 16(2), 198; https://doi.org/10.3390/educsci16020198 - 28 Jan 2026
Abstract
As artificial intelligence (AI) is increasingly used in classrooms, tutoring systems, and learning platforms, it is essential that these tools are not only powerful, but also easy to understand, fair, and supportive of real learning. Many current AI systems can generate fluent responses or accurate predictions, yet they often fail to clearly explain their decisions, reflect students’ cultural contexts, or give learners and educators meaningful control. This gap can reduce trust and limit the educational value of AI-supported learning. This paper introduces the PEARL framework, a human-centered approach for designing and evaluating explainable AI in education. PEARL is built around five core principles: Pedagogical Personalization (adapting support to learners’ levels and curriculum goals), Explainability and Engagement (providing clear, motivating explanations in everyday language), Attribution and Accountability (making AI decisions traceable and justifiable), Representation and Reflection (supporting fairness, diversity, and learner self-reflection), and Localized Learner Agency (giving learners control over how AI explains and supports them). Unlike many existing explainability approaches that focus mainly on technical performance, PEARL emphasizes how students, teachers, and administrators experience and make sense of AI decisions. The framework is demonstrated through simulated examples using an AI-based tutoring system, showing how PEARL can improve feedback clarity, support different stakeholder needs, reduce bias, and promote culturally relevant learning. The paper also introduces the PEARL Composite Score, a practical evaluation tool that helps assess how well educational AI systems align with ethical, pedagogical, and human-centered principles. This study includes a small exploratory mixed-methods user study (N = 17) evaluating example AI tutor interactions; no live classroom deployment was conducted. 
Together, these contributions offer a practical roadmap for building educational AI systems that are not only effective, but also trustworthy, inclusive, and genuinely supportive of human learning. Full article
(This article belongs to the Section Technology Enhanced Education)

27 pages, 1633 KB  
Review
Transformer Models, Graph Networks, and Generative AI in Gut Microbiome Research: A Narrative Review
by Yan Zhu, Yiteng Tang, Xin Qi and Xiong Zhu
Bioengineering 2026, 13(2), 144; https://doi.org/10.3390/bioengineering13020144 - 27 Jan 2026
Abstract
Background: The rapid advancement in artificial intelligence (AI) has fundamentally reshaped gut microbiome research by enabling high-resolution analysis of complex, high-dimensional microbial communities and their functional interactions with the human host. Objective: This narrative review aims to synthesize recent methodological advances in AI-driven gut microbiome research and to evaluate their translational relevance for therapeutic optimization, personalized nutrition, and precision medicine. Methods: A narrative literature review was conducted using PubMed, Google Scholar, Web of Science, and IEEE Xplore, focusing on peer-reviewed studies published between approximately 2015 and early 2025. Representative articles were selected based on relevance to AI methodologies applied to gut microbiome analysis, including machine learning, deep learning, transformer-based models, graph neural networks, generative AI, and multi-omics integration frameworks. Additional seminal studies were identified through manual screening of reference lists. Results: The reviewed literature demonstrates that AI enables robust identification of diagnostic microbial signatures, prediction of individual responses to microbiome-targeted therapies, and design of personalized nutritional and pharmacological interventions using in silico simulations and digital twin models. AI-driven multi-omics integration—encompassing metagenomics, metatranscriptomics, metabolomics, proteomics, and clinical data—has improved functional interpretation of host–microbiome interactions and enhanced predictive performance across diverse disease contexts. For example, AI-guided personalized nutrition models have achieved AUC exceeding 0.8 for predicting postprandial glycemic responses, while community-scale metabolic modeling frameworks have accurately forecast individualized short-chain fatty acid production. 
Conclusions: Despite substantial progress, key challenges remain, including data heterogeneity, limited model interpretability, population bias, and barriers to clinical deployment. Future research should prioritize standardized data pipelines, explainable and privacy-preserving AI frameworks, and broader population representation. Collectively, these advances position AI as a cornerstone technology for translating gut microbiome data into actionable insights for diagnostics, therapeutics, and precision nutrition. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Complex Diseases)
