Search Results (265)

Search Parameters:
Keywords = XAI applications

40 pages, 16360 KB  
Review
Artificial Intelligence Meets Nail Diagnostics: Emerging Image-Based Sensing Platforms for Non-Invasive Disease Detection
by Tejrao Panjabrao Marode, Vikas K. Bhangdiya, Shon Nemane, Dhiraj Tulaskar, Vaishnavi M. Sarad, K. Sankar, Sonam Chopade, Ankita Avthankar, Manish Bhaiyya and Madhusudan B. Kulkarni
Bioengineering 2026, 13(1), 75; https://doi.org/10.3390/bioengineering13010075 - 8 Jan 2026
Viewed by 259
Abstract
Artificial intelligence (AI) and machine learning (ML) are transforming medical diagnostics, yet the human nail, an easily accessible and biologically rich substrate, remains underexploited in digital health. Nail pathologies are readily observed, non-invasive biomarkers of disease, including systemic conditions such as anemia, diabetes, psoriasis, melanoma, and fungal infections. This review presents the first comprehensive synthesis of AI/ML-based image analysis of nail lesions for diagnostic purposes. Where dermatological reviews to date have been broader in scope, this review focuses specifically on nail-related diagnosis and screening. We survey the imaging modalities involved (smartphone imaging, dermoscopy, Optical Coherence Tomography), the image-processing techniques applied (color correction, segmentation, cropping of regions of interest), and models ranging from classical methods to deep learning, with annotated descriptions of each. We also describe AI applications for specific diseases and discuss real-world impediments to clinical deployment, including data scarcity, variation in skin type, annotation errors, and other barriers to clinical adoption. Emerging solutions are highlighted as well: explainable AI (XAI), federated learning, and diagnostic platforms allied with smartphones. By bridging clinical dermatology, artificial intelligence, and mobile health, this review consolidates existing knowledge and charts a path toward scalable, equitable, and trustworthy nail-based diagnostic techniques. Our findings advocate for interdisciplinary innovation to move AI-enabled nail analysis from laboratory prototypes to routine healthcare and global screening initiatives.
(This article belongs to the Special Issue Bioengineering in a Generative AI World)

23 pages, 3238 KB  
Article
Agricultural Injury Severity Prediction Using Integrated Data-Driven Analysis: Global Versus Local Explainability Using SHAP
by Omer Mermer, Yanan Liu, Charles A. Jennissen, Milan Sonka and Ibrahim Demir
Safety 2026, 12(1), 6; https://doi.org/10.3390/safety12010006 - 8 Jan 2026
Viewed by 70
Abstract
Despite the agricultural sector’s consistently high injury rates, formal reporting is often limited, leading to sparse national datasets that hinder effective safety interventions. To address this, our study introduces a comprehensive framework leveraging advanced ensemble machine learning (ML) models to predict and interpret the severity of agricultural injuries. We use a unique, manually curated dataset of over 2400 agricultural incidents from AgInjuryNews, a public repository of news reports detailing incidents across the United States. We evaluated six ensemble models, including Gradient Boosting (GB), eXtreme Gradient Boosting (XGB), Light Gradient Boosting Machine (LightGBM), Adaptive Boosting (AdaBoost), Histogram-based Gradient Boosting Regression Trees (HistGBRT), and Random Forest (RF), for their accuracy in classifying injury outcomes as fatal or non-fatal. A key contribution of our work is the novel integration of explainable artificial intelligence (XAI), specifically SHapley Additive exPlanations (SHAP), to overcome the “black-box” nature of complex ensemble models. The models demonstrated strong predictive performance, with most achieving an accuracy of approximately 0.71 and an F1-score of 0.81. Through global SHAP analysis, we identified key factors influencing injury severity across the dataset, such as helmet use, victim age, and the type of injury agent. Additionally, our application of local SHAP analysis revealed how specific variables like location and the victim’s role can have varying impacts depending on the context of the incident. These findings provide actionable, context-aware insights for developing targeted policy and safety interventions for a range of stakeholders, from first responders to policymakers, offering a powerful tool for a more proactive approach to agricultural safety.
(This article belongs to the Special Issue Farm Safety, 2nd Edition)
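
The global-versus-local SHAP workflow described above is easy to reproduce in miniature. Below is a minimal sketch using the shap package with a generic gradient-boosting classifier; the feature names and synthetic data are illustrative placeholders, not the authors' pipeline or the AgInjuryNews dataset.

```python
# Minimal sketch: global vs. local SHAP analysis for an ensemble classifier.
# Feature names and synthetic data are illustrative placeholders, not the
# AgInjuryNews dataset used in the paper.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # placeholder incident features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # placeholder fatal/non-fatal label
feature_names = ["helmet_use", "victim_age", "injury_agent", "location"]

model = GradientBoostingClassifier().fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)           # (n_samples, n_features) for binary GBM

# Global view: mean |SHAP| per feature ranks severity drivers across the dataset.
global_importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, global_importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")

# Local view: per-feature contributions for one specific incident.
print("Incident 0 contributions:", dict(zip(feature_names, shap_values[0].round(3))))
```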

38 pages, 2642 KB  
Article
Capturing Short- and Long-Term Temporal Dependencies Using Bahdanau-Enhanced Fused Attention Model for Financial Data—An Explainable AI Approach
by Rasmi Ranjan Khansama, Rojalina Priyadarshini, Surendra Kumar Nanda and Rabindra Kumar Barik
FinTech 2026, 5(1), 4; https://doi.org/10.3390/fintech5010004 - 7 Jan 2026
Viewed by 71
Abstract
Prediction of stock closing price plays a critical role in financial planning, risk management, and informed investment decision-making. In this study, we propose a novel model that synergistically amalgamates Bidirectional GRU (BiGRU) with three complementary attention techniques—Top-k Sparse, Global, and Bahdanau Attention—to tackle the complex, intricate, and non-linear temporal dependencies in financial time series. The proposed Fused Attention Model is validated on two highly volatile, non-linear, and complex-patterned stock indices: NIFTY 50 and S&P 500, with 80% of the historical price data used for model learning and the remaining 20% for testing. A comprehensive analysis of the results, benchmarked against various baseline and hybrid deep learning architectures across multiple regression performance metrics such as Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Mean Absolute Percentage Error (MAPE), and R² Score, demonstrates the superiority of our proposed Fused Attention Model. Most significantly, the proposed model yields the highest prediction accuracy and generalization capability, with R² scores of 0.9955 on NIFTY 50 and 0.9961 on S&P 500. Additionally, to mitigate the issues of interpretability and transparency of the deep learning model for financial forecasting, we utilized three different Explainable Artificial Intelligence (XAI) techniques, namely Integrated Gradients, SHapley Additive exPlanations (SHAP), and Attention Weight Analysis. The results of these three XAI techniques validated the utilization of three attention techniques along with the BiGRU model. The proposed BiGRU-based Fused Attention (BiG-FA) model thus combines superior predictive performance with explainability, offering a robust and interpretable deep learning model for time-series prediction that is applicable beyond the financial domain.
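
To make the attention component concrete, here is a minimal PyTorch sketch of Bahdanau (additive) attention pooling over BiGRU outputs, one of the three mechanisms the abstract names. Dimensions, feature counts, and layer choices are illustrative assumptions, not the authors' BiG-FA architecture.

```python
# Minimal sketch: Bahdanau (additive) attention pooling over BiGRU outputs,
# one of the three mechanisms fused in the paper. Dimensions are illustrative;
# this is not the authors' full BiG-FA architecture.
import torch
import torch.nn as nn

class BahdanauAttention(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.W = nn.Linear(hidden_dim, hidden_dim)     # projects each time step
        self.v = nn.Linear(hidden_dim, 1, bias=False)  # scores each time step

    def forward(self, h):                      # h: (batch, seq_len, hidden_dim)
        scores = self.v(torch.tanh(self.W(h))) # (batch, seq_len, 1)
        weights = torch.softmax(scores, dim=1) # attention over time steps
        context = (weights * h).sum(dim=1)     # (batch, hidden_dim)
        return context, weights

# Usage: BiGRU over a price window, attention-pooled into one context vector.
bigru = nn.GRU(input_size=5, hidden_size=64, bidirectional=True, batch_first=True)
attn = BahdanauAttention(hidden_dim=128)       # 64 hidden units * 2 directions
x = torch.randn(8, 30, 5)                      # (batch, 30 days, 5 OHLCV features)
outputs, _ = bigru(x)
context, weights = attn(outputs)               # context would feed a regression head
```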

24 pages, 4323 KB  
Article
A Tabular Data Augmentation Framework Based on Error-Focused XAI-Supported Weighting Strategy: Application to Soil Liquefaction Classification
by Engin Nacaroglu, Ayse Tuba Tugrul and Berk Yagcioglu
Appl. Sci. 2026, 16(1), 330; https://doi.org/10.3390/app16010330 - 29 Dec 2025
Viewed by 196
Abstract
In tabular liquefaction datasets, data augmentation plays a crucial role in enhancing the classification performance of machine learning models. In this study, an XAI-supported, error-focused, weighting-based data augmentation framework is proposed to improve CPT-based soil liquefaction classification in data-limited case-history settings by leveraging feedback from test misclassifications. First, it is hypothesized that test errors are non-random and that certain features contributed the most to misclassifications. Accordingly, a SHAP-based error-contribution score approach was developed to identify error-contributing features. The core of the proposed framework relies on assigning weights to error-contributing features. This targeted weighting was employed in two components: (i) clustering to select training samples for augmentation; and (ii) noise injection applied only in difficult-to-predict regions. To this end, test errors were combined with the training data, and weighted Fuzzy C-Means clustering was applied by assigning a weight of 1.5 to the distance metric in the error-contributing features. Clusters where test errors were concentrated were therefore defined as “difficult-to-predict regions”. In these clusters, noise was injected into the error-contributing features with 1.5× higher amplitude. This design directly integrated XAI-based error explanations into the data augmentation process, enabling targeted augmentation in difficult-to-predict regions. Consequently, the decision boundaries of the models became sharper, particularly in the error-contributing features. The Random Forest model achieved the highest improvement, with its F1 score increasing by 0.019. These findings demonstrate that the proposed framework enhances classification performance for tabular liquefaction data.
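
The two weighted components are straightforward to sketch. The following NumPy snippet shows feature-weighted Fuzzy C-Means (scaling columns by sqrt(w) weights squared Euclidean distances by w, here w = 1.5 on the error-contributing features) and amplified noise injection in a difficult-to-predict cluster; the data, feature indices, and cluster selection are placeholders, not the paper's CPT dataset or SHAP scores.

```python
# Minimal sketch of the paper's two weighted components: (i) feature-weighted
# Fuzzy C-Means, where a weight of 1.5 on error-contributing features scales
# their share of the squared Euclidean distance, and (ii) noise injected with
# 1.5x amplitude into those features inside a difficult-to-predict cluster.
# Data and feature indices are placeholders.
import numpy as np

rng = np.random.default_rng(42)

def fuzzy_cmeans(X, c=3, m=2.0, iters=100):
    """Plain Fuzzy C-Means; returns cluster centers and membership matrix."""
    U = rng.dirichlet(np.ones(c), size=len(X))       # (n, c) memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = rng.normal(size=(200, 6))    # placeholder CPT-derived features
error_feats = [1, 4]             # indices flagged by the SHAP error score (placeholder)
w = np.ones(X.shape[1])
w[error_feats] = 1.5             # weight on error-contributing features

# Scaling columns by sqrt(w) is equivalent to weighting squared distances by w.
centers, U = fuzzy_cmeans(X * np.sqrt(w))
labels = U.argmax(axis=1)

# Stand-in for a "difficult-to-predict region": in the paper this is a cluster
# where test errors concentrate; here we simply take the smallest cluster.
hard = np.bincount(labels, minlength=3).argmin()
mask = labels == hard
noise = rng.normal(scale=0.05, size=(mask.sum(), len(error_feats)))

X_aug = X[mask].copy()
X_aug[:, error_feats] += 1.5 * noise   # amplified, targeted augmentation
print(f"augmented {mask.sum()} samples from cluster {hard}")
```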

29 pages, 2906 KB  
Review
Human-Centered AI to Accelerate the SDGs: Evidence Map (2020–2024)
by Denise Helena Lombardo Ferreira, Bruno de Aguiar Normanha, Cibele Roberta Sugahara, Diego de Melo Conti, Cândido Ferreira da Silva Filho and Ernesto D. R. Santibanez-Gonzalez
Sustainability 2026, 18(1), 149; https://doi.org/10.3390/su18010149 - 23 Dec 2025
Viewed by 371
Abstract
Artificial Intelligence (AI) has gained prominence on sustainability agendas while raising ethical, social, and environmental challenges. This study synthesizes evidence and maps the scientific production on Human-Centered AI (HCAI) at the interface with the Sustainable Development Goals (SDGs) for 2020–2024. Searches in Scopus and Web of Science (Boolean operators; thematic and temporal filters), followed by deduplication, yielded 265 articles, which were analyzed with Bibliometrix/Biblioshiny version 5.1.1 and VOSviewer version 1.6.20 to generate term co-occurrence maps, collaboration networks, and bibliographic coupling. The results indicate accelerated growth and diffusion of the topic, with journals such as Sustainability, IEEE Access, and Applied Sciences standing out. Three interdependent axes were identified: (i) technical performance, with emphasis on machine learning and deep learning; (ii) explainability and human-centeredness (XAI, ethics, and algorithmic governance); and (iii) socio-environmental applications oriented toward the SDGs. Underrepresentation of the Global South, particularly Brazil, was observed. It is concluded that HCAI is being consolidated as an emerging interdisciplinary field with potential to accelerate the SDGs, although there remains a need to integrate ethical, regional, and impact-assessment dimensions more systematically to achieve global targets effectively.
(This article belongs to the Section Development Goals towards Sustainability)

23 pages, 6281 KB  
Article
Empirical Mode Decomposition-Based Deep Learning Model Development for Medical Imaging: Feasibility Study for Gastrointestinal Endoscopic Image Classification
by Mou Deb, Mrinal Kanti Dhar, Poonguzhali Elangovan, Keerthy Gopalakrishnan, Divyanshi Sood, Aaftab Sethi, Sabah Afroze, Sourav Bansal, Aastha Goudel, Charmy Parikh, Avneet Kaur, Swetha Rapolu, Gianeshwaree Alias Rachna Panjwani, Rabiah Aslam Ansari, Naghmeh Asadimanesh, Shiva Sankari Karuppiah, Scott A. Helgeson, Venkata S. Akshintala and Shivaram P. Arunachalam
J. Imaging 2026, 12(1), 4; https://doi.org/10.3390/jimaging12010004 - 22 Dec 2025
Viewed by 297
Abstract
This study proposes a novel two-dimensional Empirical Mode Decomposition (2D EMD)-based deep learning framework to enhance model performance in multi-class image classification tasks and potential early detection of diseases in healthcare using medical imaging. To validate this approach, we apply it to gastrointestinal (GI) endoscopic image classification using the publicly available Kvasir dataset, which contains eight GI image classes with 1000 images each. The proposed 2D EMD-based design procedure decomposes images into a full set of intrinsic mode functions (IMFs) to enhance image features beneficial for AI model development. Integrating 2D EMD into a deep learning pipeline, we evaluate its impact on four popular models (ResNet152, VGG19bn, MobileNetV3L, and SwinTransformerV2S). The results demonstrate that subtracting IMFs from the original image consistently improves accuracy, F1-score, and AUC for all models. The study reveals a notable enhancement in model performance, with an approximately 9% increase in accuracy compared to counterparts without EMD integration for ResNet152. Similarly, there is an increase of around 18% for VGG19bn, 3% for MobileNetV3L, and 8% for SwinTransformerV2S. Additionally, explainable AI (XAI) techniques, such as Grad-CAM, illustrate that the model focuses on GI regions for predictions. This study highlights the efficacy of 2D EMD in enhancing deep learning model performance for GI image classification, with potential applications in other domains.
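
The preprocessing idea, decompose into IMFs and subtract from the original, can be sketched as follows, assuming the experimental EMD2D class from the PyEMD package (pip install EMD-signal); verify the API against your installed version, as the authors' implementation is not specified.

```python
# Minimal sketch of the paper's preprocessing idea: decompose an image into
# intrinsic mode functions (IMFs) with 2D EMD and subtract an IMF from the
# original before classification. Uses the PyEMD package's *experimental*
# EMD2D class (assumption; verify against your installed EMD-signal version).
import numpy as np
from PyEMD import EMD2D

image = np.random.rand(128, 128)   # placeholder grayscale endoscopic frame

emd2d = EMD2D()
imfs = emd2d(image)                # (n_imfs, H, W): finest to coarsest modes

# The paper reports that subtracting IMFs from the original image consistently
# improved accuracy, F1-score, and AUC; here we subtract the first (finest) IMF.
enhanced = image - imfs[0]

# 'enhanced' would then be normalized and fed to ResNet152, VGG19bn,
# MobileNetV3L, or SwinTransformerV2S exactly as the raw image would be.
print(imfs.shape, enhanced.shape)
```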

28 pages, 3264 KB  
Article
A Unified Fuzzy–Explainable AI Framework (FAS-XAI) for Customer Service Value Prediction and Strategic Decision-Making
by Gabriel Marín Díaz
AI 2026, 7(1), 3; https://doi.org/10.3390/ai7010003 - 22 Dec 2025
Viewed by 606
Abstract
Real-world decision-making often involves uncertainty, incomplete data, and the need to evaluate alternatives based on both quantitative and qualitative criteria. To address these challenges, this study presents FAS-XAI, a unified methodological framework that integrates fuzzy clustering and explainable artificial intelligence (XAI). FAS-XAI supports interpretable, data-driven decision-making by combining three key components: fuzzy clustering to uncover latent behavioral profiles under ambiguity, supervised prediction models to estimate decision outcomes, and expert-guided interpretation to contextualize results and enhance transparency. The framework ensures both global and local interpretability through SHAP, LIME, and ELI5, placing human reasoning and transparency at the center of intelligent decision systems. To demonstrate its applicability, FAS-XAI is applied to a real-world B2B customer service dataset from a global ERP software distributor. Customer engagement is modeled using the RFID approach (Recency, Frequency, Importance, Duration), with Fuzzy C-Means employed to identify overlapping customer profiles and XGBoost models predicting attrition risk with explainable outputs. This case study illustrates the coherence, interpretability, and operational value of the FAS-XAI methodology in managing customer relationships and supporting strategic decision-making. Finally, the study reflects on additional applications across education, physics, and industry, positioning FAS-XAI as a general-purpose, human-centered framework for transparent, explainable, and adaptive decision-making across domains.
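
As an illustration of the RFID representation the framework is built on, the pandas sketch below derives Recency, Frequency, Importance, and Duration features from a toy ticket log; the column names and data are assumptions, not the ERP distributor's dataset.

```python
# Minimal sketch: deriving RFID (Recency, Frequency, Importance, Duration)
# engagement features from a customer-service ticket log, the representation
# FAS-XAI clusters and predicts on. Column names and data are illustrative.
import pandas as pd

tickets = pd.DataFrame({
    "customer": ["A", "A", "B", "B", "B", "C"],
    "opened": pd.to_datetime(["2025-01-05", "2025-03-10", "2025-02-01",
                              "2025-02-20", "2025-04-02", "2024-11-15"]),
    "closed": pd.to_datetime(["2025-01-07", "2025-03-12", "2025-02-03",
                              "2025-02-25", "2025-04-03", "2024-11-20"]),
    "importance": [2, 3, 1, 2, 3, 1],   # e.g. ticket priority level
})
now = pd.Timestamp("2025-05-01")
tickets["resolution_days"] = (tickets["closed"] - tickets["opened"]).dt.days

rfid = tickets.groupby("customer").agg(
    recency=("opened", lambda s: (now - s.max()).days),  # days since last ticket
    frequency=("opened", "size"),                        # number of tickets
    importance=("importance", "mean"),                   # mean priority
    duration=("resolution_days", "mean"),                # mean resolution time
)
print(rfid)
# These four columns are what Fuzzy C-Means profiles and XGBoost consumes,
# with SHAP/LIME/ELI5 explaining the attrition predictions downstream.
```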

27 pages, 3722 KB  
Article
Integrating Exploratory Data Analysis and Explainable AI into Astronomy Education: A Fuzzy Approach to Data-Literate Learning
by Gabriel Marín Díaz
Educ. Sci. 2025, 15(12), 1688; https://doi.org/10.3390/educsci15121688 - 15 Dec 2025
Viewed by 457
Abstract
Astronomy provides an exceptional context for developing data literacy, critical thinking, and computational skills in education. This paper presents a project-based learning (PBL) framework that integrates exploratory data analysis (EDA), fuzzy logic, and explainable artificial intelligence (XAI) to teach students how to extract and interpret scientific knowledge from real astronomical data. Using open-access resources such as NASA’s JPL Horizons and ESA’s Gaia DR3, together with Python libraries like Astroquery and Plotly, learners retrieve, process, and visualize dynamic datasets of comets, asteroids, and stars. The methodology follows the full data science pipeline, from acquisition and preprocessing to modeling and interpretation, culminating in the application of the FAS-XAI framework (Fuzzy-Adaptive System for Explainable AI) for pattern discovery and interpretability. Through this approach, students can reproduce astronomical analyses and understand how data-driven methods reveal underlying physical relationships, such as orbital structures and stellar classifications. The results demonstrate that combining EDA with fuzzy clustering and explainable models promotes deeper conceptual understanding and analytical reasoning. From an educational perspective, this experience highlights how inquiry-based and computationally rich activities can bridge the gap between theoretical astronomy and data science, empowering students to see the Universe as a laboratory for exploration, reasoning, and discovery. This framework thus provides an effective model for incorporating artificial intelligence, open data, and reproducible research practices into STEM education.
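
The data-acquisition step condenses well into a short example. The sketch below queries JPL Horizons through Astroquery and plots an orbit with Plotly; the target (Ceres) and date range are illustrative choices, not the course materials.

```python
# Minimal sketch of the acquisition step the paper describes: query JPL
# Horizons through Astroquery and visualize an orbit with Plotly. The target
# (Ceres) and date range are illustrative, not the course's actual exercises.
import plotly.graph_objects as go
from astroquery.jplhorizons import Horizons

# Heliocentric state vectors for asteroid (1) Ceres, sampled every 10 days.
obj = Horizons(id="1", id_type="smallbody",
               location="500@10",   # 500@10 = Sun-centered origin
               epochs={"start": "2025-01-01", "stop": "2026-01-01", "step": "10d"})
vec = obj.vectors()                 # astropy Table with x, y, z columns in au

fig = go.Figure(data=go.Scatter3d(
    x=vec["x"], y=vec["y"], z=vec["z"],
    mode="lines+markers", marker=dict(size=2), name="Ceres",
))
fig.update_layout(scene=dict(xaxis_title="x (au)", yaxis_title="y (au)",
                             zaxis_title="z (au)"))
fig.show()
# Downstream, tables like this feed the EDA -> fuzzy clustering -> FAS-XAI
# pipeline outlined in the abstract.
```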

48 pages, 5616 KB  
Review
Recent Developments in Pharmaceutical Spray Drying: Modeling, Process Optimization, and Emerging Trends with Machine Learning
by Waasif Wahab, Raya Alshamsi, Bouta Alharsousi, Manar Alnuaimi, Zaina Alhammadi and Belal Al-Zaitone
Pharmaceutics 2025, 17(12), 1605; https://doi.org/10.3390/pharmaceutics17121605 - 13 Dec 2025
Viewed by 1164
Abstract
Spray drying techniques are widely used in the pharmaceutical industry to produce fine drug powders with different properties depending on the route of administration. Process parameters play a vital role in the critical quality attributes of the final product. This review highlights the progress and challenges in modeling the spray drying process, with a focus on pharmaceutical applications. Computational fluid dynamics (CFD) is a well-established method for modeling and numerically simulating spray drying processes, but it has notable limitations, including high computational costs, the need for experimental validation, and limited accuracy under complex spray drying conditions. Machine learning (ML) models have recently emerged as integral tools for modeling and optimizing the spray drying process. Despite promising accuracy, ML models depend on high-quality data and may fail to predict the influence of new formulations or process parameters on the properties of the dried powder. This review outlines the shortcomings of CFD modeling of the spray drying process and discusses hybrid models combining ML and CFD, along with emerging techniques such as the digital twin approach, transfer learning, and explainable AI (XAI). ML is positioned as an emerging approach that can assist the spray drying process, most importantly in pharmaceutical spray drying.
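
The ML-surrogate idea the review discusses can be illustrated with a toy regressor mapping process parameters to a critical quality attribute. In the sketch below, the parameter names and synthetic training data are placeholders standing in for CFD runs or experiments, not real spray drying data.

```python
# Minimal sketch of the ML-surrogate idea: learn a cheap mapping from
# spray-drying process parameters to a critical quality attribute (here,
# particle size). The synthetic data stands in for CFD runs or experiments;
# the parameter names and underlying relation are placeholders, NOT real data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 400
X = np.column_stack([
    rng.uniform(120, 220, n),   # inlet air temperature (deg C)
    rng.uniform(2, 12, n),      # feed rate (mL/min)
    rng.uniform(0.5, 2.5, n),   # atomization pressure (bar)
])
# Placeholder response: a smooth function plus noise, not a physical model.
y = 5 + 0.02 * X[:, 0] - 0.3 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0, 0.2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", surrogate.score(X_te, y_te))

# Once trained, the surrogate answers "what-if" queries in milliseconds where
# a CFD run would take hours, which is what makes hybrid ML/CFD loops viable.
print("predicted size (um):", surrogate.predict([[180.0, 6.0, 1.2]]))
```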

10 pages, 488 KB  
Proceeding Paper
Enhancing Critical Industrial Processes with Artificial Intelligence Models
by Karim Amzil, Rajaa Saidi and Walid Cherif
Eng. Proc. 2025, 112(1), 75; https://doi.org/10.3390/engproc2025112075 - 8 Dec 2025
Viewed by 394
Abstract
This review explores the deployment of Artificial Intelligence (AI) technologies to augment key industry processes in the new paradigm of Industry 5.0. Based on a handpicked collection of 35 peer-reviewed articles and leading resources, the study integrates the latest breakthroughs in Machine Learning (ML), Deep Learning (DL), Reinforcement Learning (RL), and Federated Learning (FL) with their applications in predictive maintenance, process planning, real-time monitoring, and operational excellence. The results emphasize AI’s central role in making manufacturing smarter, minimizing system downtime, and facilitating information-driven decision-making in industries such as aerospace, energy, and intelligent manufacturing. Yet, the review also highlights significant challenges, ranging from data heterogeneity to model interpretability, security risks, and the ethics of automation. Solutions in the making, including Explainable AI (XAI), privacy-enhancing collaborative models, and enhanced cybersecurity protocols, are postulated to be the key drivers for the development of dependable and resilient industrial AI systems. The study concludes by proposing directions for further research and practice to secure the safe, transparent, and human-centered deployment of AI in industrial settings.

32 pages, 3384 KB  
Review
A Survey of the Application of Explainable Artificial Intelligence in Biomedical Informatics
by Hassan Eshkiki, Farinaz Tanhaei, Fabio Caraffini and Benjamin Mora
Appl. Sci. 2025, 15(24), 12934; https://doi.org/10.3390/app152412934 - 8 Dec 2025
Viewed by 1034
Abstract
This review investigates the application of Explainable Artificial Intelligence (XAI) in biomedical informatics, encompassing domains such as medical imaging, genomics, and electronic health records. Through a systematic analysis of 43 peer-reviewed articles, we examine current trends, as well as the strengths and limitations of methodologies currently used in real-world healthcare settings. Our findings highlight a growing interest in XAI, particularly in medical imaging, yet reveal persistent challenges in clinical adoption, including issues of trust, interpretability, and integration into decision-making workflows. We identify critical gaps in existing approaches and underscore the need for more robust, human-centred, and intrinsically interpretable models, with only 44% of the papers studied proposing human-centred validations. Furthermore, we argue that fairness and accountability, which are key to the acceptance of AI in clinical practice, can be supported by the use of post hoc tools for identifying potential biases but ultimately require the implementation of complementary fairness-aware or causal approaches alongside evaluation frameworks that prioritise clinical relevance and user trust. This review provides a foundation for advancing XAI research on the development of more transparent, equitable, and clinically meaningful AI systems for use in healthcare.
(This article belongs to the Special Issue Application of Artificial Intelligence in Biomedical Informatics)

26 pages, 3504 KB  
Review
The Evolution of Artificial Intelligence in Ocular Toxoplasmosis Detection: A Scoping Review on Diagnostic Models, Data Challenges, and Future Directions
by Dodit Suprianto, Loeki Enggar Fitri, Ovi Sofia, Akhmad Sabarudin, Wayan Firdaus Mahmudy, Muhammad Hatta Prabowo and Werasak Surareungchai
Infect. Dis. Rep. 2025, 17(6), 148; https://doi.org/10.3390/idr17060148 - 8 Dec 2025
Viewed by 517
Abstract
Ocular Toxoplasmosis (OT), a leading cause of infectious posterior uveitis, presents significant diagnostic challenges in atypical cases due to phenotypic overlap with other retinochoroiditides and a reliance on expert interpretation of multimodal imaging. This scoping review systematically maps the burgeoning application of artificial intelligence (AI), particularly deep learning, in automating OT diagnosis. We synthesized 22 studies to characterize the current evidence, data landscape, and clinical translation readiness. Findings reveal a field in its nascent yet rapidly accelerating phase, dominated by convolutional neural networks (CNNs) applied to fundus photography for binary classification tasks, often reporting high accuracy (87–99.2%). However, development is critically constrained by small, imbalanced, single-center datasets, a near-universal lack of external validation, and insufficient explainable AI (XAI), creating a significant gap between technical promise and clinical utility. While AI demonstrates strong potential to standardize diagnosis and reduce subjectivity, its path to integration is hampered by over-reliance on internal validation, the “black box” nature of models, and an absence of implementation strategies. Future progress hinges on collaborative multi-center data curation, mandatory external and prospective validation, the integration of XAI for transparency, and a focused shift towards developing AI tools that assist in the complex differential diagnosis of posterior uveitis, ultimately bridging the translational chasm to clinical practice.

43 pages, 7699 KB  
Review
Unveiling the Algorithm: The Role of Explainable Artificial Intelligence in Modern Surgery
by Sara Lopes, Miguel Mascarenhas, João Fonseca, Maria Gabriela O. Fernandes and Adelino F. Leite-Moreira
Healthcare 2025, 13(24), 3208; https://doi.org/10.3390/healthcare13243208 - 8 Dec 2025
Viewed by 931
Abstract
Artificial Intelligence (AI) is rapidly transforming surgical care by enabling more accurate diagnosis and risk prediction, personalized decision-making, real-time intraoperative support, and postoperative management. Ongoing trends such as multi-task learning, real-time integration, and clinician-centered design suggest AI is maturing into a safe, pragmatic asset in surgical care. Yet, significant challenges, such as the complexity and opacity of many AI models (particularly deep learning), transparency, bias, data sharing, and equitable deployment, must be overcome to achieve clinical trust, ethical use, and regulatory approval of AI algorithms in healthcare. Explainable Artificial Intelligence (XAI) is an emerging field that plays an important role in bridging the gap between algorithmic power and clinical use as surgery becomes increasingly data-driven. The authors reviewed current applications of XAI in the context of surgery—preoperative risk assessment, surgical planning, intraoperative guidance, and postoperative monitoring—and highlighted the absence of these mechanisms in Generative AI (e.g., ChatGPT). XAI will allow surgeons to interpret, validate, and trust AI tools. XAI applied in surgery is not a luxury: it must be a prerequisite for responsible innovation. Model bias, overfitting, and user interface design are key challenges that need to be addressed and will be explored in this review to achieve the integration of XAI into the surgical field. Unveiling the algorithm is the first step toward a safe, accountable, transparent, and human-centered surgical AI.
(This article belongs to the Section Artificial Intelligence in Healthcare)

27 pages, 6664 KB  
Article
Advancing Multi-Label Tomato Leaf Disease Identification Using Vision Transformer and EfficientNet with Explainable AI Techniques
by Md. Nurullah, Rania Hodhod, Hyrum Carroll and Yi Zhou
Electronics 2025, 14(23), 4762; https://doi.org/10.3390/electronics14234762 - 3 Dec 2025
Viewed by 649
Abstract
Plant diseases pose a significant threat to global food security, affecting crop yield, quality, and overall agricultural productivity. Traditionally, diagnosing plant diseases has relied on time-consuming visual inspections by experts, which can often lead to errors. Machine learning (ML) and artificial intelligence (AI), particularly Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs), offer a faster, automated alternative for identifying plant diseases through leaf image analysis. However, these models are often criticized for their “black box” nature, limiting trust in their predictions due to a lack of transparency. Our findings show that incorporating Explainable AI (XAI) techniques, such as Grad-CAM, Integrated Gradients, and LIME, significantly improves model interpretability, making it easier for practitioners to identify the underlying symptoms of plant diseases. This study not only contributes to the field of plant disease detection but also offers a novel perspective on improving AI transparency in real-world agricultural applications through the use of XAI techniques. With training accuracies of 100.00% for ViT, 96.88% for EfficientNetB7, 93.75% for EfficientNetB0, and 87.50% for ResNet50, and corresponding validation accuracies of 96.39% for ViT, 86.98% for EfficientNetB7, and 82.00% for EfficientNetB0, our proposed models outperform earlier research on the same dataset. This demonstrates a notable improvement in model performance while maintaining transparency and trustworthiness through interpretable and reliable decision-making.
(This article belongs to the Special Issue Artificial Intelligence and Image Processing in Smart Agriculture)
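
Of the XAI techniques named, Grad-CAM is the most mechanical to reproduce. The sketch below implements it with forward/backward hooks on a torchvision ResNet50 (one of the compared backbones); the random weights and input stand in for a trained model and a tomato-leaf image, and this is not the authors' exact setup.

```python
# Minimal sketch of Grad-CAM, one of the XAI techniques the paper applies, on
# a torchvision ResNet50. Hooks capture the last conv block's activations and
# gradients; random weights and input are placeholders for a trained model
# and a tomato-leaf image.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights=None).eval()   # placeholder: untrained weights
acts, grads = {}, {}

layer = model.layer4[-1]                # last residual block
layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)         # placeholder leaf image
logits = model(x)
logits[0, logits.argmax()].backward()   # gradient of the top-class score

# Grad-CAM: weight each activation map by its spatially averaged gradient,
# sum over channels, keep positive evidence, upsample, and normalize.
weights = grads["g"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
cam = F.relu((weights * acts["a"]).sum(dim=1))        # (1, H', W')
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224),
                    mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)   # heatmap highlighting the regions driving the prediction
```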

46 pages, 2312 KB  
Article
A Multi-Criteria Decision-Making Approach for the Selection of Explainable AI Methods
by Miroslava Matejová and Ján Paralič
Mach. Learn. Knowl. Extr. 2025, 7(4), 158; https://doi.org/10.3390/make7040158 - 1 Dec 2025
Viewed by 1341
Abstract
The growing trend of using artificial intelligence models in many areas increases the need for a proper understanding of their functioning and decision-making. Although these models achieve high predictive accuracy, their lack of transparency poses major obstacles to trust. Explainable artificial intelligence (XAI) has emerged as a key discipline that offers a wide range of methods to explain the decisions of models. Selecting the most appropriate XAI method for a given application is a non-trivial problem that requires careful consideration of the nature of the method and other aspects. This paper proposes a systematic approach to solving this problem using multi-criteria decision-making (MCDM) techniques: ARAS, CODAS, EDAS, MABAC, MARCOS, PROMETHEE II, TOPSIS, VIKOR, WASPAS, and WSM. The resulting score is an aggregation of the results of these methods using Borda Count. We present a framework that integrates objective and subjective criteria for selecting XAI methods. The proposed methodology includes two main phases. In the first phase, methods that meet the specified parameters are filtered, and in the second phase, the most suitable alternative is selected based on the weights using multi-criteria decision-making and sensitivity analysis. Metric weights can be entered directly, derived from pairwise comparisons, or calculated objectively using the CRITIC method. The framework is demonstrated on concrete use cases where we compare several popular XAI methods on tasks in different domains. The results show that the proposed approach provides a transparent and robust mechanism for objectively selecting the most appropriate XAI method, thereby helping researchers and practitioners make more informed decisions when deploying explainable AI systems. Sensitivity analysis confirmed the robustness of our XAI method selection: LIME dominated 98.5% of tests in the first use case, and Tree SHAP dominated 94.3% in the second.
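
The aggregation scheme can be illustrated compactly: score the candidate XAI methods with two of the listed MCDM techniques (TOPSIS and WSM), then combine the rankings by Borda count. The criteria matrix, weights, and method list below are illustrative placeholders, not the paper's use cases.

```python
# Minimal sketch of the paper's aggregation idea: score XAI methods with
# several MCDM techniques, then combine the per-method rankings by Borda
# count. Only TOPSIS and WSM are implemented here; the criteria matrix,
# weights, and method names are illustrative placeholders.
import numpy as np

# Rows: candidate XAI methods; columns: criteria (e.g. fidelity, stability,
# runtime). 'benefit' marks criteria where higher is better.
methods = ["LIME", "Kernel SHAP", "Tree SHAP", "Anchors"]
M = np.array([[0.80, 0.60, 0.30],
              [0.85, 0.70, 0.10],
              [0.83, 0.75, 0.90],
              [0.70, 0.65, 0.50]])
w = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, True, True])

def topsis(M, w, benefit):
    Z = M / np.linalg.norm(M, axis=0) * w          # weighted vector normalization
    ideal = np.where(benefit, Z.max(0), Z.min(0))
    anti = np.where(benefit, Z.min(0), Z.max(0))
    d_pos = np.linalg.norm(Z - ideal, axis=1)
    d_neg = np.linalg.norm(Z - anti, axis=1)
    return d_neg / (d_pos + d_neg)                 # closeness to the ideal

def wsm(M, w, benefit):
    N = np.where(benefit, M / M.max(0), M.min(0) / M)  # simple normalization
    return N @ w

scores = [topsis(M, w, benefit), wsm(M, w, benefit)]
ranks = [np.argsort(-s) for s in scores]           # best alternative first

# Borda count: an alternative earns (n - position) points in each ranking.
n = len(methods)
borda = np.zeros(n)
for r in ranks:
    for pts, idx in zip(range(n, 0, -1), r):
        borda[idx] += pts
print(dict(zip(methods, borda)))                   # aggregate score per method
```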
