Search Results (398)

Search Parameters:
Keywords = explainable AI (XAI)

23 pages, 3099 KiB  
Article
Explainable Multi-Scale CAM Attention for Interpretable Cloud Segmentation in Astro-Meteorological Applications
by Qing Xu, Zichen Zhang, Guanfang Wang and Yunjie Chen
Appl. Sci. 2025, 15(15), 8555; https://doi.org/10.3390/app15158555 (registering DOI) - 1 Aug 2025
Viewed by 130
Abstract
Accurate cloud segmentation is critical for astronomical observations and solar forecasting. However, traditional threshold- and texture-based methods suffer from limited accuracy (65–80%) under complex conditions such as thin cirrus or twilight transitions. Although U-Net-based deep-learning segmentation methods effectively capture low-level and high-level features and have achieved significant gains in accuracy, current methods still lack interpretability and multi-scale feature integration and often produce fuzzy boundaries or fragmented predictions. In this paper, we propose multi-scale CAM, an explainable AI (XAI) framework that integrates class activation mapping (CAM) with hierarchical feature fusion to quantify pixel-level attention across hierarchical features, thereby enhancing the model’s discriminative capability. To achieve precise segmentation, we integrate CAM into an improved U-Net architecture, incorporating multi-scale CAM attention for adaptive feature fusion and dilated residual modules for large-scale context extraction. Experimental results on the SWINSEG dataset demonstrate that our method outperforms existing state-of-the-art methods, improving recall by 3.06%, F1 score by 1.49%, and MIoU by 2.21% over the best baseline. The proposed framework balances accuracy, interpretability, and computational efficiency, offering a trustworthy solution for cloud detection systems in operational settings. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence Technology and Its Applications)
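
The abstract above describes fusing class activation maps across encoder scales into a pixel-level attention signal. As a rough, hypothetical sketch of that idea (not the authors' implementation), the code below computes a CAM per feature scale from stand-in feature maps and class weights, upsamples each map, and averages them into a single attention map; all tensor shapes and names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def cam_attention(features, class_weights):
    """Class activation map (CAM) for one feature scale.

    features:      (B, C, H, W) convolutional features
    class_weights: (C,) weights of the target class in the final linear layer
    Returns a (B, 1, H, W) attention map normalised to [0, 1].
    """
    cam = torch.einsum("bchw,c->bhw", features, class_weights).unsqueeze(1)
    cam = F.relu(cam)
    cam = cam - cam.amin(dim=(2, 3), keepdim=True)
    return cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-6)

def multiscale_cam_fusion(feature_maps, class_weights, out_size):
    """Fuse CAMs computed at several encoder scales into one attention map."""
    cams = [F.interpolate(cam_attention(f, w), size=out_size,
                          mode="bilinear", align_corners=False)
            for f, w in zip(feature_maps, class_weights)]
    return torch.stack(cams, dim=0).mean(dim=0)

# Toy example: two hypothetical scales of a U-Net encoder.
f1, f2 = torch.randn(2, 64, 32, 32), torch.randn(2, 128, 16, 16)
w1, w2 = torch.randn(64), torch.randn(128)
attn = multiscale_cam_fusion([f1, f2], [w1, w2], out_size=(64, 64))
print(attn.shape)  # torch.Size([2, 1, 64, 64])
```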

28 pages, 6624 KiB  
Article
YoloMal-XAI: Interpretable Android Malware Classification Using RGB Images and YOLO11
by Chaymae El Youssofi and Khalid Chougdali
J. Cybersecur. Priv. 2025, 5(3), 52; https://doi.org/10.3390/jcp5030052 (registering DOI) - 1 Aug 2025
Viewed by 130
Abstract
As Android malware grows increasingly sophisticated, traditional detection methods struggle to keep pace, creating an urgent need for robust, interpretable, and real-time solutions to safeguard mobile ecosystems. This study introduces YoloMal-XAI, a novel deep learning framework that transforms Android application files into RGB image representations by mapping DEX (Dalvik Executable), Manifest.xml, and Resources.arsc files to distinct color channels. Evaluated on the CICMalDroid2020 dataset using YOLO11 pretrained classification models, YoloMal-XAI achieves 99.87% accuracy in binary classification and 99.56% in multi-class classification (Adware, Banking, Riskware, SMS, and Benign). Compared to ResNet-50, GoogLeNet, and MobileNetV2, YOLO11 offers competitive accuracy with at least 7× faster training over 100 epochs. Against YOLOv8, YOLO11 achieves comparable or superior accuracy while reducing training time by up to 3.5×. Cross-corpus validation using Drebin and CICAndMal2017 further confirms the model’s generalization capability on previously unseen malware. An ablation study highlights the value of integrating DEX, Manifest, and Resources components, with the full RGB configuration consistently delivering the best performance. Explainable AI (XAI) techniques—Grad-CAM, Grad-CAM++, Eigen-CAM, and HiRes-CAM—are employed to interpret model decisions, revealing the DEX segment as the most influential component. These results establish YoloMal-XAI as a scalable, efficient, and interpretable framework for Android malware detection, with strong potential for future deployment on resource-constrained mobile devices. Full article
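
The RGB construction described above, mapping DEX, Manifest.xml, and Resources.arsc bytes to separate color channels, can be sketched minimally as follows. This is a hypothetical reconstruction that simply pads or truncates each component to a fixed-size byte plane; the authors' exact encoding and image size may differ.

```python
import numpy as np
from PIL import Image

def bytes_to_channel(data: bytes, side: int = 256) -> np.ndarray:
    """Pad or truncate a byte stream and reshape it into a (side, side) uint8 plane."""
    buf = np.frombuffer(data, dtype=np.uint8)[: side * side]
    buf = np.pad(buf, (0, side * side - buf.size))
    return buf.reshape(side, side)

def apk_components_to_rgb(dex: bytes, manifest: bytes, resources: bytes,
                          side: int = 256) -> Image.Image:
    """Map DEX, Manifest.xml, and Resources.arsc bytes to the R, G, and B channels."""
    rgb = np.stack([bytes_to_channel(dex, side),
                    bytes_to_channel(manifest, side),
                    bytes_to_channel(resources, side)], axis=-1)
    return Image.fromarray(rgb, mode="RGB")

# Toy example with random byte streams standing in for the extracted APK components.
rng = np.random.default_rng(0)
img = apk_components_to_rgb(rng.bytes(70000), rng.bytes(4000), rng.bytes(20000))
img.save("sample_apk.png")
```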

20 pages, 314 KiB  
Review
AI and Machine Learning in Transplantation
by Kavyesh Vivek and Vassilios Papalois
Transplantology 2025, 6(3), 23; https://doi.org/10.3390/transplantology6030023 - 30 Jul 2025
Viewed by 210
Abstract
Artificial Intelligence (AI) and machine learning (ML) are increasingly being applied across the transplantation care pathway, supporting tasks such as donor–recipient matching, immunological risk stratification, early detection of graft dysfunction, and optimisation of immunosuppressive therapy. This review provides a structured synthesis of current AI applications in transplantation, with a focus on underrepresented areas including real-time graft viability assessment, adaptive immunosuppression, and cross-organ immune modelling. The review also examines the translational infrastructure needed for clinical implementation, such as federated learning, explainable AI (XAI), and data governance. Evidence suggests that AI-based models can improve predictive accuracy and clinical decision support when compared to conventional approaches. However, limitations related to data quality, algorithmic bias, model transparency, and integration into clinical workflows remain. Addressing these challenges through rigorous validation, ethical oversight, and interdisciplinary collaboration will be necessary to support the safe and effective use of AI in transplant medicine. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modern Transplantation)
13 pages, 3685 KiB  
Article
A Controlled Variation Approach for Example-Based Explainable AI in Colorectal Polyp Classification
by Miguel Filipe Fontes, Alexandre Henrique Neto, João Dallyson Almeida and António Trigueiros Cunha
Appl. Sci. 2025, 15(15), 8467; https://doi.org/10.3390/app15158467 (registering DOI) - 30 Jul 2025
Viewed by 164
Abstract
Medical imaging is vital for diagnosing and treating colorectal cancer (CRC), a leading cause of mortality. Classifying colorectal polyps and CRC precursors remains challenging due to operator variability and expertise dependence. Deep learning (DL) models show promise in polyp classification but face adoption barriers due to their ‘black box’ nature, limiting interpretability. This study presents an example-based explainable artificial intelligence (XAI) approach using Pix2Pix to generate synthetic polyp images with controlled size variations and LIME to explain classifier predictions visually. EfficientNet and Vision Transformer (ViT) were trained on datasets of real and synthetic images, achieving strong baseline accuracies of 94% and 96%, respectively. Image quality was assessed using PSNR (18.04), SSIM (0.64), and FID (123.32), while classifier robustness was evaluated across polyp sizes. Results show that Pix2Pix effectively controls image attributes like polyp size despite limitations in visual fidelity. LIME integration revealed classifier vulnerabilities, underscoring the value of complementary XAI techniques. This enhances the interpretability of DL models and deepens understanding of their behaviour. The findings contribute to developing explainable AI tools for polyp classification and CRC diagnosis. Future work will improve synthetic image quality and refine XAI methodologies for broader clinical use. Full article
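
The image-quality figures quoted above (PSNR 18.04, SSIM 0.64) are standard full-reference metrics. A minimal sketch of computing them with scikit-image on stand-in grayscale arrays is shown below; FID additionally requires a pretrained Inception network and is omitted here.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Stand-in images: a "real" frame and a perturbed copy playing the role of a Pix2Pix output.
rng = np.random.default_rng(42)
real = rng.random((256, 256))
synthetic = np.clip(real + rng.normal(0, 0.1, real.shape), 0, 1)

psnr = peak_signal_noise_ratio(real, synthetic, data_range=1.0)
ssim = structural_similarity(real, synthetic, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```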

34 pages, 3535 KiB  
Article
Hybrid Optimization and Explainable Deep Learning for Breast Cancer Detection
by Maral A. Mustafa, Osman Ayhan Erdem and Esra Söğüt
Appl. Sci. 2025, 15(15), 8448; https://doi.org/10.3390/app15158448 - 30 Jul 2025
Viewed by 221
Abstract
Breast cancer remains one of the leading causes of death among women worldwide, underscoring the need for novel, interpretable diagnostic models. This work presents an explainable deep learning model that combines the lightweight MobileNet architecture with two bio-inspired optimization algorithms, the Firefly Algorithm (FLA) and the Dingo Optimization Algorithm (DOA), to improve classification accuracy and model convergence. The proposed model achieved strong results: the DOA-optimized MobileNet reached the highest accuracy of 98.96 percent on the fused test set, while the FLA-optimized MobileNet reached 98.06 percent and 95.44 percent accuracy on the mammographic and ultrasound test sets, respectively. Beyond these quantitative results, Grad-CAM visualizations showed clinically consistent localization of the lesions, strengthening the model’s interpretability and diagnostic reliability. These results show that lightweight, compact CNNs can support high-performance, multimodal breast cancer diagnosis. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
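
Grad-CAM, used above to localize lesions, weights each convolutional feature map by the average gradient of the class score and sums the weighted maps. A minimal sketch with a stock torchvision MobileNetV2 and a random input standing in for a mammogram crop follows; it is not the authors' fine-tuned, FLA/DOA-optimized model, and weights=None is used only to keep the example offline.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.mobilenet_v2(weights=None).eval()   # use pretrained/fine-tuned weights in practice

x = torch.randn(1, 3, 224, 224)                    # stand-in for a preprocessed mammogram crop
feats = model.features(x)                          # (1, 1280, 7, 7) convolutional feature maps
feats.retain_grad()                                # keep gradients for this intermediate tensor
logits = model.classifier(feats.mean(dim=(2, 3)))  # global-average pool + classifier head
logits[0].max().backward()                         # backpropagate the top class score

# Grad-CAM: weight each feature map by its average gradient, ReLU, upsample, normalise.
weights = feats.grad.mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * feats).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=(224, 224), mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-6)
print(cam.shape)  # torch.Size([1, 1, 224, 224])
```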

24 pages, 6025 KiB  
Article
Uniform Manifold Approximation and Projection Filtering and Explainable Artificial Intelligence to Detect Adversarial Machine Learning
by Achmed Samuel Koroma, Sara Narteni, Enrico Cambiaso and Maurizio Mongelli
Information 2025, 16(8), 647; https://doi.org/10.3390/info16080647 - 29 Jul 2025
Viewed by 276
Abstract
Adversarial machine learning exploits the vulnerabilities of artificial intelligence (AI) models by inducing malicious distortion in input data. Starting with the effect of adversarial methods on the well-known MNIST and CIFAR-10 open datasets, this paper investigates the ability of Uniform Manifold Approximation and Projection (UMAP) to provide useful representations of both legitimate and malicious images and analyzes the attacks’ behavior under various conditions. By enabling the extraction of decision rules and the ranking of important features from classifiers such as decision trees, eXplainable AI (XAI) achieves zero false positives and negatives in detection through very simple if-then rules over UMAP variables. Several examples are reported to highlight the attacks’ behavior. The data availability statement details all code and data, which are publicly available to support reproducibility. Full article
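
The detection recipe above pairs a UMAP embedding with rule-extracting classifiers such as decision trees. A toy sketch follows, using heavily noised digits as a stand-in for adversarially distorted images rather than the paper's actual attack methods; it shows how a shallow tree over the two UMAP coordinates yields readable if-then rules.

```python
import numpy as np
import umap
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in data: clean digits vs. the same digits with heavy noise playing the
# role of adversarially distorted images (label 0 = legitimate, 1 = malicious).
X, _ = load_digits(return_X_y=True)
rng = np.random.default_rng(0)
X_adv = np.clip(X + rng.normal(0, 8, X.shape), 0, 16)
X_all = np.vstack([X, X_adv])
y_all = np.hstack([np.zeros(len(X)), np.ones(len(X_adv))])

# Low-dimensional UMAP representation of both populations.
emb = umap.UMAP(n_components=2, random_state=0).fit_transform(X_all)

# A shallow decision tree over the UMAP variables yields readable if-then rules.
X_tr, X_te, y_tr, y_te = train_test_split(emb, y_all, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", tree.score(X_te, y_te))
print(export_text(tree, feature_names=["umap_1", "umap_2"]))
```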

25 pages, 1319 KiB  
Article
Beyond Performance: Explaining and Ensuring Fairness in Student Academic Performance Prediction with Machine Learning
by Kadir Kesgin, Salih Kiraz, Selahattin Kosunalp and Bozhana Stoycheva
Appl. Sci. 2025, 15(15), 8409; https://doi.org/10.3390/app15158409 - 29 Jul 2025
Viewed by 185
Abstract
This study addresses fairness in machine learning for student academic performance prediction using the UCI Student Performance dataset. We comparatively evaluate logistic regression, Random Forest, and XGBoost, integrating the Synthetic Minority Oversampling Technique (SMOTE) to address class imbalance and 5-fold cross-validation for robust model training. A comprehensive fairness analysis is conducted, considering sensitive attributes such as gender, school type, and socioeconomic factors, including parental education (Medu and Fedu), cohabitation status (Pstatus), and family size (famsize). Using the AIF360 library, we compute the demographic parity difference (DP) and Equalized Odds Difference (EO) to assess model biases across diverse subgroups. Our results demonstrate that XGBoost achieves high predictive performance (accuracy: 0.789; F1 score: 0.803) while maintaining low bias for socioeconomic attributes, offering a balanced approach to fairness and performance. A sensitivity analysis of bias mitigation strategies further enhances the study, advancing equitable artificial intelligence in education by incorporating socially relevant factors. Full article
(This article belongs to the Special Issue Challenges and Trends in Technology-Enhanced Learning)
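
The study above reports demographic parity and equalized-odds differences computed with AIF360. As an independent, self-contained sketch (not the AIF360 API), the two quantities can be written in plain NumPy for a binary classifier and a binary sensitive attribute; the predictions and group labels below are random stand-ins.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return rates[1] - rates[0]

def equalized_odds_difference(y_true, y_pred, group):
    """Largest gap in TPR or FPR between the two groups."""
    gaps = []
    for label in (1, 0):  # label 1 gives the TPR gap, label 0 the FPR gap
        rates = [y_pred[(group == g) & (y_true == label)].mean() for g in (0, 1)]
        gaps.append(abs(rates[1] - rates[0]))
    return max(gaps)

# Toy predictions for a hypothetical binary sensitive attribute (e.g. school type).
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 500)
y_pred = rng.integers(0, 2, 500)
group = rng.integers(0, 2, 500)
print("DP difference:", demographic_parity_difference(y_pred, group))
print("EO difference:", equalized_odds_difference(y_true, y_pred, group))
```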

18 pages, 1296 KiB  
Article
A Comprehensive Comparison and Evaluation of AI-Powered Healthcare Mobile Applications’ Usability
by Hessah W. Alduhailan, Majed A. Alshamari and Heider A. M. Wahsheh
Healthcare 2025, 13(15), 1829; https://doi.org/10.3390/healthcare13151829 - 26 Jul 2025
Viewed by 440
Abstract
Objectives: Artificial intelligence (AI) symptom-checker apps are proliferating, yet their everyday usability and transparency remain under-examined. This study provides a triangulated evaluation of three widely used AI-powered mHealth apps: ADA, Mediktor, and WebMD. Methods: Five usability experts applied a 13-item AI-specific heuristic checklist. In parallel, thirty lay users (18–65 years) completed five health-scenario tasks on each app, while task success, errors, completion time, and System Usability Scale (SUS) ratings were recorded. A repeated-measures ANOVA followed by paired-sample t-tests was conducted to compare SUS scores across the three applications. Results: The analysis revealed statistically significant differences in usability across the apps. ADA achieved a significantly higher mean SUS score than both Mediktor (p = 0.0004) and WebMD (p < 0.001), while Mediktor also outperformed WebMD (p = 0.0009). Common issues across all apps included vague AI outputs, limited feedback for input errors, and inconsistent navigation. Each application also failed key explainability heuristics, offering no confidence scores or interpretable rationales for AI-generated recommendations. Conclusions: Even highly rated AI mHealth apps display critical gaps in explainability and error handling. Embedding explainable AI (XAI) cues such as confidence indicators, input validation, and transparent justifications can enhance user trust, safety, and overall adoption in real-world healthcare contexts. Full article
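
The statistical workflow described above, a repeated-measures ANOVA on SUS scores followed by pairwise paired-sample t-tests, can be reproduced in outline with statsmodels and SciPy. The scores below are simulated stand-ins for 30 participants rating the three apps, not the study's data.

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

# Hypothetical SUS scores: 30 participants, each rating all three apps.
rng = np.random.default_rng(7)
apps = ["ADA", "Mediktor", "WebMD"]
scores = {"ADA": rng.normal(82, 8, 30),
          "Mediktor": rng.normal(74, 9, 30),
          "WebMD": rng.normal(65, 10, 30)}
long = pd.DataFrame([{"subject": s, "app": a, "sus": scores[a][s]}
                     for s in range(30) for a in apps])

# Repeated-measures ANOVA across the three apps, then pairwise paired t-tests.
print(AnovaRM(long, depvar="sus", subject="subject", within=["app"]).fit().anova_table)
for a, b in [("ADA", "Mediktor"), ("ADA", "WebMD"), ("Mediktor", "WebMD")]:
    t, p = ttest_rel(scores[a], scores[b])
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f}")
```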

26 pages, 2652 KiB  
Article
Predictive Framework for Membrane Fouling in Full-Scale Membrane Bioreactors (MBRs): Integrating AI-Driven Feature Engineering and Explainable AI (XAI)
by Jie Liang, Sangyoup Lee, Xianghao Ren, Yingjie Guo, Jeonghyun Park, Sung-Gwan Park, Ji-Yeon Kim and Moon-Hyun Hwang
Processes 2025, 13(8), 2352; https://doi.org/10.3390/pr13082352 - 24 Jul 2025
Viewed by 319
Abstract
Membrane fouling remains a major challenge in full-scale membrane bioreactor (MBR) systems, reducing operational efficiency and increasing maintenance needs. This study introduces a predictive and analytic framework for membrane fouling by integrating artificial intelligence (AI)-driven feature engineering and explainable AI (XAI) using real-world data from an MBR treating food processing wastewater. The framework refines the target parameter to specific flux (flux/transmembrane pressure (TMP)), incorporates chemical oxygen demand (COD) removal efficiency to reflect biological performance, and applies a moving average function to capture temporal fouling dynamics. Among tested models, CatBoost achieved the highest predictive accuracy (R2 = 0.8374), outperforming traditional statistical and other machine learning models. XAI analysis identified the food-to-microorganism (F/M) ratio and mixed liquor suspended solids (MLSSs) as the most influential variables affecting fouling. This robust and interpretable approach enables proactive fouling prediction and supports informed decision making in practical MBR operations, even with limited data. The methodology establishes a foundation for future integration with real-time monitoring and adaptive control, contributing to more sustainable and efficient membrane-based wastewater treatment operations. However, this study is based on data from a single full-scale MBR treating food processing wastewater and lacks severe fouling or cleaning events, so further validation with diverse datasets is needed to confirm broader applicability. Full article
(This article belongs to the Special Issue Membrane Technologies for Desalination and Wastewater Treatment)
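
A minimal sketch of the modelling steps named in this abstract: a specific-flux target (flux/TMP) smoothed with a moving average, and a CatBoost regressor whose feature importances point to influential operating variables. The synthetic operating data, window length, and feature set below are assumptions standing in for the plant records.

```python
import numpy as np
import pandas as pd
from catboost import CatBoostRegressor
from sklearn.metrics import r2_score

# Synthetic stand-in for daily MBR operating data.
rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "f_m_ratio": rng.normal(0.15, 0.03, n),      # food-to-microorganism ratio
    "mlss": rng.normal(9000, 800, n),            # mixed liquor suspended solids (mg/L)
    "cod_removal": rng.normal(0.92, 0.03, n),    # COD removal efficiency
    "flux": rng.normal(18, 2, n),                # membrane flux
    "tmp": rng.normal(0.25, 0.05, n),            # transmembrane pressure
})
# Target refined to specific flux (flux / TMP), smoothed with a 7-day moving average.
df["specific_flux"] = (df["flux"] / df["tmp"]).rolling(window=7, min_periods=1).mean()

features = ["f_m_ratio", "mlss", "cod_removal"]
train, test = df.iloc[:300], df.iloc[300:]
model = CatBoostRegressor(iterations=300, depth=4, verbose=0, random_seed=0)
model.fit(train[features], train["specific_flux"])
print("R2:", r2_score(test["specific_flux"], model.predict(test[features])))
print(dict(zip(features, model.get_feature_importance())))
```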

30 pages, 11103 KiB  
Article
Histological Image Classification Between Follicular Lymphoma and Reactive Lymphoid Tissue Using Deep Learning and Explainable Artificial Intelligence (XAI)
by Joaquim Carreras, Haruka Ikoma, Yara Yukie Kikuti, Shunsuke Nagase, Atsushi Ito, Makoto Orita, Sakura Tomita, Yuki Tanigaki, Naoya Nakamura and Yohei Masugi
Cancers 2025, 17(15), 2428; https://doi.org/10.3390/cancers17152428 - 22 Jul 2025
Viewed by 195
Abstract
Background/Objectives: The major question that confronts a pathologist when evaluating a lymph node biopsy is whether the process is benign or malignant, and the differential diagnosis between follicular lymphoma and reactive lymphoid tissue can be challenging. Methods: This study designed a convolutional neural network based on ResNet architecture to classify a large series of 221 cases, including 177 follicular lymphoma and 44 reactive lymphoid tissue/lymphoid hyperplasia, which were stained with hematoxylin and eosin (H&E). Explainable artificial intelligence (XAI) methods were used for interpretability. Results: The series included 1,004,509 follicular lymphoma and 490,506 reactive lymphoid tissue image-patches at 224 × 244 × 3, and was partitioned into training (70%), validation (10%), and testing (20%) sets. Performance on the training and validation sets reached an accuracy of 99.81%. In the testing set, the performance metrics achieved an accuracy of 99.80% at the image-patch level for follicular lymphoma. The other performance parameters were precision (99.8%), recall (99.8%), false positive rate (0.35%), specificity (99.7%), and F1 score (99.9%). Interpretability was analyzed using three methods: grad-CAM, image LIME, and occlusion sensitivity. Additionally, hybrid partitioning was performed to avoid information leakage using a patient-level independent validation set that confirmed high classification performance. Conclusions: Narrow artificial intelligence (AI) can perform differential diagnosis between follicular lymphoma and reactive lymphoid tissue, but it is task-specific and operates within limited constraints. The trained ResNet convolutional neural network (CNN) may be used for transfer learning on larger series of cases and lymphoma diagnoses in the future. Full article
(This article belongs to the Special Issue AI-Based Applications in Cancers)
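
The trained CNN is proposed above as a transfer-learning starting point. The sketch below shows only the generic pattern: a stock torchvision ResNet-18 with a two-class head and a single training step on random tensors standing in for H&E image-patches. It is not the authors' architecture or data pipeline, and weights=None keeps the example offline where a pretrained or previously trained backbone would normally be loaded.

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet backbone with a two-class head (follicular lymphoma vs. reactive lymphoid tissue).
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One toy training step on random patches standing in for H&E image-patches.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```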

23 pages, 1458 KiB  
Article
From Meals to Marks: Modeling the Impact of Family Involvement on Reading Performance with Counterfactual Explainable AI
by Myint Swe Khine, Nagla Ali and Othman Abu Khurma
Educ. Sci. 2025, 15(7), 928; https://doi.org/10.3390/educsci15070928 - 21 Jul 2025
Viewed by 270
Abstract
This study investigates the impact of family engagement on student reading achievement in the United Arab Emirates (UAE) using counterfactual explainable artificial intelligence (CXAI) analysis. Drawing data from 24,600 students in the UAE PISA dataset, the analysis employed Gradient Boosting, SHAP (SHapley Additive exPlanations), and counterfactual simulations to model and interpret the influence of ten parental involvement variables. The results identified time spent talking with parents, frequency of family meals, and encouragement to achieve good marks as the strongest predictors of reading performance. Counterfactual analysis revealed that increasing the time spent talking with parents and frequency of family meals from their minimum (1) to maximum (5) levels, while holding other variables constant at their medians, could increase the predicted reading score from the baseline of 358.93 to as high as 448.68, marking an improvement of nearly 90 points. These findings emphasize the educational value of culturally compatible parental behaviors. The study also contributes to methodological advancement by integrating interpretable machine learning with prescriptive insights, demonstrating the potential of XAI for educational policy and intervention design. Implications for educators, policymakers, and families highlight the importance of promoting high-impact family practices to support literacy development. The approach offers a replicable model for leveraging AI to understand and enhance student learning outcomes across diverse contexts. Full article
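
The counterfactual procedure described above holds all predictors at their medians and moves selected parental-involvement items from their minimum (1) to their maximum (5) before comparing predicted scores. It can be sketched with a scikit-learn gradient-boosting model on synthetic Likert-style data; the variable names, effect sizes, and data below are illustrative assumptions, not the PISA dataset.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in: ten 1-5 Likert parental-involvement items and a reading score.
rng = np.random.default_rng(0)
cols = [f"involvement_{i}" for i in range(1, 11)]
X = pd.DataFrame(rng.integers(1, 6, size=(2000, 10)), columns=cols)
y = 350 + 15 * X["involvement_1"] + 10 * X["involvement_2"] + rng.normal(0, 20, 2000)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Counterfactual query: hold every item at its median, then move two target items
# from their minimum (1) to their maximum (5) and compare the predicted scores.
baseline = X.median().to_frame().T
low, high = baseline.copy(), baseline.copy()
low[["involvement_1", "involvement_2"]] = 1
high[["involvement_1", "involvement_2"]] = 5
print("predicted score at minimum:", model.predict(low)[0])
print("predicted score at maximum:", model.predict(high)[0])
```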

40 pages, 1540 KiB  
Review
A Survey on Video Big Data Analytics: Architecture, Technologies, and Open Research Challenges
by Thi-Thu-Trang Do, Quyet-Thang Huynh, Kyungbaek Kim and Van-Quyet Nguyen
Appl. Sci. 2025, 15(14), 8089; https://doi.org/10.3390/app15148089 - 21 Jul 2025
Viewed by 533
Abstract
The exponential growth of video data across domains such as surveillance, transportation, and healthcare has raised critical challenges in scalability, real-time processing, and privacy preservation. While existing studies have addressed individual aspects of Video Big Data Analytics (VBDA), an integrated, up-to-date perspective remains limited. This paper presents a comprehensive survey of system architectures and enabling technologies in VBDA. It categorizes system architectures into four primary types as follows: centralized, cloud-based infrastructures, edge computing, and hybrid cloud–edge. It also analyzes key enabling technologies, including real-time streaming, scalable distributed processing, intelligent AI models, and advanced storage for managing large-scale multimodal video data. In addition, the study provides a functional taxonomy of core video processing tasks, including object detection, anomaly recognition, and semantic retrieval, and maps these tasks to real-world applications. Based on the survey findings, the paper proposes ViMindXAI, a hybrid AI-driven platform that combines edge and cloud orchestration, adaptive storage, and privacy-aware learning to support scalable and trustworthy video analytics. Our analysis in this survey highlights emerging trends such as the shift toward hybrid cloud–edge architectures, the growing importance of explainable AI and federated learning, and the urgent need for secure and efficient video data management. These findings highlight key directions for designing next-generation VBDA platforms that enhance real-time, data-driven decision-making in domains such as public safety, transportation, and healthcare. These platforms facilitate timely insights, rapid response, and regulatory alignment through scalable and explainable analytics. This work provides a robust conceptual foundation for future research on adaptive and efficient decision-support systems in video-intensive environments. Full article

83 pages, 3818 KiB  
Systematic Review
Explainability and Interpretability in Concept and Data Drift: A Systematic Literature Review
by Daniele Pelosi, Diletta Cacciagrano and Marco Piangerelli
Algorithms 2025, 18(7), 443; https://doi.org/10.3390/a18070443 - 18 Jul 2025
Viewed by 461
Abstract
Explainability and interpretability have emerged as essential considerations in machine learning, particularly as models become more complex and integral to a wide range of applications. In response to increasing concerns over opaque “black-box” solutions, the literature has seen a shift toward two distinct yet often conflated paradigms: explainable AI (XAI), which refers to post hoc techniques that provide external explanations for model predictions, and interpretable AI, which emphasizes models whose internal mechanisms are understandable by design. Meanwhile, the phenomenon of concept and data drift—where models lose relevance due to evolving conditions—demands renewed attention. High-impact events, such as financial crises or natural disasters, have highlighted the need for robust interpretable or explainable models capable of adapting to changing circumstances. Against this backdrop, our systematic review aims to consolidate current research on explainability and interpretability with a focus on concept and data drift. We gather a comprehensive range of proposed models, available datasets, and other technical aspects. By synthesizing these diverse resources into a clear taxonomy, we intend to provide researchers and practitioners with actionable insights and guidance for model selection, implementation, and ongoing evaluation. Ultimately, this work aspires to serve as a practical roadmap for future studies, fostering further advancements in transparent, adaptable machine learning systems that can meet the evolving needs of real-world applications. Full article
(This article belongs to the Special Issue Machine Learning for Pattern Recognition (3rd Edition))
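
As a simple illustration of the data-drift phenomenon the review surveys (not a method drawn from the reviewed literature), a two-sample Kolmogorov-Smirnov test can flag windows of a stream whose distribution has shifted away from a reference window.

```python
import numpy as np
from scipy.stats import ks_2samp

# Toy univariate stream whose distribution shifts halfway through (data drift).
rng = np.random.default_rng(5)
stream = np.concatenate([rng.normal(0, 1, 1000), rng.normal(1.5, 1, 1000)])

reference, window = stream[:200], 200
for start in range(200, len(stream) - window, window):
    stat, p = ks_2samp(reference, stream[start:start + window])
    flag = "drift" if p < 0.01 else "stable"
    print(f"samples {start}-{start + window}: KS p-value = {p:.4f} ({flag})")
```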

32 pages, 2529 KiB  
Article
Cloud Adoption in the Digital Era: An Interpretable Machine Learning Analysis of National Readiness and Structural Disparities Across the EU
by Cristiana Tudor, Margareta Florescu, Persefoni Polychronidou, Pavlos Stamatiou, Vasileios Vlachos and Konstadina Kasabali
Appl. Sci. 2025, 15(14), 8019; https://doi.org/10.3390/app15148019 - 18 Jul 2025
Viewed by 269
Abstract
As digital transformation accelerates across Europe, cloud computing plays an increasingly central role in modernizing public services and private enterprises. Yet adoption rates vary markedly among EU member states, reflecting deeper structural differences in digital capacity. This study employs explainable machine learning to uncover the drivers of national cloud adoption across 27 EU countries using harmonized panel datasets spanning 2014–2021 and 2014–2024. A methodological pipeline combining Random Forests (RF), XGBoost, Support Vector Machines (SVM), and Elastic Net regression is implemented, with model tuning conducted via nested cross-validation. Among individual models, Elastic Net and SVM delivered superior predictive performance, while a stacked ensemble achieved the best overall accuracy (MAE = 0.214, R2 = 0.948). The most interpretable model, a standardized RF with country fixed effects, attained MAE = 0.321, and R2 = 0.864, making it well-suited for policy analysis. Variable importance analysis reveals that the density of ICT specialists is the strongest predictor of adoption, followed by broadband access and higher education. Fixed-effect modeling confirms significant national heterogeneity, with countries like Finland and Luxembourg consistently leading adoption, while Bulgaria and Romania exhibit structural barriers. Partial dependence and SHAP analyses reveal nonlinear complementarities between digital skills and infrastructure. A hierarchical clustering of countries reveals three distinct digital maturity profiles, offering tailored policy pathways. These results directly support the EU Digital Decade’s strategic targets and provide actionable insights for advancing inclusive and resilient digital transformation across the Union. Full article
(This article belongs to the Special Issue Advanced Technologies Applied in Digital Media Era)
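
The abstract above mentions model tuning via nested cross-validation. The sketch below shows the generic pattern for one of the named models (Elastic Net): an inner loop tunes the hyperparameters and an outer loop estimates generalisation error. The synthetic regression data and hyperparameter grid are assumptions, not the EU panel used in the study.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

# Synthetic stand-in for the country-year panel features.
X, y = make_regression(n_samples=200, n_features=12, noise=5.0, random_state=0)

# Nested cross-validation: inner loop tunes Elastic Net, outer loop scores the tuned model.
inner = KFold(n_splits=5, shuffle=True, random_state=0)
outer = KFold(n_splits=5, shuffle=True, random_state=1)
grid = GridSearchCV(ElasticNet(max_iter=10000),
                    {"alpha": [0.01, 0.1, 1.0], "l1_ratio": [0.2, 0.5, 0.8]},
                    cv=inner, scoring="neg_mean_absolute_error")
scores = cross_val_score(grid, X, y, cv=outer, scoring="neg_mean_absolute_error")
print("outer-fold MAE:", -scores.mean())
```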

21 pages, 1415 KiB  
Review
Next-Generation River Health Monitoring: Integrating AI, GIS, and eDNA for Real-Time and Biodiversity-Driven Assessment
by Su-Ok Hwang, Byeong-Hun Han, Hyo-Gyeom Kim and Baik-Ho Kim
Hydrobiology 2025, 4(3), 19; https://doi.org/10.3390/hydrobiology4030019 - 16 Jul 2025
Viewed by 482
Abstract
Freshwater ecosystems face escalating degradation, demanding real-time, scalable, and biodiversity-aware monitoring solutions. This review proposes an integrated framework combining artificial intelligence (AI), geographic information systems (GISs), and environmental DNA (eDNA) to overcome the limitations of conventional monitoring approaches and support next-generation river health assessment. The AI-GIS-eDNA system was applied to four representative river basins—the Mississippi, Amazon, Yangtze, and Danube—demonstrating enhanced predictive accuracy (up to 94%), spatial pollution mapping precision (85–95%), and species detection sensitivity (+18–30%) compared to conventional methods. Furthermore, the framework reduces operational costs by up to 40%, highlighting its potential for cost-effective deployment in low-resource regions. Despite its strengths, challenges persist in the areas of regulatory acceptance, data standardization, and digital infrastructure. We recommend legal recognition of AI and eDNA indicators, investment in explainable AI (XAI), and global data harmonization initiatives. The integrated AI-GIS-eDNA framework offers a scalable and policy-relevant tool for adaptive freshwater governance in the Anthropocene. Full article
(This article belongs to the Special Issue Ecosystem Disturbance in Small Streams)