Search Results (2,330)

Search Parameters:
Keywords = interpretable machine learning method

32 pages, 1209 KB  
Review
Dynamic Response-Based Bridge Monitoring and Structural Assessment: A Structured Scoping Review and Evidence Inventory
by Muhammad Ziad Bacha, Mario Lucio Puppio, Marco Zucca and Mauro Sassu
Infrastructures 2026, 11(4), 134; https://doi.org/10.3390/infrastructures11040134 - 10 Apr 2026
Abstract
Dynamic response measurements support bridge monitoring and structural assessment because they are obtainable under operational loading and are sensitive to changes in stiffness, boundary conditions, and mass distribution. This article presents a structured scoping review of dynamic-response-based bridge monitoring and assessment. It covers damage-sensitive indicators, stiffness/capacity proxy inference, interpretation under operational and extreme loading, sensing and acquisition (contact and indirect/drive-by), and data processing, machine learning, and digital-twin integration for decision support. Evidence was identified through targeted searches in Scopus and The Lens, with duplicate resolution in Zotero. The cited studies are compiled into a traceable evidence inventory linked to method families and decision objectives. The synthesis shows that global modal properties enable change screening but are highly confounded by environmental/operational variability. Localization and state characterization typically require denser or higher-fidelity sensing and signal conditioning. Finally, capacity-related inference using calibrated conversion models or machine learning (ML) surrogates remains context-bounded and validation-dependent. This review provides an end-to-end pipeline, an evidence-maturity rubric, and conservative failure-mode checks with escalation logic that tie SHM outputs to inspection and analysis, rather than direct condition declarations, for bridge owners. This review is intentionally scoped and does not claim PRISMA-style comprehensiveness.
22 pages, 2075 KB  
Article
WISCA: A Consensus-Based Approach to Harmonizing Interpretability in Tabular Datasets
by Antonio Jesús Banegas-Luna, Horacio Pérez-Sánchez and Carlos Martínez-Cortés
Mach. Learn. Knowl. Extr. 2026, 8(4), 97; https://doi.org/10.3390/make8040097 - 10 Apr 2026
Abstract
While predictive accuracy is often prioritized in machine learning (ML) models, interpretability remains essential in scientific and high-stakes domains. However, diverse interpretability algorithms frequently yield conflicting explanations, highlighting the need for consensus to harmonize results. In this study, six ML models were trained on six synthetic datasets with known ground truths, utilizing various model-agnostic interpretability techniques, as well as gradient-based and counterfactual-based explainers. Consensus explanations were generated using established methods and a novel approach: WISCA (Weighted Scaled Consensus Attributions), which integrates class probability and normalized attributions. WISCA consistently aligned with the most reliable individual method, underscoring the value of robust consensus strategies in improving explanation reliability.
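
The abstract does not spell out the WISCA computation; a minimal sketch of the idea as described (scale each explanation's attributions, weight by the model's predicted class probability, then average) might look like the following. The function name and data layout are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def consensus_attributions(attributions, class_probs):
    """Illustrative weighted, scaled consensus over several explanations.

    attributions: list of 1-D feature-attribution vectors for one sample,
                  one per model/explainer (e.g., SHAP, gradient, counterfactual).
    class_probs:  predicted probability of the explained class for each
                  model, used here as a per-explanation weight (assumed scheme).
    """
    consensus = np.zeros_like(attributions[0], dtype=float)
    for attr, p in zip(attributions, class_probs):
        scale = np.max(np.abs(attr))
        normed = attr / scale if scale > 0 else attr  # normalize to [-1, 1]
        consensus += p * normed                       # weight by class probability
    return consensus / np.sum(class_probs)
```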

26 pages, 4938 KB  
Article
Machine Learning Prediction of Shear Strength in Cold-Formed Steel Modular Construction-Optimised (MCO) Beam
by Drew Thomas Gray, Lenganji Simwanda, Mohamed Sifan, Keerthan Poologanathan and Thushanthan Kannan
Buildings 2026, 16(8), 1497; https://doi.org/10.3390/buildings16081497 - 10 Apr 2026
Abstract
The rapid growth of modular construction has increased the demand for accurate and computationally efficient methods for predicting the shear performance of cold-formed steel members. Modular construction-optimised beams, characterised by a mono-symmetric triangular hollow flange geometry, exhibit shear behaviour that is not well represented by existing analytical formulations. This study proposes an explainable machine learning framework to predict the ultimate shear capacity of cold-formed steel modular construction-optimised beams using a validated finite-element dataset comprising 105 parametric models. Six supervised machine learning algorithms are trained and evaluated using resampling-based validation and statistical performance metrics. Categorical boosting achieved the best predictive performance, with a coefficient of determination of 95.9% and a mean absolute percentage error of 6.49% under 50 repeated train and test splits. Model transparency is supported using Shapley Additive Explanations, which confirm thickness and yield strength as the most influential inputs within the investigated domain. In addition, prediction uncertainty was quantified using empirical 95% prediction intervals, and the modelling workflow was strengthened by explicitly defining reproducibility and no-leakage conditions. Overall, the proposed framework provides an efficient and interpretable finite element surrogate tool for rapid design-oriented estimation of modular construction-optimised beam shear capacity within the defined parameter ranges and loading configuration.
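
Empirical 95% prediction intervals of the kind mentioned here are typically built from held-out residuals pooled over the repeated splits; a minimal sketch under that assumption (the paper's exact procedure is not given in the abstract, and the function name is hypothetical):

```python
import numpy as np
from sklearn.model_selection import train_test_split

def empirical_prediction_interval(model, X, y, n_splits=50, alpha=0.05):
    """Pool test residuals over repeated train/test splits and take their
    empirical (alpha/2, 1 - alpha/2) percentiles as an additive interval."""
    residuals = []
    for i in range(n_splits):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.2, random_state=i)
        model.fit(X_tr, y_tr)
        residuals.extend(y_te - model.predict(X_te))
    lo, hi = np.percentile(residuals, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi  # add these offsets to a point prediction to get an interval
```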

31 pages, 2718 KB  
Review
A Narrative Review of AI Frameworks for Chronic Stress Detection Using Physiological Sensing: Resting, Longitudinal, and Reactivity Perspectives
by Totok Nugroho, Wahyu Rahmaniar and Alfian Ma’arif
Sensors 2026, 26(8), 2345; https://doi.org/10.3390/s26082345 - 10 Apr 2026
Abstract
Chronic stress is a time-dependent condition characterized by sustained dysregulation across neural, autonomic, and endocrine systems, with important consequences for both health and socioeconomic outcomes. Unlike acute stress, which is typically characterized by short-lived physiological activation, chronic stress reflects an accumulated allostatic load and a longer-term recalibration of stress response systems. Recent advances in physiological sensing and artificial intelligence (AI) have supported the development of computational approaches for chronic stress detection using electroencephalography (EEG), heart rate variability (HRV), photoplethysmography (PPG), electrodermal activity (EDA), and wearable multimodal platforms. This narrative review examines current AI-based studies through three main inferential paradigms: resting baseline dysregulation, longitudinal physiological monitoring, and reactivity-based inference. Across modalities, classical machine learning (ML) methods, particularly support vector machines (SVMs) and tree-based ensembles, remain the most commonly used approaches, largely because available datasets are small and most pipelines still depend on engineered features. Deep learning (DL) methods are beginning to emerge, but their use remains constrained by the lack of large, standardized, longitudinal datasets specifically designed for chronic stress research. Major challenges include ambiguity in stress labeling, limited longitudinal validation, circadian confounding, inter-individual variability, and small cohort sizes. Future progress will depend on standardized datasets, biologically grounded multimodal integration, hybrid baseline-reactivity modeling, adaptive personalization, and more interpretable AI systems. Greater emphasis is also needed on clinical relevance and generalizability if AI-based chronic stress monitoring is to move beyond experimental settings.
(This article belongs to the Special Issue AI-Based Sensing and Imaging Applications)

19 pages, 2833 KB  
Article
An Interpretable Multimodal Machine-Learning Model for Non-Invasive Preoperative Glioma Grading
by Xianfeng Rao, Min Yang, Hao Chen, Guanhao Li, Li Wu, Liudong Gong, Minchun Yang, Haiyang Wang, Ye Ding, Guanxi Chen, Xianjun Rao, Na Zhang, Xiaoxiong Wang and Lei Teng
Cancers 2026, 18(8), 1204; https://doi.org/10.3390/cancers18081204 - 10 Apr 2026
Abstract
Background: Gliomas are the most common primary malignant tumors of the central nervous system. Accurate preoperative grading is essential for individualized surgical planning and treatment selection; however, reliable non-invasive prediction tools integrating multimodal preoperative data remain limited. This study aimed to develop and internally validate an interpretable machine-learning model for non-invasive glioma grading. Methods: Clinical and imaging data from 400 patients with pathologically confirmed gliomas were retrospectively collected. Twenty-four preoperative variables were analyzed. The dataset was randomly divided into training and validation cohorts (7:3). Feature selection was performed using a combination of the Boruta algorithm and logistic regression analyses, followed by correlation filtering. Seventeen machine-learning algorithms were benchmarked using five-fold cross-validation, and the optimal model was evaluated in the independent validation cohort using ROC analysis, calibration assessment, precision–recall curves, and decision curve analysis. Model interpretability was examined using SHAP. Results: Eight key predictors were identified, including age, focal neurological deficits, midline shift, tumor laterality, tumor lobar location, enhancing tumor volume, and MRS-derived Cho/NAA and Cho/Cr ratios. The Random Forest model achieved an area under the ROC curve of 0.946 (95% CI: 0.902–0.989) in the validation cohort. Calibration analysis demonstrated reasonable agreement between predicted and observed outcomes, and the precision–recall curve yielded an average precision of 0.98. Decision curve analysis indicated net clinical benefit across relevant probability thresholds. Conclusions: A multimodal machine-learning model integrating clinical, structural imaging, and MRS-derived metabolic features was developed and internally validated for non-invasive preoperative glioma grading. The model showed good discrimination and calibration and provided individualized probability estimates, suggesting potential value for preoperative risk stratification. However, clinical deployment remains premature, and further external validation is required.
(This article belongs to the Section Cancer Pathophysiology)
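
A five-fold cross-validated benchmark of the kind described is commonly run as sketched below; the data, the two candidate models, and all settings are synthetic stand-ins for the paper's 24 variables and 17 algorithms, not the authors' pipeline:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for the clinical/imaging feature matrix (400 x 24).
X, y = make_classification(n_samples=400, n_features=24, random_state=0)

# Two candidates standing in for the 17 benchmarked algorithms.
candidates = {
    "random_forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in candidates.items():
    aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {aucs.mean():.3f} +/- {aucs.std():.3f}")
```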

19 pages, 3057 KB  
Article
Advancing Masonry Engineering: Effective Prediction of Prism Strength via Machine Learning Techniques
by Panumas Saingam, Burachat Chatveera, Adnan Nawaz, Muhammad Hassan Ali, Sandeerah Choudhary, Muhammad Salman, Muhammad Noman, Preeda Chaimahawan, Chisanuphong Suthumma, Qudeer Hussain, Tahir Mehmood, Suniti Suparp and Gritsada Sua-Iam
Buildings 2026, 16(8), 1471; https://doi.org/10.3390/buildings16081471 - 8 Apr 2026
Abstract
Masonry buildings have shaped construction history since about 6500 BCE. They offer durability, strength, and cost effectiveness, especially in developing countries. Yet assessing compressive strength during construction remains challenging because the constituent materials (soil, cement, and stone) vary widely, complicating standardization worldwide. In the present study, a model based on a machine learning algorithm is proposed to predict the compressive strengths of prisms. The inputs to the algorithm, drawn from traditional assessment methods, are the brick and mortar strengths, prism geometry, mortar bed thickness, and empirically derived height-to-thickness (h/t) ratios. Three different ANN algorithms are coded and trained on the input data, based respectively on the Levenberg–Marquardt algorithm, the resilient backpropagation algorithm, and the conjugate gradient algorithm. The optimal ANN model, trained using the conjugate gradient Polak–Ribière algorithm (traincgp), achieves superior performance, with R2 = 0.9881, R2 = 0.9927, RMSE = 0.9914 MPa, MAE = 0.6039 MPa, MAPE = 20.9141%, VAF = 0.9881, and WI = 0.9970. Sensitivity analysis shows the height-to-thickness (h/t) ratio is the dominant influence on compressive strength, consistent with structural mechanics. The primary contributions are the systematically curated, richly parameterized dataset and its use to produce robust, physically interpretable predictions with established ANN methods.

25 pages, 3968 KB  
Article
Explainable Data-Driven Approach for Smart Crop Yield Prediction in Sub-Saharan Africa: Performance and Interpretability Analysis
by Damilola D. Olatinwo, Herman C. Myburgh, Allan De Freitas and Adnan Abu-Mahfouz
Agriculture 2026, 16(8), 826; https://doi.org/10.3390/agriculture16080826 - 8 Apr 2026
Abstract
The increasing demand for innovative strategies in sustainable food production—driven by rapid global population growth, particularly in sub-Saharan Africa (SSA)—necessitates urgent attention to agricultural resilience. Recent technological advancements have enhanced crop productivity, post-harvest preservation, and environmentally sustainable farming practices. However, three critical bottlenecks remain: (i) the lack of accurate, maize-specific yield prediction methods tailored to SSA; (ii) limited multimodal modeling approaches capable of capturing complex, nonlinear interactions among heterogeneous data sources; and (iii) a lack of explainability mechanisms, which render high-performing models “black boxes” and hinder stakeholder trust. To address these gaps, this study presents an explainable machine learning framework for smart maize yield prediction. We integrate multimodal SSA-specific soil, crop, and weather data to capture the multi-dimensional drivers of maize productivity. Six diverse algorithms—extreme gradient boosting (XGBoost), light gradient boosting machine (LGBM), categorical boosting (CatBoost), support vector machine (SVM), random forest (RF), and an artificial neural network (ANN) combined with k-nearest neighbors (kNN)—were benchmarked to evaluate predictive performance. To ensure robustness against spatial heterogeneity, we employed a Leave-One-Plot-Out (LOPO) cross-validation strategy. Empirical results on unseen test data identify CatBoost as the best-performing model, achieving a coefficient of determination (R2) of approximately 0.76, demonstrating its ability to capture complex, nonlinear relationships in agricultural data. To enhance transparency and stakeholder trust, we integrated Local Interpretable Model-agnostic Explanations (LIME), providing plot-level insights into the physiological and environmental drivers of maize yield. Together, these contributions establish a scalable and interpretable modeling framework capable of supporting data-driven agricultural decision-making in SSA.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
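
Leave-One-Plot-Out cross-validation maps directly onto scikit-learn's grouped splitter; a minimal sketch with synthetic stand-in data (plot IDs, feature counts, and the model are assumptions, not the study's):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Synthetic stand-in: 120 observations from 10 plots, 5 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.5, size=120)
plots = np.repeat(np.arange(10), 12)  # plot ID per observation

# Each fold holds out every sample from one plot, so the score reflects
# generalization to spatially unseen locations rather than random rows.
scores = cross_val_score(RandomForestRegressor(random_state=0), X, y,
                         groups=plots, cv=LeaveOneGroupOut(), scoring="r2")
print(f"LOPO R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```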

27 pages, 1519 KB  
Article
Analysis of International Tourism Flows: A Gravity Model and an Explainable Machine Learning Approach
by Tsolmon Sodnomdavaa
Tour. Hosp. 2026, 7(4), 105; https://doi.org/10.3390/tourhosp7040105 - 8 Apr 2026
Abstract
International tourism plays an important role in the global service economy, contributing to trade, employment, and regional development. For this reason, identifying the factors that influence tourist flows is an important issue for tourism policy, market strategy, and infrastructure planning. A large body of research has applied gravity models to analyze tourism flows between countries. While this approach provides a clear economic interpretation, it is usually based on linear specifications and may therefore capture only part of the relationships present in tourism data. This study examines the economic and geographic determinants of international tourism flows to Mongolia using a framework that combines a traditional gravity model with machine learning techniques. Mongolia serves as an instructive empirical setting: a landlocked, geographically peripheral destination whose inbound demand determinants have received limited systematic empirical attention. The analysis uses panel data for 27 origin countries covering the period from 2000 to 2024. In the first stage, a gravity model is estimated to assess how tourism flows relate to economic size and geographic distance. The results show that tourism flows tend to increase with the economic size of origin and destination countries, while greater geographical distance is associated with lower tourism flows. The estimated distance elasticity ranges from approximately −1.85 to −2.10 across model specifications, which is larger in absolute terms than the values typically reported in cross-country studies. This result is consistent both with the relatively high travel-cost barriers associated with Mongolia’s geographic location and with the distance-decay relationship commonly reported in the tourism literature. In the second stage, machine learning algorithms, including Random Forest, LightGBM, and XGBoost, are used as complementary interpretive instruments rather than forecasting tools to explore possible nonlinear relationships among the explanatory variables. To make the results more interpretable, the contribution of individual variables is examined using SHAP (Shapley Additive Explanations). The machine learning results indicate that some relationships in tourism demand may be nonlinear and not fully captured by the linear gravity specification. Specifically, distance sensitivity is approximately 6.5 times greater in nearby markets than in long-haul markets, with a structural inflexion at around 5700 km. Further analysis suggests that the influence of geographical distance is not uniform across all markets. In particular, tourism flows originating from middle-income countries appear to be more sensitive to increases in travel distance than those from higher-income countries. Overall, the findings indicate that economic size and geographical distance remain key determinants of international tourism flows to Mongolia. At the same time, the use of machine learning methods provides additional insight into potential nonlinear patterns in tourism demand. By combining econometric modelling with explainable machine learning techniques, the study offers an integrated analytical perspective for examining international tourism flows at geographically peripheral destinations where standard gravity assumptions may be insufficient.
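
The first-stage gravity model referred to here, in its standard log-linear form (written generically; the paper's full covariate set is not listed in the abstract), is:

```latex
\ln T_{ij,t} = \beta_0 + \beta_1 \ln \mathrm{GDP}_{i,t}
             + \beta_2 \ln \mathrm{GDP}_{j,t}
             + \beta_3 \ln D_{ij} + \varepsilon_{ij,t}
```

where T_{ij,t} is the tourism flow from origin i to destination j in year t, D_{ij} is the bilateral distance, and beta_3 is the distance elasticity reported above at roughly −1.85 to −2.10.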

25 pages, 3820 KB  
Article
Ensemble Machine Learning Predicts Platinum Resistance in Ovarian Cancer Using Laboratory Data
by Xueting Peng, Yangyang Zhang, Chaoyu Zhu, Weijie Chen, Xiaohua Wu, Fan Zhong, Qinhao Guo and Lei Liu
Cancers 2026, 18(8), 1190; https://doi.org/10.3390/cancers18081190 - 8 Apr 2026
Abstract
Objectives: Platinum resistance remains a critical bottleneck in ovarian cancer management, yet reliable pre-treatment predictive tools are lacking. Existing markers like the platinum-free interval are retrospective, while genomic profiling is often cost-prohibitive. This study aimed to develop an accessible, machine learning-based dynamic weighted fusion (DWF) model using routine laboratory data to provide bidirectional risk stratification, particularly to reliably rule out platinum resistance before treatment initiation. Methods: In this retrospective study (2019–2023), seventy baseline clinical features were collected to differentiate platinum-resistant from platinum-sensitive ovarian cancer patients. We developed a DWF framework that dynamically integrates the top-performing classifiers from a library of 168 algorithms (combining 14 feature selection and 12 machine learning methods). Class imbalance was addressed via oversampling, and model efficacy was evaluated using area under the curve (AUC), accuracy, sensitivity, and specificity. Results: The DWF model achieved a robust AUC of 0.760 (95% CI: 0.683–0.837), outperforming all individual base classifiers. Subgroup analysis demonstrated highly consistent overall discrimination across initial treatment strategies (AUC of 0.755 for primary debulking surgery and 0.761 for neoadjuvant chemotherapy). Feature interpretation highlighted that resistance is driven by synergistic dysregulation of systemic inflammation and hypercoagulability, rather than single biomarkers. Conclusions: The proposed DWF model effectively leverages low-cost, standardized clinical data to serve as a robust bidirectional stratification tool. Its exceptional ability to rule out resistance provides clinicians with the evidence-based confidence to proceed with standard therapies, while its high-risk alerts identify candidates for early therapeutic adjustments and enhanced surveillance in ovarian cancer care.
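
The abstract does not specify how the dynamic weighted fusion combines its top classifiers; one common weighting scheme, shown purely as an illustrative stand-in (not necessarily the paper's DWF weighting), weights each fitted model's predicted probabilities by its validation AUC:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def weighted_fusion(fitted_models, X_val, y_val, X_test):
    """Fuse binary classifiers by validation-AUC-weighted soft voting
    (an assumed scheme for illustration only)."""
    weights, probs = [], []
    for m in fitted_models:
        weights.append(roc_auc_score(y_val, m.predict_proba(X_val)[:, 1]))
        probs.append(m.predict_proba(X_test)[:, 1])
    w = np.asarray(weights) / np.sum(weights)
    return np.vstack(probs).T @ w  # fused probability per test sample
```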

15 pages, 1474 KB  
Article
Prognostic Power of Ensemble Learning in Colorectal Cancer with Peritoneal Metastasis: A Multi-Institutional Analysis
by Yoshiko Bamba, Michio Itabashi, Hirotoshi Kobayashi, Kenjiro Kotake, Masayasu Kawasaki, Yukihide Kanemitsu, Yusuke Kinugasa, Hideki Ueno, Kotaro Maeda, Takeshi Suto, Kimihiko Funahashi, Heita Ozawa, Fumikazu Koyama, Shingo Noura, Hideyuki Ishida, Masayuki Ohue, Tomomichi Kiyomatsu, Soichiro Ishihara, Keiji Koda, Hideo Baba, Kenji Kawada, Yojiro Hashiguchi, Takanori Goi, Yuji Toiyama, Naohiro Tomita, Eiji Sunami, Yoshito Akagi, Jun Watanabe, Kenichi Hakamada, Goro Nakayama, Kenichi Sugihara and Yoichi Ajioka
Bioengineering 2026, 13(4), 434; https://doi.org/10.3390/bioengineering13040434 - 8 Apr 2026
Abstract
Background: Owing to significant clinical heterogeneity, accurate survival forecasting for individuals with colorectal cancer and peritoneal metastasis remains challenging. We aimed to transcend traditional prognostic limitations by evaluating machine learning boosting models against standard regression-based methods in terms of estimating overall survival (OS). Methods: Utilizing a multi-institutional registry of 150 patients diagnosed with synchronous peritoneal metastasis of colorectal cancer, we integrated 124 clinicopathological variables to refine our predictive models. Beyond standard preprocessing—including standardization and median imputation—we rigorously compared XGBoost and LightGBM against Ridge, Lasso, and linear regression via five-fold cross-validation. To specifically address right-censoring, an XGBoost Cox model was implemented and validated using Harrell’s C-index, with SHAP and LIME providing essential model interpretability. Results: Boosting models consistently outperformed linear alternatives, which struggled with high error rates and negative R2 values. Specifically, XGBoost achieved an MAE of 475 ± 60 and an RMSE of 585 ± 88. The XGBoost Cox model reached a C-index of 0.64 ± 0.06. SHAP analysis highlighted inflammatory markers and peritoneal disease extent as the most influential prognostic drivers. Conclusions: While boosting models offer a clear accuracy advantage over linear methods, their prognostic power remains moderate. These findings underscore the potential of ensemble learning in oncology, yet mandate external validation before these tools can be integrated into clinical decision-making.
(This article belongs to the Section Biosignal Processing)
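
Harrell's C-index, used above to validate the XGBoost Cox model, is the fraction of comparable patient pairs in which the higher predicted risk goes with the shorter observed survival; a minimal self-contained sketch of the standard definition (not the authors' code):

```python
def harrell_c_index(times, events, risk_scores):
    """events[i] = 1 if death observed, 0 if censored. A pair (i, j) is
    comparable when i's event is observed and i fails before j's last
    follow-up; it is concordant when i also has the higher risk score."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if events[i] != 1:
            continue  # censored subjects cannot anchor a comparable pair
        for j in range(n):
            if times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5  # ties count half
    return concordant / comparable

# Toy usage: higher risk paired with earlier observed death gives 1.0.
print(harrell_c_index([5, 8, 12], [1, 1, 0], [0.9, 0.4, 0.1]))
```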

24 pages, 648 KB  
Article
Intuitive Risk Equation for Post-Transplant Bloodstream Infection Prediction: A Symbolic Regression Approach
by Sungsu Oh, Jeogin Jang, Yunseong Ko, Hyunsu Lee and Seungjin Lim
Biomedicines 2026, 14(4), 840; https://doi.org/10.3390/biomedicines14040840 - 7 Apr 2026
Abstract
Background: Liver transplant recipients are highly susceptible to infectious complications due to surgical invasiveness and immunosuppressive therapy, and post-transplant bloodstream infection is associated with substantial morbidity and mortality. Although several prediction models for bloodstream infection have been proposed, most focus on emergency department or general ward populations and rely on black-box approaches. This limits their applicability and clinical interpretability in liver transplant settings. Therefore, this study aimed to develop predictive models for post-transplant bloodstream infection using preoperative and perioperative clinical data and to derive an interpretable risk equation through symbolic regression. Methods: We conducted a retrospective observational study including 245 adult liver transplant recipients treated at a single tertiary center. Clinical and laboratory variables were extracted from electronic medical records and analyzed using standard statistical methods. For prediction tasks, multiple conventional machine learning models were developed and compared with a symbolic regression-based model. Predictive performance and model interpretability were evaluated using discrimination metrics and Shapley Additive Explanations. Results: Post-transplant bloodstream infection occurred in 82 patients (33.4%). In the test set, conventional machine learning models showed modest discriminative performance (area under the curve, 0.53–0.64). The symbolic regression model achieved comparable discrimination (area under the curve, 0.63) while providing transparent, threshold-based risk equations. While conventional models primarily relied on laboratory variables, symbolic regression additionally identified perioperative clinical factors and viral serologic markers as important predictors. Discussion: Although overall predictive performance was modest, symbolic regression highlighted viral serologic markers as potential indicators of immunologic vulnerability, extending beyond standard laboratory predictors. Conclusions: This interpretability-focused approach may inform future risk stratification models incorporating richer perioperative data.
(This article belongs to the Section Microbiology in Human Health and Disease)
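
The derived risk equations themselves are not reproduced in the abstract; threshold-based equations from symbolic regression typically take a shape like the following, with entirely hypothetical variables x_k, cut-offs c_k, and coefficients a_k:

```latex
\mathrm{risk}(x) = \sigma\!\left( a_0 + a_1\,\mathbf{1}[x_1 > c_1]
                                + a_2\,\mathbf{1}[x_2 < c_2] \right),
\qquad \sigma(z) = \frac{1}{1 + e^{-z}}
```

Each indicator term reads directly as a clinical rule, which is what makes such models transparent compared with black-box classifiers.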

22 pages, 4411 KB  
Article
Mineral Inversion Constrained by Lithofacies for Prediction of Ga-Rich Laminations in Coal Seams from the Haerwusu Mine, Jungar Coalfield
by Wan Li, Tongjun Chen, Xuanyu Liu, Haicheng Xu and Haiyang Yin
Minerals 2026, 16(4), 387; https://doi.org/10.3390/min16040387 - 7 Apr 2026
Abstract
Gallium (Ga) in coal is a nationally emerging strategic mineral resource, yet research on using petrophysical methods to detect the spatial variation in critical metals in coal seams remains limited. Analyzing the distribution characteristics of Ga-rich coal using geophysical well-logging methods is of great significance for the development and utilization of Ga. This study introduces a quantitative method for predicting Ga-rich laminations in ultra-thick bituminous coal seams by integrating: (i) wireline-log-based lithofacies classification, (ii) lithofacies-constrained mineral inversion, and (iii) lithofacies-constrained and laboratory-established Ga–mineral correlations. The coal seam was first classified into four distinct lithofacies types—(i) parting, (ii) medium-ash coal (MA), (iii) low-ash coal (LA), and (iv) extra-low-ash coal (ELA)—through integration of conventional wireline log interpretation, cluster analysis, and XGBoost machine learning. Second, lithofacies-constrained Ga–host mineral associations were established by integrating core sample analysis, correlation analysis, and linear regression modeling. Third, mineral content predictions for each lithofacies were obtained through wireline-log-based mineral inversion, constrained by petrophysical boundaries. Finally, prediction uncertainties were evaluated using Markov Chain Monte Carlo (MCMC) simulation, while Ga-rich laminations were predicted by integrating log-derived mineral inversion results with regressed Ga prediction models. The results demonstrate strong agreement between mineral inversion and XRD analyses within uncertainty ranges, achieving a prediction accuracy of 73.6% for Ga. This validated methodology presents a novel approach for quantifying Ga concentrations in coal, as demonstrated through a case study.
(This article belongs to the Section Mineral Exploration Methods and Applications)

15 pages, 1148 KB  
Article
Early Prediction of Well-Being Outcomes in Older Adults Using Explainable AI and Emotional Intelligence Measures
by Evgenia Kouli, Evangelos Bebetsos, Maria Michalopoulou and Filippos Filippou
Appl. Sci. 2026, 16(7), 3586; https://doi.org/10.3390/app16073586 - 7 Apr 2026
Abstract
Background: Well-being in the elderly is shaped by complex emotional and social factors. Early identification of individuals at risk for reduced well-being may support timely preventive or supportive interventions. This study examined whether emotional intelligence indicators collected at baseline can predict well-being status 5 months later using explainable machine learning models. Methods: A cohort of elderly participants aged 60 to 89 years completed emotional intelligence measures at baseline, and well-being was assessed 5 months later using the POMS questionnaire. Four machine learning algorithms, Logistic Regression (LR), Support Vector Machines (SVM), Random Forest (RF), and Extreme Gradient Boosting (XGBoost), were developed using 5-fold stratified cross-validation. Model performance was evaluated through accuracy, precision, recall, F1-score, ROC AUC, and normalized confusion matrices. SHapley Additive exPlanations (SHAP) were applied to interpret the contribution and directionality of each predictor. Results: XGBoost achieved the highest predictive performance (accuracy = 0.789; F1 = 0.778) and demonstrated balanced classification across well-being categories. SVM also performed robustly (accuracy = 0.760), while LR showed reduced sensitivity for detecting those with poorer well-being. SHAP analysis identified self-control, emotionality, sociability, self-motivation, and well-being components as the most influential predictors. Lower emotionality, higher sociability, and higher self-control scores were linked to a greater probability of favorable well-being outcomes. Conclusions: The findings demonstrate the feasibility of using explainable machine learning models to predict 5-month well-being status within this sample of older adults using emotional intelligence indicators. XGBoost provided the strongest and most balanced performance, while SHAP analysis clarified how specific emotional intelligence dimensions influenced predictions. These findings suggest that interpretable machine learning approaches may support future efforts toward early recognition of older adults who may be at risk for reduced well-being and guide personalized intervention strategies.
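
SHAP usage of the kind described (a tree-model explainer plus a summary of each predictor's contribution and direction) commonly looks like the sketch below; the data and model are synthetic stand-ins, not the study's:

```python
import shap
import xgboost
from sklearn.datasets import make_classification

# Synthetic stand-in for the emotional-intelligence feature matrix.
X, y = make_classification(n_samples=300, n_features=8, random_state=0)
model = xgboost.XGBClassifier(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles; the
# summary plot ranks features and shows the direction of their effect.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)
```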

25 pages, 3712 KB  
Article
An AI-Enabled Single-Cell Transcriptomic Analysis Pipeline for Gene Signature Discovery in Natural Killer Cells Linked to Remission Outcomes in Chronic Myeloid Leukemia
by Santoshi Borra, Da Yan, Robert S. Welner and Zongliang Yue
Biology 2026, 15(7), 588; https://doi.org/10.3390/biology15070588 - 6 Apr 2026
Abstract
Background: A major technical challenge in single-cell transcriptomics is the absence of an integrative analytic pipeline that can simultaneously leverage gene regulatory network (GRN) architecture, AI-assisted gene panel discovery, and functional relevance analyses to generate coherent biological insights. Existing approaches often treat these components independently, focusing on clusters, marker genes, or predictive features without integrating them into a mechanistically grounded framework. Consequently, comprehensive screening that links regulatory association, gene signature screening, and functional interpretation within single-cell datasets remains limited, underscoring the need for an integrated strategy. Methods: We developed an integrative bioinformatics pipeline based on Gene regulatory network–AI–Functional Analysis (GAFA), combining latent-space integration, unsupervised clustering, diffusion pseudotime analysis, lineage-resolved generalized additive modeling, GRN inference, and machine learning-based gene panel discovery. This framework enables systematic mapping of cell-state structure, reconstruction of differentiation and effector trajectories, and identification of transcriptional and regulatory features strongly associated with clinical outcomes. As a case study, we applied the pipeline to NK cell transcriptomes from six CML patients (two early relapse, two late relapse, two durable treatment-free remission—TFR; 15 samples) collected at TKI discontinuation and 6–12 months after therapy cessation. Results: We reanalyzed publicly available scRNA-seq data from a previously published CML cohort to evaluate NK-cell transcriptional programs associated with treatment-free remission and relapse. We resolved six transcriptionally distinct NK cell states spanning CD56bright-like cytokine-responsive, early activated, terminally mature, cytotoxic, lymphoid trafficking, and HLA-DR+ immunoregulatory populations, each exhibiting outcome-specific compositional differences. Pseudotime analysis revealed two major NK cell lineages—a maturation trajectory and a cytotoxic effector trajectory. TFR samples displayed balanced occupancy of both lineages, whereas early relapse samples showed marked depletion of the maturation branch and preferential accumulation in cytotoxic end states. AI-guided feature selection and random forest modeling identified an 18-gene panel that distinguished NK cells from TFR and relapse samples in an exploratory manner. Among them, CST7, FCER1G, GNLY, GZMA, and HLA-C were conventional NK-associated genes, whereas ACTB, CYBA, IFITM2, IFITM3, LYZ, MALAT1, MT2A, MYOM2, NFKBIA, PIM1, S100A8, S100B, and TSC22D3 were novel. The GRN inference further uncovered outcome-specific regulatory modules, with RUNX3, EOMES, ELK4, and REL regulons enriched in TFR, whereas FOSL2 and MAF regulons were enriched in relapse, and their downstream targets linked to IFN-γ signaling, metabolic reprogramming, and immunoregulatory feedback circuits. Conclusions: This AI-enabled single-cell analysis demonstrates how NK cell state composition, differentiation trajectories, and regulatory network rewiring collectively shape TFR versus relapse following TKI discontinuation in CML. The integrative pipeline provides a modular framework that could be extended to additional datasets for data-driven biomarker discovery and mechanistic stratification, and highlights candidate transcriptional regulators and NK cell programs that may be leveraged to improve remission durability, pending validation in larger patient cohorts.

22 pages, 551 KB  
Review
Convergence of Artificial Intelligence and Wearables in Strength Training and Performance Monitoring: A Scoping Review
by Eleftherios Fyntikakis, Spyridon Plakias, Themistoklis Tsatalas, Minas A. Mina, Anthi Xenofondos and Christos Kokkotis
Appl. Sci. 2026, 16(7), 3565; https://doi.org/10.3390/app16073565 - 6 Apr 2026
Abstract
Background: Strength training (ST) is essential for enhancing athletic performance and reducing injury risk, yet traditional monitoring relies heavily on subjective assessment, limiting objective and individualized evaluation. Objective: This scoping review critically synthesizes current applications of artificial intelligence (AI) and wearable technologies (WT) in ST, with emphasis on methodological approaches, data characteristics, explainability, and practical readiness. Methods: Searches of PubMed and Scopus identified 13 peer-reviewed studies (2015–2025). Evidence was charted and synthesized to compare AI models, wearable sensor configurations, validation strategies, and translational potential. Results: Studies employed classical machine learning, deep learning, and hybrid approaches alongside inertial, force, strain, and physiological sensors to support exercise classification, load estimation, fatigue detection, and performance monitoring. Deep learning models dominated movement recognition tasks, whereas simpler models often aligned better with small datasets and interpretability requirements. However, most studies relied on limited, homogeneous samples and internal validation, restricting generalizability and real-world applicability. Explainability was inconsistently addressed, particularly in higher-risk applications such as injury prediction. Conclusions: AI-enhanced wearables provide objective and individualized ST monitoring, but current evidence remains largely experimental. For practical implementation, standardized datasets, robust external validation, and greater integration of explainable AI are required to support trustworthy decision-making.
(This article belongs to the Section Biomedical Engineering)