Search Results (5,840)

Search Parameters:
Keywords = risk decision model

44 pages, 1787 KB  
Systematic Review
Energy Consumption Prediction in Battery Electric Vehicles: A Systematic Literature Review
by Jairo Castillo-Calderón and Emilio Larrodé-Pellicer
Energies 2026, 19(2), 371; https://doi.org/10.3390/en19020371 - 12 Jan 2026
Abstract
Predicting energy consumption in battery electric vehicles (BEVs) is a complex task due to the large number of influencing factors and their interdependencies. Nevertheless, reliable energy consumption estimation is essential to reduce range anxiety, facilitate route planning, manage charging infrastructure, and support more effective travel decisions that lower operational risks in transportation, thereby fostering wider BEV adoption. In this context, the present study examines the existing literature on methodologies for predicting BEV energy consumption through a systematic literature review (SLR) following the Denyer and Tranfield protocol and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The analysis covers modelling approaches, computational tools, model accuracy metrics, variable topology, sampling frequency and analysis period, modelling scale, and data sources. In addition, this review incorporates a structured assessment of the methodological quality of the included studies and a systematic evaluation of risk of bias, enabling a critical appraisal of the reliability and generalisability of reported findings. A comprehensive classification of modelling methodologies and variables is proposed, providing an integrative reference framework for future research. Overall, this study addresses existing research gaps, identifies current methodological limitations, and outlines directions for future work on BEV energy consumption prediction.
(This article belongs to the Special Issue Energy Consumption in the EU Countries: 4th Edition)

25 pages, 5863 KB  
Systematic Review
AI-Enhanced CBCT for Quantifying Orthodontic Root Resorption: Evidence from a Systematic Review and a Clinical Case of Severe Bilateral Canine Impaction
by Teresa Pinho, Letícia Costa and João Pedro Carvalho
Appl. Sci. 2026, 16(2), 771; https://doi.org/10.3390/app16020771 - 12 Jan 2026
Abstract
Background: Artificial intelligence (AI) integrated with cone-beam computed tomography (CBCT) has rapidly advanced the diagnostic capability of orthodontics, particularly for quantifying external root resorption (ERR). High-risk scenarios such as bilateral maxillary canine impaction require objective tools to guide treatment decisions and prevent irreversible damage. Objectives: To evaluate the diagnostic accuracy and clinical applicability of AI-assisted CBCT for orthodontically induced ERR, and to demonstrate its value in a complex clinical case where decision-making regarding canine traction versus extraction required precise risk quantification and definition of biological limits. Methods: A systematic review following PRISMA 2020 guidelines was conducted in PubMed, ScienceDirect, and Cochrane Library (2015–September 2025). Eligible studies applied AI-enhanced CBCT to assess ERR in orthodontic patients. Additionally, a clinical case with bilaterally impacted maxillary canines was evaluated using CBCT with automated AI segmentation and manual refinement to quantify root volume changes and determine prognostic thresholds for treatment modification. Results: Nine studies met the inclusion criteria. AI-based imaging, predominantly convolutional neural networks, showed high diagnostic accuracy (up to 94%), improving reproducibility and reducing operator dependency. In the clinical case, volumetric monitoring showed rapid progression of ERR in the lateral incisors (LI) associated with a persistent unfavorable 3D spatial relationship between the canines and incisor roots, despite controlled distal traction with skeletal anchorage, leading to a timely change in the treatment plan and extraction of the severely compromised incisors with substitution by the canines. AI-generated data provided objective evidence supporting safer decision-making and prevented further structural deterioration. Conclusions: AI-enhanced CBCT enables early, objective, and quantifiable ERR assessment, strengthening prognosis-based decisions in orthodontics. Findings of this review and the clinical case highlight the translational relevance of AI for managing high-risk cases, such as maxillary canine impaction with extensive LI resorption, supporting future predictive AI models for safer canine traction.
(This article belongs to the Special Issue Advancements and Updates in Digital Dentistry)

34 pages, 5602 KB  
Review
Liquid Biopsy in Early Screening of Cancers: Emerging Technologies and New Prospects
by Hanyu Zhu, Zhenyu Li, Kunxin Xie, Sajjaad Hassan Kassim, Cheng Cao, Keyu Huang, Zipeng Lu, Chenshan Ma, Ying Li, Kuirong Jiang and Lingdi Yin
Biomedicines 2026, 14(1), 158; https://doi.org/10.3390/biomedicines14010158 - 12 Jan 2026
Abstract
Liquid biopsy is moving beyond mutation-centric assays to multimodal frameworks that integrate cell-free DNA (cfDNA) signals with additional analytes such as circulating tumor cells (CTCs) and extracellular vesicles (EVs). In this review, we summarize emerging technologies across analytes for early cancer detection, emphasizing sequencing and error-suppression strategies and the growing evidence for multi-cancer early detection (MCED), tissue-of-origin (TOO) inference, diagnostic triage, and longitudinal surveillance. At low tumor fractions, fragmentomic and methylation features preserve tissue and chromatin context; when combined with radiomics using deep learning, they support blood-first, high-specificity risk stratification, increase positive predictive value (PPV), reduce unnecessary procedures, and enhance early prediction of treatment response and relapse. Building on these findings, we propose a pathway-aware workflow: initial blood-based risk scoring, followed by organ-directed imaging, and targeted secondary testing when indicated. We further recommend that model reports include not only discrimination metrics but also calibration, decision-curve analysis, PPV/negative predictive value (NPV) at fixed specificity, and TOO accuracy, alongside multi-site external validation and blinded dataset splits to improve generalizability. Overall, liquid biopsy is transitioning from signal discovery to deployable multimodal decision systems; standardized pre-analytical and analytical workflows, robust error suppression, and prospective real-world evaluations will be pivotal for clinical implementation.
(This article belongs to the Special Issue Emerging Technologies in Liquid Biopsy of Cancers)
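
A worked illustration of one reporting practice this review recommends, PPV and NPV at a fixed specificity: pick the decision threshold from the negative-class score distribution, then read off both predictive values at that operating point. This is a generic Python sketch on synthetic scores, not code from the paper; the function name and the 99% specificity target are illustrative assumptions.

```python
import numpy as np

def ppv_npv_at_fixed_specificity(y_true, y_score, specificity=0.99):
    """Threshold chosen so ~`specificity` of negatives test negative,
    then report PPV/NPV at that operating point."""
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    # The specificity-quantile of negative scores puts roughly that
    # fraction of negatives below the threshold.
    thr = np.quantile(y_score[y_true == 0], specificity)
    pred = y_score > thr
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    tn = np.sum(~pred & (y_true == 0))
    fn = np.sum(~pred & (y_true == 1))
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    return thr, ppv, npv

# Synthetic scores for demonstration only: positives score higher on average.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 2000)
s = rng.normal(loc=1.5 * y, scale=1.0)
print(ppv_npv_at_fixed_specificity(y, s, specificity=0.99))
```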

17 pages, 388 KB  
Article
Considering Glucagon-like Peptide-1 Receptor Agonists (GLP-1RAs) for Weight Loss: Insights from a Pragmatic Mixed-Methods Study of Patient Beliefs and Barriers
by Regina DePietro, Isabella Bertarelli, Chloe M. Zink, Shannon M. Canfield, Jamie Smith and Jane A. McElroy
Healthcare 2026, 14(2), 186; https://doi.org/10.3390/healthcare14020186 - 12 Jan 2026
Abstract
Background/Objective: Glucagon-like peptide-1 receptor agonists (GLP-1RAs) have received widespread attention as effective obesity treatments. However, limited research has examined the perspectives of patients contemplating GLP-1RAs. This study explored perceptions, motivations, and barriers among individuals considering GLP-1RA therapy for obesity treatment, with the goal of informing patient-centered care and enhancing clinician engagement. Methods: Adults completed surveys and interviews between June and November 2025. In this pragmatic mixed-methods study, both survey and interview questions explored perceived benefits, barriers, and decision-making processes. Qualitative data, describing themes based on the Health Belief Model, were analyzed using Dedoose (version 9.0.107), and quantitative data were analyzed using SAS (version 9.4). Participant characteristics included marital status, income, educational attainment, employment status, insurance status, age, race/ethnicity, and sex. Anticipated length on GLP-1RA medication and selected self-reported health conditions (depression, anxiety, hypertension, heart disease, back pain, joint pain), reported physical activity level, and perceived weight loss competency were also recorded. Results: Among the 31 non-diabetic participants who were considering GLP-1RA medication for weight loss, cost emerged as the most significant barrier. Life course events, particularly (peri)menopause among women over 44, were commonly cited as contributors to weight gain. Participants expressed uncertainty about eligibility, long-term safety, and treatment expectations. Communication gaps were evident, as few participants initiated discussions and clinician outreach was rare, reflecting limited awareness and discomfort around the topic. Conclusions: Findings highlight that individuals considering GLP-1RA therapy face multifaceted emotional, financial, and informational barriers. Proactive, empathetic clinician engagement, through validation of prior efforts, clear communication of risks and benefits, and correction of misconceptions, can support informed decision-making and align treatment with patient goals.

27 pages, 1843 KB  
Article
AI-Driven Modeling of Near-Mid-Air Collisions Using Machine Learning and Natural Language Processing Techniques
by Dothang Truong
Aerospace 2026, 13(1), 80; https://doi.org/10.3390/aerospace13010080 - 12 Jan 2026
Abstract
As global airspace operations grow increasingly complex, the risk of near-mid-air collisions (NMACs) poses a persistent and critical challenge to aviation safety. Traditional collision-avoidance systems, while effective in many scenarios, are limited by rule-based logic and reliance on transponder data, particularly in environments featuring diverse aircraft types, unmanned aerial systems (UAS), and evolving urban air mobility platforms. This paper introduces a novel, integrative machine learning framework designed to analyze NMAC incidents using the rich, contextual information contained within the NASA Aviation Safety Reporting System (ASRS) database. The methodology is structured around three pillars: (1) natural language processing (NLP) techniques are applied to extract latent topics and semantic features from pilot and crew incident narratives; (2) cluster analysis is conducted on both textual and structured incident features to empirically define distinct typologies of NMAC events; and (3) supervised machine learning models are developed to predict pilot decision outcomes (evasive action vs. no action) based on integrated data sources. The analysis reveals seven operationally coherent topics that reflect communication demands, pattern geometry, visibility challenges, airspace transitions, and advisory-driven interactions. A four-cluster solution further distinguishes incident contexts ranging from tower-directed approaches to general aviation pattern and cruise operations. The Random Forest model produces the strongest predictive performance, with topic-based indicators, miss distance, altitude, and operating rule emerging as influential features. The results show that narrative semantics provide measurable signals of coordination load and acquisition difficulty, and that integrating text with structured variables enhances the prediction of maneuvering decisions in NMAC situations. These findings highlight opportunities to strengthen radio practice, manage pattern spacing, improve mixed equipage awareness, and refine alerting in short-range airport area encounters.
(This article belongs to the Section Air Traffic and Transportation)
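
To make the three-pillar methodology concrete, here is a minimal Python sketch of pillars (1) and (3): latent topics extracted from free-text narratives are fused with structured fields to train a Random Forest classifier (pillar (2), clustering, is omitted). The narratives, features, and label rule are toy stand-ins, not the paper's ASRS inputs.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

# Toy incident narratives and structured fields (stand-ins for ASRS records).
narratives = [
    "traffic alert on final approach tower advised go around",
    "converging traffic in the pattern visual acquisition difficult",
    "cruise flight opposing aircraft passed close no advisory received",
    "departure climb tcas resolution advisory pilot took evasive action",
] * 25
rng = np.random.default_rng(0)
miss_distance_ft = rng.uniform(50, 1500, len(narratives))
altitude_ft = rng.uniform(500, 10000, len(narratives))
took_evasive_action = (miss_distance_ft < 800).astype(int)  # toy label

# Pillar (1): latent topics from narratives (the paper reports seven topics).
counts = CountVectorizer(max_features=500).fit_transform(narratives)
topics = LatentDirichletAllocation(n_components=7, random_state=0).fit_transform(counts)

# Pillar (3): supervised model on fused text-derived and structured features.
X = np.hstack([topics, miss_distance_ft[:, None], altitude_ft[:, None]])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, took_evasive_action)
print("training accuracy:", clf.score(X, took_evasive_action))
```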

36 pages, 741 KB  
Review
Artificial Intelligence Algorithms for Insulin Management and Hypoglycemia Prevention in Hospitalized Patients—A Scoping Review
by Eileen R. Faulds, Melanie Natasha Rayan, Matthew Mlachak, Kathleen M. Dungan, Ted Allen and Emily Patterson
Diabetology 2026, 7(1), 19; https://doi.org/10.3390/diabetology7010019 - 12 Jan 2026
Abstract
Background: Dysglycemia remains a persistent challenge in hospital care. Despite advances in outpatient diabetes technology, inpatient insulin management largely depends on intermittent point-of-care glucose testing, static insulin dosing protocols and rule-based decision support systems. Artificial intelligence (AI) offers potential to transform this care through predictive modeling and adaptive insulin control. Methods: Following Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines, a scoping review was conducted to characterize AI algorithms for insulin dosing and glycemic management in hospitalized patients. An interdisciplinary team of clinicians and engineers reached consensus on AI definitions to ensure inclusion of machine learning, deep learning, and reinforcement learning approaches. A librarian-assisted search of five databases identified 13,768 citations. After screening and consensus review, 26 studies (2006–2025) met the inclusion criteria. Data were extracted on study design, population, AI methods, data inputs, outcomes, and implementation findings. Results: Studies included ICU (N = 13) and general ward (N = 9) patients, including patients with diabetes and stress hyperglycemia. Early randomized trials of model predictive control demonstrated improved mean glucose (5.7–6.2 mmol/L) and time in target range compared with standard care. Later machine learning models achieved strong predictive accuracy (AUROC 0.80–0.96) for glucose forecasting or hypoglycemia risk. Most algorithms used data from Medical Information Mart for Intensive Care (MIMIC) databases; few incorporated continuous glucose monitoring (CGM). Implementation and usability outcomes were seldom reported. Conclusions: Hospital AI-driven models showed strong algorithmic performance but limited clinical validation. Future co-designed, interpretable systems integrating CGM and real-time workflow testing are essential to advance safe, adaptive insulin management in hospital settings.

17 pages, 519 KB  
Article
From Models to Metrics: A Governance Framework for Large Language Models in Enterprise AI and Analytics
by Darshan Desai and Ashish Desai
Analytics 2026, 5(1), 8; https://doi.org/10.3390/analytics5010008 - 11 Jan 2026
Abstract
Large language models (LLMs) and other foundation models are rapidly being woven into enterprise analytics workflows, where they assist with data exploration, forecasting, decision support, and automation. These systems can feel like powerful new teammates: creative, scalable, and tireless. Yet they also introduce distinctive risks related to opacity, brittleness, bias, and misalignment with organizational goals. Existing work on AI ethics, alignment, and governance provides valuable principles and technical safeguards, but enterprises still lack practical frameworks that connect these ideas to the specific metrics, controls, and workflows by which analytics teams design, deploy, and monitor LLM-powered systems. This paper proposes a conceptual governance framework for enterprise AI and analytics that is explicitly centered on LLMs embedded in analytics pipelines. The framework adopts a three-layered perspective—model and data alignment, system and workflow alignment, and ecosystem and governance alignment—that links technical properties of models to enterprise analytics practices, performance indicators, and oversight mechanisms. In practical terms, the framework shows how model and workflow choices translate into concrete metrics and inform real deployment, monitoring, and scaling decisions for LLM-powered analytics. We also illustrate how this framework can guide the design of controls for metrics, monitoring, human-in-the-loop structures, and incident response in LLM-driven analytics. The paper concludes with implications for analytics leaders and governance teams seeking to operationalize responsible, scalable use of LLMs in enterprise settings.

26 pages, 60469 KB  
Article
Spatiotemporal Prediction of Ground Surface Deformation Using TPE-Optimized Deep Learning
by Maoqi Liu, Sichun Long, Tao Li, Wandi Wang and Jianan Li
Remote Sens. 2026, 18(2), 234; https://doi.org/10.3390/rs18020234 - 11 Jan 2026
Abstract
Surface deformation induced by the extraction of natural resources constitutes a non-stationary spatiotemporal process. Modeling surface deformation time series obtained through Interferometric Synthetic Aperture Radar (InSAR) technology using deep learning methods is crucial for disaster prevention and mitigation. However, the complexity of model hyperparameter configuration and the lack of interpretability in the resulting predictions constrain its engineering applications. To enhance the reliability of model outputs and their decision-making value for engineering applications, this study presents a workflow that combines a Tree-structured Parzen Estimator (TPE)-based Bayesian optimization approach with ensemble inference. Using the Rhineland coalfield in Germany as a case study, we systematically evaluated six deep learning architectures in conjunction with various spatiotemporal coding strategies. Pairwise comparisons were conducted using a Welch t-test to evaluate the performance differences across each architecture under two parameter-tuning approaches. The Benjamini–Hochberg method was applied to control the false discovery rate (FDR) at 0.05 for multiple comparisons. The results indicate that TPE-optimized models demonstrate significantly improved performance compared to their manually tuned counterparts, with the ResNet+Transformer architecture yielding the most favorable outcomes. A comprehensive analysis of the spatial residuals further revealed that TPE optimization not only enhances average accuracy, but also mitigates the model’s prediction bias in fault zones and mineralized areas by improving the spatial distribution structure of errors. Based on this optimal architecture, we combined the ten highest-performing models from the optimization stage to generate a quantile-based susceptibility map, using the ensemble median as the central predictor. Uncertainty was quantified from three complementary perspectives: ensemble spread, class ambiguity, and classification confidence. Our analysis revealed spatial collinearity between physical uncertainty and absolute residuals. This suggests that uncertainty is more closely related to the physical complexity of geological discontinuities and human-disturbed zones, rather than statistical noise. In the analysis of super-threshold probability, the threshold sensitivity exhibited by the mining area reflects the widespread yet moderate impact of mining activities. By contrast, the fault zone continues to exhibit distinct high-probability zones, even under extreme thresholds. This suggests that fault-controlled deformation is more physically intense and poses a greater risk of disaster than mining activities. Finally, we propose an engineering decision strategy that combines uncertainty and residual spatial patterns. This approach transforms statistical diagnostics into actionable, tiered control measures, thereby increasing the practical value of susceptibility mapping in the planning of natural resource extraction.
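
For readers unfamiliar with TPE tuning, the sketch below shows the optimization loop in Optuna, whose TPESampler implements a Tree-structured Parzen Estimator. A small gradient-boosting regressor on synthetic data stands in for the paper's deep learning architectures; the search space is an illustrative assumption.

```python
import optuna
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

def objective(trial):
    # TPE proposes hyperparameters; the objective returns validation error.
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 50, 400),
        "max_depth": trial.suggest_int("max_depth", 2, 6),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
    }
    model = GradientBoostingRegressor(random_state=0, **params).fit(X_tr, y_tr)
    return mean_squared_error(y_va, model.predict(X_va))

study = optuna.create_study(sampler=optuna.samplers.TPESampler(seed=0),
                            direction="minimize")
study.optimize(objective, n_trials=30)
print(study.best_params)
```
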
22 pages, 2896 KB  
Article
Probabilistic Photovoltaic Power Forecasting with Reliable Uncertainty Quantification via Multi-Scale Temporal–Spatial Attention and Conformalized Quantile Regression
by Guanghu Wang, Yan Zhou, Yan Yan, Zhihan Zhou, Zikang Yang, Litao Dai and Junpeng Huang
Sustainability 2026, 18(2), 739; https://doi.org/10.3390/su18020739 - 11 Jan 2026
Abstract
Accurate probabilistic forecasting of photovoltaic (PV) power generation is crucial for grid scheduling and renewable energy integration. However, existing approaches often produce prediction intervals with limited calibration accuracy, and the interdependence among meteorological variables is frequently overlooked. This study proposes a probabilistic forecasting framework based on a Multi-scale Temporal–Spatial Attention Quantile Regression Network (MTSA-QRN) and an adaptive calibration mechanism to enhance uncertainty quantification and ensure statistically reliable prediction intervals. The framework employs a dual-pathway architecture: a temporal pathway combining Temporal Convolutional Networks (TCN) and multi-head self-attention to capture hierarchical temporal dependencies, and a spatial pathway based on Graph Attention Networks (GAT) to model nonlinear meteorological correlations. A learnable gated fusion mechanism adaptively integrates temporal–spatial representations, and weather-adaptive modules enhance robustness under diverse atmospheric conditions. Multi-quantile prediction intervals are calibrated using conformalized quantile regression to ensure reliable uncertainty coverage. Experiments on a real-world PV dataset (15 min resolution) demonstrate that the proposed method offers more accurate and sharper uncertainty estimates than competitive benchmarks, supporting risk-aware operational decision-making in power systems. Quantitative evaluation on a real-world 40 MW photovoltaic plant demonstrates that the proposed MTSA-QRN achieves a CRPS of 0.0400 before calibration, representing an improvement of over 55% compared with representative deep learning baselines such as Quantile-GRU, Quantile-LSTM, and Quantile-Transformer. After adaptive calibration, the proposed method attains a reliable empirical coverage close to the nominal level (PICP90 = 0.9053), indicating effective uncertainty calibration. Although the calibrated prediction intervals become wider, the model maintains a competitive CRPS value (0.0453), striking a favorable balance between reliability and probabilistic accuracy. These results demonstrate the effectiveness of the proposed framework for reliable probabilistic photovoltaic power forecasting.
(This article belongs to the Topic Sustainable Energy Systems)
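
The conformalized quantile regression step mentioned above has a compact generic form: on a held-out calibration set, score how far each observation falls outside the raw quantile band, then widen the band by the (1 - alpha) empirical quantile of those scores. A minimal sketch of that calibration step on synthetic data (not the authors' implementation):

```python
import numpy as np

def cqr_margin(lo_cal, hi_cal, y_cal, alpha=0.10):
    """Conformity scores are positive where y falls outside [lo, hi];
    the finite-sample (1 - alpha) quantile gives the widening margin."""
    scores = np.maximum(lo_cal - y_cal, y_cal - hi_cal)
    n = len(y_cal)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

# lo/hi: raw 5%/95% quantile predictions from any quantile model (toy here).
rng = np.random.default_rng(0)
y_cal = rng.normal(size=500)
center = y_cal + rng.normal(0.0, 0.5, 500)   # imperfect model predictions
lo_cal, hi_cal = center - 0.8, center + 0.8  # raw quantile band (toy)
q = cqr_margin(lo_cal, hi_cal, y_cal, alpha=0.10)
# At test time, [lo_test - q, hi_test + q] targets ~90% coverage (PICP90).
print(q)
```
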
18 pages, 1418 KB  
Article
Breathprints for Breast Cancer: Evaluating a Non-Invasive Approach to BI-RADS 4 Risk Stratification in a Preliminary Study
by Ashok Prabhu Masilamani, Jayden K Hooper, Md Hafizur Rahman, Romy Philip, Palash Kaushik, Geoffrey Graham, Helene Yockell-Lelievre, Mojtaba Khomami Abadi and Sarkis H. Meterissian
Cancers 2026, 18(2), 226; https://doi.org/10.3390/cancers18020226 - 11 Jan 2026
Abstract
Background/Objectives: Breast cancer is the most common malignancy among women, and early detection is critical for improving outcomes. The Breast Imaging Reporting and Data System (BI-RADS) standardizes reporting, but the BI-RADS 4 category presents a major challenge, with malignancy risk ranging from 2% to 95%. Consequently, most women in this category undergo biopsies that ultimately prove unnecessary. This study evaluated whether exhaled breath analysis could distinguish malignant from benign findings in BI-RADS 4 patients. Methods: Participants referred to the McGill University Health Centre Breast Center with BI-RADS 3–5 findings provided multiple breath specimens. Breathprints were captured using an electronic nose (eNose) powered breathalyzer, and diagnoses were confirmed by imaging and pathology. An autoencoder-based model fused the breath data with BI-RADS scores to predict malignancy. Model performance was assessed using repeated cross-validation with ensemble voting, prioritizing sensitivity to minimize false negatives. Results: The breath specimens of eighty-five participants, including sixty-eight patients with biopsy-confirmed benign lesions and seventeen patients with biopsy-confirmed breast cancer within the BI-RADS 4 cohort, were analyzed. The model achieved a mean sensitivity of 88%, specificity of 75%, and a negative predictive value (NPV) of 97%. Results were consistent across BI-RADS 4 subcategories, with particularly strong sensitivity in higher-risk groups. Conclusions: This proof-of-concept study shows that exhaled breath analysis can reliably differentiate malignant from benign findings in BI-RADS 4 patients. With its high negative predictive value, this approach may serve as a non-invasive rule-out tool to reduce unnecessary biopsies, lessen patient burden, and improve diagnostic decision-making. Larger, multi-center studies are warranted.
(This article belongs to the Section Methods and Technologies Development)
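
A generic version of the evaluation strategy described here, repeated cross-validation with ensemble voting and a sensitivity-first operating point, might look like the following. The data, classifier, and the 88% sensitivity target are placeholders, not the study's eNose pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold

# Toy cohort of 85 with ~20% positives, echoing the study's class balance.
X, y = make_classification(n_samples=85, weights=[0.8], random_state=0)

# Repeated CV: average out-of-fold probabilities over repeats (ensemble voting).
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
prob_sum = np.zeros(len(y))
votes = np.zeros(len(y))
for tr, te in cv.split(X, y):
    model = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
    prob_sum[te] += model.predict_proba(X[te])[:, 1]
    votes[te] += 1
prob = prob_sum / votes

# Prioritize sensitivity: threshold set so ~88% of positives screen positive.
target_sens = 0.88
thr = np.quantile(prob[y == 1], 1 - target_sens)
pred = prob >= thr
sens = (pred & (y == 1)).sum() / (y == 1).sum()
npv = ((~pred) & (y == 0)).sum() / max((~pred).sum(), 1)
print(f"threshold={thr:.3f} sensitivity={sens:.2f} NPV={npv:.2f}")
```
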
23 pages, 1159 KB  
Review
Beyond the Usual Suspects: A Narrative Review of High-Yield Non-Traditional Risk Factors for Atherosclerosis
by Dylan C. Yu, Yaser Ahmad, Maninder Randhawa, Anand S. Rai, Aritra Paul, Sara S. Elzalabany, Ryan Yu, Raj Wasan, Nayna Nanda, Navin C. Nanda and Jagadeesh K. Kalavakunta
J. Clin. Med. 2026, 15(2), 584; https://doi.org/10.3390/jcm15020584 - 11 Jan 2026
Abstract
Background: Cardiovascular risk models, such as the Framingham and atherosclerotic cardiovascular disease (ASCVD) calculators, have improved risk prediction but often fail to identify individuals who experience ASCVD events despite low or intermediate predicted risk. This suggests that underrecognized, non-traditional risk factors may contribute significantly to the development of atherosclerosis. Objective: This narrative review synthesizes and summarizes recent evidence on high-yield non-traditional risk factors for atherosclerosis, with a focus on clinically significant, emerging, and applicable contributors beyond conventional frameworks. This review is distinct in that it aggregates a wide array of non-traditional risk factors while also consolidating recent data on ASCVD in more vulnerable populations. Unlike the existing literature, this manuscript integrates in a single comprehensive review various domains of non-traditional atherosclerotic risk factors, including inflammatory, metabolic, behavioral, environmental, and physical pathways. An additional unique highlight in the same manuscript is the discussion of non-traditional risk factors for atherosclerosis in more vulnerable populations, specifically South Asians. We also focus on clinically actionable factors that can guide treatment decisions for clinicians. Results: Key non-traditional risk factors identified include inflammation and biomarker-based risk factors such as C-reactive protein or interleukin-6 levels, metabolic and microbial risk factors, behavioral factors such as E-cigarette use, and environmental or infectious risk factors such as air and noise pollution. We explore certain physical exam findings associated with atherosclerotic burden, such as Frank’s sign and Achilles tendon thickness. Conclusions: Atherosclerosis is a multifactorial process influenced by diverse and often overlooked factors. Integrating non-traditional risks into clinical assessment may improve early detection, guide prevention, and personalize care. Future risk prediction models should incorporate molecular, behavioral, and environmental data to reflect the complex nature of cardiovascular disease.
(This article belongs to the Section Cardiovascular Medicine)

16 pages, 2349 KB  
Article
Machine Learning Prediction and Interpretability Analysis of Coal and Gas Outbursts
by Long Xu, Xiaofeng Ren and Hao Sun
Sustainability 2026, 18(2), 740; https://doi.org/10.3390/su18020740 - 11 Jan 2026
Abstract
Coal and gas outbursts constitute a major hazard for mining safety, which is critical for the sustainable development of China’s energy industry. Rapid, accurate, and reliable prediction is pivotal for preventing and controlling outburst incidents. Nevertheless, the mechanisms driving coal and gas outbursts involve highly complex influencing factors. Four main geological indicators were identified by examining the attributes of these factors and their association with outburst intensity. This study developed a machine learning-based prediction model for outburst risk. Five algorithms were evaluated: K Nearest Neighbors (KNN), Back Propagation (BP), Random Forest (RF), Support Vector Machine (SVM), and eXtreme Gradient Boosting (XGBoost). Model optimization was performed via Bayesian optimization (BO) hyperparameter tuning. Model performance was assessed by the Receiver Operating Characteristic (ROC) curve; the optimized XGBoost model demonstrated strong predictive performance. To enhance model transparency and interpretability, the SHapley Additive exPlanations (SHAP) method was implemented. The SHAP analysis identified geological structure as the most important predictive feature, providing a practical decision support tool for mine executives to prevent and control outburst incidents.
(This article belongs to the Section Hazards and Sustainability)
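
As a sketch of the XGBoost-plus-SHAP pattern described in the abstract, the snippet below trains a classifier on synthetic data and ranks features by mean absolute SHAP value. The four feature names are hypothetical stand-ins; the abstract does not enumerate the geological indicators.

```python
import numpy as np
import shap
import xgboost
from sklearn.datasets import make_classification

# Hypothetical names for the four geological indicators.
feature_names = ["geological_structure", "seam_depth", "gas_pressure", "gas_content"]
X, y = make_classification(n_samples=300, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)

model = xgboost.XGBClassifier(n_estimators=200, max_depth=3,
                              eval_metric="logloss", random_state=0).fit(X, y)

# Global importance: mean absolute SHAP value per feature.
shap_values = shap.TreeExplainer(model).shap_values(X)
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```
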
24 pages, 3327 KB  
Article
From Binary Scores to Risk Tiers: An Interpretable Hybrid Stacking Model for Multi-Class Loan Default Prediction
by Ghazi Abbas, Zhou Ying and Muzaffar Iqbal
Systems 2026, 14(1), 78; https://doi.org/10.3390/systems14010078 - 11 Jan 2026
Abstract
Accurate credit risk assessment for small firms and farmers is crucial for financial stability and inclusion; however, many models still rely on binary default labels, overlooking the continuum of borrower vulnerability. To address this, we propose Transformer–LightGBM–Stacked Logistic Regression (TL-StackLR), a hybrid stacking framework for multi-class loan default prediction. The framework combines three learners: a Feature Tokenizer Transformer (FT-Transformer) for feature interactions, LightGBM for non-linear pattern recognition, and a stacked LR meta-learner for calibrated probability fusion. We transform binary labels into three risk tiers, Low, Medium, and High, based on quantile-based stratification of default probabilities, aligning the model with real-world risk management. Evaluated on datasets from 3045 firms and 2044 farmers in China, TL-StackLR achieves state-of-the-art ROC-AUC scores of 0.986 (firms) and 0.972 (farmers), with superior calibration and discrimination across all risk classes, outperforming all standalone and partial-hybrid benchmarks. The framework provides SHapley Additive exPlanations (SHAP) interpretability, showing how key risk drivers, such as income, industry experience, and mortgage score for firms and loan purpose, Engel coefficient, and income for farmers, influence risk tiers. This transparency transforms TL-StackLR into a decision-support tool, enabling targeted interventions for inclusive lending, thus offering a practical foundation for equitable credit risk management.
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)
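
The quantile-based tiering step is simple to state in code: cut the predicted default probabilities at chosen quantiles and label the bands. A minimal sketch assuming tercile cut points (the abstract does not give the exact cut points):

```python
import numpy as np

def to_risk_tiers(default_prob, q_low=1 / 3, q_high=2 / 3):
    """Map predicted default probabilities to Low/Medium/High tiers
    via quantile cut points over the score distribution."""
    lo, hi = np.quantile(default_prob, [q_low, q_high])
    return np.select([default_prob <= lo, default_prob <= hi],
                     ["Low", "Medium"], default="High")

# Toy probability-of-default scores.
probs = np.random.default_rng(0).beta(2, 8, size=1000)
tiers, counts = np.unique(to_risk_tiers(probs), return_counts=True)
print(dict(zip(tiers, counts)))
```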

32 pages, 534 KB  
Article
Empirical Study on Automation, AI Trust, and Framework Readiness in Cybersecurity Incident Response
by Olufunsho I. Falowo and Jacques Bou Abdo
Algorithms 2026, 19(1), 62; https://doi.org/10.3390/a19010062 - 11 Jan 2026
Abstract
The accelerating integration of artificial intelligence (AI) into cybersecurity operations has introduced new challenges and opportunities for modernizing incident response (IR) practices. This study explores how cybersecurity practitioners perceive the adoption of intelligent automation and the readiness of legacy frameworks to address AI-driven threats. A structured, two-part quantitative survey was conducted among 194 U.S.-based professionals, capturing perceptions on operational effectiveness, trust in autonomous systems, and the adequacy of frameworks such as NIST and SANS. Using binary response formats and psychometric validation items, the study quantified views on AI’s role in reducing mean time to detect and respond, willingness to delegate actions to autonomous agents, and the perceived obsolescence of static playbooks. Findings indicate broad support for the modernization of incident response frameworks to better align with emerging AI capabilities and evolving operational demands. The results reveal a clear demand for modular, adaptive frameworks that integrate AI-specific risk models and decision auditability. These insights provide empirical grounding for the design of next-generation IR models and contribute to the strategic discourse on aligning automation capabilities with ethical, scalable, and operationally effective cybersecurity response.

18 pages, 831 KB  
Article
Utilizing Machine Learning Techniques for Computer-Aided COVID-19 Screening Based on Clinical Data
by Honglun Xu, Andrews T. Anum, Michael Pokojovy, Sreenath Chalil Madathil, Yuxin Wen, Md Fashiar Rahman, Tzu-Liang (Bill) Tseng, Scott Moen and Eric Walser
COVID 2026, 6(1), 17; https://doi.org/10.3390/covid6010017 - 9 Jan 2026
Abstract
The COVID-19 pandemic has highlighted the importance of rapid clinical decision-making to facilitate the efficient usage of healthcare resources. Over the past decade, machine learning (ML) has caused a tectonic shift in healthcare, empowering data-driven prediction and decision-making. Recent research demonstrates how ML was used to respond to the COVID-19 pandemic. This paper puts forth new computer-aided COVID-19 disease screening techniques using six classes of ML algorithms (including penalized logistic regression, random forest, artificial neural networks, and support vector machines) and evaluates their performance when applied to a real-world clinical dataset containing patients’ demographic information and vital indices (such as sex, ethnicity, age, pulse, pulse oximetry, respirations, temperature, BP systolic, BP diastolic, and BMI), as well as ICD-10 codes of existing comorbidities, as attributes to predict the risk of having COVID-19 for a given patient. Variable importance metrics computed using a random forest model were used to reduce the number of important predictors to thirteen. Using prediction accuracy, sensitivity, specificity, and AUC as performance metrics, the performance of various ML methods was assessed, and the best model was selected. Our proposed model can be used in clinical settings as a rapid and accessible COVID-19 screening technique.
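
A schematic of the feature-selection-then-screening workflow described above: random forest variable importance keeps the top thirteen predictors, after which a penalized logistic model is fit and scored by AUC. Synthetic data and the particular classifier are placeholders for the clinical dataset and the paper's six algorithm classes.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy stand-in: 30 candidate attributes (demographics, vitals, comorbidities).
X, y = make_classification(n_samples=1000, n_features=30, n_informative=13,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Rank predictors by random forest variable importance; keep the top 13.
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
top13 = np.argsort(rf.feature_importances_)[::-1][:13]

# Refit a penalized screening model on the reduced predictor set.
clf = LogisticRegression(penalty="l2", max_iter=1000).fit(X_tr[:, top13], y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te[:, top13])[:, 1])
print(f"held-out AUC with 13 selected predictors: {auc:.3f}")
```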