Search Results (12,179)

Search Parameters:
Keywords = decision under risk

27 pages, 2396 KB  
Article
Spatiotemporal Evolution and Drivers of Harvest-Disrupting Rainfall Risk for Winter Wheat in the Huang–Huai–Hai Plain
by Zean Wang, Ying Zhou, Tingting Fang, Zhiqing Cheng, Junli Li, Fengwen Wang and Shuyun Yang
Agriculture 2026, 16(1), 46; https://doi.org/10.3390/agriculture16010046 - 24 Dec 2025
Abstract
Harvest-disrupting rain events (HDREs) are prolonged cloudy–rainy spells during winter wheat maturity that impede harvesting and drying, induce pre-harvest sprouting and grain mould, and threaten food security in the Huang–Huai–Hai Plain (HHHP), China’s core winter wheat region. Using daily meteorological records (1960–2019), remote sensing-derived land-use data, and topography, we develop a hazard–exposure–vulnerability framework to quantify HDRE risk and its drivers at 1 km resolution. Results show that HDRE risk has increased markedly over the past six decades, with the area of medium-to-high risk rising from 26.9% to 73.1%. The spatial pattern evolved from a “high-south–low-north” structure to a concentrated high-risk belt in the central–northern HHHP, and the risk centroid migrated from Fuyang (Anhui) to Heze (Shandong), an overall displacement of 124.57 km toward the north–northwest. GeoDetector analysis reveals a shift from a “humidity–temperature dominated” mechanism to a “sunshine–humidity–precipitation co-driven” mechanism; sunshine duration remains the leading factor (q > 0.8), and its interaction with relative humidity shows strong nonlinear enhancement (q = 0.91). High-risk hot spots coincide with low-lying plains and river valleys with dense winter wheat planting, indicating the joint amplification of meteorological conditions and underlying surface features. The results can support regional decision-making for harvest-season early warning, risk zoning, and disaster risk reduction in the HHHP.
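The q values cited above come from the GeoDetector factor detector, which measures how much of the spatial variance of risk a stratified driver explains via q = 1 − SSW/SST. A minimal sketch on synthetic data (the bin counts, variable names, and values are illustrative assumptions, not the paper's):

```python
import numpy as np

def geodetector_q(risk: np.ndarray, strata: np.ndarray) -> float:
    """GeoDetector factor-detector q-statistic: q = 1 - SSW/SST.

    risk   : 1-D response values (e.g., HDRE risk per 1 km cell)
    strata : stratum labels for one driver (e.g., binned sunshine-duration
             classes); higher q means stronger explanatory power.
    """
    sst = risk.size * risk.var()                       # total sum of squares
    ssw = sum(risk[strata == s].size * risk[strata == s].var()
              for s in np.unique(strata))              # within-strata sum of squares
    return 1.0 - ssw / sst

# Synthetic illustration (values invented, not the paper's data):
rng = np.random.default_rng(0)
sunshine_class = rng.integers(0, 4, size=10_000)           # 4 sunshine-duration bins
risk = sunshine_class * 0.5 + rng.normal(0, 0.2, 10_000)   # risk driven by the bins
print(f"q = {geodetector_q(risk, sunshine_class):.2f}")    # close to 1 => dominant driver
```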
28 pages, 1109 KB  
Review
Hospital Influenza Outbreak Management in the Post-COVID Era: A Narrative Review of Evolving Practices and Feasibility Considerations
by Wei-Hsuan Huang, Yi-Fang Ho, Jheng-Yi Yeh, Po-Yu Liu and Po-Hsiu Huang
Healthcare 2026, 14(1), 50; https://doi.org/10.3390/healthcare14010050 - 24 Dec 2025
Abstract
Background: Hospital-acquired influenza remains a persistent threat that amplifies morbidity, mortality, length of stay, and operational strain, particularly among older and immunocompromised inpatients. The COVID-19 era reshaped control norms—normalizing N95 use during surges, ventilation improvements, and routine multiplex PCR—creating an opportunity to strengthen hospital outbreak management. Methods: We conducted a targeted narrative review of WHO/CDC/Infectious Diseases Society of America (IDSA) guidance and peer-reviewed studies (January 2015–August 2025), emphasizing adult inpatient care. This narrative review synthesizes recent evidence and discusses theoretical implications for practice, rather than establishing formal guidelines. Evidence was synthesized into pragmatic practice statements on detection, diagnostics, isolation/cohorting, antivirals, chemoprophylaxis, vaccination, surveillance, and communication. Results: Early recognition and test-based confirmation are pivotal. For inpatients, nucleic-acid amplification tests are preferred; negative antigen tests warrant PCR confirmation, and lower-respiratory specimens improve yield in severe disease. A practical outbreak threshold is ≥2 epidemiologically linked, laboratory-confirmed cases within 72 h on the same ward. Effective control may require immediate isolation or cohorting with dedicated staff, strict droplet/respiratory protection, and daily active surveillance. Early oseltamivir (≤48 h from onset or on admission) reduces mortality and length of stay; short-course post-exposure prophylaxis for exposed patients or staff lowers secondary attack rates. Integrated vaccination efforts for healthcare personnel and high-risk patients reinforce workforce resilience and reduce transmission. Conclusions: A standardized, clinician-led bundle—early molecular testing, do-not-delay antivirals, decisive cohorting and personal protective equipment (PPE), targeted chemoprophylaxis, vaccination, and disciplined communication—could help curb transmission, protect vulnerable patients and staff, and preserve capacity. Hospitals should codify COVID-era layered controls for seasonal influenza and rehearse unit-level outbreak playbooks to accelerate response and recovery. These recommendations target clinicians and infection-prevention leaders in acute-care hospitals.
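The review's pragmatic outbreak threshold (≥2 laboratory-confirmed cases within 72 h on the same ward, with epidemiological linkage still confirmed manually) is simple enough to automate in ward surveillance; a toy sketch, with hypothetical record fields:

```python
from datetime import datetime, timedelta
from collections import defaultdict

WINDOW = timedelta(hours=72)

def flag_outbreak_wards(cases):
    """cases: list of (ward, onset_datetime) for laboratory-confirmed influenza.
    Returns wards where >= 2 confirmed cases fall within any 72 h window, the
    review's pragmatic outbreak threshold (epidemiological linkage must still
    be verified by the infection-prevention team)."""
    by_ward = defaultdict(list)
    for ward, onset in cases:
        by_ward[ward].append(onset)
    flagged = set()
    for ward, onsets in by_ward.items():
        onsets.sort()
        for a, b in zip(onsets, onsets[1:]):   # consecutive pairs suffice for n >= 2
            if b - a <= WINDOW:
                flagged.add(ward)
                break
    return flagged

cases = [("5W", datetime(2025, 1, 3, 9)), ("5W", datetime(2025, 1, 5, 14)),
         ("7E", datetime(2025, 1, 3, 9))]
print(flag_outbreak_wards(cases))  # {'5W'}: two confirmed cases 53 h apart
```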
32 pages, 1481 KB  
Article
Optimal Carbon Emission Reduction Strategies Considering the Carbon Market
by Wenlin Huang and Daming Shan
Mathematics 2026, 14(1), 68; https://doi.org/10.3390/math14010068 - 24 Dec 2025
Abstract
In this study, we develop a stochastic optimal control model for corporate carbon management that synergistically combines emission reduction initiatives with carbon trading mechanisms. The model incorporates two control variables, the autonomous emission reduction rate and initial carbon allowance purchases, and accounts for both deterministic and stochastic carbon pricing scenarios. The solution is obtained through a two-step optimization procedure that addresses each control variable sequentially. In the first step, the problem is transformed into a Hamilton–Jacobi–Bellman (HJB) equation in the viscosity-solution sense. A key aspect of the methodology is deriving the corresponding analytical solution based on this equation’s structure. The second-step optimization results are shown to depend on the relationship between the risk-free interest rate and carbon price dynamics. Furthermore, we employ daily closing prices from 16 July 2021 to 31 December 2024 as the sample dataset to calibrate the parameters governing carbon allowance price evolution. The marginal abatement cost (MAC) curve is calibrated using data derived from the Emissions Prediction and Policy Analysis (EPPA) model, enabling the estimation of the emission reduction efficiency parameter. Additional policy-related parameters are obtained from relevant regulatory documents. The numerical results demonstrate how enterprises can use the model’s outputs to inform carbon emission reduction decisions in practice, offering a decision-support tool that combines theoretical rigor with practical applicability for achieving emission targets in the carbon market.
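The price-calibration step is standard once a model for allowance-price dynamics is fixed; assuming, purely for illustration, a geometric Brownian motion (the paper's stochastic dynamics may differ), drift and volatility follow from daily log-returns:

```python
import numpy as np

def calibrate_gbm(prices: np.ndarray, dt: float = 1 / 252):
    """Calibrate drift mu and volatility sigma of dS = mu*S dt + sigma*S dW
    from daily closing prices via maximum likelihood on log-returns."""
    r = np.diff(np.log(prices))            # daily log-returns
    sigma = r.std(ddof=1) / np.sqrt(dt)
    mu = r.mean() / dt + 0.5 * sigma**2    # undo the Ito drift of the log-price
    return mu, sigma

# Synthetic price path standing in for the carbon allowance series:
prices = 50 * np.exp(np.cumsum(np.random.default_rng(1).normal(0.0002, 0.01, 850)))
mu, sigma = calibrate_gbm(prices)
print(f"mu = {mu:.3f}/yr, sigma = {sigma:.3f}/yr")
```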
35 pages, 3288 KB  
Article
Knowledge Graph-Based Causal Analysis of Aviation Accidents: A Hybrid Approach Integrating Retrieval-Augmented Generation and Prompt Engineering
by Xinyu Xiang, Xiyuan Chen and Jianzhong Yang
Aerospace 2026, 13(1), 16; https://doi.org/10.3390/aerospace13010016 - 24 Dec 2025
Abstract
The causal analysis of historical aviation accidents documented in investigation reports is important for the design, manufacture, operation, and maintenance of aircraft. However, given that most accident data are unstructured or semi-structured, identifying and extracting causal information remain labor-intensive and inefficient. This gap is deepened further by tasks that require extensive domain-specific knowledge, such as system identification from component information. In addition, there is a corresponding demand for causation pattern analysis across multiple accidents and for the extraction of critical causation chains. To bridge these gaps, this study proposes an aviation accident causation and relation analysis framework that integrates prompt engineering with a retrieval-augmented generation approach. A total of 343 real-world accident reports from the NTSB were analyzed to extract causation factors and their interrelations. An innovative causation classification schema was also developed to cluster the extracted causations. The clustering accuracy for the four main causation categories—Human, Aircraft, Environment, and Organization—reached 0.958, 0.865, 0.979, and 0.903, respectively. Based on the clustering results, a causation knowledge graph for aviation accidents was constructed, and, by designing a set of safety evaluation indicators, “pilot—decision error” and “landing gear system malfunction” were identified as high-risk causations. For each high-risk causation, critical combinations of causation chains were identified; for “pilot—decision error”, the critical combination was “Aircraft operator—policy or procedural deficiency/pilot—procedural violation/Runway contamination → pilot—decision error → pilot procedural violation/32 landing gear/57 wings”. Finally, safety recommendations for organizations and personnel were proposed based on the analysis results, offering practical guidance for aviation risk prevention and mitigation. The proposed approach demonstrates the potential of combining AI techniques with domain knowledge to achieve scalable, data-driven causation analysis and strengthen proactive safety decision-making in aviation.
(This article belongs to the Section Air Traffic and Transportation)
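Once causation factors and their interrelations are extracted, assembling the knowledge graph and enumerating causation chains is ordinary graph work; a minimal networkx sketch with paraphrased node labels and invented edge weights:

```python
import networkx as nx

# Hypothetical edges in the spirit of the paper's causation chains
# (node names paraphrase its categories; counts are invented weights).
edges = [
    ("policy or procedural deficiency", "pilot decision error", 31),
    ("runway contamination", "pilot decision error", 12),
    ("pilot decision error", "pilot procedural violation", 24),
    ("pilot procedural violation", "landing gear system malfunction", 9),
]

G = nx.DiGraph()
for cause, effect, n in edges:
    G.add_edge(cause, effect, weight=n)   # n = accidents exhibiting this link

# Rank factors by weighted out-degree as a crude criticality indicator
ranking = sorted(G.nodes, key=lambda v: G.out_degree(v, weight="weight"),
                 reverse=True)
print("most critical upstream factor:", ranking[0])

# Enumerate causation chains ending in the high-risk outcome
target = "landing gear system malfunction"
for source in [v for v in G if G.in_degree(v) == 0]:
    for path in nx.all_simple_paths(G, source, target):
        print(" -> ".join(path))
```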
23 pages, 1640 KB  
Article
A Framework for Managing Digital Transformation Risks in Transport Systems: Linking Digital Maturity and Risk Categories
by Agnieszka A. Tubis
Appl. Sci. 2026, 16(1), 206; https://doi.org/10.3390/app16010206 - 24 Dec 2025
Abstract
Digital transformation is increasingly central to the development of transport systems, yet current research offers limited guidance on how digital maturity levels shape operational risk. Existing digital maturity models primarily support benchmarking and planning, but rarely integrate structured risk assessment. This study addresses this gap by proposing a framework that links digital maturity with the systematic identification and prioritisation of digital transformation risks. A Digital Maturity-Based Risk Assessment Framework (DMRisk-TS) is developed, classifying risks into three categories. Probability and severity are evaluated using fuzzy logic, while contextual modifiers account for the maturity gap and system coverage. The approach is demonstrated using a real tram transport system and the DMM-TRAM model. The analysis shows that risk profiles differ markedly across maturity levels. Low-maturity environments generate operational risks related to insufficient or non-integrated information. Transitioning between levels introduces implementation and data-integration risks. At high maturity levels, new systemic risks emerge, including error propagation, cyber vulnerabilities, and over-reliance on automated processes. DMRisk-TS offers a meaningful basis for understanding and managing risks in transport systems. The framework supports the prioritisation of mitigation actions, informs decisions on advancing maturity, and highlights that higher digitisation creates new classes of systemic risk.
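The abstract does not spell out the fuzzy rule base, so the following is a toy analogue only: triangular memberships for probability and severity, min-AND rule firing, and contextual amplification by maturity gap and coverage, loosely in the spirit of DMRisk-TS (all shapes and weights invented):

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on support (a, c)."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_risk(probability, severity, maturity_gap, coverage):
    """Toy fuzzy risk score: fuzzify P and S on [0, 1] into low/medium/high,
    fire min-AND rules whose consequent is the larger of the two levels,
    take the strongest rule as the base risk, then amplify by the
    contextual modifiers (maturity gap, system coverage)."""
    shapes = [(-0.5, 0.0, 0.5), (0.0, 0.5, 1.0), (0.5, 1.0, 1.5)]  # low/med/high
    mp = [tri(probability, *s) for s in shapes]
    ms = [tri(severity, *s) for s in shapes]
    base = max(min(mp[i], ms[j]) * (max(i, j) + 1) / 3
               for i in range(3) for j in range(3))
    return base * (1 + maturity_gap) * coverage

print(f"risk score: {fuzzy_risk(0.6, 0.7, maturity_gap=0.4, coverage=0.9):.2f}")
```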
20 pages, 4787 KB  
Article
LLM-Enhanced Short-Term Electricity Price Forecasting Method for Australian Electricity Market
by Yutian Huang, Yachao Zhu, Gang Lei, Allen Wang and Jianguo Zhu
Appl. Sci. 2026, 16(1), 200; https://doi.org/10.3390/app16010200 - 24 Dec 2025
Abstract
This study investigates a large language model (LLM)-driven framework for intelligent preprocessing and short-term electricity price forecasting in the Australian National Electricity Market (NEM). By integrating unstructured news features, weather signals, and cyclical calendar variables, the model captures both physical and informational drivers of price volatility. A hybrid approach combining quantile regression with conformal calibration achieves statistically significant improvements in accuracy and uncertainty calibration. The framework demonstrates the potential of integrating LLMs into operational forecasting pipelines to support electricity market decision-making and risk management.
(This article belongs to the Section Energy Science and Technology)
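The quantile-regression-plus-conformal-calibration component matches the well-known conformalized quantile regression recipe (CQR, Romano et al.); a self-contained sketch on synthetic data, with the LLM-derived features left out:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (2000, 3)); y = X @ [3, -2, 1] + rng.normal(0, 0.5, 2000)
X_tr, X_cal, X_te = X[:1200], X[1200:1600], X[1600:]
y_tr, y_cal, y_te = y[:1200], y[1200:1600], y[1600:]

alpha = 0.1  # target 90% coverage
lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2).fit(X_tr, y_tr)
hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2).fit(X_tr, y_tr)

# Conformal calibration: widen/narrow the raw quantile band so that empirical
# coverage is guaranteed on the held-out calibration split.
scores = np.maximum(lo.predict(X_cal) - y_cal, y_cal - hi.predict(X_cal))
q = np.quantile(scores, np.ceil((1 - alpha) * (len(y_cal) + 1)) / len(y_cal))

covered = (lo.predict(X_te) - q <= y_te) & (y_te <= hi.predict(X_te) + q)
print(f"empirical coverage: {covered.mean():.2%}")  # ~90% by construction
```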
28 pages, 2638 KB  
Article
Estimation of Vessel Collision Risk Under Uncertainty Using Interval Type-2 Fuzzy Inference Systems and Dempster–Shafer Evidence Theory
by Jinwan Park
J. Mar. Sci. Eng. 2026, 14(1), 34; https://doi.org/10.3390/jmse14010034 - 24 Dec 2025
Abstract
This study proposes a collision-risk assessment framework that combines an interval type-2 fuzzy inference system with Dempster–Shafer evidence theory to more reliably evaluate vessel collision risk under the uncertainty inherent in AIS-based marine navigation data. The fuzzy system models membership-function uncertainty through a footprint of uncertainty and produces time-indexed basic probability assignments that are subsequently combined through a Dempster–Shafer–based temporal integration process. Robust combination rules are incorporated to mitigate the counterintuitive results often produced by classical evidence combination. Furthermore, Lenart’s time-based criterion and Fujii’s spatial safety domain are unified to construct a three-level risk labeling scheme, overcoming the limitations of conventional binary risk classification. Case studies using real AIS data demonstrate improved predictive accuracy and significantly reduced uncertainty, particularly when using the robust symmetric combination rule. Overall, the proposed framework provides a systematic approach for handling structural uncertainty in maritime environments and supports more reliable collision-risk prediction and safer navigational decision-making.
(This article belongs to the Special Issue Advanced Control Strategies for Autonomous Maritime Systems)
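The evidential core, combining time-indexed basic probability assignments, can be illustrated with the classical Dempster rule over Θ = {risk, safe}; the paper's robust rules modify this combination to avoid counterintuitive high-conflict results, but the classical form shows the mechanics (the masses below are invented):

```python
def dempster_combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule for basic probability assignments keyed by frozenset
    focal elements; conflict (empty intersections) is renormalized away."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    return {k: v / (1 - conflict) for k, v in combined.items()}

# Two time steps of hypothetical collision-risk evidence; Theta = {risk, safe}.
RISK, SAFE = frozenset({"risk"}), frozenset({"safe"})
THETA = RISK | SAFE
m_t1 = {RISK: 0.6, SAFE: 0.1, THETA: 0.3}   # mass on THETA = explicit uncertainty
m_t2 = {RISK: 0.7, SAFE: 0.1, THETA: 0.2}
print(dempster_combine(m_t1, m_t2))          # belief in "risk" sharpens over time
```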
19 pages, 829 KB  
Article
Logistics Performance Assessment in the Ceramic Industry: Applying Pareto Diagram and FMEA to Improve Operational Processes
by Carla Monique dos Santos Cavalcanti, Claudia Editt Tornero Becerra, Amanda Duarte Feitosa, André Philippi Gonzaga de Albuquerque, Fagner José Coutinho de Melo and Denise Dumke de Medeiros
Standards 2026, 6(1), 1; https://doi.org/10.3390/standards6010001 - 24 Dec 2025
Abstract
Logistics involves planning and managing resources to meet customer demands. Its effectiveness depends not only on time and process coordination but also on the performance of logistics operators, whose actions directly affect customer satisfaction. Although operational risks are inherent to logistics, customer-oriented service failures are often overlooked in traditional risk assessment. To address this gap, this study proposes an integrated approach that combines a Pareto Diagram and Failure Mode and Effects Analysis (FMEA) within the ISO 31000 risk assessment framework. This structured method enables the identification and prioritization of logistics failures based on customer complaints, thereby supporting data-driven decision-making and continuous service improvement. Applied to a real-world case in a ceramic production line specializing in tableware manufacturing, the method identified and evaluated key logistics failures, particularly those related to late deliveries and damaged goods. Based on these findings, improvement actions were proposed to reduce the recurrence of these issues. This study contributes a structured, practical, and replicable approach for organizations to introduce risk assessment practices and enhance the service quality of logistics management. It also advances the literature by shifting the focus from internal production failures to customer-driven service risks, offering strategic insights for improving reliability and operational performance.
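FMEA scores each failure mode as severity × occurrence × detection (the risk priority number, RPN), and the Pareto diagram then isolates the vital few; a sketch with invented complaint-derived failure modes:

```python
import pandas as pd

# Hypothetical complaint-driven failure modes (the paper's actual list
# comes from customer complaints in the ceramic tableware case).
fmea = pd.DataFrame({
    "failure_mode": ["late delivery", "damaged goods", "wrong item",
                     "incomplete order", "billing error"],
    "severity":    [8, 9, 6, 7, 4],
    "occurrence":  [7, 5, 3, 4, 2],
    "detection":   [5, 6, 4, 3, 2],   # higher = harder to detect
})
fmea["RPN"] = fmea.severity * fmea.occurrence * fmea.detection
fmea = fmea.sort_values("RPN", ascending=False).reset_index(drop=True)
fmea["cum_share"] = fmea.RPN.cumsum() / fmea.RPN.sum()

# Pareto cut: prioritize the modes accounting for ~80% of cumulative RPN
priority = fmea[fmea.cum_share.shift(fill_value=0) < 0.8]
print(priority[["failure_mode", "RPN", "cum_share"]])
```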
27 pages, 897 KB  
Review
Large Language Models for Cardiovascular Disease, Cancer, and Mental Disorders: A Review of Systematic Reviews
by Andreas Triantafyllidis, Sofia Segkouli, Stelios Kokkas, Anastasios Alexiadis, Evdoxia Eirini Lithoxoidou, George Manias, Athos Antoniades, Konstantinos Votis and Dimitrios Tzovaras
Healthcare 2026, 14(1), 45; https://doi.org/10.3390/healthcare14010045 - 24 Dec 2025
Abstract
Background/Objective: The use of Large Language Models (LLMs) has recently gained significant interest from the research community toward the development and adoption of Generative Artificial Intelligence (GenAI) solutions for healthcare. The present work introduces the first meta-review (i.e., review of systematic reviews) in the field of LLMs for chronic diseases, focusing particularly on cardiovascular disease, cancer, and mental disorders, to identify their value in patient care and the challenges for their implementation and clinical application. Methods: A literature search in the bibliographic databases of PubMed and Scopus was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to identify systematic reviews incorporating LLMs. The original studies included in the reviews were synthesized according to their target disease, specific application, LLMs used, data sources, accuracy, and key outcomes. Results: The literature search identified five systematic reviews meeting our inclusion and exclusion criteria, which examined 81 unique LLM-based solutions. The highest percentage of the solutions targeted mental disorders (86%), followed by cancer (7%) and cardiovascular disease (6%), indicating a strong research focus on mental health. Generative Pre-trained Transformer (GPT)-family models were used most frequently (~55%), followed by Bidirectional Encoder Representations from Transformers (BERT) variants (~40%). Key application areas included depression detection and classification (38%), suicidal ideation detection (7%), question answering based on treatment guidelines and recommendations (7%), and emotion classification (5%). Study aims and designs were highly heterogeneous, and methodological quality was generally moderate with frequent risk-of-bias concerns. Reported performance varied widely across domains and datasets, and many evaluations relied on fictional vignettes or non-representative data, limiting generalisability. The most significant challenges identified in the development and evaluation of LLMs include inconsistent accuracy, bias detection and mitigation, model transparency, data privacy, the need for continual human oversight, ethical concerns and guidelines, and the design and conduct of high-quality studies. Conclusions: While LLMs show promise for screening, triage, decision support, and patient education—particularly in mental health—the current literature is descriptive and constrained by data, transparency, and safety gaps. We recommend prioritizing rigorous real-world evaluations, diverse benchmark datasets, bias auditing, and governance frameworks before clinical deployment and broad adoption of LLMs.
(This article belongs to the Special Issue Smart and Digital Health)
12 pages, 1111 KB  
Article
Perioperative Cerebral Protection and Monitoring of Acute Stanford Type A Aortic Dissection: A Retrospective Cohort Study
by Yi Jiang, Jianing Wang, Chang Liu, Yong Liu, Lin Mi, Tian Fang, Yongqing Cheng, Hoshun Chong, Dongjin Wang and Yunxing Xue
J. Cardiovasc. Dev. Dis. 2026, 13(1), 12; https://doi.org/10.3390/jcdd13010012 - 24 Dec 2025
Abstract
Background: Optimal cerebral protection strategies for acute Stanford type A aortic dissection (aTAAD) surgery remain controversial. This study aimed to evaluate the role of near-infrared spectroscopy (NIRS)-guided monitoring and its association with clinical outcomes. Methods: We retrospectively analyzed 619 patients undergoing aTAAD surgery (Hemi-Arch, Total-Arch, or Arch-Stent procedures). Intraoperative cerebral oxygenation was monitored using NIRS, with the magnitude of desaturation quantified as ΔNIRS. We assessed correlations between ΔNIRS and nasopharyngeal temperature, employed generalized additive models (GAM) to analyze nonlinear relationships with major adverse cardiovascular events (MACE), and used piecewise logistic regression to identify procedure-specific ΔNIRS risk thresholds. Results: ΔNIRS showed a significant positive correlation with lower temperatures in Total-Arch (R = 0.486, p < 0.001) and Arch-Stent (R = 0.216, p < 0.001) groups. GAM analysis revealed a nonlinear, accelerating relationship between higher ΔNIRS and increased log odds of MACE in Hemi-Arch and Total-Arch groups. Procedure-specific ΔNIRS thresholds were identified: 8.5% for Hemi-Arch, 19.6% for Total-Arch, and 20.9% for Arch-Stent. Patients with ΔNIRS above these thresholds had significantly higher rates of stroke and MACE. Conclusions: This study identifies ΔNIRS as a significant, procedure-dependent intraoperative monitoring indicator in aTAAD surgery, and the proposed risk thresholds provide a rationale for real-time NIRS-guided clinical decision-making.
(This article belongs to the Section Cardiac Surgery)
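One common way to fit the piecewise logistic model behind such thresholds is a grid search over candidate breakpoints with a hinge term; a sketch on synthetic data (the paper's exact estimation procedure is not given in the abstract):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def best_threshold(delta_nirs, mace, candidates):
    """Grid-search a ΔNIRS breakpoint for a piecewise (hinge) logistic model:
    logit(MACE) = b0 + b1*x + b2*max(x - t, 0); the t minimizing deviance
    serves as the procedure-specific risk threshold."""
    best = (np.inf, None)
    for t in candidates:
        X = np.column_stack([delta_nirs, np.maximum(delta_nirs - t, 0.0)])
        p = LogisticRegression(C=1e6).fit(X, mace).predict_proba(X)[:, 1]
        best = min(best, (log_loss(mace, p), t))
    return best[1]

# Synthetic data with a true accelerating risk above ~20% desaturation:
rng = np.random.default_rng(3)
x = rng.uniform(0, 40, 600)
p = 1 / (1 + np.exp(-(-3 + 0.25 * np.maximum(x - 20, 0))))
y = rng.binomial(1, p)
print(f"estimated threshold: {best_threshold(x, y, np.arange(5, 35, 0.5)):.1f}%")
```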
29 pages, 1839 KB  
Article
Efficient Selection of Investment Portfolios in Real-World Markets: A Multi-Objective Optimization Approach
by Antonio J. Hidalgo-Marín, Antonio J. Nebro and José García-Nieto
Algorithms 2026, 19(1), 20; https://doi.org/10.3390/a19010020 - 24 Dec 2025
Abstract
As financial markets become increasingly complex, optimizing investment portfolios under multiple conflicting objectives has become a central challenge for decision-makers. This paper presents a comprehensive benchmarking framework for multi-objective portfolio optimization based on metaheuristics, designed to operate on real-world financial data. The framework integrates data preprocessing and optimization using four state-of-the-art algorithms: NSGA-II, MOEA/D, SMS-EMOA, and SMPSO. Using historical data from over 11,000 assets listed on U.S. exchanges, including ARCA, NYSE, NASDAQ, OTC, AMEX, and BATS, we define a suite of benchmark scenarios with increasing dimensionality and constraint complexity. Our results highlight algorithmic strengths and limitations, reveal significant trade-offs between return and risk, and demonstrate the effectiveness of multi-objective metaheuristics in constructing diversified, high-performance investment portfolios. Each portfolio is encoded as a real-valued vector combining asset selection and allocation, enabling fine-grained diversification control. All datasets and source code are publicly available to ensure reproducibility.
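The real-valued encoding and the return–risk trade-off can be made concrete independently of any particular metaheuristic: decode a weight vector onto the simplex, evaluate both objectives, and keep the non-dominated set. A numpy-only sketch with invented asset statistics (in the paper, NSGA-II and the other algorithms drive the search rather than random sampling):

```python
import numpy as np

def evaluate(weights_raw, mean_returns, cov):
    """Decode a real-valued portfolio vector by normalizing it onto the
    simplex, then return the objective pair (expected return, risk)."""
    w = np.clip(weights_raw, 0, None)
    w = w / w.sum()
    return w @ mean_returns, np.sqrt(w @ cov @ w)

def non_dominated(points):
    """Keep portfolios not dominated in (maximize return, minimize risk)."""
    keep = []
    for i, (r_i, s_i) in enumerate(points):
        if not any(r_j >= r_i and s_j <= s_i and (r_j, s_j) != (r_i, s_i)
                   for j, (r_j, s_j) in enumerate(points) if j != i):
            keep.append(i)
    return keep

rng = np.random.default_rng(4)
mu = rng.normal(0.08, 0.04, 20)                       # 20 assets, toy statistics
A = rng.normal(0, 0.1, (20, 20)); cov = A @ A.T / 20  # a valid covariance matrix
pop = rng.uniform(0, 1, (200, 20))                    # random candidate portfolios
front = non_dominated([evaluate(w, mu, cov) for w in pop])
print(f"{len(front)} portfolios on the approximate Pareto front")
```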
20 pages, 609 KB  
Article
Prescriptive Analytics for Sustainable Financial Systems: A Causal–Machine Learning Framework for Credit Risk Management and Targeted Marketing
by Jaeyung Huh
Systems 2026, 14(1), 16; https://doi.org/10.3390/systems14010016 - 24 Dec 2025
Abstract
Financial institutions increasingly rely on data-driven decision systems; however, many operational models remain purely predictive, failing to account for confounding biases inherent in observational data. In credit settings characterized by selective treatment assignment, this limitation can lead to erroneous policy assessments and the accumulation of “methodological debt”. To address this issue, we propose an “Estimate → Predict & Evaluate” framework that integrates Double Machine Learning (DML) with practical MLOps strategies. The framework first employs DML to mitigate selection bias and estimate unbiased Conditional Average Treatment Effects (CATEs), which are then distilled into a lightweight Target Model for real-time decision-making. This architecture further supports Off-Policy Evaluation (OPE), creating a “Causal Sandbox” for simulating alternative policies without risky experimentation. We validated the framework using two real-world datasets: a low-confounding marketing dataset and a high-confounding credit risk dataset. While uplift-based segmentation successfully identified responsive customers in the marketing context, our DML-based approach proved indispensable in high-risk credit environments. It explicitly identified “Sleeping Dogs”—customers for whom intervention paradoxically increased delinquency risk—whereas conventional heuristic models failed to detect these adverse dynamics. The distilled model demonstrated superior stability and provided consistent inputs for OPE. These findings suggest that the proposed framework offers a systematic pathway for integrating causal inference into financial decision-making, supporting transparent, evidence-based, and sustainable policy design.
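The debiasing step in DML is cross-fitted residualization: regress the outcome and the treatment on confounders with flexible learners, then regress residual on residual. A minimal partialling-out sketch on synthetic, confounded credit-style data (the paper goes further, estimating CATEs and distilling them into a target model; this shows only the core debiasing idea):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
n = 4000
X = rng.normal(size=(n, 5))                # confounders (e.g., credit features)
e = 1 / (1 + np.exp(-X[:, 0]))             # treatment depends on X: selection bias
T = rng.binomial(1, e)
tau = 0.5 + X[:, 1]                        # heterogeneous true effect, mean 0.5
Y = tau * T + X[:, 0] + rng.normal(size=n)

# Partialling-out DML: residualize Y and T on X with cross-fitted ML nuisances,
# then regress residual on residual to recover the (average) treatment effect.
y_res = Y - cross_val_predict(RandomForestRegressor(n_estimators=100), X, Y, cv=5)
t_res = T - cross_val_predict(RandomForestClassifier(n_estimators=100), X, T,
                              cv=5, method="predict_proba")[:, 1]
ate = (t_res * y_res).sum() / (t_res ** 2).sum()
print(f"DML ATE estimate: {ate:.2f} (true average effect = 0.50)")
```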
13 pages, 2449 KB  
Article
AI Decision-Making Performance in Maternal–Fetal Medicine: Comparison of ChatGPT-4, Gemini, and Human Specialists in a Cross-Sectional Case-Based Study
by Matan Friedman, Amit Slouk, Noa Gonen, Laura Guzy, Yael Ganor Paz, Kira Nahum Sacks, Amihai Rottenstreich, Eran Weiner, Ohad Gluck and Ilia Kleiner
J. Clin. Med. 2026, 15(1), 117; https://doi.org/10.3390/jcm15010117 - 24 Dec 2025
Abstract
Background/Objectives: Large Language Models (LLMs), including ChatGPT-4 and Gemini, are increasingly incorporated into clinical care; however, their reliability within maternal–fetal medicine (MFM), a high-risk field in which diagnostic and management errors may affect both the pregnant patient and the fetus, remains uncertain. We evaluated the alignment of AI-generated case management recommendations with those of MFM specialists, emphasizing accuracy, agreement, and clinical relevance. Study Design and Setting: Cross-sectional study with blinded online evaluation (November–December 2024); evaluators were blinded to responder identity (AI vs. human), and case order and response labels were randomized for each evaluator using a computer-generated sequence to reduce order and identification bias. Methods: Twenty hypothetical MFM cases were constructed, allowing standardized presentation of complex scenarios without patient-identifiable data and enabling consistent comparison of AI-generated and human specialist recommendations. Responses were generated by ChatGPT-4, Gemini, and three MFM specialists, then assessed by 22 blinded board-certified MFM evaluators using a 10-point Likert scale. Agreement was measured with Spearman’s rho (ρ) and Cohen’s kappa (κ); accuracy differences were measured with Wilcoxon signed-rank tests. Results: ChatGPT-4 exhibited moderate alignment (mean 6.6 ± 2.95; ρ = 0.408; κ = 0.232, p < 0.001), performing well in routine, guideline-driven scenarios (e.g., term oligohydramnios, well-controlled gestational hypertension, GDMA1). Gemini scored 7.0 ± 2.64, demonstrating effectively no consistent inter-rater agreement (κ = −0.024, p = 0.352), indicating that although mean scores were slightly higher, evaluators varied widely in how they judged individual Gemini responses. No significant difference was found between ChatGPT-4 and clinicians in median accuracy scores (Wilcoxon p = 0.18), while Gemini showed significantly lower accuracy (p < 0.01). Model performance varied primarily by case complexity: agreement was higher in straightforward, guideline-based scenarios and more variable in complex cases, whereas no consistent pattern was observed by gestational age or specific clinical domain across the 20 cases. Conclusions: AI shows promise in routine MFM decision-making but remains constrained in complex cases, where models sometimes under-prioritize maternal–fetal risk trade-offs or incompletely address alternative management pathways, warranting cautious integration into clinical practice. Generalizability is limited by the small number of simulated cases and the use of hypothetical vignettes rather than real-world clinical encounters.
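All three agreement statistics used here are single calls in scipy and scikit-learn; a sketch with invented Likert scores showing the exact functions (spearmanr, cohen_kappa_score, wilcoxon):

```python
import numpy as np
from scipy.stats import spearmanr, wilcoxon
from sklearn.metrics import cohen_kappa_score

# Hypothetical 10-point Likert accuracy scores for 20 cases (invented numbers).
rng = np.random.default_rng(6)
human = rng.integers(5, 11, 20)
model = np.clip(human + rng.integers(-2, 3, 20), 1, 10)

rho, p_rho = spearmanr(human, model)          # rank agreement across cases
kappa = cohen_kappa_score(human, model)       # chance-corrected exact agreement
stat, p_wil = wilcoxon(human, model)          # paired difference test
print(f"rho={rho:.2f} (p={p_rho:.3f}), kappa={kappa:.2f}, Wilcoxon p={p_wil:.3f}")
```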
32 pages, 1696 KB  
Article
Financial Statement Fraud Detection Through an Integrated Machine Learning and Explainable AI Framework
by Tsolmon Sodnomdavaa and Gunjargal Lkhagvadorj
J. Risk Financial Manag. 2026, 19(1), 13; https://doi.org/10.3390/jrfm19010013 - 24 Dec 2025
Abstract
Financial statement fraud remains a substantial risk in environments marked by weak regulatory oversight and information asymmetry. This study develops a decision-centric framework that integrates machine learning, explainable artificial intelligence, and decision curve analysis to improve fraud detection under severe class imbalance. Using 969 firm-year observations from 132 Mongolian firms (2013–2024), we evaluate 21 financial ratios with models including Random Forest, XGBoost, LightGBM, MLP, TabNet, and a Stacking Ensemble trained with SMOTE and class-weighted learning. Performance was assessed using PR-AUC, F1-score, recall, and DeLong-based significance testing. The Stacking Ensemble achieved the strongest results (PR-AUC = 0.93; F1 = 0.83), outperforming both classical and modern baseline models. Interpretability analyses (SHAP, LIME, and counterfactual explanations) consistently identified leverage, profitability, and liquidity indicators as dominant drivers of fraud risk, supported by a SHAP Stability Index of 0.87. Decision curve analysis showed that calibrated thresholds improved decision efficiency by 7–9% and reduced over-audit costs by 3–4%, while an audit cost simulation estimated annual savings of 80–100 million MNT. Overall, the proposed ML–XAI–DCA framework offers a transparent, interpretable, and cost-efficient approach for enhancing fraud detection in emerging-market contexts with limited textual disclosures.
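The imbalance-handling and stacking pipeline maps directly onto scikit-learn and imbalanced-learn; a compact sketch on synthetic fraud-style data (the paper's base learners also include XGBoost, LightGBM, MLP, and TabNet, and it tunes decision thresholds via decision curve analysis):

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import average_precision_score, f1_score

# Imbalanced stand-in for firm-year fraud labels (the paper uses 21 ratios).
X, y = make_classification(n_samples=3000, n_features=21, weights=[0.95],
                           random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=7)
X_bal, y_bal = SMOTE(random_state=7).fit_resample(X_tr, y_tr)  # oversample minority

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=7)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_bal, y_bal)

proba = stack.predict_proba(X_te)[:, 1]
print(f"PR-AUC = {average_precision_score(y_te, proba):.2f}, "
      f"F1 = {f1_score(y_te, (proba > 0.5).astype(int)):.2f}")
```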
14 pages, 354 KB  
Review
Should Neurogenic Supine Hypertension Be Treated? Insights from Hypertension-Mediated Organ Damage Studies—A Narrative Review
by Cristiano Fava, Federica Stocchetti and Sara Bonafini
Biomedicines 2026, 14(1), 40; https://doi.org/10.3390/biomedicines14010040 - 24 Dec 2025
Abstract
Neurodegenerative synucleinopathies—including Parkinson’s disease, multiple system atrophy, pure autonomic failure, and dementia with Lewy bodies—often feature cardiovascular autonomic dysfunction. Neurogenic orthostatic hypotension (nOH) is common and symptomatic, while neurogenic supine hypertension (nSH) is less frequent but may carry long-term cardiovascular risks. Lifestyle measures are first-line for managing nSH, yet persistent hypertension unresponsive to nonpharmacological strategies presents a treatment dilemma. Limited trial data and unclear guidelines make it difficult to determine when antihypertensive therapy is appropriate. Evidence from studies on hypertension-mediated organ damage (HMOD)—assessed through markers such as carotid intima-media thickness, pulse wave velocity, left ventricular hypertrophy, estimated glomerular filtration rate, and white matter hyperintensities—suggests that nSH, rather than the underlying neurodegenerative disorder, drives vascular, cardiac, renal, and cerebral injury. Therefore, treatment decisions should be individualized. While antihypertensive therapy may help prevent subclinical organ damage, clinicians must balance this benefit against the risk of worsening nOH and further compromising overall prognosis.
(This article belongs to the Section Neurobiology and Clinical Neuroscience)