Search Results (5,292)

Search Parameters:
Keywords = predictive uncertainty

31 pages, 13729 KB  
Article
Stage-Wise SOH Prediction Using an Improved Random Forest Regression Algorithm
by Wei Xiao, Jun Jia, Wensheng Gao, Haibo Li, Hong Xu, Weidong Zhong and Ke He
Electronics 2026, 15(2), 287; https://doi.org/10.3390/electronics15020287 - 8 Jan 2026
Abstract
In complex energy storage operating scenarios, batteries seldom undergo the complete charge–discharge cycles required for periodic capacity calibration. Methods based on accelerated aging experiments can indicate possible aging paths; however, due to uncertainties such as changing operating conditions, environmental variations, and manufacturing inconsistencies, the degradation information obtained from such experiments may not apply to the entire lifecycle. To address this, we developed a stage-wise state-of-health (SOH) prediction approach that combines offline training with online updating. During the offline training phase, multiple single-cell experiments were conducted under various combinations of depth of discharge (DOD) and C-rate. Multi-dimensional health features (HFs) were extracted, and an accelerated aging probability p_AA was defined. Based on the correlation statistics k_HF between the HFs, the SOH, and p_AA, all cells in the dataset were divided into general early, middle, and late aging stages. For each stage, cells were further classified by their longevity (long, medium, and short), and multiple models were trained offline for each category. The results show that models trained on cells following similar aging paths achieve significantly better performance than a model trained on all data combined. Meanwhile, HF optimization was performed via a three-step process: an initial screening based on expert knowledge, a second screening using Spearman correlation coefficients, and an automatic feature importance ranking using a random forest regression (RFR) model. The proposed method is innovative in the following ways: (1) The stage-wise multi-model strategy significantly improves SOH prediction accuracy across the entire lifecycle, maintaining the mean absolute percentage error (MAPE) within 1%. (2) The improved model provides uncertainty quantification, issuing a warning signal at least 50 cycles before the onset of accelerated aging. (3) The analysis of feature importance from the model outputs allows the indirect identification of the primary aging mechanisms at different stages. (4) The model is robust against missing or low-quality HFs: if certain features cannot be obtained or are of poor quality, the prediction process does not fail.
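The second step of the HF screening above uses Spearman rank correlation between each health feature and the SOH. A minimal stdlib sketch of that step, assuming a hypothetical cutoff of |rho| >= 0.5 (the abstract does not state the threshold, and the feature names are invented):

```python
def _ranks(xs):
    # Average ranks; tied values share the mean of their rank positions.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman rho = Pearson correlation of the rank vectors.
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

def screen_features(features, soh, threshold=0.5):
    # Keep only features whose |rho| with SOH clears the (hypothetical) cutoff.
    return {name: spearman(vals, soh)
            for name, vals in features.items()
            if abs(spearman(vals, soh)) >= threshold}
```

Because Spearman works on ranks, the screen is insensitive to monotone rescaling of a feature, which suits HFs extracted on very different physical scales.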

24 pages, 1787 KB  
Article
Uncertainty-Aware Machine Learning for NBA Forecasting in Digital Betting Markets
by Matteo Montrucchio, Enrico Barbierato and Alice Gatti
Information 2026, 17(1), 56; https://doi.org/10.3390/info17010056 - 8 Jan 2026
Abstract
This study introduces a fully uncertainty-aware forecasting framework for NBA games that integrates team-level performance metrics, rolling-form indicators, and spatial shot-chart embeddings. The predictive backbone is a recurrent neural network equipped with Monte Carlo dropout, yielding calibrated sequential probabilities. The model is evaluated against strong baselines including logistic regression, XGBoost, convolutional models, a GRU sequence model, and both market-only and non-market-only benchmarks. All experiments rely on strict chronological partitioning (train ≤ 2022, validation 2023, test 2024), ablation tests designed to eliminate any circularity with bookmaker odds, and cross-season robustness checks spanning 2012–2024. Predictive performance is assessed through accuracy, Brier score, log-loss, AUC, and calibration metrics (ECE/MCE), complemented by SHAP-based interpretability to verify that only pre-game information influences predictions. To quantify economic value, calibrated probabilities are fed into a frictionless betting simulator using fractional-Kelly staking, an expected-value threshold, and bootstrap-based uncertainty estimation. Empirically, the uncertainty-aware model delivers systematically better calibration than non-Bayesian baselines and benefits materially from the combination of shot-chart embeddings and recent-form features. Economic value emerges primarily in less-efficient segments of the market: the fused predictor outperforms both market-only and non-market-only variants on moneylines, while spreads and totals show limited exploitable edge, consistent with higher pricing efficiency. Sensitivity studies across Kelly multipliers, EV thresholds, odds caps, and sequence lengths confirm that the findings are robust to modelling and decision-layer perturbations. The paper contributes a reproducible, decision-focused framework linking uncertainty-aware prediction to economic outcomes, clarifying when predictive lift can be monetized in NBA markets, and outlining methodological pathways for improving robustness, calibration, and execution realism in sports forecasting.
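The decision layer described above, fractional-Kelly staking gated by an expected-value threshold, can be sketched as follows; the half-Kelly multiplier and 2% EV threshold are illustrative values, not necessarily the paper's settings:

```python
def kelly_fraction(p, decimal_odds):
    # Full-Kelly fraction for a binary bet: f* = (p*b - q) / b,
    # with b = decimal_odds - 1 (net odds) and q = 1 - p.
    b = decimal_odds - 1.0
    return (p * b - (1.0 - p)) / b

def stake(p, decimal_odds, kelly_multiplier=0.5, ev_threshold=0.02):
    # Bet only when expected value per unit staked clears the threshold;
    # size the bet with fractional Kelly and never return a negative stake.
    ev = p * decimal_odds - 1.0
    if ev < ev_threshold:
        return 0.0
    return max(0.0, kelly_multiplier * kelly_fraction(p, decimal_odds))
```

For example, a calibrated win probability of 0.55 at decimal odds of 2.0 gives a full-Kelly fraction of 0.10, so half-Kelly stakes 5% of the bankroll; at fair odds the EV gate suppresses the bet entirely.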

20 pages, 4322 KB  
Article
Research on UDE Control Strategy for Permanent Magnet Synchronous Motors Based on Symmetry Principle
by Hui Song, Shulong Liu, Haiyan Song and Ziqi Fan
Symmetry 2026, 18(1), 116; https://doi.org/10.3390/sym18010116 - 8 Jan 2026
Abstract
Permanent Magnet Synchronous Motors (PMSMs) are central to high-performance servo drives, yet their control accuracy is often compromised by parameter uncertainties and external disturbances. While the Uncertainty and Disturbance Estimator (UDE) offers enhanced robustness by treating such uncertainties as lumped disturbances, it suffers from significant integral windup under output saturation, degrading dynamic response. This paper proposes a symmetry-principle-based UDE control strategy for the PMSM speed loop, which simplifies parameter tuning through derived analytical expressions for PI gains. To address the windup issue, two anti-windup algorithms are introduced and critically compared: a piecewise tracking back-calculation method and an integral final value prediction algorithm. The key finding is that the integral final value prediction algorithm demonstrates superior performance. Simulation results show that it reduces the convergence time by 6.3 ms and the overshoot by 1.8% compared to the piecewise method. Experimental validation on an STM32F446-based platform confirms these findings. Under a 600 r/min step with load, the UDE controller with the integral final value prediction algorithm reduces speed overshoot by 15% compared to the piecewise algorithm and by 47% compared to the standard UDE controller without anti-windup. These results conclusively show that the proposed integrated strategy—combining symmetry-based UDE control with the integral final value prediction anti-windup algorithm—significantly improves the dynamic response, accuracy, and robustness of PMSM servo systems.
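For readers unfamiliar with anti-windup, the toy simulation below contrasts a saturated PI speed loop with and without classic tracking back-calculation. This is the generic textbook method, not the paper's piecewise variant nor its integral final value predictor, and the plant model and all gains are invented for illustration:

```python
def simulate(anti_windup=True, setpoint=0.8, dt=0.001, steps=5000):
    # Toy first-order "speed" plant tau*dy/dt = k*u - y driven by a
    # saturated PI controller. All numeric values are invented.
    tau, k = 0.1, 1.0
    kp, ki, tt, u_max = 2.0, 50.0, 0.1, 1.0
    y = integ = y_peak = 0.0
    for _ in range(steps):
        e = setpoint - y
        u_raw = kp * e + ki * integ
        u = min(max(u_raw, -u_max), u_max)       # actuator saturation
        back = (u - u_raw) / tt if anti_windup else 0.0
        integ += dt * (e + back)                 # back-calculation feedback
        y += dt * (k * u - y) / tau
        y_peak = max(y_peak, y)
    return y, y_peak
```

With back-calculation the integrator tracks the saturated actuator output, so far less charge accumulates during saturation and the overshoot at desaturation shrinks; without it the wound-up integrator holds the actuator at its limit well past the setpoint crossing.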

29 pages, 522 KB  
Article
Crowdfunding as an E-Commerce Mechanism: A Deep Learning Approach to Predicting Success Using Reduced Generative AI Embeddings
by Hakan Gunduz, Muge Klein and Ela Sibel Bayrak Meydanoglu
J. Theor. Appl. Electron. Commer. Res. 2026, 21(1), 28; https://doi.org/10.3390/jtaer21010028 - 8 Jan 2026
Abstract
Crowdfunding platforms like Kickstarter have reshaped early-stage financing by allowing entrepreneurs to connect directly with potential supporters. As a fast-expanding part of digital commerce, crowdfunding offers significant opportunities but also substantial risks for both entrepreneurs and platform operators, making predictive analytics an essential capability. Although crowdfunding shares some operational features with traditional e-commerce, its mix of financial uncertainty, emotionally charged storytelling, and fast-evolving social interactions makes it a distinct and more challenging forecasting problem. Accurately predicting campaign outcomes is especially difficult because of the high dimensionality and diversity of the underlying textual and behavioral data. These factors highlight the need for scalable, intelligent data science methods that can jointly exploit structured and unstructured information. To address these issues, this study proposes a novel AI-based predictive framework that integrates a Convolutional Block Attention Module (CBAM)-enhanced symmetric autoencoder for compressing high-dimensional Generative AI (GenAI) BERT embeddings with meta-heuristic feature selection and advanced classification models. The framework systematically couples attention-driven feature compression with optimization techniques—Genetic Algorithm (GA), Jaya, and Artificial Rabbit Optimization (ARO)—and then applies Long Short-Term Memory (LSTM) and Gradient Boosting Machine (GBM) classifiers. Experiments on a large-scale Kickstarter dataset demonstrate that the proposed approach attains 77.8% accuracy while reducing feature dimensionality by more than 95%, surpassing standard baseline methods. In addition to its technical merits, the study yields practical insights for platform managers and campaign creators, enabling more informed choices in campaign design, promotional tactics, and backer targeting. Overall, this work illustrates how advanced AI methodologies can strengthen predictive analytics in digital commerce, thereby enhancing the strategic impact and long-term sustainability of crowdfunding ecosystems.
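The meta-heuristic selectors mentioned above all search over binary feature masks. A toy, seeded genetic algorithm sketch of that idea, where the fitness function, the "useful" index set, and every parameter are invented stand-ins rather than the paper's setup:

```python
import random

def ga_select(n_features, fitness, pop_size=20, generations=30, seed=0):
    # Minimal elitist GA over binary feature masks: keep the top half,
    # refill with one-point crossover plus a single-bit mutation.
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_features)
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_features)] ^= 1   # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

USEFUL = {0, 3, 5}  # hypothetical "truly informative" feature indices

def toy_fitness(mask):
    # Reward selecting useful features, lightly penalise extras.
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & USEFUL) - 0.2 * len(chosen - USEFUL)
```

Because the survivors are carried over unchanged, the best fitness in the population never decreases from one generation to the next.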

27 pages, 10840 KB  
Article
Deep Multi-Task Forecasting of Net-Load and EV Charging with a Residual-Normalised GRU in IoT-Enabled Microgrids
by Muhammed Cavus, Jing Jiang and Adib Allahham
Energies 2026, 19(2), 311; https://doi.org/10.3390/en19020311 - 7 Jan 2026
Abstract
The increasing penetration of electric vehicles (EVs) and rooftop photovoltaics (PV) is intensifying the variability and uncertainty of residential net demand, thereby challenging real-time operation in smart grids and microgrids. The purpose of this study is to develop and evaluate an accurate and operationally relevant short-term forecasting framework that jointly models household net demand and EV charging behaviour. To this end, a Residual-Normalised Multi-Task GRU (RN-MTGRU) architecture is proposed, enabling the simultaneous learning of shared temporal patterns across interdependent energy streams while maintaining robustness under highly non-stationary conditions. Using one-minute resolution measurements of household demand, PV generation, EV charging activity, and weather variables, the proposed model consistently outperforms benchmark forecasting approaches across 1–30 min horizons, with the largest performance gains observed during periods of rapid load variation. Beyond predictive accuracy, the relevance of the proposed approach is demonstrated through a demand response case study, where forecast-informed control leads to substantial reductions in daily peak demand on critical days and a measurable annual increase in PV self-consumption. These results highlight the practical significance of the RN-MTGRU as a scalable forecasting solution that enhances local flexibility, supports renewable integration, and strengthens real-time decision-making in residential smart grid environments.
(This article belongs to the Special Issue Developments in IoT and Smart Power Grids)

27 pages, 1856 KB  
Article
Waypoint-Sequencing Model Predictive Control for Ship Weather Routing Under Forecast Uncertainty
by Marijana Marjanović, Jasna Prpić-Oršić and Marko Valčić
J. Mar. Sci. Eng. 2026, 14(2), 118; https://doi.org/10.3390/jmse14020118 - 7 Jan 2026
Abstract
Ship weather routing optimization has evolved from deterministic great-circle navigation to sophisticated frameworks that account for dynamic environmental conditions and operational constraints. This paper presents a waypoint-sequencing Model Predictive Control (MPC) approach for energy-efficient ship weather routing under forecast uncertainty. The proposed rolling horizon framework integrates neural network-based vessel performance models with ensemble weather forecasts to enable real-time route adaptation while balancing fuel efficiency, navigational safety, and path smoothness objectives. The MPC controller operates with a 6 h control horizon and 24 h prediction horizon, re-optimizing every 6 h using updated meteorological forecasts. A multi-objective cost function prioritizes fuel consumption (60%), safety considerations (30%), and trajectory smoothness (10%), with an exponential discount factor (γ = 0.95) to account for increasing forecast uncertainty. The framework discretizes planned routes into waypoints and optimizes heading angles and discrete speed options (12.0, 13.5, and 14.5 knots) at each control step. Validation using 21 transatlantic voyage scenarios with real hindcast weather data demonstrates the method’s capability to propagate uncertainties through ship performance models, yielding probabilistic estimates for attainable speed, fuel consumption, and estimated time of arrival (ETA). The methodology establishes a foundation for more advanced stochastic optimization approaches while offering immediate operational value through its computational tractability and integration with existing ship decision support systems.
(This article belongs to the Special Issue The Control and Navigation of Autonomous Surface Vehicles)
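The weighted, discounted cost described in the abstract (the 0.6/0.3/0.1 weights and γ = 0.95 come from the text; the per-step metric values and helper names below are invented) can be sketched as:

```python
def route_cost(steps, gamma=0.95, w_fuel=0.6, w_safety=0.3, w_smooth=0.1):
    # steps: list of (fuel, safety, smoothness) stage costs, assumed
    # already normalised to comparable scales. Later stages are
    # discounted because their forecasts are less certain.
    return sum((gamma ** k) * (w_fuel * f + w_safety * s + w_smooth * m)
               for k, (f, s, m) in enumerate(steps))

def best_speed(candidates, horizon_costs):
    # Pick the discrete speed option (12.0/13.5/14.5 kn in the paper)
    # whose predicted horizon incurs the lowest discounted cost.
    return min(candidates, key=lambda v: route_cost(horizon_costs[v]))
```

The discount factor makes the controller weight near-term stages most heavily, which is the standard way to express that a 24 h forecast is less trustworthy than a 6 h one.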

22 pages, 4530 KB  
Article
Ray Tracing Calibration Based on Local Phase Error Estimates for Rail Transit Wireless Channel Modeling
by Meng Lan, Jianfeng Liu, Meng Mei and Zhongwei Xu
Appl. Sci. 2026, 16(2), 606; https://doi.org/10.3390/app16020606 - 7 Jan 2026
Abstract
Ray tracing (RT) has become an important method for train-to-ground (T2G) wireless channel modeling due to its physical interpretability. In rail transit scenarios, RT suffers from modeling errors that arise from environmental reconstruction and uncertainties in electromagnetic parameters, as well as dynamic phase errors caused by the coherent multi-path superposition that such modeling errors further trigger. Phase errors significantly affect both the calibration accuracy and prediction precision of RT. Therefore, this paper proposes an intelligent RT calibration method based on local phase errors. The method builds a phase error distribution model and uses constraints from limited measurements to explicitly estimate and correct phase errors in RT-generated channel responses. Firstly, the method applies the Variational Expectation–Maximization (VEM) algorithm to optimize the phase error model, where the expectation step derives an approximate posterior distribution and the maximization step updates parameters conditioned on this posterior. Secondly, experiments are conducted using differentiable RT implemented in the Sionna library, which explicitly provides gradients of environmental and link parameters with respect to channel frequency responses, enabling end-to-end calibration. Finally, experimental results show that in railway scenarios, compared with phase-error-oblivious calibration and calibration based on a uniform phase error model, the proposed approach achieves average gains of about 10 dB at SNR = 0 dB and 20 dB at SNR = 30 dB.

24 pages, 4670 KB  
Article
X-HEM: An Explainable and Trustworthy AI-Based Framework for Intelligent Healthcare Diagnostics
by Mohammad F. Al-Hammouri, Bandi Vamsi, Islam T. Almalkawi and Ali Al Bataineh
Computers 2026, 15(1), 33; https://doi.org/10.3390/computers15010033 - 7 Jan 2026
Abstract
Intracranial Hemorrhage (ICH) remains a critical life-threatening condition where timely and accurate diagnosis using non-contrast Computed Tomography (CT) scans is vital to reduce mortality and long-term disability. Deep learning methods have shown strong potential for automated hemorrhage detection, yet most existing approaches lack confidence quantification and clinical interpretability, which limits their adoption in high-stakes care. This study presents X-HEM, an explainable hemorrhage ensemble model for reliable ICH detection on non-contrast head CT scans. The aim is to improve diagnostic accuracy, interpretability, and confidence for real-time clinical decision support. X-HEM integrates three convolutional backbones (VGG16, ResNet50, DenseNet121) through soft voting. Bayesian uncertainty is estimated using Monte Carlo Dropout, while Grad-CAM++ and SHAP provide spatial and global interpretability. Training and validation were conducted on the RSNA ICH dataset, with external testing on CQ500. The model achieved AUCs of 0.96 (RSNA) and 0.94 (CQ500), demonstrated well-calibrated confidence (low Brier/ECE), and provided explanations that aligned with radiologist-marked regions. The integration of ensemble learning, Bayesian uncertainty, and dual explainability enables X-HEM to deliver confidence-aware, interpretable ICH predictions suitable for clinical use.
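Soft voting simply averages the per-class probabilities of the three backbones. A minimal sketch, with a predictive-entropy flag standing in for the Bayesian uncertainty gate; the 0.6 entropy threshold is an illustrative value, not from the paper:

```python
import math

def soft_vote(prob_sets):
    # Average class-probability vectors from several models.
    n = len(prob_sets)
    return [sum(p[i] for p in prob_sets) / n
            for i in range(len(prob_sets[0]))]

def predict_with_flag(prob_sets, entropy_threshold=0.6):
    # Return (predicted class, flag_for_review). High entropy of the
    # averaged distribution marks a low-confidence case for a human
    # reader instead of an automatic call.
    probs = soft_vote(prob_sets)
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    label = max(range(len(probs)), key=lambda i: probs[i])
    return label, entropy > entropy_threshold
```

When the backbones agree confidently the averaged distribution is peaked and the flag stays off; disagreement or hedged outputs raise the entropy and trigger review.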

22 pages, 321 KB  
Review
Molecular and Genetic Biomarkers in Prostate Cancer Active Surveillance: Recent Developments and Future Perspectives
by Stephanie F. Smith, Robert D. Mills, Colin S. Cooper and Daniel S. Brewer
Genes 2026, 17(1), 71; https://doi.org/10.3390/genes17010071 - 6 Jan 2026
Abstract
Background/Objectives: Active surveillance (AS) has become the standard of care for many men with localised prostate cancer, aiming to avoid the overtreatment of indolent disease while maintaining oncological safety. Despite improvements in diagnostic techniques, misclassification at diagnosis and the limited ability to predict disease progression remain major challenges in AS. Novel molecular and genetic biomarkers, assessed through liquid biopsy approaches, offer the potential to refine patient selection and support risk-adapted monitoring in AS. Methods: We conducted a narrative review of biomarkers in the context of AS for prostate cancer, framing the discussion in terms of the challenges in AS and how biomarkers may address these. PubMed and Embase were searched for English-language peer-reviewed studies published between 2000 and 2025. International guidelines (AUA, EAU, NCCN, NICE) and reference lists were reviewed manually. Priority was given to large prospective cohorts, meta-analyses, and high-impact publications. Results: Blood-based assays such as PHI and the 4K score, urinary tests including ExoDx and SelectMDx, and the Prostate Urine Risk (PUR) signatures have all shown associations with disease progression or decisions to undergo earlier treatment. However, studies are often small, use surrogate endpoints, and lack validation in MRI-integrated cohorts. Biomarkers appear most informative in men with Gleason Grade 1 (GG1) disease, while evidence in GG2 cohorts is limited. Cost-effectiveness, heterogeneity of endpoints, and uncertainty in managing discordant biomarker and MRI results remain barriers to clinical adoption. Conclusions: Molecular and genetic biomarkers show promise for improving AS by reducing diagnostic misclassification and enhancing prediction of progression. Future research should define clinically relevant cut-offs, clarify integration with MRI, and evaluate longitudinal use. Demonstrating utility in contemporary cohorts could enable the development of biomarker-guided, personalised AS that maintains safety while minimising harm.
29 pages, 849 KB  
Review
A Review of Spacecraft Aeroassisted Orbit Transfer Approaches
by Lu Yang, Yawen Jiang, Wenhua Cheng, Jinyan Xue, Yasheng Zhang and Shuailong Zhao
Appl. Sci. 2026, 16(2), 573; https://doi.org/10.3390/app16020573 - 6 Jan 2026
Abstract
Aerodynamic manoeuvring technology for spacecraft actively utilizes aerodynamic forces to alter orbital trajectories. This approach not only substantially reduces propellant consumption but also expands the range of accessible orbits, representing a key technological pathway to address the demands of increasingly complex yet cost-effective space missions. The theoretical prototype of this technology was proposed by Howard London. Over the course of more than half a century of development, it has evolved into four distinct modes: aeroglide, aerocruise, aerobang, and aerogravity assist. These modes have been engineered and applied in scenarios such as in-orbit manoeuvring of reusable vehicles, rapid response to space missions, and interplanetary exploration. Our research centers on two core domains: trajectory optimization and control guidance. Trajectory optimization employs numerical methods such as pseudo-spectral techniques and sequential convex optimization to achieve multi-objective optimization of fuel and time under constraints, including heat flux and overload. Control guidance focuses on standard orbital guidance and predictive correction guidance, progressively evolving into adaptive and robust control to address atmospheric uncertainties and the challenges of strong nonlinear coupling. Although breakthroughs have been achieved in deep-space exploration missions, critical challenges remain, including constructing high-fidelity models, enhancing real-time computational efficiency, ensuring the explainability of artificial intelligence methods, and designing integrated framework architectures. As these technical hurdles are progressively overcome, this technology will find broader engineering applications in diverse space missions such as lunar return and in-orbit servicing, driving continuous innovation in the field of space dynamics and control.
(This article belongs to the Section Aerospace Science and Engineering)

43 pages, 4289 KB  
Article
A Stochastic Model Approach for Modeling SAG Mill Production and Power Through Bayesian Networks: A Case Study of the Chilean Copper Mining Industry
by Manuel Saldana, Edelmira Gálvez, Mauricio Sales-Cruz, Eleazar Salinas-Rodríguez, Jonathan Castillo, Alessandro Navarra, Norman Toro, Dayana Arias and Luis A. Cisternas
Minerals 2026, 16(1), 60; https://doi.org/10.3390/min16010060 - 6 Jan 2026
Abstract
Semi-autogenous (SAG) milling represents one of the most energy-intensive and variable stages of copper mineral processing. Traditional deterministic models often fail to capture the nonlinear dependencies and uncertainty inherent in industrial operating variables such as granulometry, solids percentage in the feed, or hardness. This work develops and validates a stochastic model based on discrete Bayesian networks (BNs) to represent the causal relationships governing SAG Production and SAG Power under uncertainty or partial knowledge of explanatory variables. Discretization is adopted for methodological reasons as well as for operational relevance, since SAG plant decisions are typically made using threshold-based categories. Using operational data from a Chilean mining operation, the fitted model integrates expert-guided structure learning (Hill-Climbing with BDeu/BIC scores) and Bayesian parameter estimation with Dirichlet priors. Although validation indicators show high predictive performance (R2 ≈ 0.85–0.90, RMSE < 0.5 bin, and micro-AUC ≈ 0.98), the primary purpose of the BN is not exact regression but explainable causal inference and probabilistic scenario evaluation. Sensitivity analysis identified water feed and solids percentage as key drivers of throughput (SAG Production), while rotational speed and pressure governed SAG Power behavior. The BN framework effectively balances accuracy and interpretability, offering an explainable probabilistic representation of SAG dynamics. These results demonstrate the potential of stochastic modeling to enhance process control and support uncertainty-aware decision making.
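The kind of probabilistic scenario evaluation a discrete BN supports can be illustrated with a toy network; the structure and all CPT numbers below are invented, not the learned Chilean-plant model:

```python
from itertools import product

# Toy discrete BN: Water and Speed are root causes; Production depends
# on both. States are coarse bins: 0 = "low", 1 = "high".
P_WATER = {0: 0.4, 1: 0.6}
P_SPEED = {0: 0.5, 1: 0.5}
# P(Production = high | water, speed), all values illustrative.
P_PROD_HI = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.5, (1, 1): 0.9}

def posterior_production(evidence):
    # P(Production | evidence) by exhaustive enumeration over the roots,
    # summing out any root that is not observed.
    score = {0: 0.0, 1: 0.0}
    for w, s in product((0, 1), (0, 1)):
        if evidence.get('water', w) != w or evidence.get('speed', s) != s:
            continue  # inconsistent with the observed evidence
        joint = P_WATER[w] * P_SPEED[s]
        score[1] += joint * P_PROD_HI[(w, s)]
        score[0] += joint * (1 - P_PROD_HI[(w, s)])
    z = score[0] + score[1]
    return {k: v / z for k, v in score.items()}
```

Enumeration is exponential in the number of hidden variables, so real BN libraries use smarter inference, but for a handful of coarse-binned plant variables it makes the scenario logic transparent: observing high water feed shifts the production distribution upward.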

24 pages, 3551 KB  
Article
Financial Performance Outcomes of AI-Adoption in Oil and Gas: The Mediating Role of Operational Efficiency
by Eldar Mardanov, Inese Mavlutova and Biruta Sloka
J. Risk Financial Manag. 2026, 19(1), 44; https://doi.org/10.3390/jrfm19010044 - 6 Jan 2026
Abstract
The oil and gas sector operates in a high-risk environment defined by capital intensity, regulatory uncertainty, and volatile commodity prices. Although Artificial Intelligence (AI) is widely promoted as a lever for profitability, the mechanisms through which AI adoption translates into financial outcomes remain insufficiently specified in the oil and gas literature. Grounded in the Resource-Based View and Technology Adoption Theory, this study combines bibliometric mapping of 201 Scopus-indexed publications (2010–2025) with a focused comparative case analysis of major players (BP and Shell), based on publicly reported operational and financial indicators (e.g., operating cost, uptime-related evidence, and return on average capital employed—ROACE). Keyword co-occurrence analysis identifies five thematic clusters showing that efficiency-oriented AI use cases (optimization, automation, predictive maintenance, and digital twins) dominate the research landscape. A thematic synthesis of five highly cited studies further indicates that AI-enabled operational improvements are most consistently linked to measurable cost, productivity, or revenue effects. Case evidence suggests that large-scale predictive maintenance and digital twin programs can support capital efficiency by reducing unplanned downtime and structural costs, contributing to more resilient ROACE trajectories amid price swings. Overall, the findings support a conceptual pathway in which operational efficiency is a primary channel through which AI can create financial value, while underscoring the need for future firm-level empirical mediation tests using standardized KPIs.

21 pages, 1410 KB  
Article
Do Large Language Models Know When They Lack Knowledge?
by Shuai Qin, Lianke Zhou, Liu Sun and Nianbin Wang
Electronics 2026, 15(2), 253; https://doi.org/10.3390/electronics15020253 - 6 Jan 2026
Abstract
Although Large Language Models (LLMs) excel in language tasks, producing fluent and seemingly high-quality text, their outputs are essentially probabilistic predictions rather than verified facts, rendering reliability unguaranteed. This issue is particularly pronounced when models lack the required knowledge, which significantly increases the risk of fabrications and misleading content. Therefore, understanding whether LLMs know when they lack knowledge is of critical importance. This work systematically evaluates leading LLMs on their ability to recognize knowledge insufficiency and examines several training-free techniques to foster this metacognitive capability, referred to as “integrity” throughout this research. For rigorous evaluation, this study first develops a new Question-Answering (Q&A) dataset called Honesty. Specifically, events emerging after the model’s deployment are utilized to generate “unknown questions,” ensuring they fall outside LLMs’ knowledge boundaries, while “known questions” are drawn from existing Q&A datasets; together these constitute the Honesty dataset. Subsequently, based on this dataset, systematic experiments are conducted using multiple representative LLMs (e.g., GPT-4o and DeepSeek-V3). The results reveal that semantic understanding and reasoning capabilities are the core factors influencing “integrity.” Furthermore, we find that well-crafted prompts markedly improve models’ integrity, and integrating them with probability- or consistency-based uncertainty evaluation methods yields even stronger performance. These findings highlight the considerable potential of LLMs to express uncertainty when they lack knowledge, and we hope these observations can lay the groundwork for developing more reliable models. Full article
(This article belongs to the Special Issue Trustworthy LLM: AIGC Detection, Alignment and Evaluation)
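The consistency-based uncertainty evaluation the abstract mentions can be sketched under the common assumption that disagreement among repeatedly sampled answers signals missing knowledge (function names and the abstention threshold are illustrative, not drawn from the paper):

```python
from collections import Counter

def consistency_uncertainty(answers: list[str]) -> float:
    """Fraction of sampled answers that disagree with the modal answer.
    High disagreement suggests the model may lack the required knowledge."""
    counts = Counter(a.strip().lower() for a in answers)
    modal_count = counts.most_common(1)[0][1]
    return 1 - modal_count / len(answers)

def should_abstain(answers: list[str], threshold: float = 0.5) -> bool:
    """Abstain (express uncertainty) when sampled answers are inconsistent."""
    return consistency_uncertainty(answers) >= threshold

samples = ["Paris", "paris", "Lyon", "Paris"]
print(consistency_uncertainty(samples))  # 0.25: 3 of 4 samples agree
print(should_abstain(samples))           # False
```

In practice the same model is queried several times at nonzero temperature and the answers are compared after normalization; probability-based methods would instead inspect token-level likelihoods.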
23 pages, 15684 KB  
Article
XGBoost-Based Susceptibility Model Exhibits High Accuracy and Robustness in Plateau Forest Fire Prediction
by Chuang Yang, Ping Yao, Qiuhua Wang, Shaojun Wang, Dong Xing, Yanxia Wang and Ji Zhang
Forests 2026, 17(1), 74; https://doi.org/10.3390/f17010074 - 6 Jan 2026
Abstract
Forest fire susceptibility prediction is essential for effective management, yet considerable uncertainty persists under future climate change, especially in climate-sensitive plateau regions. This study integrated MODIS fire data with climatic, topographic, vegetation, and anthropogenic variables to construct an Extreme Gradient Boosting (XGBoost) model for the Yunnan Plateau, a region highly prone to forest fires. Compared with Support Vector Machine and Random Forest models, XGBoost showed superior ability to capture nonlinear relationships and delivered the best performance, achieving an AUC of 0.907 and an overall accuracy of 0.831. The trained model was applied to climate projections under SSP1-2.6, SSP2-4.5, and SSP5-8.5 to assess future fire susceptibility. Results indicated that high-susceptibility periods primarily occur in winter and spring, driven by minimum temperature, average temperature, and precipitation. High-susceptibility areas are concentrated in dry-hot valleys and mountain basins with elevated temperatures and dense human activity. Under future climate scenarios, both the probability and spatial extent of forest fires are projected to increase, with a marked expansion after 2050, especially under SSP5-8.5. Although the XGBoost model demonstrates strong generalizability for plateau regions, uncertainties remain due to static vegetation, coarse anthropogenic data, and differences among climate models. Full article
(This article belongs to the Section Natural Hazards and Risk Management)
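The headline AUC of 0.907 refers to the ROC AUC, i.e. the probability that the model ranks a randomly chosen fire point above a randomly chosen non-fire point. A minimal self-contained sketch of that metric, on toy labels and scores rather than the paper's data:

```python
def roc_auc(labels: list[int], scores: list[float]) -> float:
    """ROC AUC as the fraction of (positive, negative) pairs where the
    positive receives the higher score; ties count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = fire pixel, 0 = non-fire pixel
y = [1, 1, 0, 0, 1]
p = [0.9, 0.7, 0.4, 0.6, 0.3]
print(round(roc_auc(y, p), 3))  # 4 of 6 pairs correctly ranked -> 0.667
```

This pairwise-ranking view explains why an AUC of 0.907 indicates strong discrimination independent of any particular susceptibility threshold, whereas the reported overall accuracy of 0.831 depends on the chosen cutoff.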
15 pages, 1843 KB  
Article
Comparing Methods for Uncertainty Estimation of Paraganglioma Growth Predictions
by Evi M. C. Sijben, Vanessa Volz, Tanja Alderliesten, Peter A. N. Bosman, Berit M. Verbist, Erik F. Hensen and Jeroen C. Jansen
J. Otorhinolaryngol. Hear. Balance Med. 2026, 7(1), 3; https://doi.org/10.3390/ohbm7010003 - 6 Jan 2026
Abstract
Background: Paragangliomas of the head and neck are rare, benign and indolent to slow-growing tumors. Not all tumors require immediate active intervention, and surveillance is a viable management strategy in a large proportion of cases. Treatment decisions are based on several tumor- and patient-related factors, with the tumor progression rate being a predominant determinant. Accurate prediction of tumor progression has the potential to significantly improve treatment decisions by helping to identify patients who are likely to require active treatment in the future. It furthermore enables better-informed timing for follow-up, allowing early intervention for those who will ultimately need it, and optimization of the use of resources (such as MRI scans). Crucial to this is having reliable estimates of the uncertainty associated with a future growth forecast, so that this can be taken into account in the decision-making process. Methods: For various tumor growth prediction models, two methods for uncertainty estimation were compared: a historical-based one and a Bayesian one. We also investigated how incorporating either tumor-specific or general estimates of auto-segmentation uncertainty impacts the results of growth prediction. The performance of the uncertainty estimates was examined both from a technical and a practical perspective. Study design: Method comparison study. Results: Data of 208 patients were used, comprising 311 paragangliomas and 1501 volume measurements, resulting in 2547 tumor growth predictions (a median of 10 predictions per tumor). As expected, the uncertainty increased with the length of the prediction horizon and decreased with the inclusion of more tumor measurement data in the prediction model. The historical method produced confidence intervals in which the actual value fell within the estimated 95% interval 94% of the time. However, these intervals were too wide to be clinically useful (often over 200% of the predicted volume) and showed poor ability to differentiate growing and stable tumors. The estimated confidence intervals of the Bayesian method were much narrower. However, the 95% credible intervals were too narrow, with the true tumor volume falling within them only 78% of the time, indicating underestimation of uncertainty and insufficient calibration. Despite this, the Bayesian method showed markedly better ability to distinguish between growing and stable tumors, which arguably has the most practical value. When combining all growth models, the Bayesian method using tumor-specific auto-segmentation uncertainties resulted in an 86% correct classification of growing and non-growing tumors. Conclusions: Of the methods evaluated for predicting paraganglioma progression, the Bayesian method is the most useful in the considered context, because it shows the best ability to discriminate between growing and non-growing tumors. To determine how these methods could be used and what their value is for patients, they should be further evaluated in a clinical setting. Full article
(This article belongs to the Section Head and Neck Surgery)
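The reported 94% vs. 78% figures are empirical coverage rates of nominal 95% intervals: the fraction of true volumes that actually land inside the predicted interval. A minimal sketch of that calibration check, on toy volumes rather than the study's data:

```python
def empirical_coverage(truths: list[float],
                       intervals: list[tuple[float, float]]) -> float:
    """Fraction of true values falling inside their predicted
    (lower, upper) interval; compare against the nominal level (e.g. 0.95).
    Below nominal -> intervals too narrow (overconfident, like the
    Bayesian method here); at or above nominal -> well or over-covered."""
    hits = sum(lo <= t <= hi for t, (lo, hi) in zip(truths, intervals))
    return hits / len(truths)

# Toy tumor volumes (cm^3) with predicted intervals
true_volumes = [1.2, 0.8, 2.5, 1.0]
cis = [(1.0, 1.5), (0.9, 1.1), (2.0, 3.0), (0.7, 1.4)]
print(empirical_coverage(true_volumes, cis))  # 0.75: 0.8 misses (0.9, 1.1)
```

Coverage alone does not capture clinical usefulness: as the abstract notes, the historical method was near-nominal (94%) yet its intervals were too wide to discriminate growing from stable tumors.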
