Search Results (90)

Search Parameters:
Keywords = time series extrapolation

27 pages, 16705 KB  
Article
Development of an Ozone (O3) Predictive Emissions Model Using the XGBoost Machine Learning Algorithm
by Esteban Hernandez-Santiago, Edgar Tello-Leal, Jailene Marlen Jaramillo-Perez and Bárbara A. Macías-Hernández
Big Data Cogn. Comput. 2026, 10(1), 15; https://doi.org/10.3390/bdcc10010015 (registering DOI) - 1 Jan 2026
Abstract
High concentrations of tropospheric ozone (O3) in urban areas pose a significant risk to human health. This study proposes an evaluation framework based on the XGBoost algorithm to predict O3 concentration, assessing the model’s capacity for seasonal extrapolation and spatial transferability. The experiment uses hourly air pollution data (O3, NO, NO2, and NOx) and meteorological factors (temperature, relative humidity, barometric pressure, wind speed, and wind direction) from six monitoring stations in the Monterrey Metropolitan Area, Mexico (22 September 2022 to 21 September 2023). In the preprocessing phase, the datasets were extended via feature engineering, including cyclic variables, rolling windows, and lag features, to capture temporal dynamics. The prediction models were optimized using a random search, with time-series cross-validation to prevent data leakage. The models were evaluated across a concentration range of 0.001 to 0.122 ppm, demonstrating high predictive accuracy, with a coefficient of determination (R2) of up to 0.96 and a root-mean-square error (RMSE) of 0.0034 ppm when predicting summer O3 concentrations without prior knowledge of that season. Spatial generalization was robust in residential areas (R2 > 0.90), but performance decreased in the industrial corridor (AQMS-NL03). Quantifying domain shift (Kolmogorov–Smirnov test) and applying Shapley additive explanations (SHAP) diagnostics tie this decrease to local complexity: the model effectively learns atmospheric inertia in stable areas but struggles with the stochastic effects of NOx titration driven by industrial emissions. These findings position the proposed approach as a reliable tool for “virtual detection” while highlighting the crucial role of environmental topology in model implementation. Full article
(This article belongs to the Special Issue Machine Learning and AI Technology for Sustainable Development)
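A minimal sketch of the pipeline this abstract describes (cyclic encodings, lag and rolling-window features, and a random search scored with ordered time-series folds) might look as follows; column names, window sizes, the synthetic data, and the search space are illustrative assumptions rather than details from the paper:

```python
# Sketch only: `raw` stands in for the hourly station data described in the abstract.
import numpy as np
import pandas as pd
from sklearn.model_selection import RandomizedSearchCV, TimeSeriesSplit
from xgboost import XGBRegressor

idx = pd.date_range("2022-09-22", periods=500, freq="h")
raw = pd.DataFrame({"O3": np.random.rand(500) * 0.1,
                    "NOx": np.random.rand(500),
                    "temp": 20 + np.random.randn(500)}, index=idx)

def add_features(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Cyclic encoding of hour-of-day so 23:00 and 00:00 stay adjacent.
    out["hour_sin"] = np.sin(2 * np.pi * out.index.hour / 24)
    out["hour_cos"] = np.cos(2 * np.pi * out.index.hour / 24)
    # Lag and rolling-window features capture short-term atmospheric inertia.
    for lag in (1, 3, 24):
        out[f"O3_lag{lag}"] = out["O3"].shift(lag)
    out["O3_roll24_mean"] = out["O3"].shift(1).rolling(24).mean()
    return out.dropna()

df = add_features(raw)
X, y = df.drop(columns="O3"), df["O3"]

search = RandomizedSearchCV(
    XGBRegressor(objective="reg:squarederror"),
    param_distributions={"max_depth": [3, 5, 7],
                         "learning_rate": [0.01, 0.05, 0.1],
                         "n_estimators": [200, 500, 1000]},
    n_iter=20,
    cv=TimeSeriesSplit(n_splits=5),  # ordered folds prevent look-ahead leakage
    scoring="neg_root_mean_squared_error",
)
search.fit(X, y)
```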

30 pages, 944 KB  
Article
WinStat: A Family of Trainable Positional Encodings for Transformers in Time Series Forecasting
by Cristhian Moya-Mota, Ignacio Aguilera-Martos, Diego García-Gil and Julián Luengo
Mach. Learn. Knowl. Extr. 2026, 8(1), 7; https://doi.org/10.3390/make8010007 - 29 Dec 2025
Viewed by 55
Abstract
Transformers for time series forecasting rely on positional encoding to inject temporal order into the permutation-invariant self-attention mechanism. Classical sinusoidal absolute encodings are fixed and purely geometric; learnable absolute encodings often overfit and fail to extrapolate, while relative or advanced schemes can impose substantial computational overhead without being sufficiently tailored to temporal data. This work introduces a family of window-statistics positional encodings that explicitly incorporate local temporal semantics into the representation of each timestamp. The base variant (WinStat) augments inputs with statistics computed over a sliding window; WinStatLag adds explicit lag-difference features; and hybrid variants (WinStatFlex, WinStatTPE, WinStatSPE) learn soft mixtures of window statistics with absolute, learnable, and semantic positional signals, preserving the simplicity of additive encodings while adapting to local structure and informative lags. We evaluate the proposed encodings against state-of-the-art alternatives on four heterogeneous benchmarks: Electricity Transformer Temperature (hourly variants), Individual Household Electric Power Consumption, New York City Yellow Taxi Trip Records, and a large-scale industrial time series from heavy machinery. All experiments use a controlled Transformer backbone with full self-attention to isolate the effect of positional information. Across datasets, the proposed methods consistently reduce mean squared error and mean absolute error relative to a strong Transformer baseline with sinusoidal positional encoding and state-of-the-art encodings for time series, with WinStatFlex and WinStatTPE emerging as the most effective variants. Ablation studies in which decoder inputs are randomly shuffled markedly degrade the proposed methods, supporting the conclusion that their gains arise from learned order-aware locality and semantic structure rather than incidental artifacts. A simple and reproducible heuristic for setting the sliding-window length—roughly one quarter to one third of the input sequence length—provides robust performance without the need for exhaustive tuning. Full article
(This article belongs to the Section Learning)
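To make the window-statistics idea concrete, here is a rough sketch (not the authors' code; the choice of statistics, the fixed random projection, and the shapes are assumptions). Each timestamp is annotated with summary statistics of a causal sliding window, and the projected result is added to the token embedding like any additive positional encoding; the window follows the paper's heuristic of roughly one quarter of the input length.

```python
import numpy as np

def winstat_encoding(x: np.ndarray, window: int) -> np.ndarray:
    """x: (seq_len, d) series; returns (seq_len, 4) causal window statistics."""
    feats = np.zeros((x.shape[0], 4))
    for t in range(x.shape[0]):
        w = x[max(0, t - window + 1): t + 1]        # causal sliding window
        feats[t] = [w.mean(), w.std(), w.min(), w.max()]
    return feats

seq = np.random.randn(96, 1)
stats = winstat_encoding(seq, window=24)            # ~1/4 of input length heuristic
W = np.random.randn(4, 64) * 0.1                    # stands in for a trained projection
pe = stats @ W                                      # (96, 64) additive positional signal
# In a Transformer, `pe` would be added to the token embeddings before self-attention.
```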
30 pages, 1939 KB  
Article
Integrating Machine Learning and Scenario Modelling for Robust Population Forecasting Under Crisis and Data Scarcity
by Michael Politis, Nicholas Christakis, Zoi Dorothea Pana and Dimitris Drikakis
Mathematics 2025, 13(24), 4024; https://doi.org/10.3390/math13244024 - 18 Dec 2025
Viewed by 196
Abstract
This study introduces a new ensemble framework for demographic forecasting that systematically incorporates stylised crisis scenarios into rate and population projections. While scenario reasoning is common in qualitative foresight, its quantitative application in demography remains underdeveloped. Our method combines autoregressive lags, global predictors, and robust regression with a trend-anchoring mechanism, enabling stable projections from short official time series (15–20 years in length). Scenario shocks are operationalised through binary event flags for pandemics, refugee inflows, and financial crises, which influence fertility, mortality, and migration models before translating into cohort and population trajectories. Results demonstrate that shocks with strong historical precedent, such as Germany’s migration surges, are convincingly reproduced and leave enduring effects on projected populations. Conversely, weaker or non-recurrent shocks, typical in Norway and Portugal, produce muted scenario effects, with baseline momentum dominating long-term outcomes. At the national level, total population aggregates mitigate temporary shocks, while cohort-level projections reveal more pronounced divergences. Limitations include the short length of the training series, the reduction of signals when shocks do not surpass historical peaks, and the loss of granularity due to age grouping. Nevertheless, the framework shows how robust statistical ensembles can extend demographic forecasting beyond simple trend extrapolation, providing a formal and transparent quantitative tool for stress-testing population futures under both crisis and stability. Full article
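An illustrative sketch of one component rate model, assuming the structure the abstract outlines (autoregressive lags plus binary crisis flags, fitted with robust regression); the synthetic series, lag depth, and the use of HuberRegressor are our assumptions:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import HuberRegressor

# Placeholder data: a short annual fertility series and 0/1 crisis flags.
years = pd.RangeIndex(2000, 2023)
fertility_rate = pd.Series(1.5 + 0.01 * np.random.randn(len(years)), index=years)
flags = pd.DataFrame({"pandemic": (years >= 2020).astype(int),
                      "refugee_inflow": 0,
                      "financial_crisis": ((years >= 2008) & (years <= 2010)).astype(int)},
                     index=years)

def make_design(rate: pd.Series, flags: pd.DataFrame, n_lags: int = 3):
    # Autoregressive lags of the rate, concatenated with the event flags.
    X = pd.concat(
        [rate.shift(k).rename(f"lag{k}") for k in range(1, n_lags + 1)] + [flags],
        axis=1,
    ).dropna()
    return X, rate.loc[X.index]

X, y = make_design(fertility_rate, flags)
model = HuberRegressor().fit(X, y)  # robust to outlier years in a short series

# Scenario stress-test: set a crisis flag to 1 in future years and roll the
# fitted model forward year by year, feeding predictions back in as lags.
```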

10 pages, 366 KB  
Case Report
Reduced Ejection Fraction of the Systemic Right Ventricle and Severe Tricuspid Regurgitation: Medication or Surgery?
by Anton V. Minaev, Timur Y. Danilov, Diana P. Paraskevova, Vera I. Dontsova, Inna I. Trunina, Viktor B. Samsonov, Sofya M. Tsoy, Alexander S. Voynov and Julia A. Sarkisyan
J. Cardiovasc. Dev. Dis. 2025, 12(12), 482; https://doi.org/10.3390/jcdd12120482 - 8 Dec 2025
Viewed by 275
Abstract
(1) Background: Systemic right ventricular (SRV) dysfunction and severe tricuspid regurgitation (TR) remain significant challenges in patients with congenitally corrected transposition of the great arteries (ccTGA) or following atrial switch procedures. Currently, there is no established, evidence-based medical therapy specifically designed for SRV failure, and treatment approaches are largely extrapolated from left ventricular heart failure (HF) guidelines. This therapeutic gap highlights the need for tailored pharmacologic strategies and optimized perioperative management in this unique population. The optimal timing of surgical intervention and the role of modern HF therapy are still under active investigation. (2) Methods: We present a case series of four patients (three adults and one child) with SRV dysfunction and severe TR, who underwent staged treatment consisting of optimized medical therapy followed by surgical tricuspid valve (TV) replacement. Medical therapy included positive inotropes, sacubitril/valsartan, sodium-glucose co-transporter 2 inhibitors (iSGLT2), beta-blockers, mineralocorticoid receptor antagonists (MRAs), and loop diuretics. (3) Results: All patients demonstrated clinical and hemodynamic improvement prior to surgery, with an increase in systemic ventricular ejection fraction (SVEF > 40%) and cardiac index. TV replacement was performed with favorable early postoperative outcomes and preserved ventricular function at mid-term follow-up. No mortality or major adverse events occurred during follow-up. One case of acute cystitis was associated with dapagliflozin. In all patients, postoperative SVEF remained >40%, and no recurrence of significant TR was observed. (4) Conclusions: A stepwise approach combining modern heart failure therapy and elective TV replacement in patients with SRV dysfunction and TR is safe and effective. Preoperative optimization leads to improved ventricular function and may enhance surgical outcomes. These findings support the integration of contemporary pharmacotherapy in the management strategy for SRV failure. Full article

22 pages, 2341 KB  
Article
A Multi-Expert Evolutionary Boosting Method for Proactive Control in Unstable Environments
by Alexander Musaev and Dmitry Grigoriev
Algorithms 2025, 18(11), 692; https://doi.org/10.3390/a18110692 - 2 Nov 2025
Viewed by 458
Abstract
Unstable technological processes, such as turbulent gas and hydrodynamic flows, generate time series that deviate sharply from the assumptions of classical statistical forecasting. These signals are shaped by stochastic chaos, characterized by weak inertia, abrupt trend reversals, and pronounced low-frequency contamination. Traditional extrapolators, including linear and polynomial models, therefore act only as weak forecasters, introducing systematic phase lag and rapidly losing directional reliability. To address these challenges, this study introduces an evolutionary boosting framework within a multi-expert system (MES) architecture. Each expert is defined by a compact genome encoding training-window length and polynomial order, and experts evolve across generations through variation, mutation, and selection. Unlike conventional boosting, which adapts only weights, evolutionary boosting adapts both the weights and the structure of the expert pool, allowing the system to escape local optima and remain responsive to rapid environmental shifts. Numerical experiments on real monitoring data demonstrate consistent error reduction, highlighting the advantage of short windows and moderate polynomial orders in balancing responsiveness with robustness. The results show that evolutionary boosting transforms weak extrapolators into a strong short-horizon forecaster, offering a lightweight and interpretable tool for proactive control in environments dominated by chaotic dynamics. Full article
(This article belongs to the Special Issue Evolutionary and Swarm Computing for Emerging Applications)
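A toy sketch of the evolutionary loop, with the genome encoding (training-window length, polynomial order) as described in the abstract; the selection and mutation rules below are simplified assumptions, not the paper's exact operators:

```python
import numpy as np

rng = np.random.default_rng(0)

def forecast(series, w, p):
    """Fit an order-p polynomial to the last w points and extrapolate one step."""
    y = series[-w:]
    coef = np.polyfit(np.arange(w), y, p)
    return np.polyval(coef, w)

def evolve(pop, series, horizon=50):
    for t in range(len(series) - horizon, len(series)):
        hist, target = series[:t], series[t]
        errs = np.array([abs(forecast(hist, w, p) - target) for w, p in pop])
        keep = [pop[i] for i in np.argsort(errs)[: len(pop) // 2]]       # selection
        children = [(max(4, w + int(rng.integers(-2, 3))), p) for w, p in keep]  # mutation
        pop = keep + children
    return pop

# Genome = (training-window length, polynomial order): short windows, low orders.
pop = [(int(rng.integers(5, 40)), int(rng.integers(1, 3))) for _ in range(20)]
series = np.cumsum(rng.normal(size=500))   # stand-in for chaotic monitoring data
best = evolve(pop, series)
```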

19 pages, 9081 KB  
Article
High-Order Time–Space Compact Difference Methods for Semi-Linear Sobolev Equations
by Bo Hou, Tianhua Wang, Guoqu Deng and Zhi Wang
Axioms 2025, 14(8), 646; https://doi.org/10.3390/axioms14080646 - 21 Aug 2025
Viewed by 490
Abstract
In this paper, high-order compact difference methods (HOCDMs) are proposed to solve the semi-linear Sobolev equations (SLSEs), which arise in various physical models, such as porous media flow and heat conduction. First, a two-level numerical method is given by applying the Crank–Nicolson (C-N) method in time and the fourth-order compact difference method in space. This method is shown to achieve second-order accuracy in time and fourth-order accuracy in space. Subsequently, we introduce the Richardson extrapolation technique to improve the temporal accuracy of the two-level method from second order to fourth order. Furthermore, we devise a fully fourth-order method in both time and space by applying the fourth-order difference method to discretize both temporal and spatial derivatives, and we provide a proof of its convergence. Finally, a series of numerical experiments is conducted to verify the effectiveness of the proposed methods. Full article
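The temporal Richardson step admits a compact statement; the following is the standard form for cancelling the second-order leading error of the Crank–Nicolson scheme, written in our notation rather than the authors':

```latex
% u_\tau denotes the C-N solution with time step \tau, with error expansion
% u_\tau = u + C\,\tau^2 + O(\tau^4); halving the step and combining cancels \tau^2:
u^{\mathrm{ext}} = \frac{4\,u_{\tau/2} - u_{\tau}}{3} = u + O(\tau^4)
```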

31 pages, 5327 KB  
Article
Wind Estimation Methods for Nearshore Wind Resource Assessment Using High-Resolution WRF and Coastal Onshore Measurements
by Taro Maruo and Teruo Ohsawa
Wind 2025, 5(3), 17; https://doi.org/10.3390/wind5030017 - 7 Jul 2025
Viewed by 1475
Abstract
Accurate wind resource assessment is essential for offshore wind energy development, particularly at nearshore sites where atmospheric stability and internal boundary layers significantly influence the horizontal wind distribution. In this study, we investigated wind estimation methods using a high-resolution, 100 m grid Weather Research and Forecasting (WRF) model and coastal onshore wind measurement data. Five estimation methods were evaluated, including a control WRF simulation without on-site measurement data (CTRL), observation nudging (NDG), two offline methods—temporal correction (TC) and the directional extrapolation method (DE)—and direct application of onshore measurement data (DA). Wind speed and direction data from four nearshore sites in Japan were used for validation. The results indicated that TC provided the most accurate wind speed estimates, with minimal bias and relatively high reproducibility of temporal variations. NDG exhibited a smaller standard deviation of bias and a slightly higher correlation with the measured time series than CTRL. DE could not reproduce temporal variations in the horizontal wind speed differences between points. These findings suggest that TC is the most effective method for assessing nearshore wind resources and is thus recommended for practical use. Full article
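One plausible reading of the temporal correction (TC) idea, offered as a hedged sketch rather than the paper's implementation: the modelled wind at the offshore target is rescaled at each timestamp by the observed-to-modelled ratio at the coastal onshore mast.

```python
import pandas as pd

def temporal_correction(wrf_target: pd.Series, wrf_onshore: pd.Series,
                        obs_onshore: pd.Series) -> pd.Series:
    """Rescale modelled offshore wind by the onshore observed/modelled ratio."""
    ratio = (obs_onshore / wrf_onshore).clip(0.5, 2.0)  # damp outliers in calm hours
    return wrf_target * ratio
```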

16 pages, 1722 KB  
Article
Integrated Wavelet-Grey-Neural Network Model for Heritage Structure Settlement Prediction
by Yonghong He, Pengwei Jin, Xin Wang, Shaoluo Shen and Jun Ma
Buildings 2025, 15(13), 2240; https://doi.org/10.3390/buildings15132240 - 26 Jun 2025
Viewed by 608
Abstract
To address the issue of insufficient prediction accuracy in traditional GM(1,1) models caused by significant nonlinear fluctuations in time-series data for ancient building structural health monitoring, this study proposes a wavelet decomposition-based GM(1,1)-BP neural network coupled prediction model. By constructing a multi-scale fusion framework, we systematically resolve the collaborative optimization between trend prediction and detail modeling. The methodology comprises four main phases: First, wavelet transform is employed to decompose original monitoring sequences into time-frequency components, obtaining low-frequency trends characterizing long-term deformation patterns and high-frequency details reflecting dynamic fluctuations. Second, GM(1,1) models are established for the trend extrapolation of low-frequency components, capitalizing on their advantages in limited-data modeling. Subsequently, BP neural networks are designed for the nonlinear mapping of high-frequency components, leveraging adaptive learning mechanisms to capture detail features induced by environmental disturbances and complex factors. Finally, a wavelet reconstruction fusion algorithm is developed to achieve the collaborative optimization of dual-channel prediction results. The model innovatively introduces a detail information correction mechanism that simultaneously overcomes the limitations of single grey models in modeling nonlinear fluctuations and enhances neural networks’ capability in capturing long-term trend features. Experimental validation demonstrates that the fused model reduces the Root Mean Square Error (RMSE) by 76.5% and 82.6% compared to traditional GM(1,1) and BP models, respectively, with the accuracy grade improving from level IV to level I. This achievement provides a multi-scale analytical approach for the quantitative interpretation of settlement deformation patterns in ancient architecture. The established “decomposition-prediction-fusion” technical framework holds significant application value for the preventive conservation of historical buildings. Full article
(This article belongs to the Section Building Structures)
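A condensed sketch of the decomposition-prediction phases (the BP-network detail model and the fusion step are omitted for brevity; the wavelet choice, decomposition level, placeholder data, and GM(1,1) implementation details are assumptions):

```python
import numpy as np
import pywt

def gm11_forecast(x0: np.ndarray, steps: int) -> np.ndarray:
    """Classical GM(1,1): fit dx1/dt + a*x1 = b on the cumulated series x1."""
    x1 = np.cumsum(x0)
    z = 0.5 * (x1[1:] + x1[:-1])                    # background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(1, len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(np.concatenate([[x0[0]], x1_hat]))[-steps:]

settlement = np.cumsum(np.random.randn(128)) * 0.1  # placeholder for monitored data
coeffs = pywt.wavedec(settlement, "db4", level=2)
trend = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "db4")
future_trend = gm11_forecast(trend[: len(settlement)], steps=5)
# The zeroed high-frequency details would go to a BP network; wavelet
# reconstruction then fuses the two prediction channels (omitted here).
```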

23 pages, 4656 KB  
Article
A Hybrid Intelligent Model for Olympic Medal Prediction Based on Data-Intelligence Fusion
by Ning Li, Junhao Li, Hejia Fang, Jian Wang, Qiao Yu and Yafei Shi
Technologies 2025, 13(6), 250; https://doi.org/10.3390/technologies13060250 - 13 Jun 2025
Cited by 3 | Viewed by 2288
Abstract
This study presents a hybrid intelligent model for predicting Olympic medal distribution at the 2028 Los Angeles Games, based on data-intelligence fusion (DIF). By integrating historical medal records, athlete performance metrics, debut medal-winning countries, and coaching resources, the model aims to provide accurate national medal forecasts. The model introduces a Performance Score (PS) system combining a Traditional Advantage Index (TAI) via K-means clustering, an Athlete Strength Index (ASI) using a backpropagation neural network, and a host effect factor. Sub-models include an autoregressive integrated moving average model for time-series forecasting, logistic regression for predicting debut medal-winning countries, and random forest regression to quantify the “Great Coach” effect. The results project the United States winning 44 gold and 124 total medals, and China 44 gold and 94 total medals. The model demonstrates strong accuracy, with root mean square errors of 3.21 (gold) and 4.32 (total medals) and mean relative errors of 17.6% and 8.04%. Compared to the 2024 Paris Olympics, the model projects a notable reshuffling in 2028, with the United States expected to strengthen its overall lead as host while countries like France are predicted to experience significant declines in medal counts. Findings highlight the nonlinear impact of coaching and the role of event expansion in medal growth. This model offers a strategic tool for Olympic planning, advancing medal prediction from simple extrapolation to intelligent decision support. Full article
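As a toy illustration of the time-series sub-model only: an ARIMA fit to a short gold-medal history. The order and the placeholder series below are our assumptions, not the paper's configuration, and the PS, TAI, ASI, and host-effect adjustments are not modelled.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

golds = np.array([36, 46, 46, 39, 40])      # placeholder U.S. gold counts, 2008-2024
fit = ARIMA(golds, order=(1, 0, 0)).fit()
print(fit.forecast(steps=1))                # baseline 2028 projection before adjustments
```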

22 pages, 1792 KB  
Article
Ensemble Multi-Expert Forecasting: Robust Decision-Making in Chaotic Financial Markets
by Alexander Musaev and Dmitry Grigoriev
J. Risk Financial Manag. 2025, 18(6), 296; https://doi.org/10.3390/jrfm18060296 - 29 May 2025
Viewed by 1296
Abstract
Financial time series in volatile markets often exhibit non-stationary behavior and signatures of stochastic chaos, challenging traditional forecasting methods based on stationarity assumptions. In this paper, we introduce a novel multi-expert forecasting system (MES) that leverages ensemble machine learning techniques—including bagging, boosting, and stacking—to enhance prediction accuracy and support robust risk management decisions. The proposed framework integrates diverse “weak learner” models, ranging from linear extrapolation and multidimensional regression to sentiment-based text analytics, into a unified decision-making architecture. Each expert is designed to capture distinct aspects of the underlying market dynamics, while the supervisory module aggregates their outputs using adaptive weighting schemes that account for evolving error characteristics. Empirical evaluations using high-frequency currency data, notably for the EUR/USD pair, demonstrate that the ensemble approach significantly improves forecast reliability, as evidenced by higher winning probabilities and better net trading results compared to individual forecasting models. These findings contribute both to the theoretical understanding of ensemble forecasting under chaotic market conditions and to its practical application in financial risk management, offering a reproducible methodology for managing uncertainty in highly dynamic environments. Full article
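The supervisory aggregation can be sketched with inverse-error weighting, one common adaptive scheme; the abstract does not pin down the exact weighting rule, so treat this as an assumption:

```python
import numpy as np

class Supervisor:
    """Aggregates expert forecasts; weights follow inverse smoothed absolute error."""
    def __init__(self, n_experts: int, alpha: float = 0.1):
        self.err = np.ones(n_experts)   # exponentially smoothed absolute errors
        self.alpha = alpha

    def combine(self, forecasts):
        w = 1.0 / (self.err + 1e-9)     # experts with lower recent error weigh more
        return float(np.dot(w, forecasts) / w.sum())

    def update(self, forecasts, actual):
        e = np.abs(np.asarray(forecasts) - actual)
        self.err = (1 - self.alpha) * self.err + self.alpha * e

sup = Supervisor(n_experts=3)
print(sup.combine([1.102, 1.104, 1.099]))   # e.g. EUR/USD one-step forecasts
sup.update([1.102, 1.104, 1.099], actual=1.101)
```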

31 pages, 3727 KB  
Article
Time-Domain Characterization of Linear Viscoelastic Behavior in Asphalt Mixtures: A Comparative Evaluation Through Discrete and Continuous Spectral Techniques
by Fei Zhang, Bingyuan Huo, Wanmei Gui, Chao Li, Heng Liu, Yongming Xing, Lan Wang and Pucun Bai
Polymers 2025, 17(10), 1299; https://doi.org/10.3390/polym17101299 - 9 May 2025
Viewed by 724
Abstract
This study systematically investigates continuous and discrete spectra methodologies for determining time-domain viscoelastic response functions (creep compliance and relaxation modulus) in asphalt mixtures. Through complex modulus testing of three asphalt mixtures (base asphalt mixture, SBS-modified asphalt mixture, and crumb rubber-modified asphalt mixture), we established unified master curves using a Generalized Sigmoidal model with approximated Kramers–Kronig (K-K) relations. Discrete spectra were obtained via Prony series representations of Maxwell/Kelvin models, while continuous spectra, derived through integral transformation, produced complementary response functions via numerical integration. Comparative analysis demonstrated that the discrete and continuous spectra methods yield highly consistent predictions of the relaxation modulus and creep compliance within conventional time scales (10⁻⁷–10⁵ s), with significant deviations emerging only at extreme temporal extremities. Compared to discrete spectra results, material parameters (relaxation modulus and creep compliance) derived from continuous spectra methods invariably approach upper and lower plateaus asymptotically. Notably, the maximum equilibrium values derived from continuous spectra methods consistently surpassed those obtained through discrete approaches, whereas the corresponding minimum values were consistently lower. This comparative analysis highlights the inherent limitations in the extrapolation reliability of the computational methodologies, particularly regarding spectra method implementation. Furthermore, within the linear viscoelastic range, the crumb rubber-modified asphalt mixtures exhibited superior low-temperature cracking resistance, whereas the SBS-modified asphalt mixtures demonstrated enhanced high-temperature deformation resistance. This systematic comparative study not only establishes a critical theoretical foundation for the precise characterization of asphalt mixture viscoelasticity across practical engineering time scales through optimal spectral method selection, but also provides actionable guidance for region-specific material selection strategies. Full article
(This article belongs to the Special Issue Advances in Functional Rubber and Elastomer Composites, 3rd Edition)
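For reference, the discrete-spectrum (Prony series) forms that underlie such interconversions are standard; in the usual notation (E_e equilibrium modulus, ρ_i relaxation times, D_g glassy compliance, τ_j retardation times), not necessarily the paper's symbols:

```latex
E(t) = E_e + \sum_{i=1}^{n} E_i \, e^{-t/\rho_i}
\qquad
D(t) = D_g + \sum_{j=1}^{m} D_j \left( 1 - e^{-t/\tau_j} \right)
```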

16 pages, 4481 KB  
Article
An Informer Model for Very Short-Term Power Load Forecasting
by Zhihe Yang, Jiandun Li, Haitao Wang and Chang Liu
Energies 2025, 18(5), 1150; https://doi.org/10.3390/en18051150 - 26 Feb 2025
Cited by 6 | Viewed by 2253
Abstract
Facing the decarbonization trend in power systems, electricity suppliers are under a growing requirement for agile response and fine-grained supply. To accommodate this request, it is of key significance to precisely extrapolate the upcoming power load, a task widely acknowledged as Very Short-Term Power Load Forecasting (VSTLF). As a time series forecasting problem, the primary challenge of VSTLF is identifying potential factors and the long-horizon mechanisms by which they affect load demand. With the help of a public dataset, this paper first locates several strongly correlated attributes based on Pearson’s correlation coefficient and then proposes an adaptive Informer network with probability-sparse attention to model long-sequence power loads. Additionally, it uses Shapley Additive Explanations (SHAP) for ablation and interpretation analysis. The experimental results show that the proposed model outperforms several state-of-the-art solutions on several metrics, improving, e.g., RMSE by 18.39%, MAE by 21.70%, MAPE by 21.24%, and R2 by 2.11%. Full article
(This article belongs to the Section F1: Electrical Power System)
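The attribute-screening step is straightforward to sketch; the target column name and threshold below are assumptions, not the paper's values:

```python
import pandas as pd

def select_features(df: pd.DataFrame, target: str = "load", thr: float = 0.3):
    """Return columns whose |Pearson r| with the target meets the threshold."""
    corr = df.corr(numeric_only=True)[target].drop(target)
    return corr[corr.abs() >= thr].index.tolist()
```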

17 pages, 5105 KB  
Article
Comparison of Hydraulic Fracturing and Deflagration Fracturing Under High-Temperature Conditions in Large-Sized Granite
by Hengtao Yang, Yan Zou, Bing Bai, Huiling Ci, Tiancheng Zhang, Zhiwei Zheng and Hongwu Lei
Appl. Sci. 2025, 15(5), 2307; https://doi.org/10.3390/app15052307 - 21 Feb 2025
Viewed by 1096
Abstract
Fracturing is an indispensable technique in geothermal energy development. Large-sized model tests of different fracturing methods are crucial for evaluating the fracturing effect and extrapolating the results to field applications. In this paper, 40 × 40 × 40 cm granite samples were used to carry out fracturing tests of common hydraulic and deflagration fracturing methods under high-temperature conditions. Through the analysis of the fracturing parameters and multiscale fracture morphology, a series of key findings were summarized. Deflagration fracturing is more intense, notably unaffected by the principal stress difference, and is capable of generating fracture spaces tens of times larger than those created by hydraulic fracturing. Furthermore, high temperatures tend to produce more fracture zones rather than continuous cracks during hydraulic fracturing. In contrast, deflagration fracturing yields simpler and more regular fractures in granite at high temperatures. Finally, the influence of the borehole number and the quantity of the deflagration agent on the fracturing effect is briefly discussed. These findings provide valuable insights for enhancing reservoir stimulation in geothermal systems. Full article

23 pages, 21288 KB  
Article
Analysis of Detailed Series Based on the Estimation of Hydrogeological Parameters by Indirect Methods Based on Fluvial and Piezometric Fluctuations
by José Luis Herrero-Pacheco, Javier Carrasco and Pedro Carrasco
Water 2025, 17(4), 576; https://doi.org/10.3390/w17040576 - 17 Feb 2025
Cited by 1 | Viewed by 922
Abstract
Piezometers located near watercourses experiencing periodic fluctuations provide a means to analyse soil properties and derive key hydrogeological parameters through pressure wave transmission analysis, in which the wave is attenuated in amplitude and delayed in time (lag). These techniques are invaluable for hydrogeological characterizations, such as assessing pollutant diffusion, conducting construction projects below the water table, and evaluating flood zones. While traditionally applied to study tidal influences in coastal areas, this research introduces their application to channels indirectly affected by tidal oscillations due to downstream confluences with tidal waterways. This innovative approach combines the analysis of tidal barriers with the effects of storms and droughts. This study synthesises findings from an experimental monitoring field equipped with advanced recording technologies, allowing for high-resolution, long-term analysis. The dataset, spanning dry periods, major storms, and channel overflows, offers unprecedented precision and insight into aquifer responses. This study analyses the application of wave transmission calculations using continuous level recording in a river and in observation piezometers. Two methods of analysis are applied to the series generated, one based on the variation in amplitude and the other on the phase shift produced by the transmission of the wave through the aquifer, both related to the hydrogeological characteristics of the medium. This study concludes that determining the fluctuation period is key to the calculation, and that the amplitude analysis is notably more precise than the phase-shift analysis, a difference that has led to disparate results in previous studies. The results obtained make it possible to reconstruct and extrapolate real or calculated series of rivers and piezometers as a function of distance, using the diffusivity obtained. Using the fluctuation period and diffusivity, it is possible to construct the wave associated with any event based on data from just one river or piezometer. Full article
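The two analysis routes correspond to the classical Ferris-type relations for a sinusoidal stage fluctuation of period t0 propagating into an aquifer of hydraulic diffusivity D = T/S; these are the standard textbook forms, not necessarily the paper's notation:

```latex
% Amplitude at distance x relative to the stage amplitude, and the time lag:
\frac{H_x}{H_0} = \exp\!\left(-x\sqrt{\frac{\pi}{t_0 D}}\right)
\quad\text{(amplitude method)}
\qquad
t_{\mathrm{lag}} = x\sqrt{\frac{t_0}{4\pi D}}
\quad\text{(phase-shift method)}
```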

17 pages, 4178 KB  
Article
Towards Trustworthy AI in Healthcare: Epistemic Uncertainty Estimation for Clinical Decision Support
by Adrian Lindenmeyer, Malte Blattmann, Stefan Franke, Thomas Neumuth and Daniel Schneider
J. Pers. Med. 2025, 15(2), 58; https://doi.org/10.3390/jpm15020058 - 31 Jan 2025
Cited by 4 | Viewed by 2759
Abstract
Introduction: Widespread adoption of AI for medical decision-making is still hindered by ethical and safety-related concerns. For AI-based decision support systems in healthcare settings, it is paramount to be reliable and trustworthy. Common deep learning approaches, however, tend towards overconfidence when faced with unfamiliar or changing conditions. Inappropriate extrapolation beyond well-supported scenarios may have dire consequences, highlighting the importance of reliably estimating local knowledge uncertainty and communicating it to the end user. Materials and Methods: While ensembles of neural networks (ENNs) have been heralded as a potential solution to these issues for many years, deep learning methods that explicitly model the amount of supporting knowledge promise more principled and reliable behavior. This study compares their reliability in clinical applications. We centered our analysis on experiments with low-dimensional toy datasets and the exemplary case study of mortality prediction for intensive care unit hospitalizations using Electronic Health Records (EHRs) from the MIMIC3 study. For predictions on the EHR time series, Encoder-Only Transformer models were employed. Knowledge uncertainty estimation is achieved with both ensemble and Spectral Normalized Neural Gaussian Process (SNGP) variants of the common Transformer model. We designed two datasets to test their reliability in detecting token-level and more subtle discrepancies, both for the toy datasets and for an EHR dataset. Results: While both SNGP and ENN model variants achieve similar prediction performance (AUROC: 0.85, AUPRC: 0.52 for in-hospital mortality prediction from a selected MIMIC3 benchmark), the former demonstrates improved capabilities to quantify knowledge uncertainty for individual samples/patients. Discussion/Conclusions: Methods including a knowledge model, such as SNGP, offer superior uncertainty estimation compared to traditional stochastic deep learning, leading to more trustworthy and safe clinical decision support. Full article
(This article belongs to the Section Methodology, Drug and Device Discovery)
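A minimal sketch of the ensemble (ENN) baseline: epistemic uncertainty is read off as disagreement among independently trained members. The binary-mortality output and the predict_proba interface are assumptions for illustration:

```python
import numpy as np

def ensemble_uncertainty(members, x):
    """members: fitted classifiers exposing predict_proba; x: batch of inputs."""
    probs = np.stack([m.predict_proba(x)[:, 1] for m in members])  # (n_members, n)
    mean = probs.mean(axis=0)            # predictive mortality risk
    epistemic = probs.var(axis=0)        # member disagreement ~ knowledge uncertainty
    return mean, epistemic
```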