Search Results (128)

Search Parameters:
Keywords = time series count data model

22 pages, 337 KB  
Article
Cardiometabolic Mortality and Health System Expansion in Kuwait (2010–2022): A National Time-Series Analysis
by Ahmad Salman
J. Clin. Med. 2026, 15(7), 2697; https://doi.org/10.3390/jcm15072697 - 2 Apr 2026
Viewed by 357
Abstract
Background: Cardiometabolic diseases are a leading cause of premature mortality globally, yet longitudinal national mortality patterns remain insufficiently characterised in Gulf Cooperation Council settings. This study examines national trends in cardiometabolic mortality alongside health system financing, capacity, and utilization in Kuwait between 2010 and 2022. Methods: A national ecological time-series analysis used Ministry of Health administrative data covering mortality, cardiac care unit (CCU) capacity and discharges, cardiovascular procedural volumes, and MOH expenditure. Cause-specific outcomes included circulatory disease, ischaemic heart disease (IHD), cerebrovascular disease, hypertensive disease, and diabetes mellitus. Ordinary least squares regression estimated annual trends; pre-COVID restricted models (2010–2019) separated secular from pandemic-period effects. Results: All-cause deaths rose significantly from 5448 (2010) to 8041 (2022; β = +373.5/year; p = 0.001), peaking at 10,938 in 2021. Circulatory disease mortality rates increased over the full series but not pre-COVID, indicating pandemic-era acceleration. IHD death counts rose significantly in both models (β = +68.4 and +67.0/year; p < 0.01); IHD rates showed no significant trend, implicating demographic growth. Diabetes demonstrated the strongest signal: significant increases in death counts (β = +36.5/year; p < 0.001) and mortality rates (β = +0.689/100,000/year; p = 0.002), rising progressively across all time blocks. Hypertensive mortality declined significantly (β = −0.113/year; p = 0.002). MOH expenditure, CCU capacity, and CCU discharges increased significantly, demonstrating sustained structural expansion of cardiovascular services. Conclusions: Rising cardiometabolic mortality—driven prominently by diabetes—occurred alongside sustained health system expansion in Kuwait, indicating that tertiary capacity growth alone is insufficient to offset underlying epidemiological pressures. 
These findings underscore the urgency of strengthening upstream cardiometabolic prevention, integrated diabetes surveillance, and long-term metabolic risk control as central pillars of sustainable NCD policy. Full article
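The trend coefficients reported above (e.g., β = +373.5 deaths/year) are ordinary least squares slopes from regressing annual counts on calendar year. A minimal sketch of that estimator, using illustrative synthetic data rather than the paper's figures:

```python
import numpy as np

def annual_trend(years, counts):
    """Fit counts = a + b*year by ordinary least squares; return the slope b."""
    X = np.column_stack([np.ones_like(years, dtype=float), years.astype(float)])
    beta, *_ = np.linalg.lstsq(X, counts.astype(float), rcond=None)
    return beta[1]  # change in counts per year

# Synthetic series rising by exactly 100 deaths/year.
years = np.arange(2010, 2023)
counts = 5000 + 100 * (years - 2010)
print(round(annual_trend(years, counts), 1))  # -> 100.0
```

Restricting `years`/`counts` to 2010–2019 before fitting reproduces the paper's pre-COVID sensitivity models.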
21 pages, 4887 KB  
Article
Forecasting Spatial Inequalities in Cardiovascular Disease-Related Deaths: A Municipal-Level Assessment of Progress Toward SDG 3.4 in Serbia
by Suzana Lović Obradović, Dunja Demirović Bajrami and Marko Filipović
Forecasting 2026, 8(2), 29; https://doi.org/10.3390/forecast8020029 - 1 Apr 2026
Viewed by 382
Abstract
Non-communicable diseases (NCDs) are the leading causes of mortality in Serbia, with cardiovascular diseases (CVDs) accounting for a substantial share of premature mortality. In alignment with Sustainable Development Goal (SDG) Target 3.4, which aims to reduce premature mortality from NCDs by one-third by 2030 relative to 2015, this study forecasts changes in CVD mortality counts at the municipal level in Serbia. Time-series data for the period 2005–2022 were analyzed within a spatio-temporal forecasting framework implemented in the Space Time Pattern Mining toolbox in ArcGIS Pro (Version 3.1). Three established forecasting models (Curve Fit Forecast, Exponential Smoothing, and Forest-based) were applied, and the most accurate model for each municipality was selected using location-specific, municipality-level validation. The results reveal pronounced spatial variation: approximately half of the municipalities (51.2%) are forecasted to experience a decline in CVD mortality counts by 2030, while others are expected to show increases or no statistically significant change. Forecasted differences range from a 15.1% decrease to a 13.9% increase across municipalities, indicating heterogeneous spatial trajectories and suggesting that achieving SDG Target 3.4 may remain challenging without targeted interventions in municipalities where mortality reductions are not forecasted. Although the study does not introduce new forecasting methods, it provides a novel spatially disaggregated application of multi-model forecasting to support municipality-level monitoring of SDG 3.4. The results underscore the need for geographically differentiated public health policies and demonstrate the value of spatial forecasting approaches for supporting equitable and targeted health planning. Full article
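The model-selection step — fitting several forecasters per municipality and keeping the one with the lowest location-specific validation error — can be sketched as follows. The three stand-in models (naive, linear trend, simple exponential smoothing) are simplifications of the ArcGIS Pro tools named above, and the series is synthetic:

```python
import numpy as np

def naive(train, h):            # repeat the last observed value
    return np.full(h, train[-1], dtype=float)

def linear_trend(train, h):     # OLS line extrapolated h steps ahead
    t = np.arange(len(train), dtype=float)
    b, a = np.polyfit(t, train, 1)          # slope, intercept
    future = np.arange(len(train), len(train) + h, dtype=float)
    return a + b * future

def ses(train, h, alpha=0.5):   # simple exponential smoothing, flat forecast
    level = train[0]
    for y in train[1:]:
        level = alpha * y + (1 - alpha) * level
    return np.full(h, level, dtype=float)

def best_model(series, h=3):
    """Hold out the last h points; return the name of the lowest-MAE model."""
    train, valid = series[:-h], series[-h:]
    models = {"naive": naive, "linear_trend": linear_trend, "ses": ses}
    maes = {name: np.mean(np.abs(f(train, h) - valid))
            for name, f in models.items()}
    return min(maes, key=maes.get)

# A steadily declining municipality should favour the trend model.
declining = np.array([100, 95, 91, 86, 80, 76, 71, 65, 61, 55], dtype=float)
print(best_model(declining))  # -> linear_trend
```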

34 pages, 8947 KB  
Article
Lightweight Evidential Time Series Imputation Method for Bridge Structural Health Monitoring
by Die Liu, Jianxi Yang, Lihua Chen, Tingjun Xu, Youjia Zhang, Lei Zhou and Jingyuan Shen
Buildings 2026, 16(5), 1076; https://doi.org/10.3390/buildings16051076 - 9 Mar 2026
Viewed by 427
Abstract
Long-term data loss resulting from sensor malfunctions, communication interruptions, and other factors in Structural Health Monitoring (SHM) significantly undermines the reliability of damage identification and safety assessment. Existing methods—ranging from statistical approaches and low-rank matrix completion to traditional machine learning and deep learning imputation techniques—often suffer from either limited accuracy or excessive model size and slow inference, making deployment in resource-constrained scenarios difficult. To address these challenges, this paper proposes TEFN–Imputation, a lightweight and efficient time-series imputation model. This model utilizes observation-driven non-stationary normalization to mitigate the impact of time-varying characteristics and dimensional discrepancies. It employs linear projection for temporal length alignment and constructs BPA-style mass representations from dual perspectives of time and channel. Furthermore, it replaces strict Dempster–Shafer belief combination with an expectation-based evidential aggregation (readout), thereby significantly reducing computational overhead while enabling uncertainty-aware evidential indicators for interpretation rather than claiming a direct accuracy gain from uncertainty modeling. The observed accuracy and robustness improvements are primarily attributed to the normalization and dual temporal–channel modeling design under the same lightweight readout. Systematic experiments on two real-world bridge monitoring datasets, Z24 and Hell Bridge, demonstrate that TEFN consistently maintains low Mean Absolute Error (MAE) and minimal volatility across various combinations of training and testing missing rates, exhibiting high robustness against variations in missing rates and train–test mismatches. Concurrently, compared to RNN and large-scale Transformer baselines, TEFN reduces parameter count and CPU inference time by one to two orders of magnitude. 
Thus, it achieves a superior trade-off among accuracy, efficiency, and model scale, making it highly suitable for online SHM and imputation tasks in practical engineering applications. Across the settings on Z24, TEFN achieves a mean MAE of 0.218 with a standard deviation of 0.002, while using only 0.02 MB parameters and 2.73 ms per batch CPU inference. Full article
(This article belongs to the Section Building Structures)
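The observation-driven non-stationary normalization step — computing window statistics from observed values only, so that missing entries do not distort the scale — might look like this in outline. This is a simplified single-window sketch, not the TEFN implementation:

```python
import numpy as np

def masked_normalize(window):
    """Z-normalize a 1-D window using only its observed (non-NaN) entries.

    Returns the normalized window (NaNs stay NaN, to be imputed later) plus
    the (mean, std) needed to invert the transform after imputation.
    """
    observed = window[~np.isnan(window)]
    mu = float(observed.mean())
    sigma = float(observed.std())
    if sigma == 0.0:        # constant window: shift only
        sigma = 1.0
    return (window - mu) / sigma, (mu, sigma)

w = np.array([1.0, 2.0, np.nan, 3.0])
z, (mu, sigma) = masked_normalize(w)
print(mu)  # -> 2.0
```

Denormalizing with the stored `(mu, sigma)` after the model fills the gaps restores the sensor's native scale.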

23 pages, 2990 KB  
Article
Forecasting-Aware Digital Twin Calibration for Reliable Multi-Horizon Traffic Prediction
by Zeyad AlJundi, Taqwa A. Alhaj, Fatin A. Elhaj, Inshirah Idris and Tasneem Darwish
Network 2026, 6(1), 13; https://doi.org/10.3390/network6010013 - 6 Mar 2026
Viewed by 619
Abstract
Digital twin systems are becoming an important tool in intelligent transportation management, as they provide simulation-based environments for monitoring, analyzing, and predicting traffic behavior. However, the predictive performance of traffic digital twins is often limited by the quality and temporal consistency of sensor-level data generated from microscopic simulations. Most current calibration methods focus mainly on matching macroscopic traffic indicators, such as vehicle count and speed, without explicitly addressing the requirements of multi-horizon forecasting. This creates a gap between achieving realistic simulations and building reliable predictive models. This research proposes a forecasting-aware digital traffic twin framework that integrates microscopic SUMO simulation, controlled sensor-level observation modeling through geometric misalignment and noise injection, behavioral calibration, and deep temporal forecasting within a unified end-to-end structure. Unlike traditional calibration approaches, the proposed framework uses a Genetic Algorithm (GA) to reformulate calibration as a multi-step predictive optimization task. Simulation parameters are optimized by minimizing the forecasting error produced by a lightweight proxy sequence model embedded within the calibration loop. In this way, calibration moves beyond simple statistical matching and instead emphasizes temporal learnability and forecasting stability, enabling the digital twin to generate traffic patterns more suitable for long-term prediction. Based on the calibrated traffic time series, both convolutional and recurrent deep learning models are evaluated under single-step and multi-step forecasting scenarios. To further examine generalizability, external validation is performed using the real-world PEMS-BAY dataset. 
The experimental findings demonstrate that forecasting-aware calibration reduces macroscopic traffic signal errors by around 50% for vehicle count and around 40% for average speed, improves temporal stability, and significantly enhances forecasting accuracy across both short-term and long-term horizons. Full article
(This article belongs to the Special Issue Emerging Trends and Applications in Vehicular Ad Hoc Networks)
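The calibration idea — a GA searching simulator parameters to minimize the forecasting error of a lightweight proxy model — can be sketched as below. The `simulate` function, its two parameters, and the AR(1) proxy are all hypothetical stand-ins for the SUMO simulation and proxy sequence model described above:

```python
import random
import numpy as np

random.seed(0)

def simulate(noise, amp, n=200):
    """Stand-in for a SUMO run: a daily-cycle flow series whose roughness
    depends on two hypothetical behaviour parameters."""
    t = np.arange(n)
    rng = np.random.default_rng(42)   # fixed seed: deterministic fitness
    return amp * np.sin(2 * np.pi * t / 24) + noise * rng.standard_normal(n)

def proxy_forecast_mae(series):
    """Lightweight proxy: fit AR(1) by least squares, score one-step MAE."""
    x, y = series[:-1], series[1:]
    phi = (x @ y) / (x @ x)
    return float(np.mean(np.abs(y - phi * x)))

def fitness(params):
    return proxy_forecast_mae(simulate(*params))

def evolve(pop_size=20, generations=30):
    """Elitist GA: keep the better half, breed children by averaging
    two parents and adding Gaussian mutation."""
    pop = [(random.uniform(0, 2), random.uniform(0.5, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            children.append(tuple(max(0.0, (x + y) / 2 + random.gauss(0, 0.1))
                                  for x, y in zip(a, b)))
        pop = parents + children
    return min(pop, key=fitness)

noise, amp = evolve()
print(round(noise, 2), round(amp, 2))
```

Because elitism keeps the best member each generation, the evolved parameters are guaranteed to score no worse than the best random initialization; here the GA pushes the noise parameter down, i.e., toward series the proxy can forecast.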

19 pages, 2002 KB  
Article
Application of Machine Learning Approach to Classify Human Activity Level Based on Lifelog Data
by Si-Hwa Jeong, Woomin Nam and Keon Chul Park
Sensors 2026, 26(5), 1612; https://doi.org/10.3390/s26051612 - 4 Mar 2026
Viewed by 396
Abstract
This paper presents a human activity-level classification model based on patient lifelogs collected from wearable devices. Over approximately two months, heart rate, step count, and calorie consumption were collected from a wearable device for 182 patients. Using these lifelog data, machine learning models were developed to classify patients' physical activity status into five levels. The three wearable data streams (heart rate, step count, and calorie consumption) were pre-processed into an integrated time series. A total of 80% of the integrated data was used as the training dataset, and the remaining 20% as the test dataset. Sixteen algorithms were evaluated, including 12 traditional machine learning models (SVM, KNN, RF, etc.) and 4 deep learning models (CNN, RNN, etc.); cross-validation was performed by dividing the training dataset into 5 folds. Models with optimal parameters were derived by tuning the parameters required for training. The final models were evaluated on new patient lifelog data, showing that human activity level can be classified with high accuracy from heart rate and step count. Full article
(This article belongs to the Special Issue Sensors for Human Activity Recognition: 3rd Edition)
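The five-fold cross-validation protocol described above can be illustrated with a deliberately simple classifier. The nearest-centroid model and the synthetic two-class data are stand-ins; the paper evaluates 16 far richer algorithms:

```python
import numpy as np

def kfold_indices(n, k=5, seed=0):
    """Shuffle 0..n-1 and split into k near-equal validation folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), k)

def nearest_centroid_accuracy(X, y, k=5):
    """Mean validation accuracy of a nearest-centroid classifier over k folds."""
    folds = kfold_indices(len(X), k)
    accs = []
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        centroids = {c: X[train][y[train] == c].mean(axis=0)
                     for c in np.unique(y[train])}
        labels = np.array(sorted(centroids))
        dists = np.stack([np.linalg.norm(X[val] - centroids[c], axis=1)
                          for c in labels])
        pred = labels[dists.argmin(axis=0)]
        accs.append(np.mean(pred == y[val]))
    return float(np.mean(accs))

# Two well-separated synthetic "activity levels": trivially classifiable.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (50, 3)), rng.normal(5, 0.1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
print(nearest_centroid_accuracy(X, y))  # -> 1.0
```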

26 pages, 8655 KB  
Article
Trends, Seasonality, and the Impact of COVID-19 on Clinical Staphylococcus aureus and MRSA Isolates in Western Mexico (2016–2025): A Time-Series Analysis at a University Referral Hospital
by Jaime Briseno-Ramírez, Pedro Martínez-Ayala, Adolfo Gómez-Quiroz, Brenda Berenice Avila-Cardenas, Brian Rafael Rubio-Mora, Roberto Miguel Damian-Negrete, Ana María López-Yáñez, Leonardo García-Miranda, Carlos Roberto Álvarez-Alba and Judith Carolina De Arcos-Jiménez
Antibiotics 2026, 15(3), 242; https://doi.org/10.3390/antibiotics15030242 - 25 Feb 2026
Viewed by 601
Abstract
Background/Objectives: Methicillin-resistant Staphylococcus aureus (MRSA) remains a major cause of both community-onset and hospital-acquired infections, yet longitudinal data from Latin American hospitals spanning the COVID-19 pandemic are scarce. We characterized temporal trends, seasonality, and the impact of the COVID-19 pandemic on MRSA prevalence and incidence density among clinical S. aureus isolates at a tertiary-care hospital in western Mexico over 9.5 years. Methods: We analyzed 6625 non-duplicate clinical S. aureus isolates (6609 with valid resistance data) from June 2016 to December 2025. Temporal trends were assessed using Mann–Kendall tests, Theil–Sen estimation, and binomial generalized linear models. Seasonality was evaluated through STL decomposition, generalized additive models, and Fourier analysis. An interrupted time series (ITS) model with GLS-AR(1) and Newey–West corrections compared three COVID-19 phases: pre-pandemic (2016–2020), high viral circulation (2020–2022), and post-peak stabilization (2022–2025). Exposure-adjusted incidence densities (per 1000 patient-days) were analyzed in parallel. Results: MRSA prevalence declined from 28.1% pre-pandemic to 14.0% post-peak (Mann–Kendall z = −9.03, p < 0.001; OR = 0.85 per year, 95% CI: 0.829–0.871). MRSA incidence density decreased by 50%, from 1.27 to 0.63 per 1000 patient-days, while aggregate S. aureus incidence density remained stable (z = −0.17, p = 0.868). The ITS joint Wald test confirmed a significant cumulative shift in MRSA trajectory post-pandemic (p = 0.019 counts; p = 0.012 incidence density), with a significant post-peak level drop (p = 0.008). S. aureus exhibited moderate seasonality peaking in May–July (GAM edf = 7.26, p < 0.001), whereas MRSA showed only marginal seasonal variation. Conclusions: MRSA declined markedly across the study period, with the steepest reduction following the Omicron peak. 
The decline persisted after adjustment for pandemic-related fluctuations in hospital volume, supporting periodic reassessment of empiric anti-MRSA prescribing policies in similar settings. Full article
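The interrupted time series design estimates an immediate level change (and slope change) at each phase boundary. Below is a minimal segmented-regression sketch on synthetic data; the paper's GLS-AR(1) and Newey–West corrections adjust the standard errors for autocorrelation, which this plain-OLS sketch omits:

```python
import numpy as np

def its_level_change(y, break_idx):
    """Fit y = b0 + b1*t + b2*post + b3*(t - break)*post by OLS and
    return b2, the immediate level change at the interruption."""
    t = np.arange(len(y), dtype=float)
    post = (t >= break_idx).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - break_idx) * post])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[2]

# Synthetic monthly MRSA proportion: flat at 0.28, dropping to 0.14 at month 48.
t = np.arange(96)
y = np.where(t < 48, 0.28, 0.14)
print(round(its_level_change(y, 48), 2))  # -> -0.14
```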

22 pages, 3807 KB  
Review
Satellite Remote Sensing for Crop Yield Prediction: A Review
by Dorijan Radočaj, Mladen Jurišić, Ivan Plaščak and Lucija Galić
Agriculture 2026, 16(4), 417; https://doi.org/10.3390/agriculture16040417 - 12 Feb 2026
Cited by 2 | Viewed by 2306
Abstract
The rapid evolution of Earth observation satellite missions and computational methods has made satellite remote sensing a foundation of state-of-the-art crop yield prediction. Therefore, the aim of this review is to analyze dominant drivers of crop yield prediction research based on satellite remote sensing, including dominant sensor types, satellite missions, crops, and specific research topics, as well as to identify present issues and research gaps. This review summarizes a bibliometric analysis of satellite-based crop yield prediction publications during 2000–2025, covering 1174 articles indexed in the Web of Science Core Collection. Annual publication and citation trends, geographic patterns of research publications, prevalent satellite missions and sensor types, predominant crops used in research, and trends in research themes were analyzed in the study. Findings show a consistent expansion of the study topic in terms of publication count, with multispectral data, especially from the Sentinel-2, Landsat, and MODIS missions, being utilized in most of the literature in the field, while radar-based approaches are becoming increasingly important, providing complementary data to multispectral imagery. The review indicates a methodological shift from simple regression models to machine learning, deep learning, and multi-sensor data fusion frameworks that use dense satellite imagery time series. Full article

23 pages, 38482 KB  
Article
Data-Driven Analysis of Systemic Indicators Linking Stroke-Associated Pneumonia, Delayed Cerebral Ischemia, and Outcome After Aneurysmal Subarachnoid Hemorrhage
by Vanessa Magdalena Swiatek, Conrad-Jakob Schiffner, Tom Tobias Kummer, Lea Ehrhardt, Klaus-Peter Stein, Ali Rashidi, Sylvia Saalfeld, Robert Werdehausen, I. Erol Sandalcioglu and Belal Neyazi
J. Clin. Med. 2026, 15(4), 1359; https://doi.org/10.3390/jcm15041359 - 9 Feb 2026
Viewed by 540
Abstract
Background/Objectives: Delayed cerebral ischemia (DCI) is a major cause of poor outcome after aneurysmal subarachnoid hemorrhage (aSAH). Beyond large-vessel vasospasm, DCI reflects a systemic, multifactorial process involving inflammation, hematologic dysregulation, and organ dysfunction. Stroke-associated pneumonia (SAP), a frequent aSAH complication linked to stroke-induced immunodepression, may aggravate secondary ischemic injury. Unlike prior studies focusing on classical predictors alone, we included pneumonia and longitudinal respiratory parameters alongside inflammatory, hematologic, and renal markers. Using machine learning, this study aimed to identify predictors of DCI and functional outcome from routinely collected intensive care data. Methods: In this retrospective single-center study, 182 aSAH patients treated in a neurosurgical intensive care unit were included. Clinical data, SAP status, and longitudinal inflammatory, hematologic, renal, and respiratory parameters were extracted. DCI and functional outcome were assessed. Continuous variables were summarized as minimum, maximum, and mean values. Supervised machine learning models combining 12 feature selection methods and 12 classifiers were trained using five-fold cross-validation and evaluated by accuracy, F1-score, and AUC. Results: DCI occurred in 22% of patients, and SAP in 27%. The machine learning models achieved a mean accuracy of 59.7% (F1-score 58.8%, AUC 59.7%) for DCI prediction. No single dominant feature emerged; predictive patterns included leukocyte counts, CRP, erythrocyte indices, platelet variability, renal function, and oxygenation metrics. Functional outcome prediction performed moderately better (mean AUC 65.7%) and shared overlapping predictors. Conclusions: DCI reflects systemic instability in aSAH, with longitudinal inflammatory and respiratory variability outperforming static thresholds. 
Dynamic risk stratification may enable earlier detection of deterioration, supporting future time-series modeling and external validation. Full article

30 pages, 616 KB  
Article
Structural Preservation in Time Series Through Multiscale Topological Features Derived from Persistent Homology
by Luiz Carlos de Jesus, Francisco Fernández-Navarro and Mariano Carbonero-Ruz
Mathematics 2026, 14(3), 538; https://doi.org/10.3390/math14030538 - 2 Feb 2026
Viewed by 786
Abstract
A principled, model-agnostic framework for structural feature extraction in time series is presented, grounded in topological data analysis (TDA). The motivation stems from two gaps identified in the literature: First, compact and interpretable representations that summarise the global geometric organisation of trajectories across scales remain scarce. Second, a unified, task-agnostic protocol for evaluating structure preservation against established non-topological families is still missing. To address these gaps, time-delay embeddings are employed to reconstruct phase space, sliding windows are used to generate local point clouds, and Vietoris–Rips persistent homology (up to dimension two) is computed. The resulting persistence diagrams are summarised with three transparent descriptors—persistence entropy, maximum persistence amplitude, and feature counts—and concatenated across delays and window sizes to yield a multiscale representation designed to complement temporal and spectral features while remaining computationally tractable. A unified experimental design is specified in which heterogeneous, regularly sampled financial series are preprocessed on native calendars and contrasted with competitive baselines spanning lagged, calendar-driven, difference/change, STL-based, delay-embedding PCA, price-based statistical, signature (FRUITS), and network-derived (NetF) features. Structure preservation is assessed through complementary criteria that probe spectral similarity, variance-scaled reconstruction fidelity, and the conservation of distributional shape (location, scale, asymmetry, tails). The study is positioned as an evaluation of representations, rather than a forecasting benchmark, emphasising interpretability, comparability, and methodological transparency while outlining avenues for adaptive hyperparameter selection and alternative filtrations. Full article
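The three diagram summaries named above — persistence entropy, maximum persistence amplitude, and feature count — are straightforward to compute from a persistence diagram's (birth, death) pairs. A sketch, with a toy diagram for illustration:

```python
import numpy as np

def diagram_descriptors(diagram):
    """Summarise a persistence diagram, given as an array of (birth, death)
    rows, by persistence entropy, maximum persistence amplitude, and
    feature count."""
    lifetimes = diagram[:, 1] - diagram[:, 0]
    p = lifetimes / lifetimes.sum()          # lifetime distribution
    entropy = float(-(p * np.log(p)).sum())  # persistence entropy
    return entropy, float(lifetimes.max()), len(lifetimes)

# Two equally persistent features: entropy log(2), amplitude 1.0, count 2.
dgm = np.array([[0.0, 1.0], [0.5, 1.5]])
e, amp, n = diagram_descriptors(dgm)
print(round(e, 3), amp, n)  # -> 0.693 1.0 2
```

Concatenating these triples across delays, window sizes, and homology dimensions yields the multiscale representation described above.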

15 pages, 2333 KB  
Article
Prediction of Fatigue Damage Evolution in 3D-Printed CFRP Based on Ultrasonic Testing and LSTM
by Erzhuo Li, Sha Xu, Hongqing Wan, Hao Chen, Yali Yang and Yongfang Li
Appl. Sci. 2026, 16(2), 1139; https://doi.org/10.3390/app16021139 - 22 Jan 2026
Viewed by 325
Abstract
To address the prediction of fatigue damage for 3D-printed Carbon Fiber Reinforced Polymer (CFRP), this study used 3D-printing technology to fabricate CFRP specimens. Through multi-stage fatigue testing, samples with varying porosity levels were obtained. Based on porosity test results and ultrasonic attenuation coefficient measurements of specimens under different fatigue cycle counts, a quantitative relationship model was established between the porosity and ultrasonic attenuation coefficient of 3D-printed CFRP. According to the porosity and fatigue-loading cycles obtained from tests, the Time-series Generative Adversarial Network (TimeGAN) algorithm was employed for data augmentation to meet the requirements for neural-network training. Subsequently, a Long Short-Term Memory (LSTM) neural network was utilized to predict the fatigue damage evolution of 3D-printed CFRP specimens. Research findings indicate that, by integrating the established relationship between porosity and the ultrasonic attenuation coefficient, non-destructive testing of material fatigue damage evolution based on the ultrasonic attenuation coefficient can be achieved. Full article

32 pages, 11897 KB  
Article
A Time Series Analysis of Monthly Fire Counts in Ontario, Canada, with Consideration of Climate Teleconnections
by Emmanuella Boateng and Kevin Granville
Fire 2026, 9(1), 44; https://doi.org/10.3390/fire9010044 - 19 Jan 2026
Cited by 1 | Viewed by 703
Abstract
Climate change can impact various facets of a region’s fire regime, such as the frequency and timing of fire ignitions. This study examines the temporal trends of monthly fire counts in the Northwest and Northeast Regions of Ontario, Canada, between 1960 and 2023. Fires ignited by human activities or lightning are analyzed separately. The significance of historical trends is investigated using the Cochrane–Orcutt method, which identifies decreasing trends in the number of human-caused fires for several months, including May through July. A complementary trend analysis of total area burned is also conducted. The forecasting of future months’ fire counts is explored using a Negative Binomial Autoregressive (NB-AR) model suitable for count time series data with overdispersion. In the NB-AR model, the use of climate teleconnections at a range of temporal lags as predictors is investigated, and their predictive skill is quantified through cross-validation estimates of Mean Absolute Error on a testing dataset. Considered teleconnections include the El Niño-Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO), Arctic Oscillation (AO), North Atlantic Oscillation (NAO), and Atlantic Multidecadal Oscillation (AMO). The study finds the use of teleconnection predictors promising, with a notable benefit for forecasting human-caused fire counts but mixed results for forecasting lightning-caused fire counts. Full article
(This article belongs to the Special Issue Effects of Climate Change on Fire Danger)
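Using teleconnections at a range of temporal lags as predictors amounts to building a lag-aligned design matrix before fitting the NB-AR model. A sketch of that alignment step, on toy series (fitting the negative binomial model itself would then be handed to a statistics package):

```python
import numpy as np

def lagged_design(counts, teleconnection, lags):
    """Build (X, y) for a count regression: y is the monthly fire count,
    X holds the previous month's count (AR term) plus the teleconnection
    index at each requested lag. Rows needing out-of-range history are
    dropped."""
    max_lag = max(lags + [1])
    y = counts[max_lag:]
    cols = [counts[max_lag - 1:-1]]                       # AR(1) term
    for L in lags:
        cols.append(teleconnection[max_lag - L: len(counts) - L])
    return np.column_stack(cols), y

counts = np.arange(10.0)          # toy monthly fire counts
enso = np.arange(10.0) * 0.1      # toy ENSO index
X, y = lagged_design(counts, enso, lags=[3])
print(X.shape, y.shape)  # -> (7, 2) (7,)
```

Row `i` of `X` pairs month `i`'s predictors (last month's count, the index three months back) with that month's count in `y`, which is the alignment a lagged-predictor forecast needs.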

29 pages, 4367 KB  
Article
SARIMA vs. Prophet: Comparative Efficacy in Forecasting Traffic Accidents Across Ecuadorian Provinces
by Wilson Chango, Ana Salguero, Tatiana Landivar, Roberto Vásconez, Geovanny Silva, Pedro Peñafiel-Arcos, Lucía Núñez and Homero Velasteguí-Izurieta
Computation 2026, 14(1), 5; https://doi.org/10.3390/computation14010005 - 31 Dec 2025
Viewed by 1197
Abstract
This study aimed to evaluate the comparative predictive efficacy of the SARIMA statistical model and the Prophet machine learning model for forecasting monthly traffic accidents across the 24 provinces of Ecuador, addressing a critical research gap in model selection for geographically and socioeconomically heterogeneous regions. By integrating classical time series modeling with algorithmic decomposition techniques, the research sought to determine whether a universally superior model exists or if predictive performance is inherently context-dependent. Monthly accident data from January 2013 to June 2025 were analyzed using a rolling-window evaluation framework. Model accuracy was assessed through Mean Absolute Percentage Error (MAPE) and Root Mean Square Error (RMSE) metrics to ensure consistency and comparability across provinces. The results revealed a global tie, with 12 provinces favoring SARIMA and 12 favoring Prophet, indicating the absence of a single dominant model. However, regional patterns of superiority emerged: Prophet achieved exceptional precision in coastal and urban provinces with stationary and high-volume time series—such as Guayas, which recorded the lowest MAPE (4.91%)—while SARIMA outperformed Prophet in the Andean highlands, particularly in non-stationary, medium-to-high-volume provinces such as Tungurahua (MAPE 6.07%) and Pichincha (MAPE 13.38%). Computational instability in MAPE was noted for provinces with extremely low accident counts (e.g., Galápagos, Carchi), though RMSE values remained low, indicating a metric rather than model limitation. Overall, the findings invalidate the notion of a universally optimal model and underscore the necessity of adopting adaptive, region-specific modeling frameworks that account for local geographic, demographic, and structural factors in predictive road safety analytics. Full article
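The MAPE instability noted for low-count provinces is easy to reproduce: identical absolute errors yield wildly different MAPE at different scales, while RMSE is unaffected. A small illustration with invented numbers:

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

def rmse(actual, forecast):
    """Root Mean Square Error, in the data's own units."""
    return float(np.sqrt(np.mean((actual - forecast) ** 2)))

# Identical absolute errors, very different scales: a high-volume province
# versus one with a handful of accidents per month.
high = np.array([100.0, 120.0, 110.0])
low = np.array([1.0, 2.0, 1.0])
forecast_high = high + 1.0   # off by one accident each month
forecast_low = low + 1.0

print(round(rmse(high, forecast_high), 1), round(rmse(low, forecast_low), 1))  # -> 1.0 1.0
print(round(mape(high, forecast_high), 1), round(mape(low, forecast_low), 1))  # -> 0.9 83.3
```

This is the "metric rather than model limitation" the authors describe for provinces such as Galápagos and Carchi.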

49 pages, 9827 KB  
Article
A Novel Hybrid Model Using Demand Concentration Curves, Chaotic AFDB-SFS Algorithm and Bi-LSTM Networks for Heating Oil Price Prediction
by Seçkin Karasu
Electronics 2025, 14(24), 4814; https://doi.org/10.3390/electronics14244814 - 7 Dec 2025
Viewed by 723
Abstract
Nowadays, renewable energy sources are gaining importance, yet global energy demand is primarily met by burning fossil fuels. Fluctuations in fossil fuel availability, driven by geopolitical tensions, supply–demand changes, and natural disasters, can lead to sudden energy price spikes or supply shortages, adversely affecting the global economy. Despite its negative impact on carbon emissions and climate change, Heating Oil (HO) offers advantages over other fossil fuels in efficiency, reliability, and availability. Accurate time series prediction models for HO are crucial for stakeholders. This study proposes a novel hybrid model, integrating the Chaotic Adaptive Fitness-Distance Balance-based Stochastic Fractal Search (AFDB-SFS) algorithm with a Bidirectional Long Short-Term Memory (Bi-LSTM) network, for HO close price prediction. The dataset comprises daily observations of five financial time series (close, open, high, low, and volume) over 4260 trading days, yielding a total of 21,300 data points (4260 days × 5 variables). During the feature extraction stage, financial signal processing methods such as the Demand Concentration Curve (DCC) and traditional technical indicators are utilized. A total of 189 features are extracted at appropriate intervals for each indicator. Due to the large number of features, the AFDB-SFS algorithm then efficiently identifies the most compatible feature subsets, optimizing the Bi-LSTM model based on three criteria: maximizing R2, minimizing RMSE, and minimizing feature count. Experimental results demonstrate the proposed hybrid model's superior performance, achieving high accuracy (R2 of 0.9959 and RMSE of 0.0364), outperforming contemporary models in the literature. 
Furthermore, the model is successfully implemented on the Jetson Orin Nano Developer Platform, enabling real-time, high-frequency HO price predictions with ultra-low latency (1.01 ms for Bi-LSTM), showcasing its practical utility for edge computing applications in commodity markets.
(This article belongs to the Section Computer Science & Engineering)
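The abstract describes scoring feature subsets on three criteria at once: maximize R2, minimize RMSE, and minimize feature count. A minimal sketch of such a scalarized fitness is shown below; the weights, the helper names, and the toy data are illustrative assumptions, not the paper's actual AFDB-SFS implementation.

```python
# Sketch of a multi-objective feature-subset fitness: reward model fit
# (R^2), penalize error (RMSE) and subset size. Weights are assumed.

def r2_score(y_true, y_pred):
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)) ** 0.5

def subset_fitness(y_true, y_pred, n_selected, n_total, w=(1.0, 1.0, 0.1)):
    """Higher is better: reward R^2, penalize RMSE and feature count."""
    w_r2, w_rmse, w_feat = w
    return (w_r2 * r2_score(y_true, y_pred)
            - w_rmse * rmse(y_true, y_pred)
            - w_feat * n_selected / n_total)

# Toy evaluation: a candidate subset using 20 of the 189 extracted features.
y = [1.0, 2.0, 3.0, 4.0]
pred = [1.1, 1.9, 3.2, 3.8]
print(round(subset_fitness(y, pred, n_selected=20, n_total=189), 4))
```

A search heuristic such as SFS would then compare candidate subsets by this single scalar, so a slightly worse fit can still win if it uses far fewer features.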

27 pages, 1582 KB  
Article
Advanced Computational Modeling and Machine Learning for Risk Stratification, Treatment Optimization, and Prognostic Forecasting in Appendiceal Neoplasms
by Jawad S. Alnajjar, Faisal A. Al-Harbi, Ahmed Khalifah Alsaif, Ghaida S. Alabdulaaly, Omar K. Aljubaili, Manal Alquaimi, Arwa F. Alrasheed, Mohammed N. AlAli, Maha A. Alghamdi and Ahmed Y. Azzam
Healthcare 2025, 13(23), 3074; https://doi.org/10.3390/healthcare13233074 - 26 Nov 2025
Viewed by 849
Abstract
Background: Appendiceal neoplasms account for less than 1% of gastrointestinal cancers but are increasing in incidence worldwide. Their marked histological heterogeneity creates multiple challenges for prognosis and management planning, as current staging systems are limited in capturing the full complexity of the disease. Methods: We synthesized data from 18 large observational studies, including 67,001 patients diagnosed between 1973 and 2024. Using advanced computational modeling, we combined multiple statistical methods and machine learning techniques to improve risk stratification, survival prediction, treatment optimization, and forecasting. A novel overlap-aware weighting methodology was applied to prevent double-counting across overlapping registries. Results: Our multi-dimensional risk model outperformed TNM staging (C-index 0.758 vs. 0.689), identifying five prognostic groups with five-year overall survival ranging from 88.7% (low-risk neuroendocrine tumors (NETs)) to 27.3% (high-risk signet-ring cell carcinomas (SRCC)). Hierarchical survival analysis demonstrated marked variation across histological variants, with goblet cell adenocarcinoma showing the most favorable outcomes. Causal inference confirmed the survival benefit of hyperthermic intraperitoneal chemotherapy (HIPEC) in stage IV disease (five-year overall survival (OS) 87.4%) and highlighted disparities in outcomes by race and institutional volume. Time-series forecasting projected a 25% to 50% increase in incidence by 2030, highlighting a growing global disease burden. Conclusions: By integrating multi-database evidence with advanced modeling and statistical methodologies, our findings offer valuable insights and implications for individualized prognosis, better management decision-making, and health system planning.
The proposed approach and demonstrated methodologies support continued progress in precision oncology and in applying computational modeling to big data across the digital health landscape.
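The abstract compares models by concordance index (C-index 0.758 vs. 0.689). A minimal sketch of how a C-index is computed is shown below; this simplified version ignores censoring, which a full survival analysis would have to handle, and the data are illustrative only.

```python
# Simplified concordance index: over all usable patient pairs, count how
# often the patient with the higher risk score is the one who died earlier.

def c_index(times, risks):
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            if times[i] == times[j]:
                continue  # tied survival times carry no ordering information here
            usable += 1
            # the shorter-lived patient should carry the higher risk score
            short, long_ = (i, j) if times[i] < times[j] else (j, i)
            if risks[short] > risks[long_]:
                concordant += 1.0
            elif risks[short] == risks[long_]:
                concordant += 0.5  # ties in risk count as half
    return concordant / usable

times = [2, 5, 9, 12]          # survival times (years)
risks = [0.9, 0.7, 0.4, 0.2]   # risk scores perfectly ordered against survival
print(c_index(times, risks))   # 1.0 for perfectly concordant data
```

A C-index of 0.5 corresponds to random ranking, so the reported gain from 0.689 to 0.758 reflects a substantially better ordering of patients by risk.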

25 pages, 5931 KB  
Article
An Intelligent System for Pigeon Egg Management: Integrating a Novel Lightweight YOLO Model and Multi-Frame Fusion for Robust Detection and Positioning
by Yufan Cheng, Yao Liu, Qianhui Li, Tao Jiang, Chengyue Ji, Longshen Liu, Ya Zhong, Jinling Wu and Guanchi Chen
Sensors 2025, 25(23), 7132; https://doi.org/10.3390/s25237132 - 21 Nov 2025
Viewed by 930
Abstract
To address the issues of high breakage rates and substantial labor costs in pigeon egg farming, this study proposes an intelligent pigeon egg recognition and positioning system based on an improved YOLOv12n object detection algorithm and OpenCV barcode recognition technology. Visual sensors installed on feeding machines were used to collect real-time video data of pigeon cages, with images obtained through frame extraction. The images were annotated using LabelImg to construct a pigeon egg detection dataset containing 1500 training images, 215 validation images, and 215 test images. After data augmentation, the dataset was used to train the pigeon egg recognition model. Additionally, customized barcodes were designed according to actual farm conditions and recognized using OpenCV through preprocessing steps including grayscale conversion, filtering, and binarization to extract positional information. Experimental results demonstrate that the proposed YOLOv12n-pg recognition model requires only 4.9 GFLOPS computational load, contains 1.56 M parameters, and has a model size of 3.5 MB, significantly lower than other models in the YOLO-n series. In inference tests, it achieved 99.4% mAP50 and 83.6% mAP50-95. The implementation of a majority voting method in practical testing further reduced the missed detection rate. The system successfully records “cage location—egg count” information as key-value pairs in a database. This system effectively enables automated management of pigeon eggs, improves recognition performance, and demonstrates higher efficiency and accuracy compared to manual operations, thereby establishing a foundation for subsequent research in pigeon egg recognition.
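The majority-voting step mentioned above can be sketched as follows: the egg count reported for a cage is the count observed in the majority of recent frames, which suppresses single-frame missed detections. The window size, tie-breaking rule, and frame values below are assumptions for illustration; the paper's exact procedure is not specified here.

```python
from collections import Counter

def majority_vote(frame_counts):
    """Return the most frequent per-frame egg count (ties -> first seen)."""
    return Counter(frame_counts).most_common(1)[0][0]

# Five consecutive frames of one cage; the detector misses one egg in frame 3.
frames = [2, 2, 1, 2, 2]
print(majority_vote(frames))  # 2
```

The voted count, keyed by the cage location decoded from the barcode, is what would be stored as the "cage location—egg count" pair.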
