Search Results (256)

Search Parameters:
Keywords = many-to-many time series forecasting

31 pages, 2487 KB  
Article
Enhancing Predictive Performance of LSTM–Attention Models for Investment Risk Forecasting
by Amina Ladhari and Heni Boubaker
Risks 2026, 14(1), 13; https://doi.org/10.3390/risks14010013 - 5 Jan 2026
Viewed by 231
Abstract
Time-series forecasting has been applied to a wide range of scientific and industrial problems for decades, and many models have been introduced for this purpose. These advancements have significantly improved the accuracy and reliability of predictions, especially in complex scenarios where traditional methods struggled. As data availability continues to expand, the integration of machine learning techniques is likely to further enhance forecasting capabilities across various fields. Today, hybrid techniques are gaining popularity, as they combine the advantages of different approaches to deliver improved predictive performance and more advanced visualization analytics for decision support. Recently, the integration of cross-entropy, fuzzy logic, and attention mechanisms in hybrid forecasting models has enhanced their ability to capture complex and uncertain patterns in financial and energy markets. In this study, we propose a hybrid ANN–LSTM deep learning model optimized with cross-entropy, fuzzy logic, and an attention mechanism to enhance the forecasting of financial and energy time series, specifically Ethereum and natural gas prices. Our model combines the feature extraction strength of the ANN with the temporal learning of the LSTM, while cross-entropy improves convergence, fuzzy logic handles uncertainty, and attention refines feature weighting. Since inaccurate forecasts can lead to greater estimation uncertainty and increased financial and operational risk, improving predictive reliability is essential for effective risk mitigation. These techniques prove effective not only in improving estimation accuracy but also in minimizing financial risks and supporting more informed investment decisions.
(This article belongs to the Special Issue Artificial Intelligence Risk Management)
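The attention-weighted LSTM idea at the core of such hybrids can be sketched in a few lines. The following is a minimal, illustrative PyTorch model, not the authors' ANN–LSTM hybrid with cross-entropy and fuzzy logic; the class name and sizes are hypothetical.

```python
import torch
import torch.nn as nn

class LSTMAttentionForecaster(nn.Module):
    """Minimal LSTM forecaster with soft attention over the hidden states."""
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)   # attention score per time step
        self.head = nn.Linear(hidden, 1)    # next-step point forecast

    def forward(self, x):                   # x: (batch, window, n_features)
        h, _ = self.lstm(x)                 # (batch, window, hidden)
        w = torch.softmax(self.score(h), dim=1)   # weights sum to 1 over time
        context = (w * h).sum(dim=1)        # attention-weighted window summary
        return self.head(context).squeeze(-1)

model = LSTMAttentionForecaster()
window = torch.randn(8, 30, 1)              # 8 windows of 30 past prices
next_step = model(window)                   # 8 one-step-ahead forecasts
```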
Show Figures

Figure 1

17 pages, 3476 KB  
Article
Integer-Valued Time Series Model via Copula-Based Bivariate Skellam Distribution
by Mohammed Alqawba, Norou Diawara and Mame Mor Sene
J. Risk Financial Manag. 2026, 19(1), 27; https://doi.org/10.3390/jrfm19010027 - 2 Jan 2026
Viewed by 285
Abstract
Time series analysis is crucial for modeling and forecasting diverse real-world phenomena. Traditional models typically assume continuous-valued data; however, many applications involve integer-valued series, often including negative integers. This paper introduces an approach that combines copula theory with the bivariate Skellam distribution to handle such integer-valued data effectively. Copulas are widely recognized for capturing complex dependencies among variables. By integrating copulas, our proposed method respects integer constraints while modeling positive, negative, and temporal dependencies accurately. Through simulation and an empirical study on a real-life example, we demonstrate that our class of models performs well. This approach has broad applicability in areas such as finance, epidemiology, and environmental science, where modeling series with integer values, both positive and negative, is essential.
(This article belongs to the Special Issue Mathematical Modelling in Economics and Finance)
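A Gaussian-copula construction of a serially dependent Skellam series can be illustrated directly with SciPy: a latent AR(1) Gaussian process supplies the temporal dependence, and the Skellam quantile function maps it to integers, positive and negative. This is a rough sketch, not the authors' copula-based bivariate model; all parameter values are arbitrary.

```python
import numpy as np
from scipy.stats import norm, skellam

rng = np.random.default_rng(0)
n, rho = 500, 0.6            # series length and lag-1 copula dependence (hypothetical)
mu1, mu2 = 3.0, 2.5          # Skellam means: X_t = Poisson(mu1) - Poisson(mu2)

# Latent Gaussian AR(1) carries the serial dependence (Gaussian copula).
z = np.empty(n)
z[0] = rng.standard_normal()
for t in range(1, n):
    z[t] = rho * z[t - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal()

u = norm.cdf(z)                               # uniform marginals
x = skellam.ppf(u, mu1, mu2).astype(int)      # integer-valued series, can go negative
print(x[:10], x.min(), x.max())
```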

24 pages, 2583 KB  
Article
Hybrid Demand Forecasting in Fuel Supply Chains: ARIMA with Non-Homogeneous Markov Chains and Feature-Conditioned Evaluation
by Daniel Kubek and Paweł Więcek
Energies 2025, 18(22), 6044; https://doi.org/10.3390/en18226044 - 19 Nov 2025
Viewed by 627
Abstract
In the context of growing data availability and increasing complexity of demand patterns in retail fuel distribution, selecting effective forecasting models for large collections of time series is becoming a key operational challenge. This study investigates the effectiveness of a hybrid forecasting approach combining ARIMA models with dynamically updated Markov Chains. Unlike many existing studies that focus on isolated or small-scale experiments, this research evaluates the hybrid model across a full set of approximately 150 time series collected from multiple petrol stations, without pre-clustering or manual selection. A comprehensive set of statistical and structural features is extracted from each time series to analyze their relation to forecast performance. The results show that the hybrid ARIMA–Markov approach significantly outperforms both individual statistical models and commonly applied machine learning methods in many cases, particularly for non-stationary or regime-shifting series. In 100% of cases, the hybrid model reduced the error compared to both baseline models—the median RMSE improvement over ARIMA was 13.03%, and 15.64% over the Markov model, with statistical significance confirmed by the Wilcoxon signed-rank test. The analysis also highlights specific time series features—such as entropy, regime shift frequency, and autocorrelation structure—as strong indicators of whether hybrid modeling yields performance gains. Feature-conditioning analyses (e.g., lag-1 autocorrelation, volatility, entropy) explain when hybridization helps, enabling a feature-aware workflow that selectively deploys model components and narrows parameter searches. The greatest benefits of applying the hybrid model were observed for time series characterized by high variability, moderate entropy of differences, and a well-defined temporal dependency structure—the correlation values between these features and the improvement in hybrid performance relative to ARIMA and Markov models reached 0.55–0.58, ensuring adequate statistical significance. Such approaches are particularly valuable in enterprise environments dealing with thousands of time series, where automated model configuration becomes essential. The findings position interpretable, adaptive hybrids as a practical default for short-horizon demand forecasting in fuel supply chains and, more broadly, in energy-use applications characterized by heterogeneous profiles and evolving regimes.
(This article belongs to the Section A: Sustainable Energy)
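One way to read such a hybrid is: let ARIMA carry the linear structure and let a Markov chain on discretised residual states supply a correction. Below is a sketch with statsmodels using a time-homogeneous chain for brevity, not the authors' dynamically updated, non-homogeneous formulation; the state definition, ARIMA order, and data are assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=300)) + 100          # stand-in for a daily demand series

fit = ARIMA(y, order=(1, 1, 1)).fit()
resid = fit.resid[1:]                               # skip the initial differencing artifact

# Discretise residuals into 3 states (low / mid / high) and estimate transitions.
edges = np.quantile(resid, [1 / 3, 2 / 3])
states = np.digitize(resid, edges)                  # values in {0, 1, 2}
P = np.zeros((3, 3))
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1
P = P / P.sum(axis=1, keepdims=True)

# Hybrid one-step forecast: ARIMA mean plus the expected residual of the next state.
state_means = np.array([resid[states == s].mean() for s in range(3)])
correction = P[states[-1]] @ state_means
hybrid_forecast = fit.forecast(1)[0] + correction
print(hybrid_forecast)
```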

22 pages, 5646 KB  
Article
Simulations of Damage Scenarios in Urban Areas: The Case of the Seismic Sequence of L’Aquila 2009
by Rosa Maria Sava, Rosalinda Arcoraci, Annalisa Greco, Alessandro Pluchino and Andrea Rapisarda
Buildings 2025, 15(21), 3980; https://doi.org/10.3390/buildings15213980 - 4 Nov 2025
Viewed by 849
Abstract
Simulation of damage scenarios is an important tool for seismic risk mitigation. While a detailed analysis of each building would be preferable to assess their vulnerability to seismic hazard, simplified yet robust methodologies are necessary at a large urban scale to overcome computational costs or data unavailability. Moreover, most damage assessments simulate single seismic shocks, although in many real sequences, where a series of aftershocks follows the mainshock, buildings accumulate damage, which increases their vulnerability over time. The present study builds on a recently developed methodology for simulating urban-scale damage scenarios across seismic sequences, explicitly accounting for damage accumulation and the evolution of vulnerability. In particular, the availability of a dataset reporting the damage observed in the L’Aquila area (Italy) during the severe earthquake sequence of 2009, in combination with georeferenced maps representing the spatial distribution of the ground motion, allows the methodology to be calibrated by comparing the simulation results with the real data from the sequence. Although calibrated on the L’Aquila dataset, the proposed procedure could also be applied to different urban areas, with both real and synthetic seismic sequences, enabling the forecasting of damage scenarios to support the development of effective strategies for seismic risk mitigation.

19 pages, 373 KB  
Article
Time-Series Recommendation Quality, Algorithm Aversion, and Data-Driven Decisions: A Temporal Human–AI Interaction Perspective
by Shan Jiang, Tianyu Chen, Yufei Tan, Shiqi Gao and Lanhao Li
Mathematics 2025, 13(21), 3528; https://doi.org/10.3390/math13213528 - 4 Nov 2025
Viewed by 1776
Abstract
New AI technologies have empowered e-commerce personalized recommendation systems, many of which now leverage time-series forecasting to capture dynamic user preferences. However, buyers’ algorithm aversion hinders these systems from realizing their full potential in enabling data-driven decisions. Current research focuses heavily on artifact design and algorithm optimization to reduce aversion, with insufficient attention to the temporal dimensions of human–AI interaction (HAII). To address this gap, this study explores how recommendation accuracy, novelty, and diversity—key attributes in time-series recommendation contexts—influence buyers’ algorithm aversion from a temporal HAII perspective. Data from 205 online survey responses were analyzed using partial least squares structural equation modeling (PLS-SEM). Results reveal that accuracy (encompassing sequential prediction consistency), novelty (balanced with temporal relevance), and diversity (covering long-term preferences) negatively impact algorithm aversion, with perceived usefulness as a mediator. Reduced aversion further facilitates data-driven purchasing decisions. This study enriches the algorithm aversion literature by emphasizing temporal HAII in time-series recommendation scenarios, bridging human factors research with data-driven decision-making in e-commerce.

27 pages, 2139 KB  
Article
Generalisation Bounds of Zero-Shot Economic Forecasting Using Time Series Foundation Models
by Jittarin Jetwiriyanon, Teo Susnjak and Surangika Ranathunga
Mach. Learn. Knowl. Extr. 2025, 7(4), 135; https://doi.org/10.3390/make7040135 - 3 Nov 2025
Cited by 1 | Viewed by 1971
Abstract
This study investigates the transfer learning capabilities of Time-Series Foundation Models (TSFMs) under a zero-shot setup to forecast macroeconomic indicators. New TSFMs are continually emerging, offering significant potential to provide ready-trained and accurate forecasting models that generalise across a wide spectrum of domains. However, the transferability of their learning to many domains, especially economics, is not well understood. To that end, we study the performance profile of TSFMs for economic forecasting, bypassing the need to train bespoke econometric models on extensive training datasets. Our experiments were conducted on a univariate case study dataset, in which we rigorously back-tested three state-of-the-art TSFMs (Chronos, TimeGPT, and Moirai) under data-scarce conditions and structural breaks. Our results demonstrate that appropriately engineered TSFMs can internalise rich economic dynamics, accommodate regime shifts, and deliver well-behaved uncertainty estimates out of the box, while matching and exceeding state-of-the-art multivariate models currently used in this domain. Our findings suggest that, without any fine-tuning or additional multivariate inputs, TSFMs can match or outperform classical models under both stable and volatile economic conditions. However, like all models, they are vulnerable to performance degradation during periods of rapid shocks, though they recover forecasting accuracy faster than classical models. The findings offer guidance to practitioners on when zero-shot deployments are viable for macroeconomic monitoring and strategic planning.
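The zero-shot protocol described here amounts to rolling-origin back-testing with a frozen, pre-trained forecaster. A generic harness is sketched below with a placeholder `zero_shot_forecast` function standing in for a TSFM such as Chronos, TimeGPT, or Moirai (their actual client APIs are not shown); the horizon, origins, and data are arbitrary.

```python
import numpy as np

def zero_shot_forecast(history: np.ndarray, horizon: int) -> np.ndarray:
    """Placeholder for a frozen TSFM call; here a naive last-value forecast."""
    return np.repeat(history[-1], horizon)

def rolling_origin_backtest(y: np.ndarray, horizon: int, n_origins: int) -> np.ndarray:
    """Evaluate a zero-shot model at several forecast origins without retraining."""
    errors = []
    origins = np.linspace(len(y) // 2, len(y) - horizon, n_origins, dtype=int)
    for o in origins:
        forecast = zero_shot_forecast(y[:o], horizon)
        actual = y[o:o + horizon]
        errors.append(np.mean(np.abs(forecast - actual)))   # MAE per origin
    return np.array(errors)

y = np.sin(np.arange(200) / 8.0) + np.random.default_rng(2).normal(0, 0.1, 200)
print(rolling_origin_backtest(y, horizon=12, n_origins=10).mean())
```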

29 pages, 835 KB  
Article
Non-Negative Forecast Reconciliation: Optimal Methods and Operational Solutions
by Daniele Girolimetto
Forecasting 2025, 7(4), 64; https://doi.org/10.3390/forecast7040064 - 26 Oct 2025
Cited by 1 | Viewed by 1228
Abstract
In many different applications, such as retail, energy, and tourism, forecasts for a set of related time series must satisfy both linear and non-negativity constraints, as negative values are meaningless in practice. Standard regression-based reconciliation approaches achieve coherence with linear constraints, but may generate negative forecasts, reducing interpretability and usability. This paper develops and evaluates three alternative strategies for non-negative forecast reconciliation. First, reconciliation is formulated as a non-negative least squares problem and solved with the operator splitting quadratic program, allowing flexible inclusion of additional constraints. Second, we propose an iterative non-negative reconciliation with immutable forecasts, offering a practical optimization-based alternative. Third, we investigate a family of set-negative-to-zero heuristics that achieve efficiency and interpretability at minimal computational cost. Using the Australian Tourism Demand dataset, we compare these approaches in terms of forecast accuracy and computation time. The results show that non-negativity constraints consistently improve accuracy compared to base forecasts. Overall, the set-negative-to-zero heuristics achieve near-optimal performance with negligible computation time, the block principal pivoting algorithm provides a good accuracy–efficiency compromise, and the operator splitting quadratic program offers flexibility for incorporating additional constraints in large-scale applications.
(This article belongs to the Special Issue Feature Papers of Forecasting 2025)
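For a toy two-level hierarchy (Total = A + B), the non-negative least-squares formulation and a set-negative-to-zero heuristic of the kind compared in the paper can both be sketched with SciPy. This is illustrative only; the paper's optimal (covariance-weighted) reconciliation, iterative immutable-forecast method, and OSQP solver are not reproduced.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Summing matrix S for a hierarchy with Total = A + B (rows: Total, A, B).
S = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
y_hat = np.array([10.0, 12.0, -1.5])     # incoherent base forecasts (Total, A, B)

# 1) Non-negative least squares: coherent forecasts built from bottom-level b >= 0.
res = lsq_linear(S, y_hat, bounds=(0.0, np.inf))
y_nnls = S @ res.x                        # reconciled, coherent, non-negative

# 2) Set-negative-to-zero heuristic: unconstrained OLS reconciliation, then clip.
b_ols = np.linalg.lstsq(S, y_hat, rcond=None)[0]
y_snz = S @ np.clip(b_ols, 0.0, None)

print("NNLS:", y_nnls, " set-negative-to-zero:", y_snz)
```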

19 pages, 7070 KB  
Article
Research on the Application of Atmospheric Motion Vector from MetOp Satellite Series in CMA-GFS
by Jiali Ma, Yan Liu and Xiaomin Wan
Remote Sens. 2025, 17(21), 3519; https://doi.org/10.3390/rs17213519 - 23 Oct 2025
Viewed by 625
Abstract
Atmospheric motion vector (AMV) products from EUMETSAT’s MetOp satellite series, including MetOp-B, MetOp-C, and the MetOp-B/C tandem (MetOp-Dual), have been assimilated at many numerical weather prediction centers worldwide. However, they have not yet been applied in the China Meteorological Administration’s Global Forecast System (CMA-GFS). This study addresses this gap by developing assimilation techniques, including quality control and thinning methods, for MetOp AMVs. Based on these techniques, one-month assimilation and forecasting experiments reveal that MetOp AMVs increased the AMV volume in CMA-GFS by 25%, filling certain gaps over polar and oceanic areas. Notable and steady improvements in the CMA-GFS background have been found, particularly in polar and high-latitude regions. The usable forecast lead time for the global 500 hPa geopotential height is extended by 0.22 days, enhancing the reliability of medium-range forecasts. Furthermore, the more substantial improvements in short-range (0–3 day) forecasting could benefit severe weather alerting. This study is the first to successfully apply MetOp-B, MetOp-C, and MetOp-Dual products in CMA-GFS, confirming their value for improving the performance of the system.
(This article belongs to the Section Atmospheric Remote Sensing)
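Observation thinning of the kind mentioned here typically keeps at most one AMV per latitude-longitude-pressure box. A minimal grid-box thinning sketch follows; the box sizes and the selection rule (keep the wind with the highest quality indicator) are assumptions for illustration, not the CMA-GFS settings.

```python
import numpy as np

def thin_amvs(lat, lon, pres, qi, box_deg=2.0, box_hpa=100.0):
    """Keep the highest-QI atmospheric motion vector in each lat/lon/pressure box."""
    keys = np.stack([np.floor(lat / box_deg),
                     np.floor(lon / box_deg),
                     np.floor(pres / box_hpa)], axis=1)
    keep = {}
    for i, key in enumerate(map(tuple, keys)):
        if key not in keep or qi[i] > qi[keep[key]]:
            keep[key] = i
    return np.sort(list(keep.values()))          # indices of retained observations

rng = np.random.default_rng(3)
n = 1000
idx = thin_amvs(lat=rng.uniform(-90, 90, n), lon=rng.uniform(0, 360, n),
                pres=rng.uniform(100, 1000, n), qi=rng.uniform(0, 1, n))
print(f"kept {idx.size} of {n} winds")
```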

25 pages, 1741 KB  
Article
Event-Aware Multimodal Time-Series Forecasting via Symmetry-Preserving Graph-Based Cross-Regional Transfer Learning
by Shu Cao and Can Zhou
Symmetry 2025, 17(11), 1788; https://doi.org/10.3390/sym17111788 - 22 Oct 2025
Viewed by 1067
Abstract
Forecasting real-world time series in domains with strong event sensitivity and regional variability poses unique challenges, as predictive models must account for sudden disruptions, heterogeneous contextual factors, and structural differences across locations. In tackling these challenges, we draw on the concept of symmetry, understood here as the balance and invariance patterns across temporal, multimodal, and structural dimensions, which help reveal consistent relationships and recurring patterns within complex systems. This study is based on two multimodal datasets covering 12 tourist regions and more than 3 years of records, ensuring robustness and practical relevance of the results. In many applications, such as monitoring economic indicators, assessing operational performance, or predicting demand patterns, short-term fluctuations are often triggered by discrete events, policy changes, or external incidents, which conventional statistical and deep learning approaches struggle to model effectively. To address these limitations, we propose an event-aware multimodal time-series forecasting framework with graph-based regional transfer built upon an enhanced PatchTST backbone. The framework unifies multimodal feature extraction, event-sensitive temporal reasoning, and graph-based structural adaptation. Unlike Informer, Autoformer, FEDformer, or PatchTST, our model explicitly addresses naive multimodal fusion, event-agnostic modeling, and weak cross-regional transfer by introducing an event-aware Multimodal Encoder, a Temporal Event Reasoner, and a Multiscale Graph Module. Experiments on diverse multi-region multimodal datasets demonstrate that our method achieves substantial improvements over eight state-of-the-art baselines in forecasting accuracy, event response modeling, and transfer efficiency. Specifically, our model achieves a 15.06% improvement in the event recovery index, a 15.1% reduction in MAE, and a 19.7% decrease in event response error compared to PatchTST, highlighting its empirical impact on tourism event economics forecasting.
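The PatchTST backbone mentioned here rests on splitting each series into overlapping patches that are linearly embedded before the transformer. Only that patching step is sketched below in PyTorch; the patch length, stride, and embedding size are arbitrary, and the paper's event-aware and graph modules are not shown.

```python
import torch
import torch.nn as nn

def patchify(x: torch.Tensor, patch_len: int = 16, stride: int = 8) -> torch.Tensor:
    """Split a (batch, time) series into overlapping patches: (batch, n_patches, patch_len)."""
    return x.unfold(dimension=-1, size=patch_len, step=stride)

embed = nn.Linear(16, 128)                   # per-patch linear embedding
x = torch.randn(4, 96)                       # 4 series, 96 time steps each
tokens = embed(patchify(x))                  # (4, 11, 128) patch tokens for a transformer
print(tokens.shape)
```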

23 pages, 4444 KB  
Article
StreamTS: A Streamline Solution Towards Zero-Shot Time Series Forecasting with Large Language Models
by Wei Song, Yi Fang, Xinyu Gu, Wenbo Zhang, Zhixiang Liu, Yu Cheng and Mario Di Mauro
Electronics 2025, 14(20), 4088; https://doi.org/10.3390/electronics14204088 - 17 Oct 2025
Viewed by 1275
Abstract
Time series forecasting (TSF) is gaining significance in various applications. In recent years, many pre-trained large language models (LLMs) have been proposed, and some of them have been adapted for use in TSF. When applying LLMs to TSF, existing strategies with complex adapters and data preprocessing modules can increase training time. We introduce StreamTS, a highly streamlined time series forecasting framework built upon LLMs and decomposition-based learning. First, time series are decomposed into a trend component and a seasonal component after instance normalization. Then, a pre-trained LLM, facilitated by the proposed BC-Prompt, is used for long-term trend prediction. Concurrently, a linear model fits the future short-term seasonal component. The predicted trend and seasonal series are finally added to generate the forecasting results. To achieve zero-shot forecasting, we replace the linear prediction part with a statistical learning method. Extensive experiments demonstrate that our proposed framework outperforms many TSF-specific models across various datasets and achieves significant improvements over LLM-based TSF methods.
(This article belongs to the Special Issue Advances in Data-Driven Artificial Intelligence)
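The decomposition-based split described here (one model for the trend, a simpler one for the seasonal part, then recombination) can be illustrated without the LLM. The sketch below uses a moving-average trend with naive drift extrapolation and a seasonal-naive component; StreamTS's instance normalization, BC-Prompt, and LLM trend predictor are not reproduced, and all settings are assumptions.

```python
import numpy as np

def decompose(y: np.ndarray, window: int = 24):
    """Additive split into a moving-average trend and the remaining 'seasonal' part."""
    trend = np.convolve(y, np.ones(window) / window, mode="same")
    return trend, y - trend

def forecast(y: np.ndarray, horizon: int, period: int = 24) -> np.ndarray:
    trend, seasonal = decompose(y)
    # Trend: naive drift extrapolation (stand-in for the LLM trend predictor).
    slope = (trend[-1] - trend[-period]) / period
    trend_fc = trend[-1] + slope * np.arange(1, horizon + 1)
    # Seasonal: repeat the last observed period (stand-in for the linear model).
    seasonal_fc = np.tile(seasonal[-period:], horizon // period + 1)[:horizon]
    return trend_fc + seasonal_fc

y = np.sin(2 * np.pi * np.arange(480) / 24) + np.arange(480) * 0.01
print(forecast(y, horizon=48)[:5])
```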

22 pages, 3339 KB  
Article
An AutoML Algorithm: Multiple-Steps Ahead Forecasting of Correlated Multivariate Time Series with Anomalies Using Gated Recurrent Unit Networks
by Ying Su and Morgan C. Wang
AI 2025, 6(10), 267; https://doi.org/10.3390/ai6100267 - 14 Oct 2025
Cited by 1 | Viewed by 1253
Abstract
Multiple time series forecasting is critical in domains such as energy management, economic analysis, web traffic prediction and air pollution monitoring to support effective resource planning. Traditional statistical learning methods, including Vector Autoregression (VAR) and Vector Autoregressive Integrated Moving Average (VARIMA), struggle with nonstationarity, temporal dependencies, inter-series correlations, and data anomalies such as trend shifts, seasonal variations, and missing data. Furthermore, their effectiveness in multi-step ahead forecasting is often limited. This article presents an Automated Machine Learning (AutoML) framework that provides an end-to-end solution for researchers who lack in-depth knowledge of time series forecasting or advanced programming skills. This framework utilizes Gated Recurrent Unit (GRU) networks, a variant of Recurrent Neural Networks (RNNs), to tackle multiple correlated time series forecasting problems, even in the presence of anomalies. To reduce complexity and facilitate the AutoML process, many model parameters are pre-specified, thereby requiring minimal tuning. This design enables efficient and accurate multi-step forecasting while addressing issues including missing values and structural shifts. We also examine the advantages and limitations of GRU-based RNNs within the AutoML system for multivariate time series forecasting. Model performance is evaluated using multiple accuracy metrics across various forecast horizons. The empirical results confirm our proposed approach’s ability to capture inter-series dependencies and handle anomalies in long-range forecasts.
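A direct multi-step GRU of the kind such a framework automates can be written compactly in PyTorch: the recurrent layer encodes the correlated input series jointly and a linear head emits all horizon steps at once. This is a bare sketch with hypothetical sizes; the AutoML wrapper, anomaly handling, and pre-specified tuning are not shown.

```python
import torch
import torch.nn as nn

class GRUMultiStep(nn.Module):
    """Encode several correlated series jointly and forecast H steps of each."""
    def __init__(self, n_series=3, hidden=64, horizon=12):
        super().__init__()
        self.n_series, self.horizon = n_series, horizon
        self.gru = nn.GRU(n_series, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_series * horizon)

    def forward(self, x):                    # x: (batch, window, n_series)
        _, h_last = self.gru(x)              # h_last: (1, batch, hidden)
        out = self.head(h_last.squeeze(0))   # (batch, n_series * horizon)
        return out.view(-1, self.horizon, self.n_series)

model = GRUMultiStep()
window = torch.randn(16, 48, 3)              # 16 samples, 48 past steps, 3 series
forecast = model(window)                     # (16, 12, 3): 12-step-ahead joint forecast
```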

17 pages, 887 KB  
Article
Comparison of Linear and Beta Autoregressive Models in Forecasting Nonstationary Percentage Time Series
by Carlo Grillenzoni
Forecasting 2025, 7(4), 57; https://doi.org/10.3390/forecast7040057 - 13 Oct 2025
Viewed by 775
Abstract
Positive percentage time series are present in many empirical applications; they take values in the continuous interval (0,1) and are often modeled with linear dynamic models. Risks of biased predictions (outside the admissible range) and problems of heteroskedasticity in the presence of asymmetric distributions are often ignored by practitioners. Alternative models are proposed in the statistical literature; the most suitable is dynamic beta regression, which belongs to the class of generalized linear models (GLM) and uses the logit transformation as a link function. However, owing to the Jensen inequality, this approach may also not be optimal in prediction; thus, the aim of the present paper is an in-depth forecasting comparison of linear and beta autoregressions. Simulation experiments and applications to nonstationary time series (the US unemployment rate and Brazilian hydroelectric energy) are carried out. Rolling regression for time-varying parameters is applied to both linear and beta models, and a prediction criterion for the joint selection of model order and sample size is defined.
(This article belongs to the Special Issue Feature Papers of Forecasting 2025)
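The core contrast, a linear autoregression on the raw percentage versus a model on the logit scale whose back-transformed mean is biased by the Jensen inequality, can be shown in a few lines. This sketch fits a plain AR(1) by least squares on both scales; it is not the paper's beta regression or rolling estimation, and the simulated series is arbitrary.

```python
import numpy as np
from scipy.special import logit, expit

rng = np.random.default_rng(4)
n = 400
u = np.empty(n)
u[0] = 0.05
for t in range(1, n):                         # a persistent rate series kept inside (0, 1)
    u[t] = np.clip(0.9 * u[t - 1] + 0.005 + rng.normal(0, 0.01), 1e-3, 1 - 1e-3)

def ar1_fit_forecast(y):
    """Least-squares AR(1): y_t = a + b * y_{t-1}; returns the one-step forecast."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    a, b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return a + b * y[-1]

raw_fc = ar1_fit_forecast(u)                  # may leave (0, 1) for extreme series
logit_fc = expit(ar1_fit_forecast(logit(u)))  # respects (0, 1); mean biased by Jensen
print(raw_fc, logit_fc)
```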

33 pages, 7835 KB  
Article
PyGEE-ST-MEDALUS: AI Spatiotemporal Framework Integrating MODIS and Sentinel-1/-2 Data for Desertification Risk Assessment in Northeastern Algeria
by Zakaria Khaldi, Jingnong Weng, Franz Pablo Antezana Lopez, Guanhua Zhou, Ilyes Ghedjatti and Aamir Ali
Remote Sens. 2025, 17(19), 3350; https://doi.org/10.3390/rs17193350 - 1 Oct 2025
Viewed by 1255
Abstract
Desertification threatens the sustainability of dryland ecosystems, yet many existing monitoring frameworks rely on static maps, coarse spatial resolution, or lack temporal forecasting capacity. To address these limitations, this study introduces PyGEE-ST-MEDALUS, a novel spatiotemporal framework combining the full MEDALUS desertification model with deep learning (CNN, LSTM, DeepMLP) and machine learning (RF, XGBoost, SVM) techniques on the Google Earth Engine (GEE) platform. Applied across Tebessa Province, Algeria (2001–2028), the framework integrates MODIS and Sentinel-1/-2 data to compute four core indices—climatic, soil, vegetation, and land management quality—and create the Desertification Sensitivity Index (DSI). Unlike prior studies that focus on static or spatial-only MEDALUS implementations, PyGEE-ST-MEDALUS introduces scalable, time-series forecasting, yielding superior predictive performance (R2 ≈ 0.96; RMSE < 0.03). Over 71% of the region was classified as having high to very high sensitivity, driven by declining vegetation and thermal stress. Comparative analysis confirms that this study advances the state-of-the-art by integrating interpretable AI, near-real-time satellite analytics, and full MEDALUS indicators into one cloud-based pipeline. These contributions make PyGEE-ST-MEDALUS a transferable, efficient decision-support tool for identifying degradation hotspots, supporting early warning systems, and enabling evidence-based land management in dryland regions.
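The MEDALUS sensitivity index referred to here is conventionally the geometric mean of the four quality layers. A one-line NumPy version is shown below on toy per-pixel rasters; the paper's exact index ranges, classification thresholds, and GEE implementation are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy per-pixel quality layers on the usual 1 (good) to 2 (poor) MEDALUS scale.
cqi, sqi, vqi, mqi = (rng.uniform(1.0, 2.0, size=(100, 100)) for _ in range(4))

# Desertification sensitivity index: geometric mean of the four quality indices.
dsi = (cqi * sqi * vqi * mqi) ** 0.25
print(dsi.min(), dsi.max())
```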

24 pages, 5189 KB  
Article
Spatiotemporal Deep Learning to Forecast Storm Surge Water Levels and Storm Trajectory: Case Study Hurricane Harvey
by Junqin Hou, Muhammad K. Akbar, Manar D. Samad and Lizhi Ouyang
J. Mar. Sci. Eng. 2025, 13(9), 1780; https://doi.org/10.3390/jmse13091780 - 15 Sep 2025
Viewed by 1660
Abstract
Using Hurricane Harvey as a case study, this paper uses the hurricane track, wind velocity and pressure, bathymetry, Manning’s n coefficients, tidal forcing, and storm surge results generated by the ADCIRC+SWAN model as input to construct a uniform spatiotemporal deep learning model for storm surge forecasting. The model transforms inputs into embeddings and performs feature fusion and extraction. The regression layer of the model outputs the predicted values of storm surge water elevation, station water level time series, and hurricane tracks with attributes. To analyze the model’s adaptability and robustness as a surrogate for ADCIRC, ablation experiments are conducted on up to 10 input variables to investigate the impact of various inputs on the results. Heat maps comparing 3, 6, 9, and 12 h horizon predictions with targets reveal excellent performance across the large number of nodes and multiple inputs on the training, validation, and test sets. When the model is used to forecast water levels at 12 observation stations, the 9 h forecasting horizon is generally equal to or better than the ADCIRC simulation results. When the model is used to predict hurricane tracks and attributes, the 12 h forecast horizon is relatively close to the observed values, achieving satisfactory results. This model is developed and tested using Hurricane Harvey data and storm surge results as a case study; developing a generalized prediction model would require a large amount of data and storm surge results from many hurricanes.
(This article belongs to the Section Physical Oceanography)

46 pages, 47184 KB  
Article
Goodness of Fit in the Marginal Modeling of Round-Trip Times for Networked Robot Sensor Transmissions
by Juan-Antonio Fernández-Madrigal, Vicente Arévalo-Espejo, Ana Cruz-Martín, Cipriano Galindo-Andrades, Adrián Bañuls-Arias and Juan-Manuel Gandarias-Palacios
Sensors 2025, 25(17), 5413; https://doi.org/10.3390/s25175413 - 2 Sep 2025
Viewed by 1560
Abstract
When complex computations cannot be performed on board a mobile robot, sensory data must be transmitted to a remote station to be processed, and the resulting actions must be sent back to the robot to execute, forming a repeating cycle. This involves stochastic round-trip times in the case of non-deterministic network communications and/or non-hard real-time software. Since robots need to react within strict time constraints, modeling these round-trip times becomes essential for many tasks. Modern approaches for modeling sequences of data are mostly based on time-series forecasting techniques, which impose a computational cost that may be prohibitive for real-time operation, do not consider all the delay sources existing in the sw/hw system, or do not work fully online, i.e., within the time of the current round-trip. Marginal probabilistic models, on the other hand, often have a lower cost, since they discard temporal dependencies between successive measurements of round-trip times, a suitable approximation when regime changes are properly handled given the typically stationary nature of these round-trip times. In this paper we focus on the hypothesis tests needed for marginal modeling of the round-trip times in remotely operated robotic systems with the presence of abrupt changes in regimes. We analyze in depth three common models, namely Log-logistic, Log-normal, and Exponential, and propose some modifications of parameter estimators for them and new thresholds for well-known goodness-of-fit tests, which are aimed at the particularities of our setting. We then evaluate our proposal on a dataset gathered from a variety of networked robot scenarios, both real and simulated; through >2100 h of high-performance computer processing, we assess the statistical robustness and practical suitability of these methods for these kinds of robotic applications.
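The marginal-model comparison described here can be reproduced in outline with SciPy: fit each candidate distribution to a batch of round-trip times and score it with a Kolmogorov-Smirnov statistic (SciPy's `fisk` is the log-logistic). A sketch on synthetic data; the paper's modified parameter estimators and recalibrated test thresholds are not included.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
rtt = rng.lognormal(mean=-3.0, sigma=0.4, size=2000)   # synthetic round-trip times (s)

candidates = {
    "log-normal": stats.lognorm,
    "log-logistic": stats.fisk,
    "exponential": stats.expon,
}
for name, dist in candidates.items():
    params = dist.fit(rtt)                              # maximum-likelihood fit
    ks = stats.kstest(rtt, dist.cdf, args=params)       # goodness-of-fit (KS) score
    print(f"{name:12s}  KS statistic = {ks.statistic:.4f}")
```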
