Search Results (1,018)

Search Parameters:
Keywords = multivariate time series

31 pages, 492 KB  
Review
Artificial Intelligence for Blood Glucose Level Prediction in Type 1 Diabetes: Methods, Evaluation, and Emerging Advances
by Heydar Khadem, Hoda Nemat, Jackie Elliott and Mohammed Benaissa
Sensors 2026, 26(9), 2675; https://doi.org/10.3390/s26092675 (registering DOI) - 25 Apr 2026
Abstract
Blood glucose level (BGL) prediction, by providing early warnings regarding unsatisfactory glycaemic control and maximising the amount of time BGL remains in the target range, can contribute to minimising both acute and chronic complications related to diabetes. This paper aims to provide an overview of data-driven approaches for BGL prediction in type 1 diabetes mellitus (T1DM). This review summarises different aspects of developing and evaluating data-driven prediction models, including model strategy, model input, prediction horizon, and prediction performance. It also examines applications of recent artificial intelligence (AI) techniques, including deep learning, transfer learning, ensemble learning, and causal analysis in the management of T1DM. Recent studies indicate that machine learning approaches often outperform classical time-series forecasting models in BGL prediction, particularly when using multivariate inputs. These findings also highlight the potential of advanced AI methods to improve prediction accuracy. Moreover, applying appropriate statistical analyses is essential to enable valid comparisons between different BGL prediction models, especially given the considerable inter-individual variability among people with T1DM. The development of efficient methods for integrating affecting variables into BGL prediction requires further research. Given the promising performance of advanced AI techniques and the rapid growth of AI innovation, continued exploration of cutting-edge AI strategies will be crucial for further improving BGL prediction models. Full article

22 pages, 2892 KB  
Article
STFNet: A Specialized Time-Frequency Domain Feature Extraction Neural Network for Long-Term Wind Power Forecasting
by Tingxiao Ding, Xiaochun Hu, Yan Chen, Rongbin Liu, Jin Su, Rongxing Jiang and Yiming Qin
Energies 2026, 19(9), 2080; https://doi.org/10.3390/en19092080 (registering DOI) - 25 Apr 2026
Abstract
The rapid expansion of renewable energy has raised the demand for accurate, long-term wind power forecasting. However, wind power series are strongly affected by meteorological factors and exhibit pronounced volatility, making long-term prediction challenging. To model these characteristics more comprehensively, we propose STFNet, a dual-branch neural architecture that integrates time-domain and frequency-domain modeling. STFNet contains two key modules: (1) an MLFE module, which explicitly captures lag effects and non-stationary transitions through parallel multi-scale convolutions and a difference-convolution branch and further enhances multivariate dependency learning via cross-variable interaction modeling, and (2) an FGFE module, which applies the discrete cosine transform (DCT) to capture long-cycle trends and uses a learnable low-pass filter for noise suppression. Experiments on two real-world wind farm datasets (LY and HG) show that STFNet consistently outperforms strong baselines, achieving average MSE reductions of 15.9–26.6% while maintaining high computational efficiency. Ablation studies further confirm the effectiveness of each module, indicating the strong practical potential of STFNet for wind farm operation and management. Full article
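The FGFE module pairs a DCT with a learnable low-pass filter. Below is a minimal plain-Python sketch of that pipeline, with a fixed coefficient cutoff standing in for the learnable filter; all names and the cutoff value are illustrative, not STFNet's actual implementation:

```python
import math

def dct(x):
    """Unnormalised type-II DCT of a real sequence."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def idct(X):
    """Inverse (scaled type-III DCT) of the transform above."""
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                            for k in range(1, N))) * 2 / N
            for n in range(N)]

def lowpass(x, keep):
    """Keep only the first `keep` DCT coefficients: a fixed-cutoff stand-in
    for a learnable frequency-domain low-pass filter."""
    X = dct(x)
    return idct([c if k < keep else 0.0 for k, c in enumerate(X)])

signal = [3.0, 1.0, 4.0, 1.5, 5.0, 9.0, 2.5, 6.0]
trend = lowpass(signal, keep=2)   # low-frequency component only
recon = idct(dct(signal))         # round-trip sanity check
```

Truncating the DCT spectrum retains the long-cycle trend while discarding high-frequency noise, which is the intuition behind frequency-domain denoising branches like FGFE.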

28 pages, 1065 KB  
Article
Normalising Flow Enhanced GARCH Models: A Two-Stage Framework for Flexible Innovation Modelling in Financial Time Series
by Abdullah Hassan, Farai Mlambo and Wilson Tsakane Mongwe
Risks 2026, 14(5), 100; https://doi.org/10.3390/risks14050100 - 24 Apr 2026
Abstract
We introduce the Normalising Flow GARCH (NF-GARCH), a two-stage hybrid framework that enhances traditional GARCH models by replacing restrictive parametric innovation distributions with learned densities via normalising flows. Our approach preserves the interpretability of standard variance dynamics while addressing the common issue of innovation misspecification. In the first stage, we estimate standard GARCH variants (sGARCH, TGARCH, and gjrGARCH) to extract standardised residuals. In the second stage, a Masked Autoregressive Flow learns the underlying residual distribution, with samples from the flow subsequently driving the GARCH recursion for out-of-sample forecasting. Evaluated on 13 daily financial series (six FX pairs and seven equities), NF-GARCH demonstrates systematic, statistically significant improvements in forecast accuracy over skewed-t baselines. Wilcoxon signed-rank tests confirm superior performance specifically for the gjrGARCH-sstd and sGARCH-sstd specifications. While the framework offers enhanced flexibility and generative realism, we observe increased computational overhead, and the log-variance specification of eGARCH exhibits instability when paired with flow-based innovations. These results suggest that while NF-GARCH effectively captures empirical tail behaviour in univariate settings, future research should explore conditional flow architectures and multivariate extensions to account for time-varying innovation shapes. For risk management, gains are most relevant where skewed-t baselines are used and where closer residual realism supports scenario analysis; effect sizes remain modest relative to model risk and implementation cost. Full article
(This article belongs to the Special Issue Volatility Modeling in Financial Market)
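Stage one of this two-stage recipe, filtering returns through a GARCH recursion to obtain the standardised residuals that the flow is later trained on, can be sketched in a few lines. The parameters below are fixed and purely illustrative; a real implementation would estimate omega, alpha, and beta by maximum likelihood, and stage two (fitting a Masked Autoregressive Flow to `resid`) is not shown:

```python
def garch11_filter(returns, omega=0.05, alpha=0.10, beta=0.85):
    """GARCH(1,1) variance recursion; returns (conditional_variances, std_residuals)."""
    # start the recursion at the unconditional variance omega / (1 - alpha - beta)
    var = [omega / (1.0 - alpha - beta)]
    for t in range(1, len(returns)):
        var.append(omega + alpha * returns[t - 1] ** 2 + beta * var[t - 1])
    resid = [r / v ** 0.5 for r, v in zip(returns, var)]  # z_t = r_t / sigma_t
    return var, resid

rets = [0.3, -0.5, 0.1, 0.8, -0.2, 0.05, -0.4]
variances, resid = garch11_filter(rets)
```

The standardised residuals `resid` are exactly the quantities whose distribution the flow replaces, keeping the interpretable variance recursion intact.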
21 pages, 1463 KB  
Article
PiTransformer: A Gated Patch-Wise Inverted Transformer for Stochastic Multivariate Time Series Forecasting
by Lin Zhu and Kai Cheng
Mathematics 2026, 14(9), 1418; https://doi.org/10.3390/math14091418 - 23 Apr 2026
Abstract
Multivariate time series forecasting presents a challenging problem in stochastic modeling, particularly under non-stationary conditions with low signal-to-noise ratios. While recent inverted architectures enhance cross-variable dependency modeling, the conventional point-wise inversion strategy often compromises local temporal patterns. To address this limitation, we propose PiTransformer, a gated patch-wise inverted framework for multivariate time series modeling. Specifically, a Patch-wise Inverted Embedding (PIE) mechanism is introduced to segment temporal sequences into regional patches prior to inversion, enabling the preservation of localized temporal structures. In addition, a Variable–Temporal Gating (VTG) module is incorporated to regulate feature interactions based on the information bottleneck principle, thereby suppressing spurious correlations in noisy environments. Empirical evaluations on diverse benchmarks—including financial and energy datasets—demonstrate that PiTransformer achieves consistent improvements in predictive accuracy and stability over competitive baselines. These results suggest that the proposed framework provides a robust and interpretable approach for modeling high-dimensional stochastic time series under non-stationary conditions. Full article
(This article belongs to the Section E: Applied Mathematics)

23 pages, 2091 KB  
Article
A Photovoltaic Power Prediction Method Based on Wavelet Convolutional Neural Networks and Improved Transformer
by Yibo Zhou, Zihang Liu, Zhen Cheng, Hanglin Mi, Zhaoyang Qin and Kangyangyong Cao
Energies 2026, 19(9), 2040; https://doi.org/10.3390/en19092040 - 23 Apr 2026
Abstract
The output power of photovoltaic (PV) systems is influenced by various environmental factors, exhibiting strong nonlinearity and non-stationarity, which poses significant challenges for accurate forecasting. To address these issues, this paper proposes a short-term PV power forecasting method based on wavelet convolutional neural networks and an improved Transformer. First, Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) is employed to decompose the original PV power sequence into several intrinsic mode functions (IMFs). Fuzzy entropy is then utilized to evaluate the complexity of each component, and subsequences with similar entropy values are reconstructed to reduce the non-stationarity of the original series. Subsequently, Pearson correlation coefficients and the maximal information coefficient (MIC) are applied to capture both linear and nonlinear relationships between each reconstructed component and meteorological features, enabling the selection of strongly correlated variables. On this basis, a wavelet convolutional network (WTConv) is introduced to perform multi-scale decomposition and frequency-band feature extraction on the reconstructed components by integrating the wavelet transform with convolution operations, effectively expanding the receptive field and extracting deep features of the sequences. Finally, an improved iTransformer model is adopted for time-series modeling, leveraging its inverted encoding structure and self-attention mechanism to fully capture long-term dependencies among multiple variables. The proposed model is validated using actual power data from a PV plant in Ningxia, China, across four seasons. Comprehensive experiments, including ablation studies, comparative analyses, loss function convergence evaluation, and Diebold–Mariano significance tests, are conducted to thoroughly assess the model’s effectiveness and superiority. Experimental results demonstrate that the proposed model achieves excellent prediction accuracy and stability in spring, summer, autumn, and winter, showing strong potential for engineering applications. Full article
(This article belongs to the Section A2: Solar Energy and Photovoltaic Systems)
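The linear half of the feature-screening step above (Pearson correlation between a reconstructed component and each candidate meteorological variable) reduces to the textbook formula; the MIC half, which captures nonlinear association, needs a dedicated estimator and is omitted from this sketch. The `select_features` helper and its threshold are illustrative assumptions, not the paper's code:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def select_features(target, features, threshold=0.5):
    """Keep features whose |r| with the target component exceeds the threshold."""
    return [name for name, series in features.items()
            if abs(pearson(target, series)) >= threshold]
```

In the paper's pipeline a component would pass this linear screen or the MIC screen to count as strongly correlated.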

35 pages, 2319 KB  
Review
An Overview of the Application of Modern Statistical Techniques in Semiconductor Manufacturing
by Hsuan-Yu Chen and Chiachung Chen
Appl. Syst. Innov. 2026, 9(4), 83; https://doi.org/10.3390/asi9040083 - 21 Apr 2026
Abstract
The semiconductor industry has long relied on Statistical Process Control (SPC) for yield and reliability management. In early technology nodes, classic univariate tools such as Shewhart charts, cumulative sums (CUSUM), exponentially weighted moving averages (EWMA), and the Cp/Cpk indices could effectively monitor a finite set of key variables. However, sub-5 nm and emerging 3 nm technologies have fundamentally changed the statistical environment. Advanced patterning, high-aspect-ratio etching, atomic layer deposition (ALD), chemical-mechanical polishing (CMP), and novel materials have drastically narrowed the process window. At these scales, nanometer-level deviations in critical dimensions (CD), overlay, or surface roughness can significantly impact yield. Simultaneously, modern wafer fabs generate massive amounts of high-frequency sensor data and high-dimensional metrology data. Traditional SPC assumptions—such as independence, normality, low dimensionality, and stationarity—often do not hold. Semiconductor data exhibits: (i) extremely high dimensionality and strong inter-variable correlations; (ii) a hierarchical structure encompassing fab → tooling → chamber → recipe → batch → wafer → field; and (iii) metrological delays and sampling limitations leading to incomplete and asynchronous observations. To address these challenges, this paper reviews advanced statistical methods applicable to wafer fabrication. These methods include multivariate statistical process control (MSPC) approaches such as Hotelling T2 statistics, PCA/PLS combining T2 and Q statistics, contribution diagnostics, time-series drift and change-point detection, and Bayesian hierarchical modeling for uncertainty-aware monitoring in data-limited scenarios. Furthermore, we discuss how to integrate these methods with fault detection and classification (FDC), run-to-run (R2R) control, advanced process control (APC), and manufacturing execution systems (MES). This paper focuses on scalable, interpretable, and maintainable implementations that transform statistical analysis from a passive monitoring tool into an active component of data-driven fab control. Full article
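For a single p-variate observation x with in-control mean μ and covariance S, the Hotelling T² statistic mentioned above is the quadratic form (x − μ)ᵀ S⁻¹ (x − μ), flagged when it exceeds a control limit. A minimal two-variable sketch with an explicit 2×2 inverse follows; the control limit itself (an F-distribution quantile) is omitted, and all names are illustrative:

```python
def hotelling_t2(x, mean, cov):
    """T^2 = (x - mean)^T cov^{-1} (x - mean) for a 2-D observation."""
    dx = [x[0] - mean[0], x[1] - mean[1]]
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    inv = [[cov[1][1] / det, -cov[0][1] / det],
           [-cov[1][0] / det, cov[0][0] / det]]
    y = [inv[0][0] * dx[0] + inv[0][1] * dx[1],
         inv[1][0] * dx[0] + inv[1][1] * dx[1]]
    return dx[0] * y[0] + dx[1] * y[1]

# identity covariance: T^2 is just the squared Euclidean distance
t2_iso = hotelling_t2([3.0, 4.0], [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
# positive correlation makes a co-moving (1, 1) deviation look less surprising
t2_cor = hotelling_t2([1.0, 1.0], [0.0, 0.0], [[2.0, 1.0], [1.0, 2.0]])
```

The second call shows why MSPC beats per-variable charts: the same-sized deviation scores lower when it moves with the correlation structure and higher when it moves against it.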

21 pages, 1496 KB  
Article
A Decomposition-Based Deep Learning Model for Multivariate Water Quality Prediction
by Qiliang Zhu, Xueting Yu and Hongtao Fu
Sustainability 2026, 18(8), 4129; https://doi.org/10.3390/su18084129 - 21 Apr 2026
Abstract
The extensive deployment of automatic water quality monitoring stations has generated substantial volumes of time-series data. Effectively utilizing these data is crucial for enhancing prediction accuracy. To address the limitations of existing models in capturing complex inter-indicator relationships and multi-scale temporal features, this paper proposes a hybrid prediction model integrating time series decomposition with deep learning techniques. Adopting a “decomposition–prediction–reconstruction” paradigm, the model first decomposes the raw time series into trend, seasonal, and residual components using STL (Seasonal–Trend decomposition using LOESS). For the trend component, an improved Graph Convolutional Network (GCN) is designed to explicitly model the spatial dependencies among different water quality indicators. For the seasonal component, the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) method is employed for multi-scale signal analysis, followed by a coupled Long Short-Term Memory–Convolutional Neural Network (LSTM-CNN) unit to capture both long-term dependencies and local features. To validate the efficacy of the proposed model, experiments were conducted on three real-world water quality datasets from different watersheds. Experimental results demonstrate that the proposed model outperforms mainstream baseline models, including StemGCN, LSTM-CNN, CEEMDAN-LSTM-CNN, and Attention-CLX. Across the three datasets, the model consistently outperforms the best-performing baseline, achieving reductions in MAE ranging from 13.8% to 24.5% and up to a 45.3% reduction in RMSE on a single dataset, while the highest correlation coefficient between predicted and observed values reaches 0.855. These findings demonstrate that the proposed decomposition–integration framework effectively enhances the accuracy and stability of multivariate water quality prediction, offering a promising tool for supporting sustainable water resource management. Full article
(This article belongs to the Special Issue Advances in Management of Hydrology, Water Resources and Ecosystem)
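The “decomposition–prediction–reconstruction” paradigm begins with an additive split of each series. A crude moving-average version of that split (a stand-in for STL, which uses LOESS smoothing rather than the boxcar average below) shows the shape of the first step; by construction the three components sum back to the input:

```python
def decompose(series, period):
    """Additive trend / seasonal / residual split via a centred moving average."""
    n = len(series)
    half = period // 2
    # trend: centred moving average (the window shrinks at the edges)
    trend = []
    for t in range(n):
        lo, hi = max(0, t - half), min(n, t + half + 1)
        trend.append(sum(series[lo:hi]) / (hi - lo))
    detrended = [x - tr for x, tr in zip(series, trend)]
    # seasonal: mean of the detrended values at each phase of the period
    totals, counts = [0.0] * period, [0] * period
    for t, d in enumerate(detrended):
        totals[t % period] += d
        counts[t % period] += 1
    seasonal = [totals[t % period] / counts[t % period] for t in range(n)]
    residual = [x - tr - s for x, tr, s in zip(series, trend, seasonal)]
    return trend, seasonal, residual

series = [t % 4 + 0.1 * t for t in range(16)]  # seasonal pattern plus slow drift
trend, seasonal, residual = decompose(series, period=4)
```

In the paper's pipeline each component would then be routed to its specialised predictor (GCN for trend, CEEMDAN plus LSTM-CNN for seasonal) before the outputs are recombined.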

20 pages, 1137 KB  
Article
Diagonal Adaptive Graph: Revisiting Channel Dependency in Multivariate Time Series Forecasting
by Xiang Li, Yanping Zheng and Zhewei Wei
Information 2026, 17(4), 394; https://doi.org/10.3390/info17040394 - 21 Apr 2026
Abstract
Adaptive graph learning has become a widely adopted paradigm for multivariate time series forecasting when explicit physical topology is unavailable. In these approaches, node embeddings are typically used to construct dense adjacency matrices based on pairwise similarity, implicitly coupling representation learning with relational modeling. However, we observe that under identical training settings but different random initializations, the learned adjacency matrices can vary substantially while predictive performance remains nearly unchanged, indicating that the relational structure is often underdetermined by the forecasting objective. This observation suggests a mismatch between similarity-based structural learning and the forecasting objective. In this work, we revisit node embeddings from a sequence approximation perspective and propose a Diagonal Adaptive Graph (DiAG) module that restricts adaptive learning to diagonal elements. The diagonal coefficients are derived from channel-independent predictions, while off-diagonal interactions are constructed from the similarity of input sequences. This design decouples representation learning from relational modeling, allowing variables to adaptively switch between channel-independent and channel-dependent regimes. Experiments on multiple datasets show that DiAG improves forecasting performance without modifying the channel-independent backbones. These results indicate that channel-dependent forecasting can be achieved as a prediction-driven refinement over channel-independent backbones, without requiring fully learned dense relational structures. Full article
(This article belongs to the Special Issue Deep Learning Approach for Time Series Forecasting)

24 pages, 1441 KB  
Article
Unsupervised Detection of Pathological Gait Patterns via Instantaneous Center of Rotation Analysis
by Ludwin Molina Arias and Magdalena Smoleń
Appl. Sci. 2026, 16(8), 3976; https://doi.org/10.3390/app16083976 - 19 Apr 2026
Abstract
This study introduces a novel unsupervised framework, ICR-LLS, for detecting pathological gait patterns using instantaneous center of rotation (ICR) trajectories of the shank in the sagittal plane. ICR trajectories were computed from two-dimensional kinematic data captured at the lateral femoral epicondyle and lateral malleolus for both shanks, producing four-dimensional multivariate time series for each gait trial. Pairwise trajectory dissimilarities were quantified using circularly aligned Dynamic Time Warping (DTW), preserving temporal and spatial structure. The resulting dissimilarity matrix was embedded into a three-dimensional space using a force-directed network layout, enabling intuitive visualization of inter-subject gait relationships. Density-based clustering (DBSCAN), enhanced with a consensus-based ensemble approach, was employed to automatically identify clusters representing typical (healthy) gait patterns and outliers corresponding to pathological deviations. The framework is evaluated on a public dataset comprising individuals with Parkinson’s disease (PD) and healthy controls, achieving a normalized mutual information (NMI) of 0.449 and a Separation-to-Compactness Ratio (SCR) of 6.754, indicating a meaningful cluster structure. In addition, classification-oriented metrics yield an accuracy of 90%, sensitivity of 70%, and specificity of 96.7%, supporting the method’s effectiveness in distinguishing pathological gait. By combining minimal 2D kinematic inputs with unsupervised learning, ICR-LLS provides an interpretable framework for the exploratory analysis of gait variability, and although further validation is required, the findings suggest that ICR trajectories may serve as a meaningful biomechanical descriptor for characterizing pathological locomotion. Full article
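The pairwise trajectory dissimilarities described above rest on Dynamic Time Warping. A minimal one-dimensional DTW (without the circular alignment or the four-dimensional multivariate handling the paper uses) illustrates the core recurrence:

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences, O(len(a)*len(b))."""
    inf = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = cost of the best alignment of a[:i] with b[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # match step
    return D[n][m]
```

Because the warping path may stretch either sequence, two gait cycles executed at different speeds can still align with near-zero cost, which is why DTW suits inter-subject trajectory comparison.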

29 pages, 14649 KB  
Article
TSC-Mamba: Adaptive Decomposition and Channel Interaction Fusion for Time Series Forecasting
by Chenjie Zhao, Xiaobo Wang and Ling Zhang
Mathematics 2026, 14(8), 1363; https://doi.org/10.3390/math14081363 - 18 Apr 2026
Abstract
Multivariate time series forecasting (TSF) is a fundamental task in intelligent systems, yet accurate and efficient modeling remains challenging under high dimensionality, non-stationarity, and complex cross-variate dependencies. The Mamba architecture provides an efficient linear-time backbone, but it still suffers from a multivariate representational bottleneck caused by unified state modeling. To address this limitation, we propose TSC-Mamba, a Mamba-centered framework built on a “Decoupling and Specialization” paradigm and organized as a cohesive “Decompose–Propagate–Correlate” pipeline. Specifically, the Adaptive Decomposition Fusion Module separates predictable low-frequency trends from high-frequency residual dynamics, while the Channel Interaction Fusion Module explicitly models structured cross-variate dependencies through an efficient low-rank mechanism. Experiments on eight public benchmark datasets show that TSC-Mamba achieves an average error reduction of up to 3.5% over the direct baseline S-Mamba while strictly maintaining linear complexity. Ablation studies validate the effectiveness of both modules, and Wilcoxon signed-rank analysis further confirms that the gains over S-Mamba are statistically significant. Additional experiments indicate strong run-to-run stability, robustness to input-length variation, improved generalization under partially visible variates, and more concentrated empirical predictive bands than S-Mamba. These results show that structured responsibility allocation is an effective strategy for enhancing state-space models in multivariate TSF. Full article

38 pages, 3155 KB  
Article
Decoding the Energy-Economy-Carbon Nexus: A TFT-ASTGCN Deep Learning Approach for Spatiotemporal Carbon Forecasting in the Yellow River Basin, China
by Yuanyi Hu, Chenjun Zhang, Xiangyang Zhao and Shiyu Mao
Energies 2026, 19(8), 1950; https://doi.org/10.3390/en19081950 - 17 Apr 2026
Abstract
This study systematically examines the low-carbon transition challenges faced by the Yellow River Basin, a core strategic energy base in China with a coal-dominated energy system, under the dual carbon goals. Existing studies based on traditional econometric models or single-province analyses are mostly limited to static analysis, failing to simultaneously capture the nonlinear spatiotemporal evolution, cross-regional spillover effects, and long-term changing trends of carbon emissions in the basin. To fill this gap, this study builds an Energy–Economy–Carbon (EEC) analytical framework, and develops an integrated TFT-ASTGCN deep learning framework. Specifically, we employ the Temporal Fusion Transformer (TFT) for high-precision multivariate time-series simulation and peak forecasting, while the Attention-based Spatial–Temporal Graph Convolutional Network (ASTGCN) is used to identify complex spatial dependencies of inter-provincial emissions. The empirical results confirm that: (1) Basin carbon emissions show significant coal-driven carbon lock-in, with initial decoupling between economic growth and emissions. (2) Most provinces will maintain rising emissions under the current development mode, posing severe challenges to carbon peaking. (3) Asymmetric spatial spillover effects are prominent, underscoring cross-regional collaborative governance as a critical pathway for achieving an early and stable carbon peak in the basin. Full article
(This article belongs to the Special Issue Economic and Technological Advances Shaping the Energy Transition)
48 pages, 1406 KB  
Article
Scalable Likelihood Inference for Student-t Copula Count Time Series
by Quynh Nhu Nguyen and Victor De Oliveira
Stats 2026, 9(2), 43; https://doi.org/10.3390/stats9020043 (registering DOI) - 17 Apr 2026
Abstract
Count time series often exhibit extremal dependence that may not be adequately captured by Gaussian copula models. We develop a likelihood-based framework for count-valued time series using Student-t copulas with latent ARMA dependence. The latent process is constructed through a scale-mixture representation of a Gaussian ARMA process, preserving the second-order dependence structure while introducing tail dependence and greater persistence of extreme events. Likelihood inference requires evaluating high-dimensional truncated multivariate t probabilities, which is computationally demanding under heavy tails and strong serial dependence. To address this challenge, we develop scalable likelihood approximations tailored to the time series structure. In particular, we formulate a time series version of minimax exponential tilting for multivariate t probabilities, termed Time Series Minimax Exponential Tilting (TMET), which exploits the exact conditional representation of the latent ARMA process. The resulting algorithm reduces computational complexity from cubic to near-linear in the series length while retaining the high accuracy of minimax exponential tilting. For comparison, we also extend two widely used Gaussian copula approximations—the continuous extension (CE) method and the Geweke–Hajivassiliou–Keane (GHK) simulator—to the Student-t copula setting. Simulation studies show that TMET outperforms CE and GHK, particularly under strong dependence, heavy tails, and low-count regimes. The framework also supports predictive inference and residual diagnostics. An application to weekly rotavirus counts illustrates how the Student-t copula provides a flexible extension of the Gaussian copula while retaining stable inference even when tail dependence is weak or absent. Full article

20 pages, 3693 KB  
Article
LSTM-Based Reduced-Order Modeling of Secondary Loop of Nuclear-Powered Propulsion Actuation System
by Kaiyu Li, Lizhi Jiang, Xinxin Cai, Fengyun Li, Gang Xie, Zhiwei Zheng, Wenlin Wang, Hongxing Lu and Guohua Wu
Actuators 2026, 15(4), 225; https://doi.org/10.3390/act15040225 - 16 Apr 2026
Abstract
The dynamic response of the secondary circuit system in nuclear propulsion plants is critical to the power output, safety, and energy efficiency of nuclear-powered ships. High-fidelity thermo-hydraulic simulation models can accurately capture system transients but are computationally expensive and unsuitable for real-time applications. To address this limitation, this study proposes a reduced-order dynamic parameter prediction method that integrates high-fidelity simulation with deep learning. A multi-operating-condition simulation model of a typical nuclear-powered ship secondary circuit system is developed to generate time-series data covering load ramping and propulsion mode switching. Based on this dataset, a conventional recurrent neural network (RNN) and a multilayer long short-term memory (LSTM) network are constructed for multivariate autoregressive prediction of 17 key dynamic parameters, and their performances are systematically compared. Results show that the LSTM significantly outperforms the RNN in capturing long-term temporal dependencies, achieving average RMSE and MAPE values of 0.0228% and 0.365%, respectively. The proposed model completes 50-step-ahead prediction within 0.84 s, satisfying real-time requirements. The hybrid simulation-driven and data-driven framework provides a practical solution for intelligent monitoring and control optimization of nuclear-powered ship propulsion systems. Full article

31 pages, 7153 KB  
Article
Balancing Accuracy and Efficiency in the Temporal Resampling of Met-Ocean Data
by Sara Ramos-Marin and C. Guedes Soares
Oceans 2026, 7(2), 35; https://doi.org/10.3390/oceans7020035 - 16 Apr 2026
Viewed by 286
Abstract
Harmonising heterogeneous met-ocean time series to a common temporal resolution is a prerequisite for integrated marine renewable energy assessments. Such datasets often differ in their sampling frequency, statistical distribution, and non-stationarity, complicating joint analysis. This study presents a practical multi-criteria framework for selecting temporal interpolation strategies for met-ocean datasets, explicitly balancing prediction accuracy and computational efficiency. Six environmental variables relevant to offshore renewable energy—wind speed, significant wave height, energy period, peak period, global horizontal irradiance, and upper-ocean thermal gradients—are analysed using ten-year reanalysis datasets for the Madeira Archipelago. Six commonly used deterministic time-domain interpolation methods are evaluated within a unified validation framework combining training–test splits, k-fold cross-validation, and Monte Carlo resampling. Their performances are quantified using the relative root mean square error and computational time, integrated through a composite performance score. The results show that makima interpolation provides the most consistent compromise between accuracy and efficiency for most variables in dense, regularly sampled met-ocean datasets, while spline-based approaches perform better for highly skewed solar irradiance. Preprocessing steps, such as detrending and distribution normalisation, yield only marginal improvements for dense, regularly sampled datasets, and method rankings remain stable under moderate changes in accuracy–speed weightings. Rather than proposing a universal interpolator, this work delivers a reproducible decision-support workflow for temporal resampling of multi-variable met-ocean datasets, supporting early-stage marine renewable energy assessments. Full article
(This article belongs to the Special Issue Offshore Renewable Energy and Related Environmental Science)
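The accuracy–efficiency trade-off described above can be illustrated with standard SciPy interpolators. The test signal, the weighting w = 0.8, and the score normalisation are illustrative assumptions; SciPy's `Akima1DInterpolator` implements classic Akima (the makima variant is a method option in newer SciPy releases):

```python
import time
import numpy as np
from scipy.interpolate import Akima1DInterpolator, CubicSpline

# Coarse 6-hourly samples of a smooth diurnal-like signal (hypothetical Hs proxy).
x_coarse = np.arange(0.0, 241.0, 6.0)
x_fine = np.arange(0.0, 240.5, 1.0)  # target hourly resolution
f = lambda x: 1.5 + 0.5 * np.sin(2 * np.pi * x / 24) + 0.2 * np.sin(2 * np.pi * x / 48)
y_coarse, y_true = f(x_coarse), f(x_fine)

def evaluate(interp_fn):
    """Return (relative RMSE, runtime in seconds) for one interpolation strategy."""
    t0 = time.perf_counter()
    y_hat = interp_fn(x_coarse, y_coarse, x_fine)
    dt = time.perf_counter() - t0
    rrmse = np.sqrt(np.mean((y_true - y_hat) ** 2)) / (y_true.max() - y_true.min())
    return rrmse, dt

methods = {
    "linear": lambda x, y, xq: np.interp(xq, x, y),
    "spline": lambda x, y, xq: CubicSpline(x, y)(xq),
    "akima":  lambda x, y, xq: Akima1DInterpolator(x, y)(xq),
}
w = 0.8  # accuracy weight in the composite score (speed weight: 1 - w)
results = {name: evaluate(fn) for name, fn in methods.items()}
max_err = max(r[0] for r in results.values())
max_dt = max(r[1] for r in results.values())
for name, (err, dt) in results.items():
    score = w * err / max_err + (1 - w) * dt / max_dt  # lower is better
    print(f"{name}: rRMSE={err:.2e}, score={score:.3f}")
```

On a smooth, densely sampled signal like this, the spline-family methods should beat linear interpolation on relative RMSE while costing more time, which is exactly the tension the composite score arbitrates.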

28 pages, 6564 KB  
Article
A Diffusion-Based Time-Frequency Dual-Stream Contrastive Learning Model for Multivariate Time Series Anomaly Detection
by Kuo Wu, Changming Xu, Ranran Zhang, Wei Lu, Yuan Ma, Ende Zhang and Kaiwen Tan
Entropy 2026, 28(4), 448; https://doi.org/10.3390/e28040448 - 15 Apr 2026
Viewed by 341
Abstract
Multivariate time series anomaly detection holds critical application value in key domains such as industrial system monitoring, financial risk management, and medical surveillance. However, existing approaches face two major challenges: reconstruction-based or prediction-based models tend to adapt to anomalous patterns during training, thereby weakening the distinction between normal and abnormal samples; furthermore, the non-stationary nature of time series leads to distribution shifts between training and testing data, impairing model generalization. To address these issues, this paper proposes the TFCID model. The model innovatively leverages diffusion principles to effectively impute missing time series data while capturing significant frequency-domain features. In the temporal processing stream, an unconditional diffusion model combined with imputation masking is employed to achieve high-precision imputation of randomly missing values, effectively preventing anomalies from interfering with model training. In the frequency-domain processing stream, an amplitude-aware frequency-domain masked autoencoder is introduced to specifically capture periodic or trend-based pattern anomalies. The model mitigates distribution shift by constraining the discrepancy between temporal and frequency-domain representations via adversarial contrastive learning, and uses this discrepancy as a robust anomaly scoring metric. Experimental results on five public benchmark datasets show that TFCID significantly outperforms state-of-the-art methods in detection accuracy (F1-Score), validating its effectiveness in anomaly detection tasks. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
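The amplitude-aware frequency-domain masking idea can be sketched with a plain FFT: keep only the largest-amplitude spectral components, reconstruct, and score each point by its residual. This is a simplified stand-in for the paper's masked autoencoder; the signal, the number of retained modes, and the injected anomaly are all illustrative:

```python
import numpy as np

def spectral_anomaly_score(x, keep=8):
    """Reconstruct a series from its `keep` largest-amplitude FFT
    components and score each point by its reconstruction residual."""
    spec = np.fft.rfft(x)
    amp = np.abs(spec)
    mask = np.zeros_like(spec)
    top = np.argsort(amp)[-keep:]     # amplitude-aware: keep dominant modes
    mask[top] = spec[top]
    x_rec = np.fft.irfft(mask, n=len(x))
    return np.abs(x - x_rec)          # residual as pointwise anomaly score

t = np.linspace(0, 8 * np.pi, 512, endpoint=False)
x = np.sin(t) + 0.3 * np.sin(3 * t)   # periodic "normal" behaviour
x[300] += 5.0                         # injected point anomaly
score = spectral_anomaly_score(x)
print(int(np.argmax(score)))          # → 300
```

A point anomaly spreads its energy thinly across all frequency bins, so it survives in the residual while the dominant periodic structure is reconstructed — the same intuition behind scoring anomalies by the temporal/frequency discrepancy in the abstract.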
