Search Results (841)

Search Parameters:
Keywords = multivariant time series

20 pages, 855 KiB  
Article
SegmentedCrossformer—A Novel and Enhanced Cross-Time and Cross-Dimensional Transformer for Multivariate Time Series Forecasting
by Zijiang Yang and Tad Gonsalves
Forecasting 2025, 7(3), 41; https://doi.org/10.3390/forecast7030041 - 3 Aug 2025
Viewed by 42
Abstract
Multivariate Time Series Forecasting (MTSF) has seen a succession of models over the last two decades, ranging from traditional statistical approaches to RNN-based models, and recent deep learning contributions have brought further progress through a series of Transformer-based models. Attention-based Transformers outperform classical models through their ability to capture temporal dependencies and to learn dependencies among variables efficiently, but many challenges remain for more sophisticated models to solve. To address them, we propose SegmentedCrossformer (SCF), a novel Transformer-based model that captures both cross-time and cross-dimensional dependencies in an efficient manner. The model is built upon an encoder–decoder architecture at different scales and compared with the previous state of the art. Experimental results on different datasets show the effectiveness of SCF, with unique advantages in efficiency.
(This article belongs to the Section Forecasting in Computer Science)
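The cross-time segmentation idea named in this abstract can be sketched as follows; this is an illustrative reconstruction (the segment length and array layout are assumptions, not taken from the paper):

```python
import numpy as np

def segment_series(x: np.ndarray, seg_len: int) -> np.ndarray:
    """Split a multivariate series of shape (T, D) into non-overlapping
    segments of shape (n_seg, D, seg_len), one segment stream per variable,
    in the spirit of Crossformer-style cross-time patching."""
    T, D = x.shape
    n_seg = T // seg_len
    x = x[: n_seg * seg_len]              # drop the ragged tail, if any
    # (T, D) -> (n_seg, seg_len, D) -> (n_seg, D, seg_len)
    return x.reshape(n_seg, seg_len, D).transpose(0, 2, 1)

x = np.arange(24, dtype=float).reshape(12, 2)   # T=12 steps, D=2 variables
patches = segment_series(x, seg_len=4)          # shape (3, 2, 4)
```

Attention can then be applied across the segment axis (cross-time) and across the variable axis (cross-dimension) separately.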

24 pages, 2815 KiB  
Article
Blockchain-Powered LSTM-Attention Hybrid Model for Device Situation Awareness and On-Chain Anomaly Detection
by Qiang Zhang, Caiqing Yue, Xingzhe Dong, Guoyu Du and Dongyu Wang
Sensors 2025, 25(15), 4663; https://doi.org/10.3390/s25154663 - 28 Jul 2025
Viewed by 263
Abstract
With the increasing scale of industrial devices and the growing complexity of multi-source heterogeneous sensor data, traditional methods struggle to address challenges in fault detection, data security, and trustworthiness. Ensuring tamper-proof data storage and improving prediction accuracy for imbalanced anomaly detection for potential deployment in the Industrial Internet of Things (IIoT) remain critical issues. This study proposes a blockchain-powered Long Short-Term Memory Network (LSTM)–Attention hybrid model: an LSTM-based Encoder–Attention–Decoder (LEAD) for industrial device anomaly detection. The model utilizes an encoder–attention–decoder architecture for processing multivariate time series data generated by industrial sensors, and smart contracts for automated on-chain data verification and tampering alerts. Experiments on real-world datasets demonstrate that the LEAD achieves an F0.1 score of 0.96, outperforming baseline models (Recurrent Neural Network (RNN): 0.90; LSTM: 0.94; Bi-directional LSTM (Bi-LSTM): 0.94). We simulate the system using a private FISCO-BCOS network with a multi-node setup to demonstrate contract execution, anomaly data upload, and tamper alert triggering. The blockchain system successfully detects unauthorized access and data tampering, offering a scalable solution for device monitoring.
(This article belongs to the Section Internet of Things)
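The F0.1 score reported above weights precision far more heavily than recall, which suits anomaly detection where false alarms are costly. A minimal sketch of the F-beta computation (the toy labels below are invented for illustration):

```python
import numpy as np

def fbeta(y_true, y_pred, beta=0.1):
    """F-beta score from binary labels; beta < 1 emphasizes precision.
    Assumes at least one predicted and one actual positive."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 0, 0, 1]   # precision 1.0, recall 0.75
f01 = fbeta(y_true, y_pred, beta=0.1)
f1 = fbeta(y_true, y_pred, beta=1.0)
```

With perfect precision and imperfect recall, F0.1 stays close to 1 while F1 drops noticeably.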

21 pages, 1622 KiB  
Article
Enhancing Wearable Fall Detection System via Synthetic Data
by Minakshi Debnath, Sana Alamgeer, Md Shahriar Kabir and Anne H. Ngu
Sensors 2025, 25(15), 4639; https://doi.org/10.3390/s25154639 - 26 Jul 2025
Viewed by 355
Abstract
Deep learning models rely heavily on extensive training data, but obtaining sufficient real-world data remains a major challenge in clinical fields. To address this, we explore methods for generating realistic synthetic multivariate fall data to supplement limited real-world samples collected from three fall-related datasets: SmartFallMM, UniMib, and K-Fall. We apply three conventional time-series augmentation techniques, a Diffusion-based generative AI method, and a novel approach that extracts fall segments from public video footage of older adults. A key innovation of our work is the exploration of two distinct approaches: video-based pose estimation to extract fall segments from public footage, and Diffusion models to generate synthetic fall signals. Both methods independently enable the creation of highly realistic and diverse synthetic data tailored to specific sensor placements. To our knowledge, these approaches, and especially their application in fall detection, represent rarely explored directions in this research area. To assess the quality of the synthetic data, we use quantitative metrics, including the Fréchet Inception Distance (FID), Discriminative Score, Predictive Score, Jensen–Shannon Divergence (JSD), and Kolmogorov–Smirnov (KS) test, and visually inspect temporal patterns for structural realism. We observe that Diffusion-based synthesis produces the most realistic and distributionally aligned fall data. To further evaluate the impact of synthetic data, we train a long short-term memory (LSTM) model offline and test it in real time using the SmartFall App. Incorporating Diffusion-based synthetic data improves the offline F1-score by 7–10% and boosts real-time fall detection performance by 24%, confirming its value in enhancing model robustness and applicability in real-world settings.
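Two of the distributional checks named above, the two-sample KS test and the Jensen–Shannon divergence, can be applied with SciPy as below; the Gaussian stand-ins for a "real" and a "synthetic" sensor channel are assumptions for illustration:

```python
import numpy as np
from scipy.stats import ks_2samp
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, 2000)        # stand-in for a real sensor channel
synthetic = rng.normal(0.1, 1.1, 2000)   # stand-in for generated data

# Two-sample KS test: small statistic / large p-value => similar distributions.
stat, pvalue = ks_2samp(real, synthetic)

# Jensen-Shannon distance over a shared histogram support (base 2: range [0, 1]).
bins = np.histogram_bin_edges(np.concatenate([real, synthetic]), bins=50)
p, _ = np.histogram(real, bins=bins, density=True)
q, _ = np.histogram(synthetic, bins=bins, density=True)
jsd = jensenshannon(p, q, base=2)        # 0 = identical, 1 = disjoint
```

Low values of both quantities indicate that the synthetic samples are distributionally close to the real ones.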

17 pages, 3415 KiB  
Article
A Hybrid Multi-Step Forecasting Approach for Methane Steam Reforming Process Using a Trans-GRU Network
by Qinwei Zhang, Xianyao Han, Jingwen Zhang and Pan Qin
Processes 2025, 13(7), 2313; https://doi.org/10.3390/pr13072313 - 21 Jul 2025
Viewed by 285
Abstract
During the steam reforming of methane (SRM) process, elevated CH4 levels after the reaction often signify inadequate heat supply or incomplete reactions within the reformer, jeopardizing process stability. In this paper, a novel multi-step forecasting method using a Trans-GRU network is proposed for predicting the methane content at the outlet of the SRM reformer. First, a novel feature selection method based on the maximal information coefficient (MIC) is applied to identify critical input variables and determine their optimal input order. Additionally, the Trans-GRU network enables the simultaneous capture of multivariate correlations and the learning of global sequence representations. Experimental results based on time-series data from a real SRM process demonstrate that the proposed approach significantly improves the accuracy of multi-step methane content prediction. Compared to benchmark models, including the TCN, Transformer, GRU, and CNN-LSTM, the Trans-GRU consistently achieves the lowest root mean squared error (RMSE) and mean absolute error (MAE) values across all prediction steps (1–6). Specifically, at the one-step horizon, it yields an RMSE of 0.0120 and an MAE of 0.0094, and this performance remains robust across the 2–6-step predictions. The improved predictive capability supports stable operation and predictive optimization strategies for the steam reforming process in hydrogen production.
(This article belongs to the Section Chemical Processes and Systems)
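Per-horizon RMSE and MAE, as reported here for the 1–6-step predictions, can be computed with a small helper; the array shapes and toy numbers are assumptions for illustration:

```python
import numpy as np

def horizon_errors(y_true, y_pred):
    """Per-step RMSE and MAE for multi-step forecasts.
    Both arrays have shape (n_samples, horizon); errors are reduced
    over samples, leaving one score per forecast step."""
    err = np.asarray(y_pred, float) - np.asarray(y_true, float)
    rmse = np.sqrt(np.mean(err ** 2, axis=0))
    mae = np.mean(np.abs(err), axis=0)
    return rmse, mae

# Toy 3-step horizon with two samples (numbers invented for illustration).
rmse, mae = horizon_errors(np.zeros((2, 3)),
                           np.array([[1.0, 2.0, 0.0],
                                     [-1.0, 0.0, 0.0]]))
```

Reporting the errors per step, rather than averaged over the horizon, shows how accuracy degrades as the forecast reaches further ahead.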

26 pages, 2055 KiB  
Article
Comparative Analysis of Time-Series Forecasting Models for eLoran Systems: Exploring the Effectiveness of Dynamic Weighting
by Jianchen Di, Miao Wu, Jun Fu, Wenkui Li, Xianzhou Jin and Jinyu Liu
Sensors 2025, 25(14), 4462; https://doi.org/10.3390/s25144462 - 17 Jul 2025
Viewed by 321
Abstract
This paper presents an advanced time-series forecasting methodology that integrates multiple machine learning models to improve data prediction in enhanced long-range navigation (eLoran) systems. The analysis evaluates five forecasting approaches: multivariate linear regression, long short-term memory (LSTM) networks, random forest (RF), a fusion model combining LSTM and RF, and a dynamic weighting (DW) model. The results demonstrate that the DW model achieves the highest prediction accuracy while maintaining strong computational efficiency, making it particularly suitable for real-time applications with stringent performance requirements. Although the LSTM model effectively captures temporal dependencies, it demands considerable computational resources. The hybrid model utilises the strengths of LSTM and RF to enhance the accuracy but involves extended training times. By contrast, the DW model dynamically adjusts the relative contributions of LSTM and RF on the basis of the data characteristics, thereby enhancing the accuracy while significantly reducing the computational demands. Demonstrating exceptional performance on the ASF2 dataset, the DW model provides a well-balanced solution that combines precision with operational efficiency. This research offers valuable insights into optimising additional secondary phase factor (ASF) prediction in eLoran systems and highlights the broader applicability of real-time forecasting models.
(This article belongs to the Section Navigation and Positioning)
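One simple way to realize a dynamic weighting of two forecasters is to make each weight inversely proportional to that model's recent error. The paper's exact weighting rule is not reproduced here, so treat this as an assumed scheme for illustration only:

```python
import numpy as np

def dynamic_weights(err_lstm, err_rf, window=10, eps=1e-8):
    """Combination weights for two forecasters, inversely proportional to
    their mean absolute error over a recent window (assumed scheme)."""
    ma = np.mean(np.abs(err_lstm[-window:])) + eps
    mb = np.mean(np.abs(err_rf[-window:])) + eps
    w_lstm = (1 / ma) / (1 / ma + 1 / mb)
    return w_lstm, 1.0 - w_lstm

# If the LSTM has been 3x more accurate recently, it gets 3x the weight.
w_lstm, w_rf = dynamic_weights(np.full(10, 0.1), np.full(10, 0.3))
# combined forecast would be: w_lstm * pred_lstm + w_rf * pred_rf
```

Because the weights are recomputed from a sliding window, the combination adapts as the relative accuracy of the two models drifts over time.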

22 pages, 1441 KiB  
Article
Utility of Domain Adaptation for Biomass Yield Forecasting
by Jonathan M. Vance, Bryan Smith, Abhishek Cherukuru, Khaled Rasheed, Ali Missaoui, John A. Miller, Frederick Maier and Hamid Arabnia
AgriEngineering 2025, 7(7), 237; https://doi.org/10.3390/agriengineering7070237 - 14 Jul 2025
Viewed by 397
Abstract
Previous work used machine learning (ML) to estimate past and current alfalfa yields and showed that domain adaptation (DA) with data synthesis shows promise in classifying yields as high, medium, or low. The current work uses similar techniques to forecast future alfalfa yields. A novel technique is proposed for forecasting alfalfa time series data that exploits stationarity and predicts differences in yields rather than the yields themselves. This forecasting technique generally provides more accurate forecasts than the established ARIMA family of forecasters for both univariate and multivariate time series. Furthermore, this ML-based technique is potentially easier to use than the ARIMA family of models. Also, previous work is extended by showing that DA with data synthesis also works well for predicting continuous values, not just for classification. The novel scale-invariant tabular synthesizer (SITS) is proposed, and it is competitive with or superior to other established synthesizers in producing data that trains strong models. This synthesis algorithm leads to R scores over 100% higher than an established synthesizer in this domain, while ML-based forecasters beat the ARIMA family with symmetric mean absolute percent error (sMAPE) scores as low as 12.81%. Finally, ML-based forecasting is combined with DA (ForDA) to create a novel pipeline that improves forecast accuracy with sMAPE scores as low as 9.81%. As alfalfa is crucial to the global food supply, and as climate change creates challenges with managing alfalfa, this work hopes to help address those challenges and contribute to the field of ML.
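Forecasting first differences rather than levels, and scoring with sMAPE, can be sketched as follows. The sMAPE variant with a factor of 2 in the numerator is assumed here (definitions vary across the literature), and the toy numbers are invented:

```python
import numpy as np

def smape(y_true, y_pred):
    """Symmetric mean absolute percent error, in percent (one common variant)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(2.0 * np.abs(y_pred - y_true)
                           / (np.abs(y_true) + np.abs(y_pred)))

def forecast_from_differences(history, predicted_diffs):
    """Recover level forecasts from predicted first differences by
    cumulatively summing them from the last observed level."""
    return history[-1] + np.cumsum(predicted_diffs)

levels = forecast_from_differences(np.array([3.0, 4.0, 5.0]),
                                   np.array([1.0, 1.0, -2.0]))
score = smape([100.0, 200.0], [110.0, 190.0])
```

Modeling the differences of a trending series often yields a more stationary target, which is the property this technique exploits.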

18 pages, 1756 KiB  
Article
Ultra-Short-Term Wind Power Prediction Based on Fused Features and an Improved CNN
by Hui Li, Siyao Li, Hua Li and Liang Bai
Processes 2025, 13(7), 2236; https://doi.org/10.3390/pr13072236 - 13 Jul 2025
Viewed by 252
Abstract
It is difficult for a single feature in wind power data to fully reflect the multifactor coupling relationship with wind power, while forecast model hyperparameters rely on empirical settings, which affects prediction accuracy. In order to effectively predict continuous power over a future time period, an ultra-short-term wind power prediction model based on fused features and an improved convolutional neural network (CNN) is proposed. Firstly, the historical power data are decomposed using dynamic mode decomposition (DMD) to extract their modal features. Then, considering the influence of meteorological factors on power prediction, the historical meteorological data in the sample data are extracted using kernel principal component analysis (KPCA). Finally, the decomposed power modes and the extracted meteorological components are reconstructed into multivariate time-series features, and the snow ablation optimisation algorithm (SAO) is used to optimise the convolutional neural network for wind power prediction. The results show that the root-mean-square error of the prediction is 31.9% lower with DMD decomposition than without it; for two different wind farms, the root-mean-square error of the improved CNN model is reduced by 39.8% and 30.6%, respectively, compared with the original model, showing that the proposed model has better prediction performance.
(This article belongs to the Section Energy Systems)
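The KPCA step for compressing meteorological inputs might look like the following with scikit-learn; the kernel, gamma, component count, and random inputs are illustrative choices, not the authors':

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
weather = rng.normal(size=(200, 6))      # stand-in for meteorological features

# Nonlinear dimensionality reduction: project six correlated meteorological
# variables onto three kernel principal components before fusion.
kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.1)
components = kpca.fit_transform(weather)
```

The extracted components would then be concatenated with the DMD power modes to form the fused multivariate input features.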

23 pages, 5716 KiB  
Article
Transfer Learning-Based LRCNN for Lithium Battery State of Health Estimation with Small Samples
by Yuchao Xiong, Tiangang Lv, Liya Gao, Jingtian Hu, Zhe Zhang and Haoming Liu
Processes 2025, 13(7), 2223; https://doi.org/10.3390/pr13072223 - 11 Jul 2025
Viewed by 299
Abstract
Traditional data-driven approaches to lithium battery state of health (SOH) estimation face the challenges of difficult feature extraction, insufficient prediction accuracy and weak generalization. To address these issues, this study proposes a novel prediction framework with transfer learning-based linear regression (LR) and a convolutional neural network (CNN) under limited data. In this framework, first, variable inertia weight-based improved particle swarm optimization for variational mode decomposition (VIW-PSO-VMD) is proposed to mitigate the volatility of the “capacity resurgence point” and extract its time-series features. Then, the T-Pearson correlation analysis is introduced to comprehensively analyze the correlations between multivariate features and lithium battery SOH data and accurately extract strongly correlated features to learn the common features of lithium batteries. On this basis, a combination model is proposed, applying LR to extract the trend features and combining them with the multivariate strongly correlated features via a CNN. Transfer learning based on temporal feature analysis is used to improve the cross-domain learning capabilities of the model. We conduct case studies on a NASA dataset and the University of Maryland dataset. The results show that the proposed method is effective in improving the lithium battery SOH estimation accuracy under limited data.
(This article belongs to the Special Issue Transfer Learning Methods in Equipment Reliability Management)
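A simplified stand-in for the correlation-based feature screening described above, plain Pearson correlation against a threshold (the paper's T-Pearson analysis is more involved), could look like this; the threshold and toy features are assumptions:

```python
import numpy as np

def strongly_correlated(features, target, thresh=0.7):
    """Return indices of feature columns whose absolute Pearson correlation
    with the target exceeds a threshold, plus all correlations."""
    features = np.asarray(features, float)
    r = np.array([np.corrcoef(features[:, j], target)[0, 1]
                  for j in range(features.shape[1])])
    return np.flatnonzero(np.abs(r) >= thresh), r

t = np.linspace(0.0, 1.0, 100)                     # stand-in SOH trend
alt = np.where(np.arange(100) % 2 == 0, 1.0, -1.0) # uncorrelated "feature"
features = np.column_stack([t, 1.0 - 2.0 * t, alt])
idx, r = strongly_correlated(features, t)          # keeps columns 0 and 1
```

Only the retained columns would be passed on to the downstream CNN, discarding features that carry little information about SOH.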

16 pages, 1730 KiB  
Article
Retail Demand Forecasting: A Comparative Analysis of Deep Neural Networks and the Proposal of LSTMixer, a Linear Model Extension
by Georgios Theodoridis and Athanasios Tsadiras
Information 2025, 16(7), 596; https://doi.org/10.3390/info16070596 - 11 Jul 2025
Viewed by 591
Abstract
Accurate retail demand forecasting is integral to the operational efficiency of any retail business. As demand is described over time, the prediction of demand is a time-series forecasting problem which may be addressed in a univariate manner, via statistical methods and simplistic machine learning approaches, or in a multivariate fashion using generic deep learning forecasters that are well-established in other fields. This study analyzes, optimizes, trains and tests such forecasters, namely the Temporal Fusion Transformer and the Temporal Convolutional Network, alongside the recently proposed Time-Series Mixer, to accurately forecast retail demand given a dataset of historical sales in 45 stores with their accompanied features. Moreover, the present work proposes a novel extension of the Time-Series Mixer architecture, the LSTMixer, which utilizes an additional Long Short-Term Memory block to achieve better forecasts. The results indicate that the proposed LSTMixer model is the better predictor, whilst all the other aforementioned models outperform the common statistical and machine learning methods. An ablation test is also performed to ensure that the extension within the LSTMixer design is responsible for the improved results. The findings promote the use of deep learning models for retail demand forecasting problems and establish LSTMixer as a viable and efficient option.
(This article belongs to the Special Issue Artificial Intelligence (AI) for Economics and Business Management)

21 pages, 1871 KiB  
Article
Fusion of Recurrence Plots and Gramian Angular Fields with Bayesian Optimization for Enhanced Time-Series Classification
by Maria Mariani, Prince Appiah and Osei Tweneboah
Axioms 2025, 14(7), 528; https://doi.org/10.3390/axioms14070528 - 10 Jul 2025
Viewed by 514
Abstract
Time-series classification remains a critical task across various domains, demanding models that effectively capture both local recurrence structures and global temporal dependencies. We introduce a novel framework that transforms time series into image representations by fusing recurrence plots (RPs) with both Gramian Angular Summation Fields (GASFs) and Gramian Angular Difference Fields (GADFs). This fusion enriches the structural encoding of temporal dynamics. To ensure optimal performance, Bayesian Optimization is employed to automatically select the ideal image resolution, eliminating the need for manual tuning. Unlike prior methods that rely on individual transformations, our approach concatenates RP, GASF, and GADF into a unified representation and generalizes to multivariate data by stacking transformation channels across sensor dimensions. Experiments on seven univariate datasets show that our method significantly outperforms traditional classifiers such as one-nearest neighbor with Dynamic Time Warping, Shapelet Transform, and RP-based convolutional neural networks. For multivariate tasks, the proposed fusion model achieves macro F1 scores of 91.55% on the UCI Human Activity Recognition dataset and 98.95% on the UCI Room Occupancy Estimation dataset, outperforming standard deep learning baselines. These results demonstrate the robustness and generalizability of our framework, establishing a new benchmark for image-based time-series classification through principled fusion and adaptive optimization.
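The RP/GASF/GADF fusion can be sketched channel by channel as below. The Bayesian Optimization of image resolution is omitted, and the unthresholded recurrence plot is an assumption (thresholded variants are also common):

```python
import numpy as np

def rp_gaf_image(x):
    """Encode a univariate series as a 3-channel image: recurrence plot,
    GASF, and GADF stacked along the channel axis."""
    x = np.asarray(x, float)
    # Recurrence plot: unthresholded pairwise distances.
    rp = np.abs(x[:, None] - x[None, :])
    # Rescale to [-1, 1] and map to polar angles for the Gramian fields.
    xs = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    phi = np.arccos(np.clip(xs, -1.0, 1.0))
    gasf = np.cos(phi[:, None] + phi[None, :])   # summation field
    gadf = np.sin(phi[:, None] - phi[None, :])   # difference field
    return np.stack([rp, gasf, gadf])            # shape (3, T, T)

img = rp_gaf_image(np.sin(np.linspace(0, 4 * np.pi, 64)))  # (3, 64, 64)
```

For multivariate data, this transform is applied per sensor and the resulting channel stacks are concatenated, matching the stacking strategy described in the abstract.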

18 pages, 3088 KiB  
Article
Incremental Multi-Step Learning MLP Model for Online Soft Sensor Modeling
by Yihan Wang, Jiahao Tao and Liang Zhao
Sensors 2025, 25(14), 4303; https://doi.org/10.3390/s25144303 - 10 Jul 2025
Viewed by 253
Abstract
Industrial production often involves complex time-varying operating conditions that result in continuous time-series production data. The traditional soft sensor approach has difficulty adjusting to such dynamic changes, which makes model performance less optimal. Furthermore, online analytical systems have significant operational and maintenance costs and entail a substantial delay in measurement output, limiting their ability to provide real-time control. In order to deal with these challenges, this paper introduces a multivariate multi-step predictive multilayer perceptron regression soft-sensing model, referred to as incremental MVMS-MLP. This model incorporates incremental learning strategies to enhance its adaptability and accuracy in multivariate predictions. As part of the method, a pre-trained MVMS-MLP model is developed, which integrates multivariate multi-step prediction with MLP regression to handle temporal data. Through the use of incremental learning, an incremental MVMS-MLP model is constructed from this pre-trained model. The effectiveness of the proposed method is demonstrated by benchmark problems and real-world industrial case studies.
(This article belongs to the Section Industrial Sensors)
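The incremental-learning idea, pre-train once and then keep updating on streaming batches, can be sketched with scikit-learn's `partial_fit`. The synthetic data and network size are assumptions, and this is not the authors' MVMS-MLP model:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def make_batch(n):
    """Synthetic stand-in for multivariate process data (assumed)."""
    X = rng.normal(size=(n, 4))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.01 * rng.normal(size=n)
    return X, y

# Pre-training step on an initial batch ...
model = MLPRegressor(hidden_layer_sizes=(32,), random_state=0)
X0, y0 = make_batch(200)
model.partial_fit(X0, y0)

# ... then incremental updates as new time-series batches arrive,
# so the soft sensor adapts without full retraining.
for _ in range(50):
    Xb, yb = make_batch(64)
    model.partial_fit(Xb, yb)

pred = model.predict(X0)
```

Each `partial_fit` call performs one update pass over the new batch, which is what lets the model track time-varying operating conditions.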

24 pages, 3200 KiB  
Article
A Spatial–Temporal Time Series Decomposition for Improving Independent Channel Forecasting
by Yue Yu, Pavel Loskot, Wenbin Zhang, Qi Zhang and Yu Gao
Mathematics 2025, 13(14), 2221; https://doi.org/10.3390/math13142221 - 8 Jul 2025
Viewed by 308
Abstract
Forecasting multivariate time series is a pivotal task in controlling multi-sensor systems. The joint forecasting of all channels may be too complex, whereas forecasting the channels independently may cause important spatial inter-dependencies to be overlooked. In this paper, we improve the performance of single-channel forecasting algorithms by designing an interpretable front-end that extracts the spatial–temporal components from the input multivariate time series. Specifically, the multivariate samples are first segmented into equal-sized matrix symbols. The symbols are decomposed into the frequency-separated Intrinsic Mode Functions (IMFs) using a 2D Empirical-Mode Decomposition (EMD). The IMF components in each channel are then forecasted independently using relatively simple univariate predictors (UPs) such as DLinear, FITS, and TCN. The symbol size is determined to maximize the temporal stationarity of the EMD residual trend using Bayesian optimization. In addition, since the overall performance is usually dominated by a few of the weakest predictors, it is shown that the forecasting accuracy can be further improved by reordering the corresponding channels to make more correlated channels more adjacent. However, channel reordering requires retraining the affected predictors. The main advantage of the proposed forecasting framework for multivariate time series is that it retains the interpretability and simplicity of single-channel forecasting methods while improving their accuracy by capturing information about the spatial-channel dependencies. This is demonstrated numerically on a 64-channel EEG dataset.
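Reordering channels so that correlated channels become adjacent can be done greedily on the correlation matrix; this nearest-neighbor criterion is an assumption, and the paper's exact procedure may differ:

```python
import numpy as np

def reorder_channels(x):
    """Greedy ordering that places highly correlated channels next to each
    other. x has shape (T, C); returns a channel permutation as a list."""
    corr = np.abs(np.corrcoef(x.T))
    np.fill_diagonal(corr, -1.0)          # a channel never follows itself
    order = [0]
    remaining = set(range(1, x.shape[1]))
    while remaining:
        nxt = max(remaining, key=lambda j: corr[order[-1], j])
        order.append(nxt)
        remaining.remove(nxt)
    return order

t = np.linspace(0.0, 1.0, 100)
alt = np.where(np.arange(100) % 2 == 0, 1.0, -1.0)
# Channels 0/2 share a trend; channels 1/3 share the alternating pattern.
x = np.column_stack([t, alt, t + 0.01 * alt, alt + 0.01 * t])
order = reorder_channels(x)
```

After reordering, the predictors attached to the moved channels would need to be retrained, which is the cost noted in the abstract.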

33 pages, 2533 KiB  
Article
VBTCKN: A Time Series Forecasting Model Based on Variational Mode Decomposition with Two-Channel Cross-Attention Network
by Zhiguo Xiao, Changgen Li, Huihui Hao, Siwen Liang, Qi Shen and Dongni Li
Symmetry 2025, 17(7), 1063; https://doi.org/10.3390/sym17071063 - 4 Jul 2025
Viewed by 421
Abstract
Time series forecasting serves a critical function in domains such as energy, meteorology, and power systems by leveraging historical data to predict future trends. However, existing methods often prioritize long-term dependencies while neglecting the integration of local features and global patterns, resulting in limited accuracy for short-term predictions of non-stationary multivariate sequences. To address these challenges, this paper proposes a time series forecasting model named VBTCKN based on variational mode decomposition and a dual-channel cross-attention network. First, the model employs variational mode decomposition (VMD) to decompose the time series into multiple frequency-complementary modal components, thereby reducing sequence volatility. Subsequently, the BiLSTM channel extracts temporal dependencies between sequences, while the transformer channel captures dynamic correlations between local features and global patterns. The cross-attention mechanism dynamically fuses features from both channels, enhancing complementary information integration. Finally, prediction results are generated through Kolmogorov–Arnold networks (KAN). Experiments conducted on four public datasets demonstrated that VBTCKN outperformed other state-of-the-art methods in both accuracy and robustness. Compared with BiLSTM, VBTCKN reduced RMSE by 63.32%, 68.31%, 57.98%, and 90.76% on the four datasets, respectively.

20 pages, 1198 KiB  
Article
Semi-Supervised Deep Learning Framework for Predictive Maintenance in Offshore Wind Turbines
by Valerio F. Barnabei, Tullio C. M. Ancora, Giovanni Delibra, Alessandro Corsini and Franco Rispoli
Int. J. Turbomach. Propuls. Power 2025, 10(3), 14; https://doi.org/10.3390/ijtpp10030014 - 4 Jul 2025
Viewed by 428
Abstract
The increasing deployment of wind energy systems, particularly offshore wind farms, necessitates advanced monitoring and maintenance strategies to ensure optimal performance and minimize downtime. Supervisory Control And Data Acquisition (SCADA) systems have become indispensable tools for monitoring the operational health of wind turbines, generating vast quantities of time series data from various sensors. Anomaly detection techniques applied to this data offer the potential to proactively identify deviations from normal behavior, providing early warning signals of potential component failures. Traditional model-based approaches for fault detection often struggle to capture the complexity and non-linear dynamics of wind turbine systems. This has led to a growing interest in data-driven methods, particularly those leveraging machine learning and deep learning, to address anomaly detection in wind energy applications. This study focuses on the development and application of a semi-supervised, multivariate anomaly detection model for horizontal axis wind turbines. The core of this study lies in Bidirectional Long Short-Term Memory (BI-LSTM) networks, specifically a BI-LSTM autoencoder architecture, to analyze time series data from a SCADA system and automatically detect anomalous behavior that could indicate potential component failures. Moreover, the approach is reinforced by the integration of the Isolation Forest algorithm, which operates in an unsupervised manner to further refine normal behavior by identifying and excluding additional anomalous points in the training set, beyond those already labeled by the data provider. The research utilizes a real-world dataset provided by EDP Renewables, encompassing two years of comprehensive SCADA records collected from a single offshore wind turbine operating in the Gulf of Guinea. Furthermore, the dataset contains the logs of failure events and recorded alarms triggered by the SCADA system across a wide range of subsystems. The paper proposes a multi-modal anomaly detection framework orchestrating an unsupervised module (i.e., decision tree method) with a supervised one (i.e., BI-LSTM AE). The results highlight the efficacy of the BI-LSTM autoencoder in accurately identifying anomalies within the SCADA data that exhibit strong temporal correlation with logged warnings and the actual failure events. The model’s performance is rigorously evaluated using standard machine learning metrics, including precision, recall, F1 Score, and accuracy, all of which demonstrate favorable results. Further analysis is conducted using Cumulative Sum (CUSUM) control charts to gain a deeper understanding of the identified anomalies’ behavior, particularly their persistence and timing leading up to the failures.
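The CUSUM control charts used in the failure analysis accumulate deviations from a target mean. A minimal one-sided pair of statistics with slack k, with threshold handling omitted and the toy signal invented for illustration:

```python
import numpy as np

def cusum(x, target, k=0.5):
    """One-sided upper/lower CUSUM statistics with slack k; an alarm would
    be raised when either statistic crosses a chosen threshold h."""
    x = np.asarray(x, float)
    hi = np.zeros(len(x))
    lo = np.zeros(len(x))
    for t in range(1, len(x)):
        hi[t] = max(0.0, hi[t - 1] + x[t] - target - k)   # upward drift
        lo[t] = max(0.0, lo[t - 1] + target - x[t] - k)   # downward drift
    return hi, lo

# A mean shift at t=5 drives the upper statistic up steadily.
hi, lo = cusum(np.concatenate([np.zeros(5), 2.0 * np.ones(5)]), target=0.0)
```

Because the statistics accumulate small persistent deviations, CUSUM reveals the onset and persistence of an anomaly rather than just isolated spikes.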

17 pages, 2929 KiB  
Article
Novel Hybrid Deep Learning Model for Forecasting FOWT Power Output
by Mohammad Barooni, Deniz Velioglu Sogut, Parviz Sedigh and Masoumeh Bahrami
Energies 2025, 18(13), 3532; https://doi.org/10.3390/en18133532 - 4 Jul 2025
Viewed by 309
Abstract
This study presents a novel approach in the field of renewable energy, focusing on the power generation capabilities of floating offshore wind turbines (FOWTs). The study addresses the challenges of designing and assessing the power generation of FOWTs due to their multidisciplinary nature involving aerodynamics, hydrodynamics, structural dynamics, and control systems. A hybrid deep learning model combining Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks is proposed to predict the performance of FOWTs accurately and more efficiently than traditional numerical models. This model addresses the computational complexity and lengthy processing times of conventional models, offering adaptability, scalability, and efficient handling of nonlinear dynamics. The results for predicting the generator power of a spar-type FOWT in a multivariable parallel time-series dataset using the CNN-LSTM model showed promising outcomes, offering valuable insights into the model’s performance and potential applications. Its ability to capture a comprehensive range of load case scenarios, from mild to severe, through the integration of multiple relevant features significantly enhances the model’s robustness and applicability in realistic offshore environments. The research demonstrates the potential of deep learning methods in advancing renewable energy technology, specifically in optimizing turbine efficiency, anticipating maintenance needs, and integrating wind power into energy grids.
(This article belongs to the Section A3: Wind, Wave and Tidal Energy)
