Search Results (332)

Search Parameters:
Keywords = Temporal Convolutional Network (TCN)

19 pages, 3436 KiB  
Article
An Improved Wind Power Forecasting Model Considering Peak Fluctuations
by Shengjie Yang, Jie Tang, Lun Ye, Jiangang Liu and Wenjun Zhao
Electronics 2025, 14(15), 3050; https://doi.org/10.3390/electronics14153050 - 30 Jul 2025
Viewed by 117
Abstract
Wind power output sequences exhibit strong randomness and intermittency; traditional single forecasting models struggle to capture the internal features of such sequences, are highly susceptible to interference from high-frequency noise, and remain notably inaccurate at the peaks where the power curve undergoes abrupt changes. To address the poor fitting at peaks, a short-term wind power forecasting method based on an improved Informer model is proposed. First, a temporal convolutional network (TCN) is introduced to enhance the model's ability to capture regional segment features along the temporal dimension, enlarging its receptive field to handle wind power fluctuation under varying environmental conditions. Next, a discrete cosine transform (DCT) is employed for adaptive modeling of frequency dependencies between channels, converting the time-series data into frequency-domain representations to extract frequency features. These frequency-domain features are then weighted by a channel attention mechanism to improve the model's ability to capture peak features and resist noise interference. Finally, the Informer generative decoder outputs the power prediction; this enables the model to simultaneously leverage neighboring temporal segment features and long-range inter-temporal dependencies for future wind power prediction, substantially improving the fitting accuracy at power-curve peaks. Experimental results validate the effectiveness and practicality of the proposed model; compared with other models, the proposed approach reduces MAE by 9.14–42.31% and RMSE by 12.57–47.59%. Full article
(This article belongs to the Special Issue Digital Intelligence Technology and Applications)
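The dilated causal convolutions that give a TCN its wide receptive field can be illustrated with a minimal NumPy sketch; the kernel values, layer count, and dilation factors below are illustrative assumptions, not the configuration of the paper's model:

```python
import numpy as np

def dilated_causal_conv(x, kernel, dilation):
    """1-D dilated causal convolution: output[t] depends only on x[t], x[t-d], x[t-2d], ..."""
    k = len(kernel)
    pad = (k - 1) * dilation          # left-pad so no future samples leak in
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(kernel[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

# Stacking layers with dilations 1, 2, 4 grows the receptive field to
# 1 + (k - 1) * (1 + 2 + 4) = 22 samples for kernel size k = 4.
x = np.sin(np.linspace(0, 6 * np.pi, 64))
h = x
for d in (1, 2, 4):
    h = dilated_causal_conv(h, kernel=np.array([0.25, 0.25, 0.25, 0.25]), dilation=d)
print(h.shape)  # (64,)
```

The left padding is what makes the stack causal: perturbing a future sample can never change an earlier output, which is the property that lets a TCN forecast without look-ahead leakage.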

18 pages, 10854 KiB  
Article
A Novel Method for Predicting Landslide-Induced Displacement of Building Monitoring Points Based on Time Convolution and Gaussian Process
by Jianhu Wang, Xianglin Zeng, Yingbo Shi, Jiayi Liu, Liangfu Xie, Yan Xu and Jie Liu
Electronics 2025, 14(15), 3037; https://doi.org/10.3390/electronics14153037 - 30 Jul 2025
Viewed by 135
Abstract
Accurate prediction of landslide-induced displacement is essential for the structural integrity and operational safety of buildings and infrastructure situated in geologically unstable regions. This study introduces a novel hybrid predictive framework that synergistically integrates Gaussian Process Regression (GPR) with Temporal Convolutional Neural Networks (TCNs), herein referred to as the GTCN model, to forecast displacement at building monitoring points subject to landslide activity. The proposed methodology is validated using time-series monitoring data collected from the slope adjacent to the Zhongliang Reservoir in Wuxi County, Chongqing, an area where slope instability poses a significant threat to nearby structural assets. Experimental results demonstrate the GTCN model’s superior predictive performance, particularly under challenging conditions of incomplete or sparsely sampled data. The model proves highly effective in accurately characterizing both abrupt fluctuations within the displacement time series and capturing long-term deformation trends. Furthermore, the GTCN framework outperforms comparative hybrid models based on Gated Recurrent Units (GRUs) and GPR, with its advantage being especially pronounced in data-limited scenarios. It also exhibits enhanced capability for temporal feature extraction relative to conventional imputation-based forecasting strategies like forward-filling. By effectively modeling both nonlinear trends and uncertainty within displacement sequences, the GTCN framework offers a robust and scalable solution for landslide-related risk assessment and early warning applications. Its applicability to building safety monitoring underscores its potential contribution to geotechnical hazard mitigation and resilient infrastructure management. Full article
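The GPR half of such a hybrid can be sketched in a few lines of NumPy. The kernel hyperparameters and the synthetic displacement record with a monitoring gap below are illustrative assumptions, not the paper's data or configuration:

```python
import numpy as np

def rbf_kernel(a, b, length=3.0, var=25.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gpr_predict(t_train, y_train, t_test, noise=1e-2):
    """GP regression posterior mean/std: fills gaps in a sparsely sampled series
    and reports uncertainty alongside the interpolated values."""
    K = rbf_kernel(t_train, t_train) + noise * np.eye(len(t_train))
    Ks = rbf_kernel(t_test, t_train)
    Kss = rbf_kernel(t_test, t_test)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# hypothetical displacement record (mm) with a monitoring gap between t=4 and t=8
t_obs = np.array([0., 1., 2., 3., 4., 8., 9., 10.])
y_obs = 0.5 * t_obs                  # slowly creeping slope, illustrative
t_new = np.array([5., 6., 7.])
mu, sd = gpr_predict(t_obs, y_obs, t_new)
print(np.round(mu, 2), np.round(sd, 3))
```

The posterior mean supplies the missing samples and the posterior standard deviation quantifies how trustworthy each filled-in value is, which is the advantage such a scheme has over plain forward-filling.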

21 pages, 3722 KiB  
Article
State of Health Estimation for Lithium-Ion Batteries Based on TCN-RVM
by Yu Zhao, Yonghong Xu, Yidi Wei, Liang Tong, Yiyang Li, Minghui Gong, Hongguang Zhang, Baoying Peng and Yinlian Yan
Appl. Sci. 2025, 15(15), 8213; https://doi.org/10.3390/app15158213 - 23 Jul 2025
Viewed by 236
Abstract
State of Health (SOH) estimation of lithium-ion batteries is a core function of battery management systems, directly affecting the safe operation, lifetime prediction, and economic efficiency of batteries. However, existing methods still struggle to balance feature robustness and model generalization ability; for instance, some studies rely on features whose physical correlation with SOH lacks strict verification, or their models cannot simultaneously capture the temporal dynamics of health factors and the nonlinear mapping relationships. To address this, this paper proposes an SOH estimation method based on incremental capacity (IC) curves and a Temporal Convolutional Network–Relevance Vector Machine (TCN-RVM) model, with core innovations in two aspects. First, five health factors are extracted from IC curves, and their strong correlation with SOH is verified using both Pearson and Spearman coefficients, ensuring the physical rationality and statistical significance of the feature selection. Second, the TCN-RVM model is constructed to achieve complementary advantages: the dilated causal convolutions of the TCN extract temporal local features of the health factors, addressing the insufficient capture of long-range dependencies in traditional models, while the Bayesian inference framework of the RVM enhances nonlinear mapping capability and small-sample generalization, avoiding the overfitting tendency of complex models. Experimental validation is conducted on the lithium-ion battery dataset from the University of Maryland. The results show that the mean absolute error of the SOH estimation does not exceed 0.72%, significantly outperforming comparative models such as CNN-GRU, KELM, and SVM in both accuracy and reliability. Full article
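The dual-coefficient feature screening step can be sketched as follows; the synthetic capacity-fade series and the hypothetical IC-curve feature are illustrative stand-ins for the paper's data:

```python
import numpy as np

def pearson(a, b):
    return float(np.corrcoef(a, b)[0, 1])

def spearman(a, b):
    # Spearman = Pearson correlation of the ranks (no tie handling; fine for continuous data)
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(a), rank(b))

rng = np.random.default_rng(0)
soh = np.linspace(1.00, 0.80, 100)                  # synthetic capacity fade
ic_peak = 3.0 * soh + rng.normal(0.0, 0.01, 100)    # hypothetical IC-curve health factor

r_p, r_s = pearson(ic_peak, soh), spearman(ic_peak, soh)
print(round(r_p, 3), round(r_s, 3))   # both near 1 -> the feature tracks SOH
```

Pearson checks the linear relationship while Spearman checks the monotonic one; requiring both to be high, as the paper does, guards against keeping a feature that only correlates through a few outliers.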

17 pages, 2719 KiB  
Article
State of Health Prediction for Lithium-Ion Batteries Based on Gated Temporal Network Assisted by Improved Grasshopper Optimization
by Xiankun Wei, Silun Peng and Mingli Mo
Energies 2025, 18(14), 3856; https://doi.org/10.3390/en18143856 - 20 Jul 2025
Viewed by 298
Abstract
Accurate SOH prediction provides a reliable reference for lithium-ion battery maintenance. However, novel algorithms are still needed because few studies have considered the correlations between monitored parameters in Euclidean and non-Euclidean spaces at different time points. To address this challenge, a novel gated-temporal network assisted by improved grasshopper optimization (IGOA-GGNN-TCN) is developed. In this model, features obtained from lithium-ion batteries are used to construct graph data based on cosine similarity. On this basis, the GGNN-TCN is employed to capture the potential correlations between monitored parameters in Euclidean and non-Euclidean spaces. Furthermore, IGOA is introduced to handle hyperparameter optimization for the GGNN-TCN, improving convergence speed and alleviating the local-optimum problem. Competitive results on the Oxford dataset indicate that the SOH prediction performance of the proposed IGOA-GGNN-TCN surpasses conventional methods such as convolutional neural networks (CNNs) and gated recurrent units (GRUs), achieving an R2 value greater than 0.99. By integrating improved grasshopper optimization with hybrid graph-temporal modeling, the method offers a novel and effective approach for SOH estimation and a promising tool for battery management systems in real-world applications. Full article
(This article belongs to the Special Issue AI Solutions for Energy Management: Smart Grids and EV Charging)
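The cosine-similarity graph construction the model starts from can be sketched as below; the feature vectors and the similarity threshold are illustrative assumptions:

```python
import numpy as np

def cosine_similarity_graph(features, threshold=0.9):
    """Adjacency matrix: connect nodes whose feature vectors have cosine similarity >= threshold."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.clip(norms, 1e-12, None)   # unit-normalize each row
    sim = unit @ unit.T                             # pairwise cosine similarities
    adj = (sim >= threshold).astype(float)
    np.fill_diagonal(adj, 0.0)                      # no self-loops
    return adj

# four nodes; the first two share a direction, the last two share another
feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
A = cosine_similarity_graph(feats)
print(A)
```

The resulting symmetric adjacency matrix is what a graph network such as a GGNN consumes to model non-Euclidean relations between monitored parameters.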

20 pages, 4616 KiB  
Article
Temporal Convolutional Network with Attention Mechanisms for Strong Wind Early Warning in High-Speed Railway Systems
by Wei Gu, Guoyuan Yang, Hongyan Xing, Yajing Shi and Tongyuan Liu
Sustainability 2025, 17(14), 6339; https://doi.org/10.3390/su17146339 - 10 Jul 2025
Viewed by 381
Abstract
High-speed railway (HSR) is a key transport mode for achieving carbon reduction targets and promoting sustainable regional economic development due to its fast, efficient, and low-carbon nature. Accurate wind speed forecasting (WSF) is vital for HSR systems, as it provides future wind conditions that are critical for ensuring safe train operations. Numerous WSF schemes based on deep learning have been proposed. However, accurately forecasting strong wind events remains challenging due to the complex and dynamic nature of wind. In this study, we propose a novel hybrid network architecture, MHSETCN-LSTM, for forecasting strong wind. The MHSETCN-LSTM integrates temporal convolutional networks (TCNs) and long short-term memory networks (LSTMs) to capture both short-term fluctuations and long-term trends in wind behavior. The multi-head squeeze-and-excitation (MHSE) attention mechanism dynamically recalibrates the importance of different aspects of the input sequence, allowing the model to focus on critical time steps, particularly when abrupt wind events occur. In addition to wind speed, we introduce wind direction (WD) to characterize wind behavior due to its impact on the aerodynamic forces acting on trains. To maintain the periodicity of WD, we employ a triangular transform to predict the sine and cosine values of WD, improving the reliability of predictions. Massive experiments are conducted to evaluate the effectiveness of the proposed method based on real-world wind data collected from sensors along the Beijing–Baotou railway. Experimental results demonstrated that our model outperforms state-of-the-art solutions for WSF, achieving a mean-squared error (MSE) of 0.0393, a root-mean-squared error (RMSE) of 0.1982, and a coefficient of determination (R2) of 99.59%. 
These experimental results validate the efficacy of our proposed model in enhancing the resilience and sustainability of railway infrastructure. Furthermore, the model can be utilized in other wind-sensitive sectors, such as highways, ports, and offshore wind operations, further promoting the achievement of Sustainable Development Goal 9. Full article
(This article belongs to the Section Environmental Sustainability and Applications)
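The triangular transform of wind direction can be sketched as follows; the two sample angles are chosen only to show why the encoding preserves periodicity:

```python
import numpy as np

def encode_wd(deg):
    """Encode wind direction as (sin, cos) so that 359 deg and 1 deg are close."""
    rad = np.deg2rad(deg)
    return np.sin(rad), np.cos(rad)

def decode_wd(s, c):
    """Recover direction in [0, 360) from predicted sine/cosine values."""
    return np.rad2deg(np.arctan2(s, c)) % 360.0

s, c = encode_wd(np.array([359.0, 1.0]))
# Euclidean distance in (sin, cos) space is small, unlike |359 - 1| = 358
gap = float(np.hypot(s[0] - s[1], c[0] - c[1]))
print(round(gap, 4))  # -> 0.0349
```

Predicting the sine/cosine pair instead of the raw angle avoids the artificial discontinuity at 0/360 degrees, and `arctan2` maps the pair back to a usable bearing.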

20 pages, 11079 KiB  
Article
A Bayesian Ensemble Learning-Based Scheme for Real-Time Error Correction of Flood Forecasting
by Liyao Peng, Jiemin Fu, Yanbin Yuan, Xiang Wang, Yangyong Zhao and Jian Tong
Water 2025, 17(14), 2048; https://doi.org/10.3390/w17142048 - 8 Jul 2025
Viewed by 318
Abstract
To address the critical demand for high-precision forecasts in flood management, real-time error correction techniques are increasingly implemented to improve the accuracy and operational reliability of the hydrological prediction framework. However, developing a robust error correction scheme remains a significant challenge due to the compounded errors inherent in hydrological modeling frameworks. In this study, a Bayesian ensemble learning-based correction (BELC) scheme is proposed which integrates hydrological modeling with multiple machine learning methods to enhance real-time error correction for flood forecasting. The Xin’anjiang (XAJ) model is selected as the hydrological model for this study, given its proven effectiveness in flood forecasting across humid and semi-humid regions, combining structural simplicity with demonstrated predictive accuracy. The BELC scheme straightforwardly post-processes the output of the XAJ model under the Bayesian ensemble learning framework. Four machine learning methods are implemented as base learners: long short-term memory (LSTM) networks, a light gradient-boosting machine (LGBM), temporal convolutional networks (TCN), and random forest (RF). Optimal weights for all base learners are determined by the K-means clustering technique and Bayesian optimization in the BELC scheme. Four baseline schemes constructed by base learners and three ensemble learning-based schemes are also built for comparison purposes. The performance of the BELC scheme is systematically evaluated in the Hengshan Reservoir watershed (Fenghua City, China). Results indicate the following: (1) The BELC scheme achieves better performance in both accuracy and robustness compared to the four baseline schemes and three ensemble learning-based schemes. The average performance metrics for 1–3 h lead times are 0.95 (NSE), 0.92 (KGE), 24.25 m3/s (RMSE), and 8.71% (RPE), with a PTE consistently below 1 h in advance. 
(2) The K-means clustering technique proves particularly effective within the ensemble learning framework for high flow ranges, where the correction performance improves by 62%, 100%, and 100% for 1 h, 2 h, and 3 h lead times, respectively. Overall, the BELC scheme demonstrates the potential of a Bayesian ensemble learning framework for improving real-time error correction in flood forecasting systems. Full article
(This article belongs to the Special Issue Innovations in Hydrology: Streamflow and Flood Prediction)
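The core combination step of such an ensemble corrector can be sketched as a convex weighted sum of base-learner outputs; the prediction values and weights below are hypothetical (the paper tunes its weights via K-means clustering and Bayesian optimization):

```python
import numpy as np

def ensemble_correct(base_preds, weights):
    """Combine base-learner corrections as a convex weighted sum (weights >= 0, sum to 1)."""
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0.0) and np.isclose(w.sum(), 1.0)
    return np.tensordot(w, np.asarray(base_preds, dtype=float), axes=1)

# hypothetical corrected flows (m^3/s) from LSTM, LGBM, TCN, RF at three lead times
preds = [[100., 210., 305.],
         [ 98., 205., 300.],
         [102., 215., 310.],
         [ 96., 200., 295.]]
weights = [0.4, 0.2, 0.3, 0.1]   # illustrative; optimal weights are learned
corrected = ensemble_correct(preds, weights)
print(corrected)  # -> [ 99.8 209.5 304.5]
```

Constraining the weights to a convex combination keeps the ensemble output inside the envelope of the base learners, which is one reason weighted ensembles tend to be more robust than any single corrector.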

18 pages, 9571 KiB  
Article
TCN-MAML: A TCN-Based Model with Model-Agnostic Meta-Learning for Cross-Subject Human Activity Recognition
by Chih-Yang Lin, Chia-Yu Lin, Yu-Tso Liu, Yi-Wei Chen, Hui-Fuang Ng and Timothy K. Shih
Sensors 2025, 25(13), 4216; https://doi.org/10.3390/s25134216 - 6 Jul 2025
Viewed by 328
Abstract
Human activity recognition (HAR) using Wi-Fi-based sensing has emerged as a powerful, non-intrusive solution for monitoring human behavior in smart environments. Unlike wearable sensor systems that require user compliance, Wi-Fi channel state information (CSI) enables device-free recognition by capturing variations in signal propagation caused by human motion. This makes Wi-Fi sensing highly attractive for ambient healthcare, security, and elderly care applications. However, real-world deployment faces two major challenges: (1) significant cross-subject signal variability due to physical and behavioral differences among individuals, and (2) limited labeled data, which restricts model generalization. To address these sensor-related challenges, we propose TCN-MAML, a novel framework that integrates temporal convolutional networks (TCN) with model-agnostic meta-learning (MAML) for efficient cross-subject adaptation in data-scarce conditions. We evaluate our approach on a public Wi-Fi CSI dataset using a strict cross-subject protocol, where training and testing subjects do not overlap. The proposed TCN-MAML achieves 99.6% accuracy, demonstrating superior generalization and efficiency over baseline methods. Experimental results confirm the framework’s suitability for low-power, real-time HAR systems embedded in IoT sensor networks. Full article
(This article belongs to the Special Issue Sensors and Sensing Technologies for Object Detection and Recognition)
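The MAML component can be sketched with a first-order meta-update on a toy linear model; the two "subjects" below are synthetic tasks, and the learning rates and step counts are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def loss_grad(w, X, y):
    """Gradient of MSE loss for a linear model y_hat = X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def maml_step(w, tasks, inner_lr=0.05, outer_lr=0.1):
    """One first-order MAML meta-update: adapt per task from the shared init w,
    then move w toward parameters that perform well *after* adaptation."""
    meta_grad = np.zeros_like(w)
    for X_s, y_s, X_q, y_q in tasks:                       # support/query split per subject
        w_adapted = w - inner_lr * loss_grad(w, X_s, y_s)  # inner loop (one step)
        meta_grad += loss_grad(w_adapted, X_q, y_q)        # gradient of query loss
    return w - outer_lr * meta_grad / len(tasks)

rng = np.random.default_rng(0)
# two hypothetical "subjects", each a slightly different linear task
tasks = []
for shift in (0.8, 1.2):
    X = rng.normal(size=(20, 3))
    y = X @ (shift * np.ones(3))
    tasks.append((X[:10], y[:10], X[10:], y[10:]))

w = np.zeros(3)
for _ in range(200):
    w = maml_step(w, tasks)
print(np.round(w, 2))   # near the cross-task average weight
```

The learned initialization sits between the per-subject optima, so one cheap inner-loop step adapts it to a new subject, which is exactly the data-scarce cross-subject setting the paper targets.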

27 pages, 2954 KiB  
Article
Limited Data Availability in Building Energy Consumption Prediction: A Low-Rank Transfer Learning with Attention-Enhanced Temporal Convolution Network
by Bo Wang, Qiming Fu, You Lu and Ke Liu
Information 2025, 16(7), 575; https://doi.org/10.3390/info16070575 - 4 Jul 2025
Viewed by 195
Abstract
Building energy consumption prediction (BECP) is the essential foundation for attaining energy efficiency in buildings, contributing significantly to tackling global energy challenges and facilitating energy sustainability. However, while data-driven methods have emerged as a crucial approach to this complex problem, the limited availability of data presents a significant challenge to model training. To address this challenge, this paper presents an innovative method named Low-Rank Transfer Learning with Attention-Enhanced Temporal Convolution Network (LRTL-AtTCN). LRTL-AtTCN integrates an attention mechanism with a temporal convolutional network (TCN), improving its ability to extract global and local dependencies. Moreover, LRTL-AtTCN employs low-rank decomposition to reduce the number of parameters transferred from similar buildings, achieving better transfer performance when data are limited. Experimentally, we conduct a comprehensive evaluation across three forecasting horizons: 1 week, 2 weeks, and 1 month. Compared to the horizon-matched baseline, LRTL-AtTCN cuts the MAE by 91.2%, 30.2%, and 26.4%, respectively, and lifts the 1-month R2 from 0.8188 to 0.9286. On every horizon it also outperforms state-of-the-art transfer-learning methods, confirming its strong generalization and transfer capability in BECP. Full article
(This article belongs to the Special Issue AI Applications in Construction and Infrastructure)
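The parameter saving from low-rank decomposition can be sketched with a truncated SVD of a weight matrix; the layer shape (256x512) and rank are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def low_rank_factors(W, rank):
    """Truncated-SVD factorization W ~= A @ B, cutting parameters from m*n to (m+n)*rank."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]       # (m, rank), singular values folded into A
    B = Vt[:rank, :]                 # (rank, n)
    return A, B

rng = np.random.default_rng(0)
# a genuinely rank-4 weight matrix with the shape of a hypothetical 256x512 layer
W = rng.normal(size=(256, 4)) @ rng.normal(size=(4, 512))
A, B = low_rank_factors(W, rank=4)
full_params, lr_params = W.size, A.size + B.size
print(full_params, lr_params)        # 131072 vs 3072
```

Transferring and fine-tuning only the small factors instead of the full matrix is what makes such a scheme attractive when the target building has little data.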

37 pages, 18679 KiB  
Article
Real-Time DDoS Detection in High-Speed Networks: A Deep Learning Approach with Multivariate Time Series
by Drixter V. Hernandez, Yu-Kuen Lai and Hargyo T. N. Ignatius
Electronics 2025, 14(13), 2673; https://doi.org/10.3390/electronics14132673 - 1 Jul 2025
Viewed by 459
Abstract
The exponential growth of Distributed Denial-of-Service (DDoS) attacks in high-speed networks presents significant real-time detection and mitigation challenges. Existing detection frameworks are categorized into flow-based and packet-based approaches. Flow-based approaches usually suffer from high latency and controller overhead in high-volume traffic. In contrast, packet-based approaches are prone to high false-positive rates and limited attack classification, resulting in delayed mitigation responses. To address these limitations, we propose a real-time DDoS detection architecture that combines hardware-accelerated statistical preprocessing with GPU-accelerated deep learning models. The raw packet header information is transformed into multivariate time series data to enable classification of complex traffic patterns using Temporal Convolutional Networks (TCN), Long Short-Term Memory (LSTM) networks, and Transformer architectures. We evaluated the proposed system in experiments conducted under low- to high-volume background traffic to validate each model's robustness and adaptability in a real-time network environment. The experiments are conducted across different time window lengths to determine the trade-offs between detection accuracy and latency. The results show that larger observation windows improve detection accuracy for the TCN and LSTM models, which consistently outperform the Transformer in high-volume scenarios. Regarding model latency, TCN and Transformer exhibit constant latency across all window sizes. We also used SHAP (Shapley Additive exPlanations) analysis to identify the most discriminative traffic features, enhancing model interpretability and supporting feature selection for computational efficiency. Among the evaluated models, TCN achieves the best balance between detection performance and latency, making it the most suitable model for the proposed architecture.
These findings validate the feasibility of the proposed architecture and support its potential as a real-time DDoS detection application in a realistic high-speed network. Full article
(This article belongs to the Special Issue Emerging Technologies for Network Security and Anomaly Detection)
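The transformation of per-interval header statistics into windowed multivariate time series can be sketched as follows; the feature columns and values are hypothetical examples, not the paper's feature set:

```python
import numpy as np

def sliding_windows(series, window, step=1):
    """Slice a (T, F) multivariate series into overlapping (window, F) model inputs."""
    T = len(series)
    return np.stack([series[i:i + window] for i in range(0, T - window + 1, step)])

# hypothetical per-interval header statistics: [packets/s, SYN ratio, unique src IPs]
stats = np.array([[1200, 0.02,  40],
                  [1250, 0.03,  42],
                  [9800, 0.91, 900],   # burst resembling a SYN flood
                  [9900, 0.93, 950],
                  [1300, 0.02,  41]], dtype=float)
X = sliding_windows(stats, window=3)
print(X.shape)  # (3, 3, 3): three windows, three time steps, three features
```

The window length is the knob the paper sweeps: longer windows give the sequence models more temporal context (better accuracy) at the cost of detection latency.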

20 pages, 3320 KiB  
Article
Experimental Study on Heat Transfer Performance of FKS-TPMS Heat Sink Designs and Time Series Prediction
by Mahsa Hajialibabaei and Mohamad Ziad Saghir
Energies 2025, 18(13), 3459; https://doi.org/10.3390/en18133459 - 1 Jul 2025
Viewed by 445
Abstract
As the demand for advanced cooling solutions increases with the rise in artificial intelligence and high-performance computing, efficient thermal management becomes critical, particularly for data centers and electronic systems. Triply Periodic Minimal Surface (TPMS) heat sinks have shown superior thermal performance over conventional designs by enhancing heat transfer efficiency. In this study, a novel Fischer–Koch-S (FKS) TPMS heat sink was experimentally tested with four porosity configurations, 0.6 (identified as P6), 0.7 (identified as P7), 0.8 (identified as P8), and a gradient porosity ranging from 0.6 to 0.8 (identified as P678) along the flow direction, under a mass flow rate range of 0.012 to 0.019 kg/s. Key thermal parameters including surface temperature, thermal resistance, heat transfer coefficient, and Nusselt number were analyzed and compared to the conventional straight-channel heat sink (SCHS) using numerical modeling. Among all configurations, the P6 design demonstrated the best performance, with surface temperature differences ranging from 13.1 to 14.2 °C at 0.019 kg/s and a 54.46% higher heat transfer coefficient compared to the P8 design at the lowest mass flow rate. Thermal resistance decreased consistently with an increasing mass flow rate, with P6 achieving a 31.8% reduction compared to P8 at 0.019 kg/s. The P678 gradient design offered improved temperature uniformity and performance at higher mass flow rates. Nusselt number ratios confirmed that low-porosity and gradient TPMS designs outperform the SCHS, with performance advantages increasing as the mass flow rate rises. To further enhance the experimental process, a deep learning model based on a Temporal Convolutional Network (TCN) was developed to predict steady-state surface temperatures using early-stage time-series data, to reduce test time and enable efficient validation. Full article
(This article belongs to the Special Issue Experimental and Numerical Thermal Science in Porous Media)
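The thermal metrics compared in the study follow standard definitions: thermal resistance R = dT/Q, heat transfer coefficient h = Q/(A*dT), and Nusselt number Nu = h*L/k. A quick check with illustrative numbers (not the paper's measurements):

```python
def thermal_metrics(q_watts, area_m2, dT_kelvin, L_char_m, k_fluid):
    """Standard single-point thermal metrics for a heat sink test."""
    h = q_watts / (area_m2 * dT_kelvin)   # heat transfer coefficient, W/(m^2*K)
    R = dT_kelvin / q_watts               # thermal resistance, K/W
    Nu = h * L_char_m / k_fluid           # Nusselt number, dimensionless
    return h, R, Nu

# illustrative: 100 W dissipated over a 50 mm x 50 mm base, 14 K surface-to-inlet rise,
# 5 mm characteristic length, water coolant (k ~ 0.6 W/(m*K))
h, R, Nu = thermal_metrics(q_watts=100.0, area_m2=0.0025, dT_kelvin=14.0,
                           L_char_m=0.005, k_fluid=0.6)
print(round(h, 1), round(R, 4), round(Nu, 2))  # -> 2857.1 0.14 23.81
```

Nu > 1 indicates convection outperforming pure conduction across the characteristic length, which is why the Nusselt number ratio is a natural yardstick for comparing TPMS designs against the straight-channel baseline.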

23 pages, 3736 KiB  
Article
Performance Analysis of a Hybrid Complex-Valued CNN-TCN Model for Automatic Modulation Recognition in Wireless Communication Systems
by Hamza Ouamna, Anass Kharbouche, Noureddine El-Haryqy, Zhour Madini and Younes Zouine
Appl. Syst. Innov. 2025, 8(4), 90; https://doi.org/10.3390/asi8040090 - 28 Jun 2025
Viewed by 598
Abstract
This paper presents a novel deep learning-based automatic modulation recognition (AMR) model, designed to classify ten modulation types from complex I/Q signal data. The proposed architecture, named CV-CNN-TCN, integrates Complex-Valued Convolutional Neural Networks (CV-CNNs) with Temporal Convolutional Networks (TCNs) to jointly extract spatial and temporal features while preserving the inherent phase information of the signal. An enhanced variant, CV-CNN-TCN-DCC, incorporates dilated causal convolutions to further strengthen temporal representation. The models are trained and evaluated on the benchmark RadioML2016.10b dataset. At SNR = −10 dB, the CV-CNN-TCN achieves a classification accuracy of 37%, while the CV-CNN-TCN-DCC improves to 40%. In comparison, ResNet reaches 33%, and other models such as CLDNN (convolutional LSTM dense neural network) and SCRNN (Sequential Convolutional Recurrent Neural Network) remain below 30%. At 0 dB SNR, the CV-CNN-TCN-DCC achieves a Jaccard index of 0.58 and an MCC of 0.67, outperforming ResNet (0.55, 0.64) and CNN (0.53, 0.61). Furthermore, the CV-CNN-TCN-DCC achieves 75% accuracy at SNR = 10 dB and maintains over 90% classification accuracy for SNRs above 2 dB. These results demonstrate that the proposed architectures, particularly with dilated causal convolutional enhancements, significantly improve robustness and generalization under low-SNR conditions, outperforming state-of-the-art models in both accuracy and reliability. Full article
(This article belongs to the Section Artificial Intelligence)
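Why complex-valued filtering preserves phase can be shown with a minimal sketch of a complex convolution over I/Q samples; the tone frequency and taps are illustrative, and the kernel-flip is omitted as is conventional in deep learning layers:

```python
import numpy as np

def complex_conv1d(x, w):
    """Valid-mode complex 'convolution' (cross-correlation) on I/Q samples:
    (a+jb)(c+jd) products keep phase information, unlike treating I and Q
    as two unrelated real channels."""
    n, k = len(x), len(w)
    return np.array([np.sum(x[i:i + k] * w) for i in range(n - k + 1)])

# a pure tone: a matched complex kernel preserves the unit envelope and phase ramp
t = np.arange(16)
x = np.exp(1j * 2 * np.pi * 0.1 * t)                    # synthetic I/Q signal
w = np.exp(-1j * 2 * np.pi * 0.1 * np.arange(4)) / 4    # matched complex taps
y = complex_conv1d(x, w)
print(y.shape, bool(np.allclose(np.abs(y), 1.0)))       # (13,) True
```

A real-valued layer applied separately to I and Q cannot express this rotation-equivariant operation, which is the motivation for CV-CNN front-ends in modulation recognition.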

21 pages, 3698 KiB  
Article
Research on Bearing Fault Diagnosis Method Based on MESO-TCN
by Ruibin Gao, Jing Zhu, Yifan Wu, Kaiwen Xiao and Yang Shen
Machines 2025, 13(7), 558; https://doi.org/10.3390/machines13070558 - 27 Jun 2025
Viewed by 252
Abstract
To address the issues of information redundancy, limited feature representation, and empirically set parameters in rolling bearing fault diagnosis, this paper proposes a Multi-Entropy Screening and Optimization Temporal Convolutional Network (MESO-TCN). The method integrates feature filtering, network modeling, and parameter optimization into a unified diagnostic framework. Specifically, ensemble empirical mode decomposition (EEMD) is combined with a hybrid entropy criterion to preprocess the raw vibration signals and suppress redundant noise. A kernel-extended temporal convolutional network (ETCN) is designed with multi-scale dilated convolution to extract diverse temporal fault patterns. Furthermore, an improved whale optimization algorithm incorporating a firefly-inspired mechanism is introduced to adaptively optimize key hyperparameters. Experimental results on datasets from Xi’an Jiaotong University and Southeast University demonstrate that MESO-TCN achieves average accuracies of 99.78% and 95.82%, respectively, outperforming mainstream baseline methods. These findings indicate the method’s strong generalization ability, feature discriminability, and engineering applicability in intelligent fault diagnosis of rotating machinery. Full article
(This article belongs to the Section Machines Testing and Maintenance)
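An entropy criterion for ranking decomposed components can be sketched with permutation entropy; this is one representative entropy measure chosen for illustration, since the paper's exact hybrid entropy criterion is not given here:

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy in [0, 1]: low for regular components,
    high for noise-like components (candidates for suppression)."""
    patterns = {}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        pat = tuple(np.argsort(x[i:i + (order - 1) * delay + 1:delay]))
        patterns[pat] = patterns.get(pat, 0) + 1
    p = np.array(list(patterns.values()), dtype=float) / n
    return float(-(p * np.log(p)).sum() / log(factorial(order)))

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 500)
pe_tone = permutation_entropy(np.sin(t))                  # smooth component -> low entropy
pe_noise = permutation_entropy(rng.normal(size=500))      # noise-like component -> high entropy
print(round(pe_tone, 2), round(pe_noise, 2))
```

Scoring each EEMD mode this way lets a screening stage keep the structured, fault-bearing components and discard the redundant noisy ones before they reach the network.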

26 pages, 10233 KiB  
Article
Time-Series Forecasting Method Based on Hierarchical Spatio-Temporal Attention Mechanism
by Zhiguo Xiao, Junli Liu, Xinyao Cao, Ke Wang, Dongni Li and Qian Liu
Sensors 2025, 25(13), 4001; https://doi.org/10.3390/s25134001 - 26 Jun 2025
Viewed by 539
Abstract
In the field of intelligent decision-making, time-series data collected by sensors serve as the core carrier for interaction between the physical and digital worlds. Accurate analysis is the cornerstone of decision-making in critical scenarios such as industrial monitoring and intelligent transportation. However, the inherent spatio-temporal coupling and cross-period long-range dependencies of sensor data cause traditional time-series prediction methods to face performance bottlenecks in feature decoupling and multi-scale modeling. This study proposes a Spatio-Temporal Attention-Enhanced Network (TSEBG). Breaking through traditional structural designs, the model employs a Squeeze-and-Excitation Network (SENet) to reconstruct the convolutional layers of the Temporal Convolutional Network (TCN), strengthening the feature expression of key time steps through dynamic channel weight allocation to address the redundancy of traditional causal convolutions in local pattern capture. A Bidirectional Gated Recurrent Unit (BiGRU) variant based on a global attention mechanism is designed, leveraging the collaboration between gating units and attention weights to mine cross-period long-distance dependencies and effectively alleviate the vanishing-gradient problem of recurrent neural network (RNN)-type models in multi-scale time-series analysis. A hierarchical feature fusion architecture is constructed to achieve multi-dimensional alignment of local spatial and global temporal features. Through residual connections and dynamic adjustment of attention weights, hierarchical semantic representations are output. Experiments show that TSEBG outperforms current dominant models in single-step time-series prediction tasks in both accuracy and overall performance, with a cross-dataset R2 standard deviation of only 3.7%, demonstrating excellent generalization stability.
It provides a novel theoretical framework for feature decoupling and multi-scale modeling of complex time-series data. Full article
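The paper's code is not reproduced here; as a minimal numpy sketch of the SENet-style channel reweighting the abstract describes (squeeze a TCN feature map over time, excite through a bottleneck MLP, rescale each channel), with all shapes and weights illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_channel_attention(x, w1, w2):
    """Squeeze-and-Excitation over a (channels, time) feature map:
    squeeze by global average pooling over time, excite through a
    two-layer bottleneck MLP, then rescale each channel."""
    z = x.mean(axis=1)                         # squeeze: (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))  # excitation weights in (0, 1)
    return x * s[:, None]                      # reweight channels

rng = np.random.default_rng(0)
C, T, r = 8, 32, 2                      # channels, time steps, reduction ratio
x = rng.standard_normal((C, T))         # e.g. the output of a TCN block
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
y = se_channel_attention(x, w1, w2)
assert y.shape == x.shape
```

Because the excitation weights lie in (0, 1), the block can only attenuate channels, which is what lets training emphasize key time-step features by suppressing the rest.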
19 pages, 2565 KiB  
Article
Rolling Bearing Fault Diagnosis via Temporal-Graph Convolutional Fusion
by Fan Li, Yunfeng Li and Dongfeng Wang
Sensors 2025, 25(13), 3894; https://doi.org/10.3390/s25133894 - 23 Jun 2025
Viewed by 501
Abstract
To address the challenge of incomplete fault feature extraction in rolling bearing fault diagnosis under small-sample conditions, this paper proposes a Temporal-Graph Convolutional Fusion Network (T-GCFN). The method enhances diagnostic robustness through collaborative extraction and dynamic fusion of features from time-domain and frequency-domain branches. First, Variational Mode Decomposition (VMD) was employed to extract time-domain Intrinsic Mode Functions (IMFs). These were then input into a Temporal Convolutional Network (TCN) to capture multi-scale temporal dependencies. Simultaneously, frequency-domain features obtained via Fast Fourier Transform (FFT) were used to construct a K-Nearest Neighbors (KNN) graph, which was processed by a Graph Convolutional Network (GCN) to identify spatial correlations. Subsequently, a channel attention fusion layer was designed. This layer utilized global max pooling and average pooling to compress spatio-temporal features. A shared Multi-Layer Perceptron (MLP) then established inter-channel dependencies to generate attention weights, enhancing critical features for more complete fault information extraction. Finally, a SoftMax classifier performed end-to-end fault recognition. Experiments demonstrated that the proposed method significantly improved fault recognition accuracy under small-sample scenarios. These results validate the strong adaptability of the T-GCFN mechanism. Full article
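The abstract's channel attention fusion layer (global max and average pooling compress the concatenated branch features, a shared MLP produces per-channel weights) follows the CBAM-style pattern; a hedged numpy sketch, with the branch features and weight shapes invented for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention_fusion(x, w1, w2):
    """Compress a (channels, time) map with global max and average pooling,
    pass both summaries through a shared two-layer MLP, sum the results,
    and squash them into per-channel attention weights."""
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)   # shared MLP
    s = sigmoid(mlp(x.max(axis=1)) + mlp(x.mean(axis=1)))
    return x * s[:, None]

rng = np.random.default_rng(1)
C, T = 16, 64
fused = np.concatenate([rng.standard_normal((8, T)),    # e.g. TCN branch
                        rng.standard_normal((8, T))])   # e.g. GCN branch
w1 = rng.standard_normal((C // 4, C)) * 0.1
w2 = rng.standard_normal((C, C // 4)) * 0.1
out = channel_attention_fusion(fused, w1, w2)
assert out.shape == fused.shape
```

Sharing one MLP across both pooled summaries keeps the fusion layer small, which matters under the small-sample conditions the paper targets.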
(This article belongs to the Section Fault Diagnosis & Sensors)
17 pages, 3392 KiB  
Article
Crop Classification Using Time-Series Sentinel-1 SAR Data: A Comparison of LSTM, GRU, and TCN with Attention
by Yuta Tsuchiya and Rei Sonobe
Remote Sens. 2025, 17(12), 2095; https://doi.org/10.3390/rs17122095 - 18 Jun 2025
Viewed by 613
Abstract
This study investigates the performance of temporal deep learning models with attention mechanisms for crop classification using Sentinel-1 C-band synthetic aperture radar (C-SAR) data. A time series of 16 scenes, acquired at 12-day intervals from 25 April to 22 October 2024, was used to classify six crop types: beans, beetroot, grassland, maize, potato, and winter wheat. Three temporal models—long short-term memory (LSTM), bidirectional gated recurrent unit (Bi-GRU), and temporal convolutional network (TCN)—were evaluated with and without an attention mechanism. All model configurations achieved accuracies above 83%, demonstrating the potential of Sentinel-1 SAR data for reliable, weather-independent crop classification. The TCN with attention model achieved the highest accuracy of 85.7%, significantly outperforming the baseline. LSTM also showed improved accuracy when combined with attention, whereas Bi-GRU did not benefit from the attention mechanism. These results highlight the effectiveness of combining temporal deep learning models with attention mechanisms to enhance crop classification using Sentinel-1 SAR time-series data. This study further confirms that freely available, regularly acquired Sentinel-1 observations are well-suited for robust crop mapping under diverse environmental conditions. Full article
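The exact attention variant used with the LSTM, Bi-GRU, and TCN backbones is not specified in the abstract; one common choice for this setup is global attention pooling over the per-scene hidden states before classification. A minimal numpy sketch under that assumption (the query vector and dimensions are illustrative):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(h, v):
    """Global attention over per-time-step hidden states h of shape (T, D):
    score each step against a learned query v, softmax the scores, and
    return the weighted sum as a single feature vector for the classifier."""
    a = softmax(h @ v)          # (T,) attention weights, sum to 1
    return a @ h, a             # pooled (D,) representation + weights

rng = np.random.default_rng(2)
T, D = 16, 32                   # 16 Sentinel-1 scenes, D hidden units
h = rng.standard_normal((T, D)) # e.g. LSTM/TCN outputs per acquisition
v = rng.standard_normal(D)
ctx, a = attention_pool(h, v)
assert ctx.shape == (D,) and abs(a.sum() - 1.0) < 1e-9
```

The learned weights let the classifier lean on the acquisition dates that best separate crop types, rather than weighting all 16 scenes equally.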
