Search Results (491)

Search Parameters:
Keywords = temporal convolutional network (TCN)

24 pages, 22374 KB  
Article
A Hybrid Drone SINS/GNSS Information Fusion Method Based on Attention-Augmented TCN in GNSS-Denied Environments
by Chuan Xu, Shuai Chen, Daxiang Zhao, Zhikuan Hou and Changhui Jiang
Remote Sens. 2026, 18(9), 1379; https://doi.org/10.3390/rs18091379 - 29 Apr 2026
Abstract
In the field of drone navigation, an integrated strapdown inertial navigation system (SINS)/global navigation satellite system (GNSS) can provide a high-precision positioning solution. But when satellite signals are jammed or blocked by tall buildings, SINS errors diverge rapidly under complex airflow and mechanical vibrations, seriously degrading navigation accuracy. To enhance positioning performance in this situation, this paper proposes a hybrid information fusion method based on an attention-augmented temporal convolutional network (TCN) for drone SINS/GNSS navigation systems. A feature integration and prediction model is constructed to provide a pseudo-positioning reference for the integrated navigation filter during GNSS-denied periods: a TCN establishes a predictive positioning error correction model from inertial measurements and SINS data, while a self-attention module is incorporated to extract complex global drone motion features. The performance of the proposed method is experimentally verified using Global Positioning System (GPS) and SINS data collected from a real drone flight test. Comparisons among the proposed model, SINS with TCN, SINS with a converged Kalman filter (KF) prediction stage, and SINS-only indicate that the proposed method effectively improves drone positioning accuracy in specific GNSS-denied environments. Full article
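The building block behind every TCN in these results is the dilated causal convolution: the output at time t sees only x[t], x[t-d], x[t-2d], and so on, never the future. A minimal NumPy sketch of that operation (generic illustration, not code from any of the listed papers; the kernel and input are arbitrary):

```python
import numpy as np

def causal_dilated_conv(x, kernel, dilation):
    """1-D causal convolution with dilation: the output at time t depends
    only on x[t], x[t-d], x[t-2d], ... so no future information leaks in."""
    k = len(kernel)
    pad = (k - 1) * dilation              # left-pad so output length == input length
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(kernel[j] * xp[i + pad - j * dilation] for j in range(k))
        for i in range(len(x))
    ])

# Stacking blocks with dilations 1, 2, 4, ... grows the receptive field
# exponentially: kernel size k at depth L covers 1 + (k-1)*(2**L - 1) steps.
x = np.arange(6, dtype=float)             # toy input sequence
y = causal_dilated_conv(x, np.array([1.0, 1.0]), dilation=2)
# y[t] = x[t] + x[t-2], with zeros before the series start -> [0, 1, 2, 4, 6, 8]
```

This exponential receptive-field growth is what lets a shallow TCN model long inertial-measurement histories at low cost.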

23 pages, 2846 KB  
Article
Predicting Emergency Department Patient Arrivals at Hospitals Using Machine Learning Techniques
by Abdulmajeed M. Alenezi, Mahmoud Sameh, Meshal Aljohani and Hosam Alharbi
Healthcare 2026, 14(9), 1191; https://doi.org/10.3390/healthcare14091191 - 29 Apr 2026
Abstract
Background/Objective: Emergency Departments (EDs) face persistent challenges with overcrowding, unpredictable patient arrivals, and difficulty forecasting short-term demand. Precise hourly arrival predictions are crucial for effective staffing, optimal resource management, and minimizing entry delays. Methods: This paper develops and evaluates a forecasting framework comparing six approaches (a Seasonal Naive baseline, Exponential Smoothing (ETS), Ridge Regression, LightGBM, a hybrid Temporal Convolutional Network (TCN), and a hybrid Long Short-Term Memory (LSTM) network) using de-identified hourly patient arrival records from an ED in Madinah, Saudi Arabia, covering January–November 2024. A set of 183 engineered features is constructed from cyclical time encodings, weekend and public-holiday indicators, structured autoregressive lags, and volatility measures, with all lag-based features verified to use strictly retrospective information. Models are optimized using Bayesian hyperparameter search and trained under an asymmetric loss function that penalizes underprediction to reflect operational risk. Results: Results on a 14-day hold-out test set show that Ridge Regression achieves the lowest MAE (3.75, R² = 0.52), with TCN and LSTM essentially tied (MAE 3.80 and 3.85). Diebold–Mariano tests confirm that Ridge, TCN, and LSTM are statistically indistinguishable from one another and that Ridge is marginally significantly better than LightGBM (p = 0.028); all four ML models significantly outperform ETS and the Seasonal Naive baseline (p < 0.001). On the asymmetric metric, TCN achieves the best AsymRMSE (5.59), reflecting its tendency to err on the safe side of staffing decisions. Robustness is confirmed through sensitivity analysis across penalty factors, feature ablation demonstrating the contribution of each feature group without overfitting, expanding-window cross-validation across three independent monthly test periods, and conformal prediction intervals achieving well-calibrated coverage. Conclusions: These results demonstrate that combining engineered temporal features with either a lightweight linear model or a hybrid sequence model yields accurate hourly ED arrival forecasts; whether the achieved accuracy is operationally sufficient for staffing decisions remains a site-specific question that requires clinical validation beyond the scope of this single-center study. Full article
(This article belongs to the Special Issue AI-Driven Healthcare: Transforming Patient Care and Outcomes)
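The asymmetric loss idea, penalizing underprediction more than overprediction because understaffing is costlier than overstaffing, is simple to state in code. A minimal sketch with a squared-error base and an illustrative penalty weight of 2; the paper's exact loss and weight are not reproduced here:

```python
import numpy as np

def asymmetric_loss(y_true, y_pred, under_penalty=2.0):
    """Mean squared error that weights underprediction (y_pred < y_true)
    by `under_penalty`. The weight of 2.0 is an illustrative assumption."""
    err = y_true - y_pred
    w = np.where(err > 0, under_penalty, 1.0)   # err > 0 means we underpredicted
    return np.mean(w * err ** 2)

# Missing by 2 on the low side now costs twice as much as on the high side:
lo = asymmetric_loss(np.array([10.0]), np.array([12.0]))   # overpredict  -> 4.0
hi = asymmetric_loss(np.array([10.0]), np.array([8.0]))    # underpredict -> 8.0
```

Training any of the six models under such a loss biases forecasts slightly upward, which is the "safe side" behavior the abstract attributes to the TCN.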

22 pages, 16582 KB  
Article
Temporal Convolutional Network–Transformer Hybrid Architecture with Hippo Optimization for Lithium Battery SOC Estimation
by Long Wu, Yang Wang and Likun Xing
World Electr. Veh. J. 2026, 17(5), 236; https://doi.org/10.3390/wevj17050236 - 29 Apr 2026
Abstract
As an important state parameter in battery management systems, accurate state of charge (SOC) estimation is of great significance for the safe and reliable use of batteries. In this paper, a Temporal Convolutional Network–Transformer (TCN–Transformer) model is proposed for achieving accurate estimation of SOC. First, the TCN is integrated in series with the Transformer model. This integration not only extracts the local characteristics of time-series data but also captures broader spatiotemporal correlations, thereby enhancing the feature representation and achieving highly accurate estimation. However, since the hyperparameter settings of neural networks have a significant impact on model performance, this study employs the advanced hippo optimization (HO) algorithm to determine the optimal values for the number of filters, filter size, number of residual blocks, and number of encoder layers, ultimately improving the model’s stability and efficiency. Finally, the proposed model was tested under various dynamic driving conditions at different temperatures. Experimental validation on the CALCE dataset demonstrates that the proposed HO–TCN–Transformer achieves RMSE and MAE both under 0.7%, representing an approximately 50% overall error reduction compared to the standalone TCN. Cross-validation across five folds confirms robust performance with <7% standard deviation. Full article
(This article belongs to the Section Storage Systems)
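In the series coupling above, features the TCN extracts locally are then mixed globally by the Transformer's attention. A minimal single-head, projection-free sketch of scaled dot-product self-attention (real encoders use learned Q/K/V projections and multiple heads; the feature matrix is arbitrary):

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a (T, d) feature sequence.
    Q = K = V = X here for brevity; a real Transformer encoder applies
    learned projections per head."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                    # (T, T) pairwise similarities
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                # row-wise softmax
    return w @ X                                     # each step mixes the whole sequence

# Features a TCN stage might emit for 5 timesteps (values arbitrary):
feats = np.random.default_rng(0).normal(size=(5, 4))
mixed = self_attention(feats)                        # same (5, 4) shape, globally mixed
```

The local-then-global split is the design rationale the abstract describes: convolution captures short-range battery dynamics, attention the broader correlations.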

44 pages, 36503 KB  
Article
A Dual-Branch ST-GCN System for Joint Recognition of OOW Unsafe Behaviors and Facial Fatigue Features
by Rui Qi, Shengwei Xing, Kairen Chen, Zijian Zhang and Xiaoyu He
Electronics 2026, 15(9), 1852; https://doi.org/10.3390/electronics15091852 - 27 Apr 2026
Abstract
The Officer on Watch (OOW) is critical to ensuring the safety of the vessel, cargo, and crew during navigation. To reduce maritime accidents caused by unsafe behaviors or fatigue, this paper proposes a dual-branch detection system based on Spatial–Temporal Graph Convolutional Networks (ST-GCN): BODY-ST-GCN for pose-based behavior recognition and FACE-ST-GCN for facial state analysis. For spatial modeling, a Triple Graph Fusion (TGF) strategy is introduced to integrate static, adaptive, and attention graphs, enhancing the representation of skeletal and facial keypoints. For temporal modeling, BODY-ST-GCN incorporates a Three-Scale Parallel Temporal Convolutional Network (TSP-TCN) to capture multi-scale motion dynamics, while FACE-ST-GCN uses a Temporal Adaptive Module (TAM) to extract stable facial state features. Furthermore, a joint risk classification mechanism categorizes OOW duty states into four hierarchical levels: Safe, Early Fatigue Warning, High Fatigue Risk, and Emergency. This mechanism enables continuous, real-time monitoring and dynamic assessment. Experiments demonstrate that BODY-ST-GCN and FACE-ST-GCN achieve macro average precisions of 0.969 and 0.947, respectively, outperforming the baseline ST-GCN by 6.4% and 14.9%, providing reliable technical support for onboard safety management. Full article
24 pages, 8285 KB  
Article
Regional Short-Term PV Power Forecasting Based on Graph Convolution and Transformer Networks
by Qinggui Chen, Ziqi Liu and Zhao Zhen
Electronics 2026, 15(9), 1817; https://doi.org/10.3390/electronics15091817 - 24 Apr 2026
Abstract
Accurate short-term photovoltaic (PV) power forecasting is essential for power system scheduling and market operations. Existing studies have shown the value of numerical weather prediction (NWP), graph-based spatial modeling, and temporal sequence learning, but these contributions remain fragmented across practical forecasting frameworks. In particular, adjacent multi-point NWP information is often not explicitly organized according to its spatial relationships, while historical similar-day power is rarely integrated with graph-structured meteorological features in a unified model. To address this gap, this study develops a short-term PV power forecasting framework that combines multi-point NWP graph construction with similar-day-guided Transformer fusion. First, predicted irradiance from the target site and neighboring NWP points is organized as a graph, and a Graph Convolutional Network (GCN) is used to extract local spatial meteorological features. Second, similar days are identified through a two-stage selection strategy based on Euclidean distance and Pearson correlation, and the corresponding historical power sequences are aggregated as temporal guidance. Finally, the graph-extracted NWP features, similar-day power, and predicted humidity are fused by a Transformer-based temporal modeling module to generate day-ahead PV power forecasts. Experimental results show that the proposed framework outperforms TCN-Transformer, Transformer, GCN, LSTM, and BP on the studied dataset, and maintains favorable performance on additional PV stations. These results indicate that the joint integration of graph-structured multi-point NWP information and historical similar-day power is effective for short-term PV power forecasting. Full article
(This article belongs to the Special Issue AI Applications for Smart Grid: 2nd Edition)
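The graph-convolution step that pools neighboring NWP points can be sketched with the standard Kipf-Welling propagation rule; the three-node adjacency, irradiance values, and weight below are hypothetical, not the paper's:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: ReLU(D^{-1/2} (A+I) D^{-1/2} X W),
    the standard normalized propagation rule. A is a (hypothetical)
    NWP-point adjacency, X per-node features, W a learned weight."""
    A_hat = A + np.eye(len(A))                    # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)  # ReLU

# Target site (node 0) linked to two neighboring NWP grid points:
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
X = np.array([[0.8], [0.6], [0.7]])               # predicted irradiance per node
out = gcn_layer(A, X, np.array([[1.0]]))          # node 0 now blends its neighbors
```

Each propagation layer lets a node's feature absorb one more hop of spatial context, which is how the target site's forecast comes to reflect the surrounding weather field.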

29 pages, 1833 KB  
Article
MSTFNet: Multi-Scale Temporal Fusion Network with Frequency-Enhanced Attention for Financial Time Series Forecasting
by Qian Xia and Wenhao Kang
Mathematics 2026, 14(8), 1391; https://doi.org/10.3390/math14081391 - 21 Apr 2026
Abstract
Financial time series forecasting remains a persistent challenge due to the non-stationary nature, inherent noise, and multi-scale temporal dependencies present in market data. This paper presents MSTFNet, a multi-scale temporal fusion network that combines dilated causal convolutions with a frequency-enhanced sparse attention mechanism for improved financial prediction. The proposed architecture consists of three core components: a multi-scale dilated causal convolution module that extracts temporal patterns across different time horizons through parallel convolutional branches with varying dilation rates, a frequency-enhanced sparse attention mechanism that leverages the Fast Fourier Transform to identify dominant periodic components and modulate attention weights accordingly, and an adaptive scale fusion gate that learns to dynamically combine representations from multiple temporal scales. Extensive experiments conducted on three public financial datasets (S&P 500, CSI 300, and NASDAQ Composite) spanning the period from January 2015 to December 2024 show two key results. First, consistent with near-efficient markets, the random-walk benchmark (ŷ_{t+1} = y_t) outperforms all the data-driven models on level-error metrics (MAE, RMSE, MAPE, and R²), establishing the martingale as the binding lower bound on point-prediction error. Second, MSTFNet achieves the highest directional accuracy (DA) across all three indices—56.3% on the S&P 500 versus 50.0% for the martingale—representing a 6.3 percentage-point improvement that generates positive pre-cost returns in a trading strategy backtest. Among the eight data-driven baselines (LSTM, GRU, TCN, Transformer, Autoformer, FEDformer, PatchTST, and iTransformer), MSTFNet also achieves the lowest MAE, reducing it by 13.6% relative to the strongest data-driven baseline (iTransformer) on the S&P 500. These results confirm that integrating multi-scale temporal modeling with frequency-domain guidance extracts a real, if modest, directional signal from financial time series. Full article
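The frequency-domain step, using the FFT to find the dominant periodic component that then steers attention, reduces to a peak search over the spectrum. A simplified stand-in for MSTFNet's mechanism; the synthetic series below is purely illustrative:

```python
import numpy as np

def dominant_period(x):
    """Return the period (in samples) of the strongest periodic component,
    found as the peak of the FFT magnitude spectrum. A toy stand-in for
    the frequency cue that modulates MSTFNet-style attention weights."""
    x = x - x.mean()                      # drop the DC component
    spec = np.abs(np.fft.rfft(x))
    k = 1 + np.argmax(spec[1:])           # skip bin 0, pick the peak frequency bin
    return len(x) / k                     # period in samples

t = np.arange(256)
x = np.sin(2 * np.pi * t / 32) + 0.1 * np.sin(2 * np.pi * t / 7)
# dominant_period(x) -> 32.0 (the strong 32-sample cycle wins over the weak 7-sample one)
```

In the full model, attention positions aligned with the detected period would receive boosted weights, focusing capacity on recurring market rhythms.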

19 pages, 3398 KB  
Article
A Hybrid TCN-Attention-BiLSTM Framework for AIS-Based Nearshore Vessel Speed Prediction and Risk Warning
by Xin Liu, Zhaona Chen, Yu Cao and Dan Zhang
Appl. Sci. 2026, 16(8), 3978; https://doi.org/10.3390/app16083978 - 19 Apr 2026
Abstract
Accurate vessel speed prediction is essential for maritime traffic supervision, navigational safety, and intelligent coastal management. However, due to the nonlinear, time-varying, and context-dependent characteristics of vessel motion in nearshore waters, conventional single-model approaches often fail to provide sufficiently accurate forecasts. To address this issue, this study proposes a hybrid deep learning framework for Automatic Identification System (AIS)-based nearshore vessel speed prediction and risk warning, integrating a temporal convolutional network (TCN), an attention mechanism, and a bidirectional long short-term memory network (BiLSTM) into a unified architecture. The core novelty of this framework is its task-oriented sequential design, in which TCN extracts local temporal patterns and multi-scale sequence features from historical AIS observations, the attention mechanism adaptively emphasizes informative representations, and BiLSTM models bidirectional contextual dependencies in vessel motion sequences; on this basis, a speed-risk warning process is constructed by combining the predicted speed with electronic-fence threshold constraints. Experiments conducted on real AIS data from coastal waters show that the proposed method obtains lower mean absolute error (MAE), mean squared error (MSE), and root mean square error (RMSE) as well as a higher coefficient of determination (R²) than several benchmark models. The results illustrate that the proposed framework effectively improves vessel speed prediction accuracy within the studied coastal area and provides practical support for proactive maritime supervision and nearshore safety management. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

29 pages, 4784 KB  
Article
Incipient Fault Diagnosis in Power Cables Based on WOA-CEEMDAN and a TCN-BiLSTM Network with Multi-Head Attention
by Yuhua Xing and Yaolong Yin
Appl. Sci. 2026, 16(8), 3908; https://doi.org/10.3390/app16083908 - 17 Apr 2026
Abstract
Incipient faults in power cables are difficult to diagnose because their transient signatures are weak, non-stationary, and easily masked by background noise, while labeled real-world samples are often scarce. To address these challenges, this paper proposes an offline diagnosis framework that integrates Whale Optimization Algorithm (WOA)-guided CEEMDAN with a TCN-BiLSTM network with Multi-Head Attention. The proposed method has three main features. First, WOA is explicitly mapped to the CEEMDAN parameter optimization problem and is used to adaptively optimize the noise amplitude and ensemble number, thereby improving decomposition quality and enhancing weak fault-related components. Second, the optimized intrinsic mode functions are reconstructed into a multi-channel representation that preserves complementary fault information across different frequency bands. Third, a hybrid deep architecture combining Temporal Convolutional Networks, Bidirectional Long Short-Term Memory, and Multi-Head Attention is designed to jointly capture local transient characteristics, bidirectional temporal dependencies, and fault-sensitive feature interactions. Experimental results on both PSCAD/EMTDC simulation data and real-world measured data show that the optimized WOA-CEEMDAN achieves superior decomposition performance, with an RMSE of 0.097 and an SNR of 8.42 dB. On the real-world test dataset, the proposed framework achieves 96.00% accuracy, 97.25% precision, 96.84% recall, an F1-score of 0.970, and an AUC of 0.97, outperforming several representative baseline models. Additional ablation, noise-robustness, small-sample, confusion-matrix, and cross-cable validation results further demonstrate the effectiveness and robustness of the proposed framework for incipient cable fault diagnosis. Full article
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

29 pages, 2959 KB  
Article
A Diffusion-Augmented GWO-TCN-PSA Method for Real-Time Inverse Kinematics in Robotic Manipulator Applications
by Baiyang Wang, Xiangxiao Zeng, Ming Fang, Fang Li and Hongjun Wang
Electronics 2026, 15(8), 1688; https://doi.org/10.3390/electronics15081688 - 16 Apr 2026
Abstract
This paper presents an efficient inverse kinematics (IK) solution for robotic manipulators, addressing the challenges of high computational complexity, low efficiency, and sensitivity to singularities associated with traditional methods. A data augmentation strategy is introduced, utilizing an enhanced Diffusion-TS model to generate diverse joint-angle samples and corresponding end-effector poses through forward kinematics, thereby creating a high-quality dataset. To improve real-time performance, a Temporal Convolutional Network (TCN) model is developed, optimized using the Grey Wolf Optimizer (GWO), and augmented with a probabilistic sparse attention mechanism to effectively capture key pose features. Experimental evaluations on the Jaka MiniCobo robotic arm demonstrate that the proposed method significantly reduces inference time while maintaining high accuracy, making it suitable for real-world applications that demand both speed and precision. Full article

17 pages, 2026 KB  
Article
A Regional Short-Term Wind Power Prediction Method Integrating DQN Error Correction with GCN-TCN-Transformer
by Wei Xu, Yulin Wang, Lihong Peng, Zixuan Wang, Sheng Zhang, Hongyi Lai, Yongjia Hu and Huankun Zheng
Processes 2026, 14(8), 1275; https://doi.org/10.3390/pr14081275 - 16 Apr 2026
Abstract
The inherent intermittency and uncertainty of wind power generation pose significant challenges to grid security and the integration of renewable energy. Accurate and reliable short-term wind power forecasting is crucial for enhancing wind energy usage and ensuring the safe operation of power systems. Current mainstream forecasting methods inadequately model spatial correlations among regional wind farms. Additionally, wind power generation is susceptible to sudden changes in weather conditions and environmental factors, limiting the robustness of existing forecasting methods when confronting dynamically changing prediction environments. This poses major challenges for accurate and reliable regional wind power forecasting. This paper employs Graph Convolutional Networks (GCN) to model spatial connections between wind farms while introducing a combined TCN-Transformer model for temporal feature extraction and dependency modeling. Furthermore, to enhance prediction accuracy and reliability, Deep Q-Network (DQN) is incorporated to dynamically correct model prediction errors. Experimental results demonstrate that the proposed short-term wind power forecasting method achieves an RMSE of 60.14 and an MAE of 45.98, showing significant improvement over predictions from models without DQN error correction and other comparative models. Future work may extend the forecasting horizon to provide more information support for grid supply security decisions. Full article
(This article belongs to the Special Issue Optimal Design, Control and Simulation of Energy Management Systems)
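The DQN correction stage rests on the Q-learning update, which a deep Q-network approximates with a neural function rather than a table. A tabular sketch under assumed toy states and actions (not the paper's state, action, or reward design):

```python
import numpy as np

def q_update(Q, s, a, r, s_next, lr=0.5, gamma=0.9):
    """One tabular Q-learning step: move Q[s, a] toward the bootstrapped
    target r + gamma * max_a' Q[s', a']. A DQN replaces the table with a
    network but keeps this update rule. States/actions are illustrative."""
    Q[s, a] += lr * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q

Q = np.zeros((2, 2))          # 2 toy error-regime states x 2 correction actions
Q = q_update(Q, s=0, a=1, r=1.0, s_next=1)
# Q[0, 1] -> 0.5 (lr * r, since Q started all zero)
```

In the forecasting pipeline, the reward would score how much a chosen correction reduced the residual error, steering the agent toward regime-appropriate adjustments.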

22 pages, 4648 KB  
Article
Digital Twin-Driven TLE Error Correction for Precise LEO Satellite Orbit Prediction
by Xinchen Xu, Hong Wen, Wenjing Hou, Liang Chen, Yingwei Zhao and Tian Liu
Aerospace 2026, 13(4), 375; https://doi.org/10.3390/aerospace13040375 - 16 Apr 2026
Abstract
Low Earth orbit (LEO) satellite orbit prediction is one of the key measures for compensating position errors and ensuring positioning accuracy, and it plays an important role in aerospace communication networks that undertake functions such as routing relay, real-time communication, and signal forwarding. However, existing learning-based orbit prediction models, although regarded as the state of the art, inevitably suffer from distribution bias: model performance degrades across different satellite types (LEO and SSO) and different time scales. This article explores a new method to overcome these shortcomings. Unlike previous methods that exploit the temporal correlation of orbit data, the proposed approach converts satellite orbit data into the frequency domain via the Fourier transform, using a third-order Fourier-derivative convolution framework. Specifically, the proposed Fourier dilation convolution (FDC) model combines frequency-domain analysis with dilated convolution and demonstrates better generalization across different satellite types and time scales. Two real datasets are used for experimental validation, and the results show the effectiveness of the proposed FDC model. Quantitative comparisons demonstrate that, compared to the seasonal-trend decomposition using Loess combined with a temporal convolutional network (STL-TCN) model, the FDC model reduces the mean absolute error (MAE) by approximately 10% to 85% across different orbital dimensions. Finally, we further analyze the interpretability of the model. Full article

27 pages, 4774 KB  
Article
Hybrid Temporal Convolutional Networks and Long Short-Term Memory Model for Accurate and Sustainable Wind–Solar Power Forecasting Leveraging Time-Frequency Joint Analysis and Multi-Head Self-Attention
by Yue Liu, Qinglin Cheng, Haiying Sun, Yaming Qi and Lingli Meng
Sustainability 2026, 18(8), 3904; https://doi.org/10.3390/su18083904 - 15 Apr 2026
Abstract
Accurate forecasting of wind and photovoltaic power remains challenging due to the strong nonlinearity, nonstationarity, and seasonal heterogeneity of renewable generation series. To address this issue, this study proposes a hybrid forecasting framework integrating time–frequency joint analysis (TFAA), temporal convolutional networks (TCN), long short-term memory (LSTM), and multi-head self-attention (MHSA). Wavelet transform is used to extract frequency-domain representations, which are jointly encoded with the original time-domain sequence through a dual-branch architecture and adaptively fused. The fused features are then processed by a TCN-LSTM backbone to capture both long-range dependencies and short-term dynamics, while MHSA is introduced to enhance global contextual modeling. Experiments on wind-farm and photovoltaic datasets from China, together with external validation on the NREL WIND Toolkit and the GEFCom2014 Solar benchmark, show that the proposed model achieves the best overall seasonal performance and maintains competitive improvements on public benchmarks. Additional ablation studies, repeated-run statistical validation, persistence-based skill-score analysis, prediction-interval evaluation, ramp-event assessment, meteorological-driver enrichment, permutation-based driver attribution, regime-conditioned error diagnostics, and transferability evidence analysis further confirm the effectiveness, robustness, physical consistency, and practical applicability of the proposed framework. The results indicate that the proposed model provides a reliable and operationally relevant solution for short-term wind and photovoltaic power forecasting. These findings further support sustainable renewable-energy integration, smart-grid dispatch, and low-carbon power-system operation. Full article

29 pages, 1375 KB  
Article
A Distribution-Free Neural Estimator for Mean Reversion, with Application to Energy Commodity Markets
by Carlo Mari and Emiliano Mari
Mathematics 2026, 14(8), 1302; https://doi.org/10.3390/math14081302 - 13 Apr 2026
Abstract
Accurate estimation of the mean-reversion speed α in the AR(1) process X_{t+1} = (1 − α)X_t + ε_t is central to energy-commodity modelling. Classical estimators such as GARCH, jump-diffusion, and regime-switching produce model-conditioned estimates by embedding α within distributional assumptions, so that different model choices yield different α̂ values from the same series without a principled criterion to adjudicate. We propose a distribution-free neural estimator based on a Temporal Convolutional Network (TCN) trained on synthetic AR(1) series with Sinh-ArcSinh (SAS) innovations. Distribution-free here means that no parametric family is assumed for the innovation distribution at inference time: the estimator imposes no distributional hypothesis when processing a new series. The SAS family serves as a training vehicle—not a model for the real data—chosen for its ability to span a broad range of tail weights and asymmetry profiles. The theoretical foundation is spectral invariance: the Yule–Walker equations establish that the autocorrelation structure ρ_k = (1 − α)^k depends on α alone, provided innovations are uncorrelated across lags—a condition satisfied not only by i.i.d. innovations but also by conditionally heteroscedastic processes such as GARCH. The TCN therefore generalises to volatility-clustering environments without modification, learning to extract α from temporal dependence alone, independently of the marginal innovation distribution and of the temporal variance structure. On held-out test series the estimator outperforms all classical competitors, with the advantage growing monotonically with non-Gaussianity. A robustness analysis on three out-of-distribution innovation families and on AR(1)-GARCH(1,1) processes empirically validates the spectral invariance guarantee across both marginal and temporal variance structure, including near-integrated GARCH processes where innovation kurtosis far exceeds the training range. The distribution-free α̂ enables a two-stage pipeline in which α and the innovation distribution are characterised independently—a decoupling structurally impossible in classical likelihood-based approaches. Once trained, the TCN acts as a universal mean-reversion estimator applicable to any price series without re-fitting. Applied to four energy markets—Italian natural gas (PSV price), Italian electricity (PUN price), US Henry Hub, and US PJM West Hub—spanning log-return kurtosis from near-Gaussian to strongly heavy-tailed, the TCN yields robust, distribution-free estimates of mean-reversion speed. Full article
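Because ρ_k = (1 − α)^k, the lag-1 sample autocorrelation alone yields the classical moment estimator α̂ = 1 − ρ̂_1. The paper's TCN learns a richer mapping, but this baseline makes the identity behind the spectral-invariance argument concrete; the simulation below is an illustration, not the paper's experiment:

```python
import numpy as np

def alpha_from_acf(x):
    """Moment estimator of the mean-reversion speed alpha in
    X_{t+1} = (1 - alpha) X_t + eps_t, using rho_1 = 1 - alpha.
    A classical baseline, not the paper's TCN estimator."""
    xc = x - x.mean()
    rho1 = (xc[:-1] * xc[1:]).sum() / (xc * xc).sum()   # lag-1 sample autocorrelation
    return 1.0 - rho1

# Simulate an AR(1) path with known alpha = 0.3 and Gaussian innovations:
rng = np.random.default_rng(0)
alpha = 0.3
x = np.zeros(20000)
for t in range(1, len(x)):
    x[t] = (1 - alpha) * x[t - 1] + rng.normal()
# alpha_from_acf(x) recovers a value close to 0.3
```

The autocorrelation structure, and hence this estimate, is unchanged if the Gaussian innovations are swapped for heavy-tailed or GARCH innovations, which is exactly the invariance the TCN exploits.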

31 pages, 3398 KB  
Article
Multimodal Smart-Skin for Real-Time Sitting Posture Recognition with Cross-Session Validation
by Giva Andriana Mutiara, Muhammad Rizqy Alfarisi, Paramita Mayadewi, Lisda Meisaroh and Periyadi
Multimodal Technol. Interact. 2026, 10(4), 39; https://doi.org/10.3390/mti10040039 - 9 Apr 2026
Viewed by 312
Abstract
Prolonged sitting with poor posture is associated with musculoskeletal disorders, reduced productivity, and long-term health risks. Many existing posture monitoring systems rely predominantly on single-modality sensing, such as pressure or vision-based approaches, limiting their ability to capture both static alignment and dynamic micro-movements. This study proposes a multimodal smart-skin system integrating pressure, temperature, and vibration sensors for sitting posture recognition. A total of 42 sensors distributed across 14 anatomical locations were deployed, generating 15,037 samples collected over three independent sessions to evaluate cross-session temporal generalization across nine posture classes under controlled experimental conditions. Two deep learning architectures, Temporal Convolutional Networks with Attention (TCN + Attn) and Convolutional Neural Network–Long Short-Term Memory (CNN–LSTM), were compared under Leave-One-Session-Out (LOSO) cross-validation. TCN + Attn achieved 85.23% LOSO accuracy, outperforming CNN–LSTM by 2.56 percentage points while reducing training time by 36.7% and inference latency by 33.9%. Ablation analysis revealed that temperature sensing was the most discriminative unimodal modality (71.5% accuracy), and full multimodal fusion improved LOSO accuracy by 22.93 percentage points compared to pressure-only configurations. These results demonstrate the feasibility of multimodal smart-skin sensing combined with temporal convolutional modeling for cross-session posture recognition and indicate potential for efficient real-time, privacy-preserving ergonomic monitoring. This study should be interpreted as a controlled, single-subject proof-of-concept, and further validation in multi-subject and real-world environments is required to establish broader generalizability. Full article
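The Leave-One-Session-Out protocol used here can be sketched in a few lines of plain Python (an illustrative helper, not the authors' code): each recording session is held out once as the test fold while all remaining sessions form the training fold, so no temporal session leaks between train and test.

```python
def loso_splits(session_ids):
    # Leave-One-Session-Out: every unique session becomes the test fold exactly once
    for held_out in sorted(set(session_ids)):
        train = [i for i, s in enumerate(session_ids) if s != held_out]
        test = [i for i, s in enumerate(session_ids) if s == held_out]
        yield held_out, train, test

# Toy example: 6 samples recorded across 3 sessions
sessions = ["s1", "s1", "s2", "s2", "s3", "s3"]
folds = list(loso_splits(sessions))  # 3 folds, one per session
```

Reported LOSO accuracy is then the mean test accuracy across these folds, which is why it is a stricter measure of cross-session generalization than a random split.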

16 pages, 1624 KB  
Article
Surface EMG-Based Hand Gesture Recognition Using a Hybrid Multistream Deep Learning Architecture
by Yusuf Çelik and Umit Can
Sensors 2026, 26(7), 2281; https://doi.org/10.3390/s26072281 - 7 Apr 2026
Viewed by 477
Abstract
Surface electromyography (sEMG) enables non-invasive measurement of muscle activity for applications such as human–machine interaction, rehabilitation, and prosthesis control. However, high noise levels, inter-subject variability, and the complex nature of muscle activation hinder robust gesture classification. This study proposes a multistream hybrid deep-learning architecture for the FORS-EMG dataset to address these challenges. The model integrates Temporal Convolutional Networks (TCN), depthwise separable convolutions, bidirectional Long Short-Term Memory (LSTM)–Gated Recurrent Unit (GRU) layers, and a Transformer encoder to capture complementary temporal and spectral patterns, and an ArcFace-based classifier to enhance class separability. We evaluate the approach under three protocols: subject-wise, random split without augmentation, and random split with augmentation. In the augmented random-split setting, the model attains 96.4% accuracy, surpassing previously reported values. In the subject-wise setting, accuracy is 74%, revealing limited cross-user generalization. The results demonstrate the method’s high performance and highlight the impact of data-partition strategies for real-world sEMG-based gesture recognition. Full article
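The TCN stream named in this architecture is built from causal dilated 1-D convolutions. A minimal plain-Python sketch (illustrative only, not the paper's implementation) shows the defining property: with dilation d, the output at time t combines x[t], x[t − d], x[t − 2d], …, so the receptive field grows with dilation while no future sample is ever used.

```python
def causal_dilated_conv1d(x, kernel, dilation):
    # out[t] = sum_j kernel[j] * x[t - j*dilation], with implicit left zero-padding;
    # indices beyond the start of the signal are skipped (treated as zero)
    out = []
    for t in range(len(x)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = t - j * dilation
            if idx >= 0:
                acc += w * x[idx]
        out.append(acc)
    return out

signal = [1.0, 2.0, 3.0, 4.0, 5.0]
# kernel [0, 1] with dilation 2 simply picks x[t - 2]:
# the first two outputs see only the zero-padding, confirming causality
shifted = causal_dilated_conv1d(signal, kernel=[0.0, 1.0], dilation=2)
```

Stacking such layers with dilations 1, 2, 4, … is what lets a TCN cover long sEMG windows with few parameters; the attention, recurrent, and Transformer streams in the hybrid model consume complementary views of the same window.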
(This article belongs to the Special Issue Machine Learning in Biomedical Signal Processing)
