

Search Results (1,325)

Search Parameters:
Keywords = BiLSTM network

21 pages, 1102 KB  
Article
Near-Real-Time Epileptic Seizure Detection with Reduced EEG Electrodes: A BiLSTM-Wavelet Approach on the EPILEPSIAE Dataset
by Kiyan Afsari, May El Barachi and Christian Ritz
Brain Sci. 2026, 16(1), 119; https://doi.org/10.3390/brainsci16010119 - 22 Jan 2026
Viewed by 48
Abstract
Background and Objectives: Epilepsy is a chronic neurological disorder characterized by recurrent seizures caused by abnormal brain activity. Reliable near-real-time seizure detection is essential for preventing injuries, enabling early interventions, and improving the quality of life for patients with drug-resistant epilepsy. This study presents a near-real-time epileptic seizure detection framework designed for low-latency operation, focusing on improving both clinical reliability and patient comfort through electrode reduction. Method: The framework integrates bidirectional long short-term memory (BiLSTM) networks with wavelet-based feature extraction using electroencephalogram (EEG) recordings from the EPILEPSIAE dataset. EEG signals from 161 patients comprising 1032 seizures were analyzed. Wavelet features were combined with raw EEG data to enhance temporal and spectral representation. Furthermore, electrode reduction experiments were conducted to determine the minimum number of strategically positioned electrodes required to maintain performance. Results: The optimized BiLSTM model achieved 86.9% accuracy, 86.1% recall, and an average detection delay of 1.05 s, with a total processing time of 0.065 s per 0.5 s EEG window. Results demonstrated that reliable detection is achievable with as few as six electrodes, maintaining comparable performance to the full configuration. Conclusions: These findings demonstrate that the proposed BiLSTM-wavelet approach provides a clinically viable, computationally efficient, and wearable-friendly solution for near-real-time epileptic seizure detection using reduced EEG channels. Full article
(This article belongs to the Section Neural Engineering, Neuroergonomics and Neurorobotics)
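The bidirectional recurrence at the core of this approach can be illustrated with a minimal NumPy sketch: one LSTM cell is run over the window forwards, a second cell runs backwards, and the two hidden states are concatenated per timestep. The dimensions and randomly initialized weights below are purely illustrative, not the paper's configuration.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gates are stacked as [input, forget, cell, output]."""
    n = h.shape[0]
    z = W @ x + U @ h + b                     # all four gates at once, shape (4n,)
    i = 1 / (1 + np.exp(-z[:n]))              # input gate
    f = 1 / (1 + np.exp(-z[n:2 * n]))         # forget gate
    g = np.tanh(z[2 * n:3 * n])               # candidate cell state
    o = 1 / (1 + np.exp(-z[3 * n:]))          # output gate
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def bilstm_forward(xs, params_fwd, params_bwd, hidden):
    """Run the sequence forwards and backwards; concatenate hidden states."""
    T = len(xs)
    out = np.zeros((T, 2 * hidden))
    h = c = np.zeros(hidden)
    for t in range(T):                        # forward direction
        h, c = lstm_step(xs[t], h, c, *params_fwd)
        out[t, :hidden] = h
    h = c = np.zeros(hidden)
    for t in reversed(range(T)):              # backward direction
        h, c = lstm_step(xs[t], h, c, *params_bwd)
        out[t, hidden:] = h
    return out

rng = np.random.default_rng(0)
d, n, T = 3, 4, 5                             # input dim, hidden dim, window length (toy values)
make = lambda: (rng.normal(0, 0.1, (4 * n, d)),
                rng.normal(0, 0.1, (4 * n, n)),
                np.zeros(4 * n))
xs = rng.normal(size=(T, d))                  # stand-in for one EEG feature window
H = bilstm_forward(xs, make(), make(), n)
print(H.shape)  # (5, 8): forward and backward states concatenated at each step
```

Each row of `H` sees both past and future context within the window, which is what distinguishes a BiLSTM from a unidirectional LSTM for offline or short-buffer detection.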

20 pages, 1124 KB  
Article
Scalable Neural Cryptanalysis of Block Ciphers in Federated Attack Environments
by Ongee Jeong, Seonghwan Park and Inkyu Moon
Mathematics 2026, 14(2), 373; https://doi.org/10.3390/math14020373 - 22 Jan 2026
Viewed by 22
Abstract
This paper presents an extended investigation into deep learning-based cryptanalysis of block ciphers by introducing and evaluating a multi-server attack environment. Building upon our prior work in centralized settings, we explore the practicality and scalability of deploying such attacks across multiple distributed edge servers. We assess the vulnerability of five representative block ciphers—DES, SDES, AES-128, SAES, and SPECK32/64—under two neural attack models: Encryption Emulation (EE) and Plaintext Recovery (PR), using both fully connected neural networks and Recurrent Neural Networks (RNNs) based on bidirectional Long Short-Term Memory (BiLSTM). Our experimental results show that the proposed federated learning-based cryptanalysis framework achieves performance nearly identical to that of centralized attacks, particularly for ciphers with low round complexity. Even as the number of edge servers increases to 32, the attack models maintain high accuracy in reduced-round settings. We validate our security assessments through formal statistical significance testing using two-tailed binomial tests with 99% confidence intervals. Additionally, our scalability analysis demonstrates that aggregation times remain negligible (<0.01% of total training time), confirming the computational efficiency of the federated framework. Overall, this work provides both a scalable cryptanalysis framework and valuable insights into the design of cryptographic algorithms that are resilient to distributed, deep learning-based threats. Full article
(This article belongs to the Section E: Applied Mathematics)

21 pages, 1463 KB  
Article
A Mathematical Framework for E-Commerce Sales Prediction Using Attention-Enhanced BiLSTM and Bayesian Optimization
by Hao Hu, Jinshun Cai and Chenke Xu
Math. Comput. Appl. 2026, 31(1), 17; https://doi.org/10.3390/mca31010017 - 22 Jan 2026
Viewed by 29
Abstract
Accurate sales prediction is crucial for inventory and marketing in e-commerce. Cross-border sales involve complex patterns that traditional models cannot capture. To address this, we propose an improved Bidirectional Long Short-Term Memory (BiLSTM) model, enhanced with an attention mechanism and Bayesian hyperparameter optimization. The attention mechanism focuses on key temporal features, improving trend identification. The BiLSTM captures both forward and backward dependencies, offering deeper insights into sales patterns. Bayesian optimization fine-tunes hyperparameters such as learning rate, hidden-layer size, and dropout rate to achieve optimal performance. These innovations together improve forecasting accuracy, making the model more adaptable and efficient for cross-border e-commerce sales. Experimental results show that the model achieves a Root Mean Square Error (RMSE) of 13.2, Mean Absolute Error (MAE) of 10.2, Mean Absolute Percentage Error (MAPE) of 8.7%, and a Coefficient of Determination (R2) of 0.92. It outperforms baseline models, including BiLSTM (RMSE 16.5, MAPE 10.9%), BiLSTM with Attention (RMSE 15.2, MAPE 10.1%), Temporal Convolutional Network (RMSE 15.0, MAPE 9.8%), and Transformer for Time Series (RMSE 14.8, MAPE 9.5%). These results highlight the model’s superior performance in forecasting cross-border e-commerce sales, making it a valuable tool for inventory management and demand planning. Full article
(This article belongs to the Special Issue New Trends in Computational Intelligence and Applications 2025)
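The attention mechanism described here (scoring each timestep, then forming a softmax-weighted combination of hidden states) can be sketched in a few lines of NumPy. The hidden states `H` and scoring vector `w` below are random stand-ins for illustration, not the paper's trained parameters.

```python
import numpy as np

def softmax(v):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(v - v.max())
    return e / e.sum()

def attention_pool(H, w):
    """Score each timestep, normalize with softmax, and return the
    attention-weighted sum of hidden states plus the weights."""
    scores = H @ w               # (T,) one scalar score per timestep
    alpha = softmax(scores)      # attention weights, sum to 1
    context = alpha @ H          # (d,) weighted combination of states
    return context, alpha

rng = np.random.default_rng(1)
T, d = 6, 4                      # timesteps, hidden size (toy values)
H = rng.normal(size=(T, d))      # stand-in for BiLSTM hidden states
context, alpha = attention_pool(H, rng.normal(size=d))
print(round(alpha.sum(), 6))     # 1.0 (the weights form a distribution)
```

The context vector, rather than only the final hidden state, feeds the prediction head, which is how attention lets the model emphasize the most informative timesteps.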

17 pages, 2935 KB  
Article
A Hybrid Deep Learning Framework for Non-Intrusive Load Monitoring
by Xiangbin Kong, Zhihang Gui, Minghu Wu, Chuyu Miao and Zhe Luo
Electronics 2026, 15(2), 453; https://doi.org/10.3390/electronics15020453 - 21 Jan 2026
Viewed by 144
Abstract
In recent years, load disaggregation and non-intrusive load-monitoring (NILM) methods have garnered widespread attention for optimizing energy management systems, becoming crucial tools for achieving energy efficiency and analyzing power consumption. However, existing NILM methods face challenges in accurately handling appliances with multiple operational states and suffer from low accuracy and poor computational efficiency, particularly in modeling long-term dependencies and complex appliance load patterns. This article proposes an improved NILM model based on an optimized Transformer architecture. The model first utilizes a convolutional neural network (CNN) to extract features from the input sequence and employs a bidirectional long short-term memory (BiLSTM) network to model long-term dependencies. Subsequently, multiple Transformer blocks are used to capture dependencies within the sequence. To validate the effectiveness of the proposed model, we applied it to real-world household energy datasets: UK-DALE and REDD. Compared with the second-best models, ours improves the F1 score by 24.5% and 22.8% on the two datasets, respectively. Full article
(This article belongs to the Section Artificial Intelligence)

14 pages, 11925 KB  
Technical Note
Detecting Mowed Tidal Wetlands Using Time-Series NDVI and LSTM-Based Machine Learning
by Mayeesha Humaira, Stephen Aboagye-Ntow, Chuyuan Wang, Alexi Sanchez de Boado, Mark Burchick, Leslie Wood Mummert and Xin Huang
Land 2026, 15(1), 193; https://doi.org/10.3390/land15010193 - 21 Jan 2026
Viewed by 122
Abstract
This study presents the first application of machine learning (ML) to detect and map mowed tidal wetlands in the Chesapeake Bay region of Maryland and Virginia, focusing on emergent estuarine intertidal (E2EM) wetlands. Monitoring human disturbances like mowing is essential because repeated mowing stresses wetland vegetation, reducing habitat quality and diminishing other ecological services wetlands provide, including shoreline stabilization and water filtration. Traditional field-based monitoring is labor-intensive and impractical for large-scale assessments. To address these challenges, this study utilized 2021 and 2022 Sentinel-2 satellite imagery and a time-series analysis of the Normalized Difference Vegetation Index (NDVI) to distinguish between mowed and unmowed (control) wetlands. A bidirectional Long Short-Term Memory (BiLSTM) neural network was created to predict NDVI patterns associated with mowing events, such as rapid decreases followed by slow vegetation regeneration. The training dataset comprised 204 field-verified and desktop-identified samples, accounting for under 0.002% of the research area’s herbaceous E2EM wetlands. The model obtained 97.5% accuracy on an internal test set and was verified at eight separate Chesapeake Bay locations, indicating its promising generality. This work demonstrates the potential of remote sensing and machine learning for scalable, automated monitoring of tidal wetland disturbances to aid in conservation, restoration, and resource management. Full article
(This article belongs to the Section Land – Observation and Monitoring)

23 pages, 2529 KB  
Article
Loss Prediction and Global Sensitivity Analysis for Distribution Transformers Based on NRBO-Transformer-BiLSTM
by Qionglin Li, Yi Wang and Tao Mao
Electronics 2026, 15(2), 420; https://doi.org/10.3390/electronics15020420 - 18 Jan 2026
Viewed by 140
Abstract
As distributed energy resources and nonlinear loads are integrated into power grids on a large scale, power quality issues have grown increasingly prominent, triggering a substantial rise in distribution transformer losses. Traditional approaches struggle to accurately forecast transformer losses under complex power quality conditions and lack quantitative analysis of the influence of various power quality indicators on losses. This study presents a data-driven methodology for transformer loss prediction and sensitivity analysis in such environments. First, an experimental platform is designed and built to measure transformer losses under composite power quality conditions, enabling the collection of actual measurement data when multi-source disturbances exist. Second, a high-precision loss prediction model—dubbed Newton-Raphson-Based Optimizer-Transformer-Bidirectional Long Short-Term Memory (NRBO-Transformer-BiLSTM)—is developed on the basis of an enhanced deep neural network. Finally, global sensitivity analysis methods are utilized to quantitatively evaluate the impact of different power quality indicators on transformer losses. Experimental results reveal that the proposed prediction model achieves an average error rate of less than 0.18% and a similarity coefficient of over 0.9989. Among all power quality indicators, voltage deviation has the most significant impact on transformer losses (with a sensitivity of 0.3268), followed by three-phase unbalance (sensitivity: 0.0109) and third harmonics (sensitivity: 0.0075). This research offers a theoretical foundation and technical support for enhancing the energy efficiency of distribution transformers and implementing effective power quality management. Full article

23 pages, 13094 KB  
Article
PDR-STGCN: An Enhanced STGCN with Multi-Scale Periodic Fusion and a Dynamic Relational Graph for Traffic Forecasting
by Jie Hu, Bingbing Tang, Langsha Zhu, Yiting Li, Jianjun Hu and Guanci Yang
Systems 2026, 14(1), 102; https://doi.org/10.3390/systems14010102 - 18 Jan 2026
Viewed by 129
Abstract
Accurate traffic flow prediction is a core component of intelligent transportation systems, supporting proactive traffic management, resource optimization, and sustainable urban mobility. However, urban traffic networks exhibit heterogeneous multi-scale periodic patterns and time-varying spatial interactions among road segments, which are not sufficiently captured by many existing spatio-temporal forecasting models. To address this limitation, this paper proposes PDR-STGCN (Periodicity-Aware Dynamic Relational Spatio-Temporal Graph Convolutional Network), an enhanced STGCN framework that jointly models multi-scale periodicity and dynamically evolving spatial dependencies for traffic flow prediction. Specifically, a periodicity-aware embedding module is designed to capture heterogeneous temporal cycles (e.g., daily and weekly patterns) and emphasize dominant social rhythms in traffic systems. In addition, a dynamic relational graph construction module adaptively learns time-varying spatial interactions among road nodes, enabling the model to reflect evolving traffic states. Spatio-temporal feature fusion and prediction are achieved through an attention-based Bidirectional Long Short-Term Memory (BiLSTM) network integrated with graph convolution operations. Extensive experiments are conducted on three datasets, including Metro Traffic Los Angeles (METR-LA), Performance Measurement System Bay Area (PEMS-BAY), and a real-world traffic dataset from Guizhou, China. Experimental results demonstrate that PDR-STGCN consistently outperforms state-of-the-art baseline models. For next-hour traffic forecasting, the proposed model achieves average reductions of 16.50% in RMSE, 9.00% in MAE, and 0.34% in MAPE compared with the second-best baseline. Beyond improved prediction accuracy, PDR-STGCN reveals latent spatio-temporal evolution patterns and dynamic interaction mechanisms, providing interpretable insights for traffic system analysis, simulation, and AI-driven decision-making in urban transportation networks. Full article

47 pages, 17315 KB  
Article
RNN Architecture-Based Short-Term Forecasting Framework for Rooftop PV Surplus to Enable Smart Energy Scheduling in Micro-Residential Communities
by Abdo Abdullah Ahmed Gassar, Mohammad Nazififard and Erwin Franquet
Buildings 2026, 16(2), 390; https://doi.org/10.3390/buildings16020390 - 17 Jan 2026
Viewed by 106
Abstract
With growing community awareness of greenhouse gas emissions and their environmental consequences, distributed rooftop photovoltaic (PV) systems have emerged as a sustainable energy alternative in residential settings. However, the high penetration of these systems without effective operational strategies poses significant challenges for local distribution grids. Specifically, the estimation of surplus energy production from these systems, closely linked to complex outdoor weather conditions and seasonal fluctuations, often lacks an accurate forecasting approach to effectively capture the temporal dynamics of system output during peak periods. In response, this study proposes a recurrent neural network (RNN)-based forecasting framework to predict rooftop PV surplus in the context of micro-residential communities over time horizons not exceeding 48 h. The framework includes standard RNN, long short-term memory (LSTM), bidirectional LSTM (BiLSTM), and gated recurrent unit (GRU) networks. In this context, the study employed estimated surplus energy datasets from six single-family detached houses, along with weather-related variables and seasonal patterns, to evaluate the framework’s effectiveness. Results demonstrated the significant effectiveness of all framework models in forecasting surplus energy across seasonal scenarios, with low MAPE values of up to 3.02% and 3.59% over 24-h and 48-h horizons, respectively. Simultaneously, BiLSTM models consistently demonstrated a higher capacity to capture surplus energy fluctuations during peak periods than their counterparts. Overall, the developed data-driven framework demonstrates potential to enable short-term smart energy scheduling in micro-residential communities, supporting electric vehicle charging from single-family detached houses through efficient rooftop PV systems. It also provides decision-making insights for evaluating renewable energy contributions in the residential sector. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)

25 pages, 16529 KB  
Article
Multi-Scale Photovoltaic Power Forecasting with WDT–CRMABIL–Fusion: A Two-Stage Hybrid Deep Learning Framework
by Reza Khodabakhshi Palandi, Loredana Cristaldi and Luca Martiri
Energies 2026, 19(2), 455; https://doi.org/10.3390/en19020455 - 16 Jan 2026
Viewed by 197
Abstract
Ultra-short-term photovoltaic (PV) power forecasts are vital for secure grid operation as solar penetration rises. We propose a two-stage hybrid framework, WDT–CRMABIL–Fusion. In Stage 1, we apply a three-level discrete wavelet transform to PV power and key meteorological series (shortwave radiation and panel irradiance). We then forecast the approximation and detail sub-series using specialized component predictors: a 1D-CNN with dual residual multi-head attention (feature-wise and time-wise) together with a BiLSTM. In Stage 2, a compact dense fusion network recombines the component forecasts into the final PV power trajectory. We use 5-min data from a PV plant in Milan and evaluate 5-, 10-, and 15-min horizons. The proposed approach outperforms strong baselines (DCC+LSTM, CNN+LSTM, CNN+BiLSTM, CRMABIL direct, and WDT+CRMABIL direct). For the 5-min horizon, it achieves MAE = 1.60 W and RMSE = 4.21 W with R2 = 0.943 and CORR = 0.973, compared with the best benchmark (MAE = 3.87 W; RMSE = 7.89 W). The gains persist across K-means++ weather clusters (rainy/sunny/cloudy) and across seasons. By combining explicit multi-scale decomposition, attention-based sequence learning, and learned fusion, WDT–CRMABIL–Fusion provides accurate and robust ultra-short-term PV forecasts suitable for storage dispatch and reserve scheduling. Full article
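The error metrics quoted throughout these abstracts (MAE, RMSE, MAPE, R2) have standard definitions; as a quick reference, here is a minimal sketch using made-up numbers, not data from any of the listed papers.

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """Standard point-forecast error metrics."""
    err = y_true - y_pred
    mae = np.abs(err).mean()                           # mean absolute error
    rmse = np.sqrt((err ** 2).mean())                  # root mean square error
    mape = 100 * np.abs(err / y_true).mean()           # percent error; assumes y_true != 0
    r2 = 1 - (err ** 2).sum() / ((y_true - y_true.mean()) ** 2).sum()
    return mae, rmse, mape, r2

# Illustrative values only.
y_true = np.array([100.0, 120.0, 90.0, 110.0])
y_pred = np.array([98.0, 123.0, 88.0, 111.0])
mae, rmse, mape, r2 = forecast_metrics(y_true, y_pred)
print(round(mae, 3), round(rmse, 3), round(mape, 3), round(r2, 3))
# 2.0 2.121 1.908 0.964
```

Note that MAPE is undefined when the target can be zero (e.g. PV output at night), which is why wind and solar papers often report normalized variants such as NMAE and NRMSE instead.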

22 pages, 5927 KB  
Article
Research on a Temperature and Humidity Prediction Model for Greenhouse Tomato Based on iT-LSTM-CA
by Yanan Gao, Pingzeng Liu, Yuxuan Zhang, Fengyu Li, Ke Zhu, Yan Zhang and Shiwei Xu
Sustainability 2026, 18(2), 930; https://doi.org/10.3390/su18020930 - 16 Jan 2026
Viewed by 153
Abstract
Constructing a temperature and humidity prediction model for greenhouse-grown tomatoes is of great significance for achieving resource-efficient and sustainable greenhouse environmental control and promoting healthy tomato growth. However, traditional models often struggle to simultaneously capture long-term temporal trends, short-term local dynamic variations, and the coupling relationships among multiple variables. To address these issues, this study develops an iT-LSTM-CA multi-step prediction model, in which the inverted Transformer (iTransformer, iT) is employed to capture global dependencies across variables and long temporal scales, the Long Short-Term Memory (LSTM) network is utilized to extract short-term local variation patterns, and a cross-attention (CA) mechanism is introduced to dynamically fuse the two types of features. Experimental results show that, compared with models such as Gated Recurrent Unit (GRU), Temporal Convolutional Network (TCN), Recurrent Neural Network (RNN), LSTM, and Bidirectional Long Short-Term Memory (Bi-LSTM), the iT-LSTM-CA achieves the best performance in multi-step forecasting tasks at 3 h, 6 h, 12 h, and 24 h horizons. For temperature prediction, the R2 ranges from 0.96 to 0.98, with MAE between 0.42 °C and 0.79 °C and RMSE between 0.58 °C and 1.06 °C; for humidity prediction, the R2 ranges from 0.95 to 0.97, with MAE between 1.21% and 2.49% and RMSE between 1.78% and 3.42%. These results indicate that the iT-LSTM-CA model can effectively capture greenhouse environmental variations and provide a scientific basis for environmental control and management in tomato greenhouses. Full article

18 pages, 3893 KB  
Article
A Method for Asymmetric Fault Location in HVAC Transmission Lines Based on the Modal Amplitude Ratio
by Bin Zhang, Shihao Yin, Shixian Hui, Mingliang Yang, Yunchuan Chen and Ning Tong
Energies 2026, 19(2), 411; https://doi.org/10.3390/en19020411 - 14 Jan 2026
Viewed by 122
Abstract
To address the issues of insensitivity to high-impedance ground faults and difficulty in identifying reflected wavefronts in single-ended traveling-wave fault location methods for asymmetric ground faults in high-voltage AC transmission lines, this paper proposes a single-ended fault location method based on the modal amplitude ratio and deep learning. First, based on the dispersion characteristics of traveling waves, an approximate formula is derived between the fault distance and the amplitude ratio of the sum of the initial transient voltage traveling-wave 1-mode and 2-mode to 0-mode at the measurement point. Simulation verifies that the fault distance x from the measurement point at the line head is unaffected by transition resistance and fault inception angle, and that a nonlinear positive correlation exists between the distance x and the modal amplitude ratio. The multi-scale wavelet modal maximum ratio of the sum of 1-mode and 2-mode to 0-mode is used to characterize the amplitude ratio. This ratio serves as the input for a Residual Bidirectional Long Short-Term Memory (BiLSTM) network, which is optimized using the Dung Beetle Optimizer (DBO). The DBO-Res-BiLSTM model fits the nonlinear mapping between the fault distance x and the amplitude ratio. Simulation results demonstrate that the proposed method achieves high location accuracy. Furthermore, it remains robust against variations in fault type, location, transition resistance, and inception angle. Full article

21 pages, 1762 KB  
Article
Ultra-Short-Term Wind Power Forecasting Based on Improved TTAO Optimization and High-Frequency Adaptive Weighting Strategy
by Xiaoming Wang, Yan Huang, Jing Pu, Youqing Yang, Lin Zhang, Xiaolong Bai, Haoran Fan and Sheng Lin
Electronics 2026, 15(2), 363; https://doi.org/10.3390/electronics15020363 - 14 Jan 2026
Viewed by 158
Abstract
Accurate ultra-short-term wind power forecasting (WPF) is essential for maintaining power grid stability and minimizing economic risks, yet the inherent volatility of wind speed poses significant modeling challenges. To address this, this study proposes an ensemble framework integrating an Improved Triangular Topology Aggregation Optimizer (ITTAO) and a high-frequency adaptive weighting strategy. Methodologically, the ITTAO incorporates multi-strategy mechanisms to overcome the premature convergence of the traditional TTAO, thereby enabling precise hyperparameter optimization for the variational mode decomposition (VMD) and BiLSTM networks. Furthermore, in the reconstruction stage, a dynamic weighting strategy is introduced to modulate the contribution of high-frequency sub-sequences, thereby enhancing the capture of rapid fluctuations. Experimental results across multi-seasonal datasets demonstrate that the proposed hybrid model consistently outperforms representative baselines. Notably, in the most volatile scenarios, the model achieves an NMAE of 1.33%, an NRMSE of 2.20%, and an R2 of 98.18%. The results demonstrate that the proposed model achieves superior forecasting accuracy, enhancing the operational stability of wind farms and the secure integration of wind energy into the power grid. Full article
(This article belongs to the Section Systems & Control Engineering)

17 pages, 3529 KB  
Article
Study on Multimodal Sensor Fusion for Heart Rate Estimation Using BCG and PPG Signals
by Jisheng Xing, Xin Fang, Jing Bai, Luyao Cui, Feng Zhang and Yu Xu
Sensors 2026, 26(2), 548; https://doi.org/10.3390/s26020548 - 14 Jan 2026
Viewed by 223
Abstract
Continuous heart rate monitoring is crucial for early cardiovascular disease detection. To overcome the discomfort and limitations of ECG in home settings, we propose a multimodal temporal fusion network (MM-TFNet) that integrates ballistocardiography (BCG) and photoplethysmography (PPG) signals. The network extracts temporal features from BCG and PPG signals through temporal convolutional networks (TCNs) and bidirectional long short-term memory networks (BiLSTMs), respectively, achieving cross-modal dynamic fusion at the feature level. First, bimodal features are projected into a unified dimensional space through fully connected layers. Subsequently, a cross-modal attention weight matrix is constructed for adaptive learning of the complementary correlation between BCG mechanical vibration and PPG volumetric flow features. Combined with dynamic focusing on key heartbeat waveforms through multi-head self-attention (MHSA), the model’s robustness under dynamic activity states is significantly enhanced. Experimental validation using a publicly available BCG-PPG-ECG simultaneous acquisition dataset comprising 40 subjects demonstrates that the model achieves excellent performance with a mean absolute error (MAE) of 0.88 BPM in heart rate prediction tasks, outperforming current mainstream deep learning methods. This study provides theoretical foundations and engineering guidance for developing contactless, low-power, edge-deployable home health monitoring systems, demonstrating the broad application potential of multimodal fusion methods in complex physiological signal analysis. Full article
(This article belongs to the Section Biomedical Sensors)

23 pages, 1151 KB  
Article
CNN–BiLSTM–Attention-Based Hybrid-Driven Modeling for Diameter Prediction of Czochralski Silicon Single Crystals
by Pengju Zhang, Hao Pan, Chen Chen, Yiming Jing and Ding Liu
Crystals 2026, 16(1), 57; https://doi.org/10.3390/cryst16010057 - 13 Jan 2026
Viewed by 185
Abstract
High-precision prediction of the crystal diameter during the growth of electronic-grade silicon single crystals is a critical step for the fabrication of high-quality single crystals. However, the process features high-temperature operation, strong nonlinearities, significant time-delay dynamics, and external disturbances, which limit the accuracy of conventional mechanism-based models. In this study, mechanism-based models denote physics-informed heat-transfer and geometric models that relate heater power and pulling rate to diameter evolution. To address this challenge, this paper proposes a hybrid deep learning model combining a convolutional neural network (CNN), a bidirectional long short-term memory network (BiLSTM), and self-attention to improve diameter prediction during the shoulder-formation and constant-diameter stages. The proposed model leverages the CNN to extract localized spatial features from multi-source sensor data, employs the BiLSTM to capture temporal dependencies inherent to the crystal growth process, and utilizes the self-attention mechanism to dynamically highlight critical feature information, thereby substantially enhancing the model’s capacity to represent complex industrial operating conditions. Experiments on operational production data collected from an industrial Czochralski (Cz) furnace, model TDR-180, demonstrate improved prediction accuracy and robustness over mechanism-based and single data-driven baselines, supporting practical process control and production optimization. Full article
(This article belongs to the Section Inorganic Crystalline Materials)

19 pages, 3746 KB  
Article
Fault Diagnosis and Classification of Rolling Bearings Using ICEEMDAN–CNN–BiLSTM and Acoustic Emission
by Jinliang Li, Haoran Sheng, Bin Liu and Xuewei Liu
Sensors 2026, 26(2), 507; https://doi.org/10.3390/s26020507 - 12 Jan 2026
Viewed by 252
Abstract
Reliable operation of rolling bearings is essential for mechanical systems. Acoustic emission (AE) offers a promising approach for bearing fault detection because of its high-frequency response and strong noise-suppression capability. This study proposes an intelligent diagnostic method that combines an improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) and a convolutional neural network–bidirectional long short-term memory (CNN–BiLSTM) architecture. The method first applies wavelet denoising to AE signals, then uses ICEEMDAN decomposition followed by kurtosis-based screening to extract key fault components and construct feature vectors. Subsequently, a CNN automatically learns deep time–frequency features, and a BiLSTM captures temporal dependencies among these features, enabling end-to-end fault identification. Experiments were conducted on a bearing acoustic emission dataset comprising 15 operating conditions, five fault types, and three rotational speeds; comparative model tests were also performed. Results indicate that ICEEMDAN effectively suppresses mode mixing (average mixing rate 6.08%), and the proposed model attained an average test-set recognition accuracy of 98.00%, significantly outperforming comparative models. Moreover, the model maintained 96.67% accuracy on an independent validation set, demonstrating strong generalization and practical application potential. Full article
(This article belongs to the Special Issue Deep Learning Based Intelligent Fault Diagnosis)
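The kurtosis-based screening step, ranking decomposition components by how impulsive they are, can be illustrated with synthetic signals. The three components and the top-k choice below are invented for the example; the paper applies this ranking to ICEEMDAN modes of denoised acoustic-emission signals.

```python
import numpy as np

def kurtosis(x):
    """Pearson kurtosis (a Gaussian scores about 3); impulsive fault
    signatures score much higher."""
    x = x - x.mean()
    return (x ** 4).mean() / (x ** 2).mean() ** 2

def select_components(components, top_k=2):
    """Rank components by kurtosis and keep the most impulsive ones,
    returning their indices in ascending order."""
    scores = [kurtosis(c) for c in components]
    order = np.argsort(scores)[::-1]         # highest kurtosis first
    return sorted(order[:top_k].tolist())

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 2000)
smooth = np.sin(2 * np.pi * 5 * t)                   # low-kurtosis oscillation
noise = rng.normal(size=t.size)                      # Gaussian, kurtosis near 3
impulses = np.zeros_like(t)
impulses[::200] = 8.0                                # spiky, high-kurtosis train
impulses += 0.1 * rng.normal(size=t.size)
print(select_components([smooth, noise, impulses], top_k=1))  # [2]
```

Only the selected components are recombined into the feature vector, so the smooth background and broadband noise are discarded before the CNN-BiLSTM stage.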
