Search Results (2,004)

Search Parameters:
Keywords = Bi-LSTM

21 pages, 2877 KB  
Article
Research on BiLSTM–Transformer Power Load Forecasting Method Based on Dynamic Adaptive Fusion
by Jialong Xu, Lei Zhang and Zhenxiong Zhang
Energies 2026, 19(6), 1473; https://doi.org/10.3390/en19061473 (registering DOI) - 15 Mar 2026
Abstract
Power load forecasting is a core technical component for achieving safe, stable, and economic operation in smart grids. This paper proposes a hybrid BiLSTM–Transformer forecasting method based on a Dynamic Adaptive Fusion (DAF) module. The core of this method involves utilizing the DAF module to adaptively weight different feature channels to highlight key influencing factors, while simultaneously employing a temporal attention mechanism to capture the contributions of various time steps. Building on this, the model effectively combines the strengths of BiLSTM networks in capturing bidirectional dependencies with the capability of Transformer models to extract global contextual features, thereby achieving a multi-level dynamic fusion of load characteristics. Experiments on real-world grid datasets demonstrate that the proposed method achieves a significant performance improvement over traditional models, particularly in terms of load peak prediction accuracy and stability. This provides effective technical support for the refined scheduling of power systems. Full article
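The adaptive channel weighting the abstract describes can be sketched outside the paper's learned setting. A minimal, hypothetical NumPy illustration: per-channel variance stands in for the channel score here, whereas the actual DAF module learns its scores end-to-end.

```python
import numpy as np

def adaptive_channel_weights(features):
    """Re-weight feature channels with a softmax over per-channel scores.

    features: array of shape (time_steps, channels). The per-channel
    score is illustrated with the channel's variance; the paper's DAF
    module learns such scores end-to-end instead.
    """
    scores = features.var(axis=0)              # one score per channel
    weights = np.exp(scores - scores.max())    # numerically stable softmax
    weights /= weights.sum()                   # weights sum to 1
    return features * weights                  # broadcast over time steps

x = np.array([[1.0, 2.0, 3.0],
              [1.0, 4.0, 3.0],
              [1.0, 6.0, 3.0]])
w = adaptive_channel_weights(x)                # varying middle channel dominates
```

The softmax gate is one common way to realize "adaptively weight different feature channels"; the paper's exact gating function is not given in the abstract.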
23 pages, 4266 KB  
Article
A CNN–BiLSTM–Attention-Based Deep Learning Approach for Predicting Asphalt Pavement Performance
by Yu Huang, Chen Chen and Xiaomin Dai
Buildings 2026, 16(6), 1150; https://doi.org/10.3390/buildings16061150 (registering DOI) - 14 Mar 2026
Abstract
Reliable prediction of asphalt pavement performance is essential for scientific maintenance decision-making. However, current methodologies have two primary challenges that represent significant research gaps: a heavy reliance on high-dimensional multi-source data—which is often inaccessible in resource-constrained remote regions—and the inability of traditional deep learning models to adequately capture nonlinear bidirectional temporal correlations within short-time-series pavement data. To address these limitations, this study proposes a hybrid CNN–BiLSTM–Attention architecture. The model was trained using a four-year dataset (2067 records from Xinjiang) of Pavement Condition Index (PCI) and Riding Quality Index (RQI) scores to predict fifth-year performance. Benchmarked against four state-of-the-art models, the proposed method demonstrated superior accuracy: PCI predictions achieved an R2 of 0.837 (a 1.7% improvement) and a Mean Absolute Error (MAE) of 5.31 (a 0.57% reduction) compared to the second-best model. Similarly, RQI predictions yielded an R2 of 0.855 and an MAE of 1.84, representing a 1.1% increase in accuracy and a 5.6% reduction in error, respectively. By obviating the dependency on multi-source data, this approach reduces the data acquisition and processing overhead by over 80%. Consequently, this research fills a critical gap in single-source, short-time-series prediction and provides a robust, data-driven solution for infrastructure maintenance in remote areas. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
17 pages, 2083 KB  
Article
Monitoring of Liquid Metal Reactor Heater Zones with Recurrent Neural Network Learning of Temperature Time Series
by Maria Pantopoulou, Derek Kultgen, Lefteri Tsoukalas and Alexander Heifetz
Energies 2026, 19(6), 1462; https://doi.org/10.3390/en19061462 (registering DOI) - 14 Mar 2026
Abstract
Advanced high-temperature fluid reactors (ARs), such as sodium fast reactors (SFRs) and molten salt cooled reactors (MSCRs) utilize high-temperature fluids at ambient pressure. To melt the fluid during reactor startup and prevent fluid freezing during cooldown, the thermal–hydraulic systems of such ARs include heater zones consisting of specific heaters with controllers, temperature sensors, and thermal insulation. The failure of heater zones due to insulation material degradation or improper installation, resulting in parasitic heat losses, can lead to fluid freezing. The detection of faults using a heat-transfer model is difficult because of a lack of knowledge of the experimental details. Data-driven machine learning of heater zone temperature time series offers a viable alternative. In this study, we benchmarked the performance of recurrent neural networks (RNNs) in an analysis of heat-up transient temperature time series of heater zones installed on a liquid sodium vessel. The RNN models include long short-term memory (LSTM) and gated recurrent unit (GRU) networks, as well as their bi-directional variants, BiLSTM and BiGRU. Anomalous temperature points were designated using a percentile-based threshold applied to residual fluctuations in the detrended temperature time series. Additionally, the impact of the exponentially weighted moving average (EWMA) method on detection accuracy was examined. The RNN models’ performance was assessed using precision, recall, and F1 score metrics. Results demonstrated that RNN models effectively detect anomalies in temperature time series with the best models for each heater zone achieving F1 scores of over 93%. To explain the variations in RNN model performance across different heater zones, we used Kullback–Leibler (KL) divergence to quantify the relative entropy between training and testing data, and the Detrended Fluctuation Analysis (DFA) to assess long-range temporal correlations. 
For datasets with strong long-range correlations and minimal relative entropy between training and testing data, GRU is the best-performing model. When the data exhibits weaker long-term correlations and a significant relative entropy between training and testing distributions, BiGRU shows the best performance. For datasets with intermediate values of both KL divergence and DFA, LSTM and BiLSTM achieve the best performance. Full article
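The percentile-based flagging on detrended residuals, with EWMA smoothing, can be sketched generically. This is an illustrative reconstruction, not the authors' code: the window size, smoothing factor, percentile, and injected fault are assumed values.

```python
import numpy as np

def ewma(series, alpha=0.3):
    """Exponentially weighted moving average smoothing."""
    out = np.empty(len(series))
    out[0] = series[0]
    for i in range(1, len(series)):
        out[i] = alpha * series[i] + (1 - alpha) * out[i - 1]
    return out

def flag_anomalies(series, window=5, pct=95):
    """Detrend with a centered moving average, then flag points whose
    absolute residual exceeds the chosen percentile of all residuals."""
    trend = np.convolve(series, np.ones(window) / window, mode="same")
    resid = np.abs(series - trend)
    return resid > np.percentile(resid, pct)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
temperature = 100.0 * t + rng.normal(0.0, 0.1, 200)  # toy heat-up transient
temperature[120] += 5.0                              # injected fault
flags = flag_anomalies(ewma(temperature))
```

The injected spike survives the EWMA smoothing and moving-average detrending, so it lands well above the 95th-percentile residual threshold.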
13 pages, 718 KB  
Article
Construction of Mineral Resources Knowledge Graph: A Case Study of Linyi City, Shandong Province, China
by Xiaocai Liu, Yong Zhang, Ming Liu, Yonglin Yao, Kun Liu, Yongqing Tong and Xinqi Zheng
Appl. Sci. 2026, 16(6), 2749; https://doi.org/10.3390/app16062749 - 13 Mar 2026
Viewed by 69
Abstract
The efficient exploration and development of mineral resources rely on deep mining and correlation analysis of massive, multi-source, and unstructured geological data. Knowledge graph technology provides a structured solution for integrating fragmented knowledge in the field of mineral resources. This study takes the iron ore resources in Linyi City, Shandong Province as a typical case and proposes a method framework for automatically constructing a regional mineral resource knowledge graph from unstructured text. Firstly, seven types of mineral entities (location, ore body, scale, type, attitude, alteration, development degree) and five semantic relationships (type, scale, location, inclusion, development) were defined, and a high-quality Chinese annotation corpus containing 10,434 entities and 6660 relationships was constructed through domain ontology design. Secondly, BiLSTM-CRF, BiGRU-CRF, and various BERT-based models were compared on the named entity recognition task, where the optimized BERT-CRF model achieved the best performance (F1 score: 82.8%). In the relation extraction task, the BERT-based model significantly outperformed traditional PCNN and BiGRU models, achieving an F1 score of 98.14%. Finally, based on the extracted triples, a visual knowledge graph of iron ore resources in Linyi City was constructed using the Neo4j graph database, enabling knowledge association queries and visual navigation. Full article
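Loading extracted triples into Neo4j, as the final step describes, typically amounts to generating Cypher MERGE statements. A hypothetical sketch: the entity labels, the `LOCATED_IN` relation name, and the sample triple are illustrative, not taken from the paper.

```python
def triples_to_cypher(triples):
    """Render (head, head_label, relation, tail, tail_label) triples as
    Cypher MERGE statements for loading into a Neo4j graph."""
    statements = []
    for head, head_label, rel, tail, tail_label in triples:
        statements.append(
            f'MERGE (h:{head_label} {{name: "{head}"}}) '
            f'MERGE (t:{tail_label} {{name: "{tail}"}}) '
            f'MERGE (h)-[:{rel}]->(t)'
        )
    return statements

# Hypothetical triple: an ore body located in Linyi.
stmts = triples_to_cypher(
    [("ore body 1", "OreBody", "LOCATED_IN", "Linyi", "Location")]
)
```

MERGE (rather than CREATE) keeps the graph free of duplicate nodes when the same entity appears in many triples.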
22 pages, 5399 KB  
Article
Bridge Deformation Prediction with BGCO-PIC-DA-LSTM Based on Prior-Informed Multi-Source Fusion and Dual-Stream Residual Attention
by Pengchen Qin and Feng Wang
Appl. Sci. 2026, 16(6), 2681; https://doi.org/10.3390/app16062681 - 11 Mar 2026
Viewed by 113
Abstract
Accurate deflection prediction is vital for structural health monitoring of large-span bridges yet remains challenging due to complex nonlinear environmental couplings. This paper proposes a hybrid deep learning framework, BGCO-PIC-DA-LSTM, for precise bridge deflection prediction. First, a Prior-Informed Correlation (PIC) strategy incorporating temperature lag terms is introduced to enhance the statistical consistency of input features. Second, a dual-stream residual Bi-LSTM network integrating adaptive temporal attention is developed to simultaneously capture long-term evolutionary trends and instantaneous dynamic fluctuations. Furthermore, a Bayesian-Gradient Cooperative Optimization (BGCO) strategy is employed to automatically configure optimal hyperparameters. Validation using in situ data from a large-span cable-stayed bridge demonstrates that the proposed method significantly outperforms baseline algorithms in prediction accuracy and robustness. Additionally, the prediction residuals exhibit characteristics approximating zero-mean Gaussian white noise, establishing a reference baseline for structural state evolution and providing a certain basis for identifying potential performance shifts. Full article
(This article belongs to the Section Civil Engineering)
24 pages, 2078 KB  
Article
A Few-Shot Bearing Fault Diagnosis Method Integrating Improved Generative Adversarial Network and CNN-BiLSTM-Attention Hybrid Network
by Shiqun Liu, Xingli Liu and Zhaoyong Jiang
Appl. Sci. 2026, 16(6), 2660; https://doi.org/10.3390/app16062660 - 11 Mar 2026
Viewed by 111
Abstract
Artificial intelligence technology offers an intelligent and efficient new pathway for bearing fault diagnosis, holding significant importance for ensuring the stable operation of industrial systems. However, bearing fault samples are scarce in industrial practice, and traditional data-driven methods exhibit a marked decline in diagnostic performance under small-sample conditions. To address this, this paper proposes a few-shot bearing fault diagnosis method that integrates an Improved Generative Adversarial Network with a CNN-BiLSTM-Attention hybrid network. The method comprises three core stages: in the data augmentation stage, a class-center-constrained Least Squares Generative Adversarial Network (CCC-LSGAN) featuring joint loss optimization is proposed to generate high-quality fault samples through frequency-domain feature constraints, effectively expanding the training data; in the feature learning stage, a hybrid base classifier combining a one-dimensional Convolutional Neural Network, Bidirectional Long Short-Term Memory, and an attention mechanism (1D-CNN-BiLSTM-Attention) is constructed, which unites multi-scale convolution, bidirectional temporal modeling, and attention mechanisms to fully extract the spatiotemporal features of vibration signals; in the inference stage, test-time noise augmentation and a multi-model weighted voting ensemble mechanism are introduced to enhance the robustness and generalization capability of the diagnosis. Experimental results based on the PU and CWRU public bearing datasets demonstrate that the proposed method significantly outperforms existing mainstream diagnostic approaches in core metrics, including accuracy, precision, recall, and F1 score. It achieves a diagnostic accuracy of 96.60% on the PU dataset and 98.58% on the CWRU dataset. 
This method verifies the feasibility of highly reliable diagnosis under few-shot conditions and provides an effective solution for the intelligent operation and maintenance of industrial equipment. Full article
31 pages, 7577 KB  
Article
A Zero-Interaction, Cloud-Free Remote ECG Monitoring and Arrhythmia Screening System Using Handheld Leads and Email Transmission
by Wenjie Feng, Lingjun Meng, Tianxiang Yang, Hong Jin, Xinhao Liu and Pan Pei
Appl. Sci. 2026, 16(6), 2640; https://doi.org/10.3390/app16062640 - 10 Mar 2026
Viewed by 202
Abstract
To address the challenges of complex operation, high server deployment costs, and insufficient automated identification capabilities in community-based centralized electrocardiogram (ECG) screening, a novel arrhythmia screening system based on handheld ECG leads and email transmission is proposed. The system operates in a zero-interaction mode: ECG acquisition is initiated automatically upon skin contact with the electrodes, and upon completion, the ECG signal is automatically analyzed and the email transmission function is triggered—no user intervention is required. First, noise in the ECG signal is effectively suppressed by cascading a zero-phase high-pass filter with a sliding window and a zero-crossing-rate (ZCR) guided adaptive wavelet thresholding technique. Subsequently, RR interval sequences are extracted from the denoised signals and fed into a lightweight bidirectional long short-term memory (BiLSTM) network for automatic arrhythmia detection. In the final step, a 30 s standard ECG, screening status, and acquired image are automatically delivered to clinicians via standard IMAP/SMTP email protocols—eliminating the need for dedicated mobile applications or cloud platforms. Experimental results demonstrated that the relative signal-to-noise ratio (SNR_ECG) was improved by 2.36 dB. On the independent test set, a sensitivity of 97.98%, a specificity of 98.21%, and an AUC of 0.994 were achieved. Furthermore, an end-to-end email transmission latency of less than 7.68 s was recorded. These findings confirm the potential of the proposed system as a low-cost, easily deployable, and elderly-friendly remote ECG solution for primary healthcare settings. Finally, in a pilot screening involving 10 volunteers, one case of arrhythmia was successfully identified, validating the feasibility of the system. Full article
(This article belongs to the Topic Electronic Communications, IOT and Big Data, 2nd Volume)
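The zero-crossing rate that guides the adaptive wavelet thresholding above is a standard quantity: the fraction of sign changes within a frame, higher for noisier segments. A generic NumPy sketch, with an assumed frame length:

```python
import numpy as np

def zero_crossing_rate(signal, frame_len=32):
    """Per-frame fraction of sign changes between adjacent samples.
    Frames with a high ZCR indicate noisier segments, which can be
    given a stronger wavelet threshold."""
    n_frames = len(signal) // frame_len
    rates = np.empty(n_frames)
    for k in range(n_frames):
        frame = signal[k * frame_len:(k + 1) * frame_len]
        changes = np.sum(np.signbit(frame[:-1]) != np.signbit(frame[1:]))
        rates[k] = changes / (frame_len - 1)
    return rates

steady = zero_crossing_rate(np.ones(64))                  # no crossings
jittery = zero_crossing_rate(np.array([1.0, -1.0] * 32))  # crosses every step
```

How the ZCR is mapped to a threshold strength is the paper's contribution and is not reproduced here.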
32 pages, 7360 KB  
Article
Short-Term Load Forecasting for a Renewable-Rich Power System Using an IMVMD-XLSTM
by Qiujing Lin, Hongquan Zhu, Xiaolong Wang and Xiangang Peng
Energies 2026, 19(5), 1379; https://doi.org/10.3390/en19051379 - 9 Mar 2026
Viewed by 195
Abstract
The high penetration of photovoltaic and wind power introduces strong non-stationarity and multi-scale fluctuations into power system load profiles, challenging the accuracy of short-term load forecasting (STLF). To address this, we propose a hybrid forecasting framework, IMVMD-XLSTM, which synergistically integrates an optimized multivariate decomposition with an advanced neural network. First, to address the critical issue that MVMD performance is highly sensitive to its parameter settings, which impacts decomposition quality, a multi-strategy Improved Fruit Fly Optimization Algorithm (IFOA) is developed to adaptively tune the key parameters of MVMD in a task-oriented manner, forming an Improved MVMD (IMVMD). This optimization aims to ensure decomposition stability and maximize the relevance for the subsequent forecasting task. Second, to fully leverage the characteristics of the frequency-aligned, multi-channel sub-sequences generated by IMVMD, an Extended LSTM (XLSTM) network is designed. Its serially arranged BisLSTM and mLSTM units are specifically tailored to capture the bidirectional long-term dependencies within each stable sub-sequence and the complex high-dimensional interactions across the aligned sub-sequences, respectively. Evaluated on 15 min resolution data from the Austrian grid, the proposed IMVMD-XLSTM framework achieves a day-ahead forecasting Mean Absolute Percentage Error (MAPE) of 2.45% (±1.41%). This study provides a verifiable and effective solution that couples data-adaptive signal processing with a purpose-built neural architecture to enhance forecasting reliability in renewable-rich power systems. Full article
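MAPE, the headline metric above, is straightforward to compute; a generic implementation (the sample values are illustrative, not the paper's data):

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent (actuals must be nonzero)."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

error = mape([100.0, 200.0], [98.0, 202.0])  # 1.5% average error
```

Because it normalizes by the actual load at each step, MAPE is comparable across days of very different absolute demand, which is why it is the usual STLF metric.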
23 pages, 94753 KB  
Article
Dynamic Evaluation of Tillage–Residue Management Systems and Maize Yield Prediction via Multi-Source Data Fusion and Mixed-Effects Modeling
by Zhenzi Zhang, Miao Gan, Na Li, Jun Dong, Yang Liu, Zhiyan Hou, Xingyu Yue and Zhi Dong
Agronomy 2026, 16(5), 584; https://doi.org/10.3390/agronomy16050584 - 8 Mar 2026
Viewed by 266
Abstract
Tillage–residue management is a controllable lever for improving maize yield and system resilience under climate variability. Here we propose a mixed-effects spatiotemporal learning framework (ME-LSTM) that integrates multi-source observations to enable robust yield prediction and management system evaluation across heterogeneous sites and years. First, we construct multi-year sliding-window inputs to represent legacy effects and cumulative influences of past management and environment. Second, a deep temporal encoder learns nonlinear dependencies from climate–soil–remote-sensing sequences to enhance interannual extrapolation. Third, a mixed-effects module explicitly separates management fixed effects from hierarchical random effects (e.g., source/study, site, year, and plot), absorbing source-specific biases and unobserved heterogeneity while improving interpretability. Finally, we parameterize management × climate/soil interactions to quantify system-specific sensitivities to environmental drivers and to support scenario-based comparison and recommendation of management options. Across multi-ecological maize datasets, ME-LSTM achieved an R2 of 0.8989 with an RMSE of 309.83 kg ha−1 on the test set. Ablation analyses show that removing remote-sensing features or ground-based temporal information substantially degrades performance, confirming the complementary value of multi-source fusion. Benchmarking against strong temporal baselines (LSTM, GRU, BiGRU, and Transformer) further demonstrates consistent accuracy gains of ME-LSTM, highlighting its suitability for small-sample, noisy, and hierarchically structured agricultural data. Overall, ME-LSTM provides an interpretable and scalable tool for climate-adaptive optimization of tillage–residue management and supports robust, actionable decision-making across diverse agro-ecological conditions. Full article
(This article belongs to the Section Precision and Digital Agriculture)
17 pages, 2386 KB  
Article
Comparative Evaluation of Deep Learning Models for Respiratory Rate Estimation Using PPG-Derived Numerical Features
by Syed Mahedi Hasan, Mercy Golda Sam Raj and Kunal Mitra
Electronics 2026, 15(5), 1108; https://doi.org/10.3390/electronics15051108 - 7 Mar 2026
Viewed by 222
Abstract
Respiratory rate (RR) is a critical vital sign for the early detection of hypoxia and respiratory deterioration, yet its continuous monitoring remains challenging in clinical environments. Photoplethysmography (PPG) provides a non-invasive source of physiological information from which respiratory dynamics can be inferred. In this study, numerical physiological features derived from PPG data were used to comparatively evaluate multiple deep learning models for respiratory rate estimation. Fixed-length sliding windows were constructed from the dataset and used to train five neural network architectures: a Deep Feedforward Neural Network (DFNN), unidirectional and bidirectional Recurrent Neural Networks (RNN, Bi-RNN), and unidirectional and bidirectional Long Short-Term Memory networks (LSTM, Bi-LSTM). Model performance was assessed using mean absolute error (MAE), root mean squared error (RMSE), coefficient of determination (R2), and computational runtime. Results indicate that models incorporating temporal dependencies outperform the static feedforward baseline, achieving MAE values as low as 0.521 breaths/min, making them competitive with or lower than previously reported PPG-based approaches. These findings highlight the effectiveness of temporal deep learning models for respiratory rate estimation from PPG-derived numerical features and provide insight into accuracy–efficiency trade-offs relevant to real-time monitoring applications. Full article
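The fixed-length sliding windows described above can be built generically; a sketch with assumed shapes and an assumed window length (the paper's actual window size is not stated here):

```python
import numpy as np

def make_windows(features, targets, window=8):
    """Build fixed-length sliding windows: each sample stacks `window`
    consecutive feature rows and is paired with the target at the
    window's final time step."""
    X, y = [], []
    for i in range(len(features) - window + 1):
        X.append(features[i:i + window])
        y.append(targets[i + window - 1])
    return np.array(X), np.array(y)

feats = np.arange(30).reshape(10, 3)  # 10 time steps, 3 PPG-derived features
rr = np.arange(10)                    # per-step respiratory-rate labels (toy)
X, y = make_windows(feats, rr)        # X: (samples, window, features)
```

The resulting 3-D array is the shape recurrent models such as LSTM and Bi-LSTM expect, while a feedforward baseline would flatten each window and lose the temporal ordering.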
23 pages, 2046 KB  
Article
Carbon Price Forecasting via a CNN-BiLSTM Model Integrating VMD and Classified News Sentiment
by Xiyun Yang, Han Chen, Xiangjun Li and Xiaoyu Liu
Big Data Cogn. Comput. 2026, 10(3), 82; https://doi.org/10.3390/bdcc10030082 - 6 Mar 2026
Viewed by 148
Abstract
Accurate carbon price forecasting is vital for risk management but is hindered by high volatility and sensitivity to external shocks. Existing multivariate models typically overlook unstructured news sentiment, failing to capture irrational fluctuations driven by market public opinion. To address this, this paper proposes VBN-Net, a hybrid model integrating carbon-specific news sentiment with Variational Mode Decomposition (VMD). Two core innovations are presented: First, a multi-modal input mechanism combines structured financial data with unstructured carbon news sentiment to effectively capture policy-driven shocks. Second, a Sequential Beluga Whale Optimization strategy is designed to adaptively optimize feature engineering in steps. Unlike conventional approaches, the VBN-Net first employs VMD for denoising and frequency decomposition, and then optimizes the fusion weights of news sentiment across different frequency components derived from multi-source news. This strategy effectively overcomes the subjectivity of manual parameter selection, providing high-quality features for a fixed CNN-BiLSTM backbone. By integrating VMD-based denoising with optimized multi-source news fusion, the model achieves consistent performance improvements across multiple evaluation metrics. The empirical findings validate the effectiveness of the proposed model in enhancing forecasting performance, thereby providing a reliable analytical tool for participants in the carbon market. Full article
24 pages, 1727 KB  
Article
Symmetry-Guided Deep Generative Model for Multi-Step Evolution of Complex Dynamical Systems
by Ying Xu, Chengbo Zhu, Nannan Su, Yingying Wang and Ziqi Fan
Symmetry 2026, 18(3), 450; https://doi.org/10.3390/sym18030450 - 6 Mar 2026
Viewed by 171
Abstract
Complex dynamical systems are characterized by inherent nonlinearity, high dimensionality, spatiotemporal uncertainty, and implicit symmetry, posing fundamental challenges for their mathematical modeling and multi-step evolution prediction. For example, wind power exhibits strong randomness, intermittency, and latent temporal symmetry. To address this, this paper proposes a symmetry-guided deep generative model, the bi-directional recurrent generative adversarial network (BDR-GAN), for the multi-step rolling prediction of such systems. The BDR-GAN formalizes multi-step evolution as a conditional probability distribution learning problem. It systematically integrates three forms of symmetry to enhance modeling validity: bi-directional temporal symmetry captured by a BiLSTM-based generator, structural symmetry within the adversarial learning framework between the generator and a 1D-CNN discriminator, and rolling symmetry enabled by a recursive prediction strategy that supports cyclic state updates. Theoretical analysis demonstrates that this symmetry-embedded adversarial mechanism enables BDR-GAN to effectively approximate the underlying dynamic operators and the conditional distribution of future states, improving the learned model’s generalization. Experimental validation on wind power datasets confirms the framework’s superiority. Compared to benchmark models, BDR-GAN achieves superior prediction accuracy (e.g., RMSE 0.236, MAPE 5.12%), provides reliable uncertainty quantification (PICP 95.5%), and exhibits enhanced robustness against noise and variability. This work provides a generalizable, symmetry-guided modeling framework for the multi-step evolution of complex dynamical systems, offering theoretical and technical support for high-precision prediction in critical applications such as wind power integration and smart grid operation. Full article
(This article belongs to the Special Issue Application of Symmetry/Asymmetry and Machine Learning)
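The rolling (recursive) multi-step strategy described above can be sketched model-agnostically; `model` here is a toy one-step predictor standing in for the trained generator, and the history and step values are illustrative:

```python
import numpy as np

def rolling_forecast(model, history, steps):
    """Recursive multi-step prediction: each one-step output is appended
    to the input window, so later steps condition on earlier predictions."""
    window = list(history)
    preds = []
    for _ in range(steps):
        nxt = model(np.array(window))   # one-step-ahead prediction
        preds.append(nxt)
        window = window[1:] + [nxt]     # slide the window forward
    return preds

# Toy predictor: next value is the last value plus one.
preds = rolling_forecast(lambda w: float(w[-1]) + 1.0, [1.0, 2.0, 3.0], 3)
```

The cyclic state update is what the abstract calls rolling symmetry: the same one-step operator is reapplied to its own output at every step.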
22 pages, 3288 KB  
Article
An Intelligent Real-Time System for Sentence-Level Recognition of Continuous Saudi Sign Language Using Landmark-Based Temporal Modeling
by Adel BenAbdennour, Mohammed Mukhtar, Osama Almolike, Bilal A. Khawaja and Abdulmajeed M. Alenezi
Sensors 2026, 26(5), 1652; https://doi.org/10.3390/s26051652 - 5 Mar 2026
Viewed by 257
Abstract
A persistent challenge for Deaf and Hard-of-Hearing individuals is the communication gap between sign language users and the hearing community, particularly in regions with limited automated translation resources. In Saudi Arabia, this gap is amplified by the reliance on Saudi Sign Language (SSL) and the scarcity of real-time, sentence-level translation systems. This paper presents a real-time system for sentence-level recognition of continuous SSL and direct mapping to natural spoken Arabic. The proposed system operates end-to-end on live video streams or pre-recorded content, extracting spatio-temporal landmark features using the MediaPipe Holistic framework. For classification, the input feature vector consists of 225 features derived from hand and body pose landmarks. These features are processed by a Bidirectional Long Short-Term Memory (BiLSTM) network trained on the ArabSign (ArSL) dataset to perform direct sentence-level classification over a vocabulary of 50 continuous Arabic sign language sentences, supported by an idle-based segmentation mechanism that enables natural, uninterrupted signing. Experimental evaluation demonstrates robust generalization: under a Leave-One-Signer-Out (LOSO) cross-validation protocol, the model attains a mean sentence-level accuracy of 94.2%, outperforming the fixed signer-independent split baseline of 92.07%, while maintaining real-time performance suitable for interactive use. To enhance linguistic fluency, an optional post-recognition refinement stage is incorporated using a large language model (LLM), followed by text-to-speech synthesis to produce audible Arabic output; this refinement operates strictly as post-processing and is not included in the reported recognition accuracy metrics. The results demonstrate that direct sentence-level modeling, combined with landmark-based feature extraction and real-time segmentation, provides an effective and practical solution for continuous SSL sentence recognition in real-time. 
Full article
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition (3rd Edition))
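The 225-dimensional feature vector quoted above is consistent with MediaPipe Holistic's landmark counts (33 pose points plus 21 points per hand, each with x/y/z coordinates: 99 + 2 × 63 = 225). The sketch below illustrates, under assumed conventions, how such frames could be flattened and split by an idle-based segmentation rule; the motion threshold, the idle window length, and the zero-filling of missing hands are hypothetical choices, not details given in the paper.

```python
import numpy as np

POSE_LANDMARKS = 33   # MediaPipe Holistic pose points, (x, y, z) each
HAND_LANDMARKS = 21   # per hand, (x, y, z) each
FEATURE_DIM = POSE_LANDMARKS * 3 + 2 * HAND_LANDMARKS * 3  # 99 + 126 = 225

def flatten_frame(pose, left_hand, right_hand):
    """Concatenate per-frame landmarks into one 225-dim feature vector.
    A hand that is out of frame (None) is zero-filled -- a common
    convention, assumed here rather than taken from the paper."""
    parts = []
    for arr, n in ((pose, POSE_LANDMARKS),
                   (left_hand, HAND_LANDMARKS),
                   (right_hand, HAND_LANDMARKS)):
        parts.append(np.zeros(n * 3) if arr is None
                     else np.asarray(arr, float).reshape(n * 3))
    return np.concatenate(parts)

def segment_on_idle(frames, motion_thresh=1e-3, idle_frames=5):
    """Split a stream of feature vectors into sentence segments whenever
    inter-frame motion stays below `motion_thresh` for `idle_frames`
    consecutive frames (the idle frames themselves are dropped)."""
    segments, current, idle, prev = [], [], 0, None
    for f in frames:
        if prev is not None and np.mean(np.abs(f - prev)) < motion_thresh:
            idle += 1
        else:
            idle = 0
        prev = f
        current.append(f)
        if idle >= idle_frames and len(current) > idle_frames:
            segments.append(np.stack(current[:-idle_frames]))
            current, idle = [], 0
    if current:
        segments.append(np.stack(current))
    return segments
```

Each returned segment is a `(frames, 225)` array, the shape a sentence-level BiLSTM classifier would consume.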
17 pages, 3174 KB  
Article
A Hybrid Model Integrating CNN–BiLSTM for Discriminating Strain and Temperature Effects on FBG-Based Sensors
by Chuanhao Wei, Qiang Liu, Dongdong Lin, Dan Zhu, Jingzhan Shi and Yiping Wang
Photonics 2026, 13(3), 254; https://doi.org/10.3390/photonics13030254 - 4 Mar 2026
Abstract
A primary bottleneck in deploying Fiber Bragg Grating (FBG) sensors lies in their inherent dual sensitivity to thermal and mechanical variations, which mandates robust decoupling mechanisms for precise parameter extraction. To address this persistent cross-sensitivity issue, this study introduces a novel interrogation scheme that integrates a Convolutional Neural Network with a Bidirectional Long Short-Term Memory (CNN-BiLSTM) architecture. Instead of relying on conventional peak-tracking algorithms or isolated central wavelengths, the proposed data-driven strategy directly mines structural features from the full reflection spectra, thereby substantially mitigating cross-interference errors. The experimental results show that the coefficients of determination (R²) for strain and temperature prediction reach 99.37% and 99.75%, respectively, with root mean square errors (RMSEs) of 13.51 µε and 1.42 °C. The proposed method requires only a single FBG sensor, reducing hardware requirements and showing great potential for sensing applications that demand low cost and high adaptability. In addition, for environments where independent temperature information cannot be obtained, a second reference FBG is employed to realize temperature compensation. Furthermore, a spectral differencing method (SDM) is proposed that subtracts the spectra of the two FBGs to obtain spectra containing only strain information; these difference spectra serve as the training dataset, yielding a four-fold accuracy improvement over traditional compensation methods. Finally, the application of the system to distributed FBGs is explored, achieving an absolute peak-wavelength interrogation precision of approximately ±0.02 nm. The system is expected to find application in structural health monitoring, even in harsh environments. Full article
(This article belongs to the Special Issue Fiber Optic Sensors: Advances, Technologies and Applications)
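The reference-FBG temperature compensation mentioned above can be illustrated with the standard linear wavelength-shift model for FBGs, Δλ = K_ε·ε + K_T·ΔT. This is a minimal peak-shift sketch, not the paper's full-spectrum SDM, and the sensitivity coefficients are illustrative nominal values for a grating near 1550 nm, not the paper's calibrated ones.

```python
# Illustrative nominal FBG sensitivities (not calibrated values).
K_EPS = 1.2e-3   # strain sensitivity, nm per microstrain
K_T   = 10.0e-3  # temperature sensitivity, nm per degree C

def compensate_strain(d_lambda_sense, d_lambda_ref, k_eps=K_EPS, k_t=K_T):
    """Recover strain from the sensing FBG's wavelength shift using a
    strain-isolated reference FBG that sees only temperature:
        d_lambda_sense = k_eps * eps + k_t * dT
        d_lambda_ref   = k_t * dT
    Subtracting the reference shift cancels the thermal term
    (assuming both gratings share the same temperature sensitivity)."""
    return (d_lambda_sense - d_lambda_ref) / k_eps

# Forward model: 100 microstrain of loading plus 20 degrees C of heating.
true_eps, true_dT = 100.0, 20.0
d_sense = K_EPS * true_eps + K_T * true_dT   # shift of the sensing FBG
d_ref = K_T * true_dT                        # shift of the reference FBG
eps_hat = compensate_strain(d_sense, d_ref)  # recovers 100 microstrain
```

The paper's SDM goes further by differencing the two full reflection spectra before model training, rather than comparing only the two peak wavelengths.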
21 pages, 3469 KB  
Article
Explainable Monitoring Model Based on AE-BiGRU and SHAP Analysis of Seepage Pressure for Concrete Dams
by Jinji Xie, Yuan Shao, Junzhuo Li, Zihao Jia, Chunjiang Fu, Chenfei Shao, Yanxin Xu and Yating Hu
Water 2026, 18(5), 614; https://doi.org/10.3390/w18050614 - 4 Mar 2026
Viewed by 231
Abstract
Precise forecasting and physical elucidation of seepage behavior are crucial for maintaining the operational safety of concrete dams. Nonetheless, current monitoring methodologies frequently fail to capture the nonlinear temporal relationships in seepage processes and lack straightforward interpretability. To address these shortcomings, this paper presents an explainable monitoring approach that combines an alpha-evolution Bidirectional Gated Recurrent Unit (AE-BiGRU) with Shapley Additive Explanations (SHAP)-based interpretability analysis. An AE-BiGRU prediction model is first developed, in which the BiGRU architecture exploits bidirectional temporal dependencies to enhance prediction accuracy and robustness. The alpha-evolution algorithm is then employed to optimize key hyperparameters of the neural network, further improving model performance. Subsequently, SHAP interpretability analysis is applied to quantify the contribution of individual input variables and to elucidate the physical drivers of seepage variation. Validation on long-term seepage monitoring data from a roller-compacted concrete (RCC) gravity dam indicates that the proposed AE-BiGRU model substantially outperforms benchmark models, including LSTM and conventional GRU variants. Furthermore, SHAP analysis reveals the predominant influence of reservoir water level fluctuations and cumulative temporal factors on seepage evolution patterns. The proposed approach attains high-precision seepage prediction while ensuring physically meaningful interpretability, thus providing a dependable foundation for safety evaluation and intelligent monitoring of concrete dams. Full article
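The SHAP analysis described above assigns each input variable (e.g., reservoir water level, temporal factors) a contribution to a given prediction. For small feature sets, the underlying Shapley values can be computed exactly by brute force, as the hypothetical helper below sketches; practical SHAP tooling uses efficient approximations, and absent features are here replaced by baseline (background) values, the same convention SHAP follows.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley attribution for `model` at input `x`, relative to
    `baseline`. For each feature i, average its marginal contribution
    model(S + {i}) - model(S) over all coalitions S, with the standard
    Shapley weights |S|! (n - |S| - 1)! / n!."""
    n = len(x)
    phi = [0.0] * n
    feats = list(range(n))

    def value(S):
        # Features outside the coalition S take their baseline value.
        z = [x[i] if i in S else baseline[i] for i in feats]
        return model(z)

    for i in feats:
        others = [j for j in feats if j != i]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi
```

For a linear model the Shapley value of feature i reduces to weight_i × (x_i − baseline_i), and the attributions always sum to model(x) − model(baseline) (SHAP's additivity property), which makes the helper easy to sanity-check.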