Search Results (1,214)

Search Parameters:
Keywords = Bidirectional Long Short-Term Memory (BiLSTM)

44 pages, 14806 KB  
Article
An Agricultural Product Price Prediction Model Based on Quadratic Clustering Decomposition and TOC-Optimized Deep Learning
by Fengkai Ye, Ruoqian Li, Danping Wang and Mengyang Li
Algorithms 2026, 19(5), 357; https://doi.org/10.3390/a19050357 - 3 May 2026
Abstract
Accurate forecasting of agricultural product prices is crucial for informed decision-making in agricultural markets; however, such time series are inherently characterized by non-stationarity, multi-scale dynamics, and substantial noise, posing significant challenges to conventional methods. To overcome these limitations, this study proposes a novel hybrid framework, termed TOC-CNN-BiLSTM-SA, built upon a “quadratic decomposition–clustering–optimization” paradigm. Specifically, a composite CEEMDAN–K-means++–VMD approach is first employed to hierarchically decompose the raw price series via coarse decomposition, feature clustering, and refined decomposition, enabling effective noise suppression and multi-scale feature extraction. Subsequently, a deep learning architecture integrating Convolutional Neural Networks (CNNs), Bidirectional Long Short-Term Memory networks (BiLSTM), and a self-attention mechanism is developed, where the CNN captures local patterns, the BiLSTM models bidirectional temporal dependencies, and the attention mechanism enhances global feature representation. Furthermore, the Tornado Optimizer with Coriolis force (TOC) is introduced to adaptively tune key hyperparameters, thereby improving model robustness and generalization capability. Empirical results based on wheat price data from Henan Province, China, demonstrate that the proposed model achieves outstanding predictive performance, with RMSE, MAE, MAPE, and R2 values of 4.425, 3.9372, 0.16%, and 99.97%, respectively, significantly outperforming existing benchmark models. These results indicate that the proposed framework effectively captures complex price dynamics and offers a reliable and practical solution for agricultural price forecasting.
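The self-attention stage described above (enhancing global feature representation over sequence features such as BiLSTM outputs) can be illustrated with generic scaled dot-product self-attention in NumPy. This is a minimal sketch with invented dimensions and random weights, not the authors' implementation:

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a sequence.

    x  : (T, d_in) sequence of feature vectors (e.g. BiLSTM outputs)
    wq, wk, wv : (d_in, d_k) query/key/value projection matrices
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (T, T) pairwise relevance
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ v, weights                      # context vectors, attention map

rng = np.random.default_rng(0)
T, d_in, d_k = 6, 8, 4
x = rng.normal(size=(T, d_in))
wq, wk, wv = (rng.normal(size=(d_in, d_k)) for _ in range(3))
ctx, attn = self_attention(x, wq, wk, wv)
```

Each output row is a convex combination of the value vectors, so every row of the attention map sums to one.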
28 pages, 14737 KB  
Article
SMAPNet: A Hybrid Ship Motion Attitude Prediction Network Integrating Incremental Decomposition
by Zhibo Lei, Yanlin Liu, Zonghan Li, Huibing Gan and Fupeng Sun
J. Mar. Sci. Eng. 2026, 14(9), 843; https://doi.org/10.3390/jmse14090843 - 30 Apr 2026
Abstract
An accurate prediction of the short-term motion attitude of ships is essential for navigation safety and offshore operations. However, conventional time series prediction models have constraints in handling time-varying dynamics and adapting to diverse sea states. Therefore, a Ship Motion Attitude Prediction Network (SMAPNet) based on a Non-Symmetric Tri-Cube Kernel Trend Filter (NTKTF) is proposed in this paper. SMAPNet decomposes temporal signals using a Feature Extraction Block (FEB), fuses local and global features through a Feature Refinement Block (FRB), and integrates a Bidirectional Long Short-Term Memory network (Bi-LSTM) with a self-attention mechanism in a Feature Prediction Block (FPB) for short-term prediction within 1 to 5 s. In this experiment, field-measured data from the ship XIN HONG ZHUAN were employed to construct online prediction scenarios, and a systematic evaluation was conducted from three perspectives: local prediction accuracy, evaluation metrics, and error distribution. The findings indicate that SMAPNet exhibits improved adaptability and prediction accuracy in predicting ship motion attitudes under different sea states. Specifically, in the single-step prediction of roll and pitch under sea states 3 and 4, the mean square errors (MSE) of SMAPNet are reduced by 10.45%, 6.96% and 14.60%, 2.77%, respectively, compared with the best-performing candidate model.
(This article belongs to the Section Ocean Engineering)
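The paper's non-symmetric trend filter is not public, but the tri-cube kernel it builds on is standard: w(u) = (1 − |u|³)³ for |u| < 1, else 0. Below is a minimal symmetric kernel-smoother sketch (function names, bandwidth, and the toy signal are invented) showing how such a filter separates a smooth attitude trend from noise:

```python
import numpy as np

def tricube(u):
    """Tri-cube kernel: w(u) = (1 - |u|^3)^3 for |u| < 1, else 0."""
    u = np.abs(u)
    return np.where(u < 1, (1 - u**3) ** 3, 0.0)

def kernel_trend(y, bandwidth):
    """Symmetric tri-cube moving-kernel trend estimate of a 1-D series."""
    t = np.arange(len(y))
    trend = np.empty_like(y, dtype=float)
    for i in t:
        w = tricube((t - i) / bandwidth)
        trend[i] = np.sum(w * y) / np.sum(w)
    return trend

# Toy roll signal: slow oscillation plus measurement noise.
y = np.sin(np.linspace(0, 4 * np.pi, 200)) \
    + 0.1 * np.random.default_rng(1).normal(size=200)
trend = kernel_trend(y, bandwidth=10)
```

The non-symmetric variant in the paper presumably shifts the kernel window so the filter does not look ahead of the current sample, which matters for online prediction.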
39 pages, 3200 KB  
Article
A Multimodal Audiovisual Deep Learning Framework for Early Detection of Parkinson’s Disease
by Yinpeng Guo, Hua Huo, Yulong Pei, Lan Ma, Shilu Kang, Jiaxin Xu and Aokun Mei
Electronics 2026, 15(9), 1904; https://doi.org/10.3390/electronics15091904 - 30 Apr 2026
Abstract
Parkinson’s disease (PD) is a progressive neurodegenerative disorder primarily caused by the degeneration of dopamine-producing neurons in the substantia nigra, leading to characteristic motor symptoms such as tremors, rigidity, and bradykinesia, as well as non-motor manifestations including depression, sleep disturbances, and speech impairments. Among these symptoms, speech abnormalities affect approximately 90% of individuals with PD, making acoustic analysis a promising non-invasive cue for early detection. However, subtle speech variations are often imperceptible to the human ear, and speech-only analysis may overlook complementary visual manifestations, such as hypomimia (reduced facial expressivity commonly observed in PD patients). To address these limitations, we propose the Parkinson’s Detection via Attentional Fusion Network (PDAF-Net), a novel multimodal deep learning framework for early PD detection that jointly models acoustic and facial dynamic features in a binary classification setting. The proposed architecture consists of a Dual-Stream Feature Encoder (DSFE), with an audio branch based on a one-dimensional convolutional neural network (1D-CNN) and bidirectional long short-term memory (BiLSTM), and a visual branch built upon a two-dimensional convolutional neural network (2D-CNN) and a Transformer encoder. Multimodal integration is achieved through a Cross-Attention-guided Attentional Feature Fusion (CA-AFF) module, which explicitly models bidirectional cross-modal interactions and performs adaptive feature recalibration via an iterative attentional fusion mechanism. We conducted experiments on a self-collected Chinese multimodal dataset comprising 100 PD patients and 100 healthy controls. Although the data are balanced at the subject level, sliding-window segmentation introduces sample-level imbalance; to address this issue, a class-balanced focal loss is employed. Model performance was evaluated using subject-wise five-fold cross-validation. The results demonstrate that PDAF-Net consistently outperforms unimodal baselines across multiple evaluation metrics, achieving an accuracy of 89.3%, an F1-score of 0.884, and an AUC of 0.916. These findings highlight the effectiveness of explicit cross-modal interaction modeling and adaptive feature fusion for improving automated early PD screening in real-world clinical settings.
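The class-balanced focal loss used here to counter window-level imbalance can be sketched generically: focal loss down-weights easy, confidently-correct samples via a (1 − p_t)^γ factor. The α and γ values below are common illustrative defaults, not necessarily the paper's settings:

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss: down-weights easy examples via (1 - p_t)^gamma.

    p : predicted probability of the positive class, in (0, 1)
    y : true label, 0 or 1
    alpha balances the two classes; class-balanced variants derive it
    from the effective number of samples per class.
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)
    p_t = np.where(y == 1, p, 1 - p)          # prob assigned to the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)

# A confident correct prediction contributes far less than a confident error.
easy = focal_loss(np.array([0.95]), np.array([1]))[0]
hard = focal_loss(np.array([0.05]), np.array([1]))[0]
```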
26 pages, 3557 KB  
Article
Short-Term Wind Power Forecasting Using CEEMDAN-CNN-BiLSTM Based on MIC Feature Selection
by Zheng Jiajia, Linjun Zeng, Shuang Liang, Wen Xia, Nuersimanguli Abuduwasiti and Xianhua Zeng
Processes 2026, 14(9), 1456; https://doi.org/10.3390/pr14091456 - 30 Apr 2026
Abstract
To address the issue of insufficient accuracy in wind power forecasting arising from intermittency and volatility, this paper proposes a short-term wind power prediction model integrating MIC (Maximal Information Coefficient) feature selection with a hybrid architecture of complete ensemble empirical mode decomposition with adaptive noise, convolutional neural networks, and a bidirectional long short-term memory network. The main innovations of this work lie in the following: Firstly, MIC quantifies the strength of the nonlinear correlation between meteorological features and power generation, thereby enabling the identification of highly correlated features to reduce the input dimensionality. Secondly, CEEMDAN (Complete Ensemble Empirical Mode Decomposition with Adaptive Noise) performs adaptive modal decomposition on raw power sequences. Combining sample entropy with K-means clustering reconstructs IMFs (Intrinsic Mode Functions), while the introduction of VMD (Variational Mode Decomposition) for quadratic optimisation significantly improves the quality of signal decomposition, enabling a more refined separation of fluctuation characteristics across different time scales. Finally, the optimised meteorological features and reconstructed components are input into a CNN (Convolutional Neural Network)–BiLSTM (Bidirectional Long Short-Term Memory) module. Power regression prediction is achieved through the synergistic effect of spatial feature extraction and bidirectional temporal dependency modelling. Case study results demonstrate that, compared to the TCN (Temporal Convolutional Network)–Transformer, the proposed method achieves a 0.4022 improvement in the coefficient of determination R2, a 13.2598 reduction in MAE (Mean Absolute Error), and a 19.864 decrease in RMSE (Root Mean Square Error). At the same time, it maintains stable performance even when faced with unreliable data scenarios involving random missing features, demonstrating excellent generalisation ability. Furthermore, the model training time has been reduced to 77.6469 s, with a single prediction response time of just 0.0659 s.
(This article belongs to the Section Energy Systems)
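Sample entropy, the complexity measure used above (and in several of these abstracts) to group decomposition components before clustering, has a compact definition: count template pairs of length m and m+1 that match within tolerance r, then take −ln of their ratio. A minimal NumPy sketch with the common defaults m = 2, r = 0.2·std (not necessarily the papers' settings):

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r): regularity measure used to group IMFs by complexity.

    Counts template pairs of length m (B) and m+1 (A) matching within
    tolerance r under the Chebyshev distance; SampEn = -ln(A / B).
    """
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()

    def count(mm):
        n = len(x) - mm + 1
        tpl = np.array([x[i:i + mm] for i in range(n)])
        # pairwise Chebyshev distances; subtract the n self-matches
        d = np.max(np.abs(tpl[:, None, :] - tpl[None, :, :]), axis=-1)
        return (np.sum(d <= r) - n) / 2

    b, a = count(m), count(m + 1)
    return -np.log(a / b)

rng = np.random.default_rng(2)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))   # smooth, low-complexity component
noisy = rng.normal(size=500)                        # noise-like component
se_regular = sample_entropy(regular)
se_noisy = sample_entropy(noisy)
```

Low-entropy (smooth) and high-entropy (noise-like) components can then be routed to different reconstruction groups.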
17 pages, 10447 KB  
Article
A Refined Prediction Model for Regional Zenith Troposphere Combining ICEEMDAN and BiLSTM-XGBoost
by Chao Chen, Yinghao Zhao, Wenyuan Zhang, Yulong Ge, Jiajia Yuan and Chao Hu
Remote Sens. 2026, 18(9), 1381; https://doi.org/10.3390/rs18091381 - 30 Apr 2026
Abstract
To address the degradation of zenith tropospheric delay (ZTD) prediction accuracy caused by time-varying noise and error accumulation in multi-step forecasting, this study proposes an integrated prediction model, named IBX, which combines improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN), bidirectional long short-term memory (BiLSTM), and extreme gradient boosting (XGBoost). In the proposed framework, ICEEMDAN is first used to decompose the original ZTD series into components at different temporal scales. A three-criterion reconstruction strategy based on the Pearson correlation coefficient, dominant period, and sample entropy is then applied to obtain high-, medium-, and low-frequency subsequences with clearer physical meanings. BiLSTM and XGBoost are used to predict the reconstructed components, and their outputs are fused through a root mean square error (RMS)-based weighting strategy to improve forecasting robustness. Hourly ZTD data from 27 global navigation satellite system (GNSS) stations in China from 2011 to 2020 were used for model validation under 1–12 h rolling forecasting horizons. The results show that IBX achieves the best overall performance among the tested models. Its mean RMS and mean absolute error (MAE) over the 1–12 h horizons are 14.17 mm and 10.24 mm, respectively, which are 22.5% and 21.4% lower than those of the baseline BiLSTM model. Spatial and climate-region-based analyses further indicate that ZTD prediction accuracy is strongly affected by altitude, regional moisture conditions, and climate type. The proposed IBX model shows stable error suppression across heterogeneous station environments, especially in the temperate monsoon region and low-altitude regions with complex water vapor variability. These results demonstrate that IBX provides a reliable and physically interpretable approach for short- to medium-term ZTD forecasting and real-time atmospheric delay correction. Full article
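The abstract does not spell out the RMS-based weighting that fuses the BiLSTM and XGBoost outputs; one plausible reading is inverse-RMS weighting computed on validation residuals, sketched below with hypothetical residuals and predictions (the paper's exact formula may differ):

```python
import numpy as np

def rms_weights(errors):
    """Inverse-RMS weighting: lower validation RMS -> larger fusion weight.

    errors : list of per-model residual arrays on a validation window.
    Returns non-negative weights that sum to 1.
    """
    rms = np.array([np.sqrt(np.mean(e ** 2)) for e in errors])
    inv = 1.0 / rms
    return inv / inv.sum()

def fuse(preds, weights):
    """Weighted sum of the base models' predictions."""
    return np.sum([w * p for w, p in zip(weights, preds)], axis=0)

# Hypothetical validation residuals for the two base learners.
resid_bilstm = np.array([1.0, -2.0, 1.5])
resid_xgb = np.array([0.5, 0.5, -0.5])
w = rms_weights([resid_bilstm, resid_xgb])

pred_bilstm = np.array([10.0, 11.0, 12.0])
pred_xgb = np.array([10.5, 10.5, 11.5])
fused = fuse([pred_bilstm, pred_xgb], w)
```

The model with the smaller validation RMS (here the XGBoost stand-in) receives the larger weight.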
25 pages, 2185 KB  
Article
A Bidirectional Spatiotemporal Deep Learning Model with Integrated Vegetation–Thermal Features for Wildfire Detection
by Han Luo, Ming Wang, Lei He, Bin Liu, Yuxia Li and Dan Tang
Remote Sens. 2026, 18(9), 1376; https://doi.org/10.3390/rs18091376 - 29 Apr 2026
Abstract
The rising frequency and severity of wildfires demand quicker identification capabilities. Polar-orbiting satellites with medium and high resolution can accurately identify wildfires, and the majority of available fire detection images originate from such platforms. However, their low temporal revisit rates restrict the potential for early warning. Geostationary satellites provide minute-level, continuous monitoring that matches the quick onset of wildfires; however, their dependence on conventional threshold methods and coarse spatial resolution result in notable detection errors. To overcome these restrictions, this study developed an integrated deep learning framework for accurate wildfire detection in low-resolution geostationary imagery. A novel dynamic index, the Dynamic Normalized Burn Ratio–Thermal (DNBRT), was proposed to characterize wildfire progression by integrating instantaneous thermal anomalies with dynamic vegetation signals. Based on this, a Fire Spatiotemporal Network (FST-Net) was designed, with an efficient residual backbone, a Convolutional Block Attention Module (CBAM) for feature refinement, and a Bidirectional Long Short-Term Memory (BiLSTM) network to capture temporal evolution. Trained and evaluated on an FY-4B-based fire/non-fire dataset, the proposed framework demonstrated superior performance. FST-Net outperformed benchmark models, improving accuracy and recall by averages of 10.30% and 9.32%, respectively, while achieving faster inference speed. An ablation experiment confirmed the critical role of fusing thermal and vegetation features in DNBRT, with 92.7% accuracy and 94.9% recall. Compared to the FY-4B fire product, the proposed framework enables earlier detection, maintains more complete tracking of fire progression, and exhibits greater robustness under complex burning conditions while achieving sub-hectare (0.36 ha) detection sensitivity at the 2 km resolution. By synergizing a discriminative dynamic index with an efficient spatiotemporal architecture, this work provides an effective solution for operational, real-time monitoring of small and early-stage wildfires from geostationary satellites.
(This article belongs to the Special Issue Remote Sensed Image Processing and Geospatial Intelligence)
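DNBRT itself is the paper's novel index, but its vegetation building block, the Normalized Burn Ratio, is standard: NBR = (NIR − SWIR) / (NIR + SWIR). A toy sketch with invented reflectance values (the thermal fusion step is not reproduced):

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR).

    Healthy vegetation reflects strongly in NIR; burning raises SWIR,
    so NBR drops sharply after a fire.
    """
    nir, swir = np.asarray(nir, float), np.asarray(swir, float)
    return (nir - swir) / (nir + swir + 1e-12)

# Toy reflectances: the same pixel before vs. after burning.
pre = nbr(nir=0.45, swir=0.15)    # high NBR for vegetated pixel
post = nbr(nir=0.15, swir=0.40)   # negative NBR after the burn
dnbr = pre - post                 # delta-NBR highlights the change
```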
37 pages, 2045 KB  
Article
A Hybrid Artificial Intelligence Framework for Reliable and Seamless Vertical Handover in Next-Generation Heterogeneous Networks
by Sunisa Kunarak
Big Data Cogn. Comput. 2026, 10(5), 139; https://doi.org/10.3390/bdcc10050139 - 29 Apr 2026
Abstract
Next-generation heterogeneous wireless networks (HetNets) comprising LTE macro-cells, 5G New Radio (NR) small cells, and WiFi 6 access points aim to provide seamless connectivity under diverse mobility scenarios. However, vertical handover (VHO) remains a performance bottleneck because of the highly variable radio environments, dynamic user mobility, stringent quality of service (QoS) requirements, and the coexistence of multi-tier access technologies. Existing handover approaches based on deep learning and deep reinforcement learning (DRL) suffer from limitations: deep learning models lack decision-making capabilities, whereas DRL models, particularly deep Q-network (DQN)-based policies, face Q-value overestimation and unstable convergence. To overcome these limitations, this paper introduces a Hybrid deep double-Q networks (DDQN)–bidirectional long short-term memory (Bi-LSTM) Framework that integrates bi-directional mobility prediction and DRL-based adaptive decision-making. The Bi-LSTM module captures forward and backward temporal dependencies and predicts future Received Signal Strength (RSS) trajectories, mobility dynamics, and cell-edge transitions. The DDQN module stabilizes the action value estimation, mitigates overestimation bias, and enables context-aware handover decisions. A multi-tier simulation environment consisting of LTE, 5G NR, and WiFi 6 networks was developed using realistic path loss, shadowing, interference, and mobility models. Extensive evaluations demonstrated substantial improvements in mobility prediction accuracy, handover stability, radio link reliability, throughput efficiency, and latency reduction compared to conventional RSS-based and DQN-based schemes. The findings highlight the effectiveness of integrating predictive intelligence with reinforcement learning for reliable mobility management in 5G-Advanced and emerging 6G networks. Full article
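The DDQN module's fix for Q-value overestimation is the standard double-DQN target: the online network selects the next action, and the target network evaluates it, instead of taking the target network's own maximum. A toy NumPy sketch (the Q-values and reward are invented; actions stand in for candidate networks such as LTE / 5G NR / WiFi 6):

```python
import numpy as np

def ddqn_target(reward, q_online_next, q_target_next, gamma=0.9, done=False):
    """Double-DQN target: action selected by the online net, value taken
    from the target net, which curbs the max-operator overestimation of
    vanilla DQN."""
    if done:
        return reward
    a_star = int(np.argmax(q_online_next))         # selection: online net
    return reward + gamma * q_target_next[a_star]  # evaluation: target net

q_online_next = np.array([1.0, 3.0, 2.0])   # online net prefers action 1
q_target_next = np.array([1.2, 2.5, 2.9])   # target net's estimates
y_ddqn = ddqn_target(0.5, q_online_next, q_target_next)
y_dqn = 0.5 + 0.9 * q_target_next.max()     # vanilla DQN uses max directly
```

Because the evaluated action need not be the target network's argmax, the double-DQN target never exceeds the vanilla DQN target for the same Q-tables.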
20 pages, 3466 KB  
Review
AI-Driven Hybrid Detection and Classification Framework for Secure Sleep Health IoT Networks
by Prajoona Valsalan and Mohammad Maroof Siddiqui
Clocks & Sleep 2026, 8(2), 23; https://doi.org/10.3390/clockssleep8020023 - 28 Apr 2026
Abstract
Sleep disorders, such as insomnia, obstructive sleep apnea (OSA), narcolepsy, REM sleep behavior disorder, and circadian rhythm disturbances, represent a rapidly expanding global health burden that is strongly associated with cardiovascular, metabolic, neurological, and psychiatric diseases. Advancements in wearable sensing technologies and Internet of Medical Things (IoMT) infrastructures have expanded the possibilities for continuous, home-based sleep assessment beyond conventional polysomnography laboratories. These Sleep Health Internet of Things (S-HIoT) systems combine multimodal physiological sensing (EEG, ECG, SpO2, respiratory effort, and actigraphy) with wireless communication and cloud-based analytics for automated sleep-stage classification and disorder detection. Nonetheless, the digitization of sleep medicine brings significant cybersecurity concerns. The constant transmission of sensitive biomedical information leaves S-HIoT networks vulnerable to anomalous traffic flows, signal manipulation, replay attacks, spoofing, and data integrity violations. Existing studies mostly analyze physiological signals and network intrusion detection independently, resulting in a systemic vulnerability of cyber–physical sleep monitoring ecosystems. To address this deficiency, this review integrates emerging advances (2022–2026) in AI-assisted sleep-stage classification and IoMT anomaly detection, with a closer analysis of CNN, LSTM/BiLSTM, and Transformer-based systems, as well as federated schemes and lightweight, edge-deployable intrusion detection models. The review identifies a gap in the literature: integrated architectures that balance the fidelity of physiological modeling with communication-layer security. To fill this gap, we present a unified framework combining CNN-based spatial feature extraction, Bidirectional Long Short-Term Memory (BiLSTM)-based temporal modeling, and Random Forest-based ensemble classification in a dual-task learning approach. We propose a multi-objective optimization framework to jointly optimize sleep-stage prediction and network anomaly detection performance. Evaluation on publicly available datasets (Sleep-EDF and CICIoMT2024) confirms that hybrid integration can achieve high accuracy (99.8% for sleep staging; 98.6% for anomaly detection) with low inference latency (<45 ms), which is promising for real-time deployment on edge devices. This work presents a comprehensive framework for developing secure, intelligent, and clinically robust digital sleep health ecosystems by bridging chronobiological signal modeling with cybersecurity mechanisms. Furthermore, it highlights future research directions, including explainable AI, federated secure learning, adversarial robustness, and energy-aware edge optimization.
(This article belongs to the Section Computational Models)
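The sliding-window segmentation that turns subject-balanced recordings into imbalanced window-level samples can be sketched minimally; window and stride values below are illustrative:

```python
import numpy as np

def sliding_windows(signal, width, stride):
    """Segment a 1-D recording into overlapping fixed-width windows.

    Overlap (stride < width) multiplies the samples per recording, which
    is why subject-level balance does not guarantee window-level balance
    when recordings differ in length.
    """
    n = (len(signal) - width) // stride + 1
    return np.stack([signal[i * stride:i * stride + width] for i in range(n)])

sig = np.arange(100.0)                 # stand-in for a 100-sample epoch
wins = sliding_windows(sig, width=30, stride=10)
```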
16 pages, 919 KB  
Article
A Comparative Performance Study of Host-Based Intrusion Detection Using TextRank-Based System Call Preprocessing and Deep Learning Models
by Hyunwook You, Chulgyun Park, Dongkyoo Shin and Dongil Shin
Electronics 2026, 15(9), 1856; https://doi.org/10.3390/electronics15091856 - 27 Apr 2026
Abstract
Host-based intrusion detection systems (HIDSs) can address the limitations of network-based detection by analyzing system calls and other low-level events. Many existing benchmark datasets remain inadequate for evaluating modern attacks because they were built in outdated environments and cover only a limited set of attack behaviors. To address this gap, this study builds a TextRank-based preprocessing pipeline on the LID-DS 2021 dataset and compares five end-to-end pipelines: Random Forest (RF), Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN) + LSTM, Bidirectional LSTM (BiLSTM), and CNN + Bidirectional Gated Recurrent Unit (BiGRU). Of the 15 scenarios in the dataset, six multi-stage attacks were excluded, and three representative scenarios were selected based on attack-category coverage and suitability for single-chunk host-level detection. Within these three selected scenarios and same-scenario file-level splits, the deep learning pipelines achieved F1-scores of 0.90–0.94, whereas RF ranged from 0.55 to 0.63. Among the evaluated pipelines, CNN + BiGRU produced the strongest overall results. These findings indicate that, under this constrained evaluation setting, sequential deep learning pipelines can be effective for scenario-specific system-call-based HIDS; however, broader generalization to unseen attacks or to the full LID-DS 2021 scenario set remains unverified.
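TextRank-style preprocessing ranks tokens by co-occurrence centrality (PageRank over an undirected co-occurrence graph). A minimal pure-Python sketch over a toy system-call trace; window size, damping, iteration count, and the trace itself are illustrative, not the paper's configuration:

```python
from collections import defaultdict
from itertools import combinations

def textrank(tokens, window=2, d=0.85, iters=50):
    """Rank tokens (e.g. system-call names) by undirected co-occurrence
    centrality within a sliding window -- the core of TextRank."""
    adj = defaultdict(set)
    for i in range(len(tokens)):
        for a, b in combinations(tokens[i:i + window + 1], 2):
            if a != b:
                adj[a].add(b)
                adj[b].add(a)
    score = {t: 1.0 for t in adj}
    for _ in range(iters):
        score = {
            t: (1 - d) + d * sum(score[n] / len(adj[n]) for n in adj[t])
            for t in adj
        }
    return score

calls = ["open", "read", "read", "write", "open", "read", "close"]
scores = textrank(calls)
top = max(scores, key=scores.get)
```

Calls that co-occur with many distinct neighbors score highest, so the ranking can be used to keep salient calls or weight them before sequence modeling.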
42 pages, 10246 KB  
Article
Enhancing Karst Spring Discharge Simulation Through a Hybrid XGBoost–BiLSTM Machine Learning Framework
by Mohamed Hamdy Eid, Attila Kovács and Péter Szűcs
Water 2026, 18(9), 1038; https://doi.org/10.3390/w18091038 - 27 Apr 2026
Abstract
Accurate simulation of karst spring discharge is critical for sustainable water resource management, yet it remains a significant challenge due to the inherent complexity, heterogeneity, and non-linearity of karst systems. While machine learning models have been increasingly applied to this problem, standalone algorithms often struggle to simultaneously capture complex temporal dependencies and maintain robust generalization. This study provides a comprehensive comparative assessment of five state-of-the-art machine learning (ML) models for forecasting the daily discharge of the Jósva Spring, located in the World Heritage Aggtelek karst area. The main goal of the study is to determine which modern machine learning approach can most accurately forecast the daily discharge of the Jósva Spring using meteorological data and the discharge of a hydraulically connected upstream spring. This is motivated by the need for a reliable operational prediction tool for complex karst aquifers, improved water-resource management in a climate-sensitive region, and a lack of comparative studies evaluating multiple ML paradigms on the same karst system. The study also aimed to compare the predictive performance of five state-of-the-art ML models to identify the most accurate and robust model and to understand the predictability of the karst system by analyzing feature importance, lag effects, and temporal dependencies. Three tree-based ensemble models (Random Forest, XGBoost, and Extra Trees) and two deep learning architectures (a Bidirectional Long Short-Term Memory network, BiLSTM, and a novel Hybrid XGBoost–BiLSTM model) were trained using a five-year (2015–2019) daily dataset comprising rainfall, temperature, and upstream discharge. The modeling framework was designed for synchronous simulation (lead time = 0 days), estimating concurrent downstream discharge using upstream and meteorological measurements from the same time step. A rigorous feature-engineering workflow was implemented based on statistical characterization, correlation analysis, and time-series diagnostics. Models were trained on 80% of the dataset and evaluated on an independent 20% test set. The results demonstrate that the proposed Hybrid XGBoost–BiLSTM model achieved the highest predictive accuracy on the unseen test data (R2 = 0.74, NSE = 0.74, RMSE = 716.35 L/min). While the standalone tree-based models, particularly XGBoost (R2 = 0.66), also exhibited strong and competitive performance, the hybrid architecture provided a consistent and measurable improvement across all evaluation metrics. The hybrid model’s success is attributed to its synergistic design, which leverages the powerful feature extraction and refinement capabilities of XGBoost to provide a more informative input space for the BiLSTM, thereby enhancing its ability to capture complex temporal dependencies while mitigating overfitting. Feature importance analysis confirmed that upstream discharge at a 3-day lag was the most critical predictor, highlighting the system’s hydraulic connectivity. This research provides clear, evidence-based guidance showing that hybrid machine learning architectures, which integrate the strengths of different modeling paradigms, represent the most effective approach for developing robust and reliable operational prediction tools for complex karst aquifers.
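The lag analysis above can be mirrored in code: building a lagged design matrix is the usual way a predictor such as upstream discharge at a 3-day lag enters a tree or BiLSTM model. A minimal sketch with a stand-in series (names and lag set illustrative):

```python
import numpy as np

def add_lags(series, lags):
    """Build a lagged design matrix: column j holds the series shifted by
    lags[j]. Rows needing data before the series start are dropped, so
    the target must be trimmed by max(lags) as well."""
    series = np.asarray(series, dtype=float)
    k = max(lags)
    cols = [series[k - lag:len(series) - lag] for lag in lags]
    return np.column_stack(cols)

upstream = np.arange(10.0)               # stand-in for upstream discharge
X = add_lags(upstream, lags=[1, 2, 3])   # includes the 3-day-lag predictor
```

Row 0 of X aligns with target index 3: its columns hold the values from 1, 2, and 3 steps earlier.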
20 pages, 8588 KB  
Article
Robust SOH Estimation for Batteries via Deep Learning Under Incomplete Measurements
by Jenhao Teng, Kuanyu Lin and Pingtse Lee
Energies 2026, 19(9), 2100; https://doi.org/10.3390/en19092100 - 27 Apr 2026
Abstract
Battery state-of-health (SOH) estimation is essential for the safety and reliability of energy storage systems. However, incomplete measurements due to sensor or communication failures pose significant challenges for accurate prediction. This paper proposes a robust SOH estimation framework using a minimal 5 min observation window to handle high data sparsity in both random and latter-half missing scenarios. Three Deep Learning (DL) architectures—Long Short-Term Memory (LSTM), Bidirectional LSTM (BiLSTM), and Transformer—are evaluated for data imputation and SOH estimation against traditional polynomial fitting. Simulation results on the NASA benchmark dataset demonstrate that the proposed LSTM model achieves high accuracy, with an RMSE of 0.8522 on complete data. For imperfect data scenarios, BiLSTM-based imputation effectively suppresses extreme deviations, reducing the Maximum Error (MxE) by 44% (from 14.04 to 7.85) compared to traditional polynomial methods. Furthermore, in challenging terminal missing-data cases, a hybrid LSTM-Transformer strategy maintains physical consistency, achieving a superior RMSE of 1.0026. These findings confirm that the proposed DL-based framework significantly outperforms conventional techniques, providing a robust and reliable solution for real-time battery health monitoring under unpredictable data conditions. Full article
(This article belongs to the Section D: Energy Storage and Application)
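The traditional polynomial-fitting baseline that the deep imputers are compared against can be sketched with np.polyfit; the degree and the toy capacity-fade curve below are illustrative, not the paper's configuration:

```python
import numpy as np

def polyfit_impute(t, y, deg=3):
    """Fill NaN gaps by fitting a degree-`deg` polynomial to observed points.

    This is the classical baseline for battery-capacity series; it tends to
    extrapolate poorly when the missing block sits at the series end, which
    is why the latter-half-missing scenario is the hard case."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    obs = ~np.isnan(y)
    coeffs = np.polyfit(t[obs], y[obs], deg)
    filled = y.copy()
    filled[~obs] = np.polyval(coeffs, t[~obs])
    return filled

t = np.arange(20.0)
soh = 100 - 0.8 * t            # toy linear capacity fade
soh_missing = soh.copy()
soh_missing[5:9] = np.nan      # a random interior missing block
soh_filled = polyfit_impute(t, soh_missing)
```

With an interior gap and smooth data the fit recovers the curve; terminal gaps force extrapolation, where the deep models in the abstract have the advantage.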
28 pages, 3444 KB  
Article
A Lightweight Method for Power Quality Disturbance Recognition Based on Optimized VMD and CNN–Transformer
by Dongya Xiao, Jiaming Liu, Haining Liu and Yang Zhao
Electronics 2026, 15(9), 1832; https://doi.org/10.3390/electronics15091832 - 26 Apr 2026
Abstract
Aiming at the issues of low recognition accuracy and high model computational complexity for power quality disturbances (PQDs) in strong-noise environments, this paper proposes a novel lightweight PQD-recognition method that integrates a hybrid architecture of variational mode decomposition (VMD), convolutional neural network (CNN), and transformer. Firstly, a hybrid optimization algorithm named the monkey–genetic hybrid optimization algorithm (MGHOA) is proposed to optimize VMD parameters for denoising disturbance signals, thereby enhancing recognition accuracy in noisy environments. Secondly, to fully extract disturbance signal features and reduce the computational complexity of the model, a lightweight CNN–transformer model is designed. Depthwise separable convolution (DSC) is employed to extract local features, and the multi-head attention mechanism of the transformer is utilized to mine long-distance dependencies and global features, thereby enhancing the feature representation. Thirdly, a multitask joint-learning method is proposed to collaboratively optimize classification accuracy and temporal localization tasks, enhancing the discrimination of similar disturbances. Additionally, a dual-pooling global feature fusion strategy is designed to further enhance the model’s ability to discriminate complex disturbances. Comparative experiments on 16 typical PQD types demonstrate that the proposed method achieves excellent performance in recognition accuracy, model robustness, and computational efficiency. The integration of the MGHOA–VMD module improves recognition accuracy by 1.08%, while the multitask joint-learning method contributes an additional 0.55% improvement. While achieving recognition accuracy comparable to that of complex models, the proposed method requires a training time of only 36.51% of that of DeepCNN and merely 5.90% of that of bidirectional long short-term memory (BiLSTM), with a 31.22% reduction in parameter scale. This work provides a novel solution for intelligent power quality disturbance recognition.
(This article belongs to the Section Power Electronics)
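The depthwise separable convolution that the abstract credits for the model's light footprint can be sketched as below. This is a minimal PyTorch illustration of DSC itself, not the paper's implementation; the channel and kernel sizes are arbitrary example values.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    """Depthwise separable 1-D convolution: a per-channel (depthwise)
    convolution followed by a 1x1 (pointwise) convolution, which cuts the
    parameter count relative to a standard Conv1d of the same shape."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Compare parameter counts against a standard convolution.
std = nn.Conv1d(64, 128, 3, padding=1)
dsc = DepthwiseSeparableConv1d(64, 128, 3)
n_std = sum(p.numel() for p in std.parameters())
n_dsc = sum(p.numel() for p in dsc.parameters())

x = torch.randn(8, 64, 256)   # (batch, channels, samples)
y = dsc(x)
print(y.shape, n_dsc, n_std)  # same output shape, far fewer parameters
```

For these sizes the standard convolution has 24,704 parameters versus 8,576 for the separable version, which is the kind of reduction that makes such CNN–transformer hybrids lightweight.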
21 pages, 1850 KB  
Article
A Spatio-Temporal Hybrid Multi-Head Attention Model for AIS-Based Ship Trajectory Prediction
by Yuhui Liu, Xiongguan Bao, Shuangming Li, Chenhui Gu and Qihua Fang
Future Transp. 2026, 6(3), 94; https://doi.org/10.3390/futuretransp6030094 - 24 Apr 2026
Abstract
To improve ship AIS trajectory prediction under pronounced spatiotemporal coupling and dynamic maneuvering conditions, this study proposes a Spatio-Temporal-Hybrid-Multi-head Attention model (STHA) integrating multiscale convolution, bidirectional long short-term memory, and multi-head attention. Historical AIS data from the Zhoushan waters in 2024 were preprocessed through screening, cleaning, outlier removal, resampling, and cubic spline interpolation to construct trajectory samples. Comparative experiments were conducted against BP, BiLSTM, and BiGRU using MAPE, RMSE, and R2 as evaluation metrics. The results show that STHA achieves the best overall predictive performance, more accurately follows trajectory variations across different vessel types, and exhibits better robustness in scenarios involving turning and speed changes. These findings indicate that the proposed model is effective for high-precision ship trajectory prediction and can provide useful support for subsequent collision risk assessment and navigation safety assistance. Full article
(This article belongs to the Special Issue Next-Generation AI and Foundation Models for Transportation Systems)
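Two of the named components, a BiLSTM encoder with self-attention over its outputs, can be wired together as in the hypothetical sketch below. This is not the authors' STHA model (their multiscale convolution stage is omitted); feature count, hidden size, and head count are assumed example values for AIS-style (lat, lon, speed, course) sequences.

```python
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    """Minimal BiLSTM + multi-head self-attention regressor that maps a
    window of AIS features to the next (lat, lon) position."""
    def __init__(self, n_feat=4, hidden=32, heads=4):
        super().__init__()
        self.bilstm = nn.LSTM(n_feat, hidden, batch_first=True,
                              bidirectional=True)
        self.attn = nn.MultiheadAttention(embed_dim=2 * hidden,
                                          num_heads=heads, batch_first=True)
        self.head = nn.Linear(2 * hidden, 2)       # next (lat, lon)

    def forward(self, x):                          # x: (batch, time, n_feat)
        h, _ = self.bilstm(x)                      # (batch, time, 2*hidden)
        a, _ = self.attn(h, h, h)                  # self-attention over time
        return self.head(a[:, -1])                 # last step -> prediction

model = TrajectoryPredictor()
seq = torch.randn(16, 20, 4)   # 16 trajectories, 20 steps, 4 features
pred = model(seq)
print(pred.shape)
```

Bidirectional encoding doubles the feature dimension (here 64), so the attention `embed_dim` must match `2 * hidden` and be divisible by the number of heads.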
15 pages, 6831 KB  
Article
Multi-Class Arrhythmia Detection from PPG Signals Based on VGG-BiLSTM Hybrid Deep Learning Model
by Shiyong Li, Jiaying Mo, Jiating Pan, Zhengguang Zheng, Qunfeng Tang and Zhencheng Chen
Biosensors 2026, 16(5), 235; https://doi.org/10.3390/bios16050235 - 23 Apr 2026
Abstract
Arrhythmia is a common and potentially life-threatening cardiovascular condition. Photoplethysmography (PPG) has emerged as a noninvasive alternative to electrocardiography for cardiac rhythm monitoring, yet most PPG-based methods remain limited to binary classification. In this study, a new deep learning approach is proposed for classifying six arrhythmia types from PPG data: sinus rhythm (SR), premature ventricular contraction (PVC), premature atrial contraction (PAC), ventricular tachycardia (VT), supraventricular tachycardia (SVT), and atrial fibrillation (AF). The raw PPG signal is augmented with its first and second derivatives to capture morphological features not readily apparent in the original signal. A hybrid architecture, VGG-BiLSTM, is used, combining VGG convolutional layers for spatial feature extraction with bidirectional long short-term memory layers for modeling temporal dependencies. A stratified data-splitting strategy is further adopted to address class imbalance across arrhythmia types. A publicly available dataset containing 46,827 PPG segments from 91 individuals was used to assess the proposed method, which achieved an overall accuracy, sensitivity, specificity, and F1 score of 88.7%, 78.5%, 97.6%, and 80.5%, respectively. Full article
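The derivative-channel augmentation described in the abstract, stacking the PPG segment with its first and second derivatives, can be sketched with NumPy as below. The sampling rate and the toy sinusoidal "pulse" are assumed example values, not taken from the paper's dataset.

```python
import numpy as np

def ppg_with_derivatives(ppg, fs=125.0):
    """Stack a PPG segment with its first and second derivatives as
    channels, exposing velocity/acceleration morphology to the network."""
    d1 = np.gradient(ppg, 1.0 / fs)   # first derivative (velocity PPG)
    d2 = np.gradient(d1, 1.0 / fs)    # second derivative (acceleration PPG)
    return np.stack([ppg, d1, d2])    # shape: (3, n_samples)

t = np.linspace(0, 2, 250)                # 2 s of a toy 1.2 Hz waveform
ppg = np.sin(2 * np.pi * 1.2 * t)
x = ppg_with_derivatives(ppg)
print(x.shape)                            # three channels per segment
```

The resulting three-channel segment is what a hybrid such as VGG-BiLSTM would consume, with the convolutional layers reading across channels and the recurrent layers reading across time.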

20 pages, 2383 KB  
Article
Enhanced Sentiment Analysis of E-Commerce Product Reviews Using Luong Attention-Based Bi-LSTM
by Orken Mamyrbayev, Dinara Mussayeva and Turdybek Kurmetkan
Information 2026, 17(5), 398; https://doi.org/10.3390/info17050398 - 22 Apr 2026
Abstract
The rapid growth of e-commerce has highlighted the critical need for efficient customer review sentiment analysis, yet natural language complexities such as sarcasm and mixed sentiments remain challenging. To address these ambiguities, this study proposes a novel sentiment analysis architecture. The methodology integrates a bidirectional Long Short-Term Memory (Bi-LSTM) network with a Luong Attention mechanism. The Bi-LSTM component models the sequential and bidirectional context of the text, while the Luong Attention mechanism isolates and emphasizes the most significant parts of the reviews for precise sentiment detection. The proposed hybrid model outperforms traditional methods, achieving an accuracy of 96.67%, a precision of 96.83%, and a recall of 96.67%, with relatively little overfitting. Ultimately, the findings confirm that this architecture effectively manages ambiguous language and is well suited to large-scale, real-time sentiment analysis, offering robust analytical tools for shaping e-commerce marketing strategies. Full article
(This article belongs to the Section Artificial Intelligence)
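Luong ("general") attention over Bi-LSTM outputs can be sketched as below: each time step is scored against a query (here the final hidden state) and the outputs are combined into a weighted context vector. This is a minimal illustration under assumed embedding and hidden sizes, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class LuongAttention(nn.Module):
    """Luong 'general' attention: score(q, k) = k^T W q, softmaxed over
    time steps to produce a context vector from the encoder outputs."""
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)

    def forward(self, query, keys):    # query: (B, D), keys: (B, T, D)
        scores = torch.bmm(self.W(keys), query.unsqueeze(2))  # (B, T, 1)
        weights = torch.softmax(scores, dim=1)                # over time
        context = (weights * keys).sum(dim=1)                 # (B, D)
        return context, weights.squeeze(2)

bilstm = nn.LSTM(50, 64, batch_first=True, bidirectional=True)
attn = LuongAttention(128)            # 2 * hidden from the Bi-LSTM
x = torch.randn(4, 30, 50)            # 4 reviews, 30 tokens, 50-d embeddings
out, _ = bilstm(x)                    # (4, 30, 128)
ctx, w = attn(out[:, -1], out)        # attend with the last time step
print(ctx.shape, w.shape)
```

The context vector `ctx` would then feed a small classifier head; the attention weights `w` sum to one per review and indicate which tokens drove the sentiment decision.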
