Search Results (6,611)

Search Parameters:
Keywords = long short-term memory network

28 pages, 14737 KB  
Article
SMAPNet: A Hybrid Ship Motion Attitude Prediction Network Integrating Incremental Decomposition
by Zhibo Lei, Yanlin Liu, Zonghan Li, Huibing Gan and Fupeng Sun
J. Mar. Sci. Eng. 2026, 14(9), 843; https://doi.org/10.3390/jmse14090843 - 30 Apr 2026
Abstract
Accurate prediction of the short-term motion attitude of ships is essential for navigation safety and offshore operations. However, conventional time series prediction models have constraints in handling time-varying dynamics and adapting to diverse sea states. Therefore, a Ship Motion Attitude Prediction Network (SMAPNet) based on a Non-Symmetric Tri-Cube Kernel Trend Filter (NTKTF) is proposed in the present paper. SMAPNet decomposes temporal signals using a Feature Extraction Block (FEB), fuses local and global features through a Feature Refinement Block (FRB), and, in a Feature Prediction Block (FPB), integrates a Bidirectional Long Short-Term Memory network (Bi-LSTM) with a self-attention mechanism for short-term prediction within 1 to 5 s. In the experiments, field-measured data from the ship XIN HONG ZHUAN were employed to construct online prediction scenarios, and a systematic evaluation was conducted from three perspectives: local prediction accuracy, evaluation metrics, and error distribution. The findings indicate that SMAPNet exhibits improved adaptability and accuracy in predicting ship motion attitudes under different sea states. Specifically, in single-step prediction of roll and pitch under sea states 3 and 4, the mean square errors (MSE) of SMAPNet are reduced by 10.45% and 6.96% (roll) and by 14.60% and 2.77% (pitch), respectively, compared with the best-performing candidate model. Full article
(This article belongs to the Section Ocean Engineering)
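The tri-cube kernel named in the NTKTF above is the classical weight w(u) = (1 - |u|^3)^3 for |u| < 1, familiar from LOESS smoothing. As a minimal sketch of trend filtering with that kernel (a symmetric smoother; the paper's non-symmetric variant is not specified in the abstract):

```python
def tricube(u):
    """Classical tri-cube kernel: (1 - |u|^3)^3 on |u| < 1, else 0."""
    a = abs(u)
    return (1 - a ** 3) ** 3 if a < 1 else 0.0

def tricube_trend(x, half_width):
    """Tri-cube weighted moving average of a sequence x.
    Illustrative symmetric smoother only; the paper's NTKTF uses a
    non-symmetric variant whose exact form the abstract does not give."""
    n = len(x)
    trend = []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - half_width), min(n, i + half_width + 1)):
            w = tricube((j - i) / (half_width + 1))
            num += w * x[j]
            den += w
        trend.append(num / den)
    return trend
```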
39 pages, 3200 KB  
Article
A Multimodal Audiovisual Deep Learning Framework for Early Detection of Parkinson’s Disease
by Yinpeng Guo, Hua Huo, Yulong Pei, Lan Ma, Shilu Kang, Jiaxin Xu and Aokun Mei
Electronics 2026, 15(9), 1904; https://doi.org/10.3390/electronics15091904 - 30 Apr 2026
Abstract
Parkinson’s disease (PD) is a progressive neurodegenerative disorder primarily caused by the degeneration of dopamine-producing neurons in the substantia nigra, leading to characteristic motor symptoms such as tremors, rigidity, and bradykinesia, as well as non-motor manifestations including depression, sleep disturbances, and speech impairments. Among these symptoms, speech abnormalities affect approximately 90% of individuals with PD, making acoustic analysis a promising non-invasive cue for early detection. However, subtle speech variations are often imperceptible to the human ear, and speech-only analysis may overlook complementary visual manifestations, such as hypomimia—reduced facial expressivity commonly observed in PD patients. To address these limitations, we propose Parkinson’s Detection via Attentional Fusion Network (PDAF-Net), a novel multimodal deep learning framework for early PD detection that jointly models acoustic and facial dynamic features in a binary classification setting. The proposed architecture consists of a Dual-Stream Feature Encoder (DSFE), with an audio branch based on a one-dimensional convolutional neural network (1D-CNN) and bidirectional long short-term memory (BiLSTM), and a visual branch built upon a two-dimensional convolutional neural network (2D-CNN) and a Transformer encoder. Multimodal integration is achieved through a Cross-Attention-guided Attentional Feature Fusion (CA-AFF) module, which explicitly models bidirectional cross-modal interactions and performs adaptive feature recalibration via an iterative attentional fusion mechanism. We conducted experiments on a self-collected Chinese multimodal dataset comprising 100 PD patients and 100 healthy controls. Although the data are balanced at the subject level, sliding-window segmentation introduces sample-level imbalance; to address this issue, a class-balanced focal loss is employed. Model performance was evaluated using subject-wise five-fold cross-validation. 
The results demonstrate that PDAF-Net consistently outperforms unimodal baselines across multiple evaluation metrics, achieving an accuracy of 89.3%, an F1-score of 0.884, and an AUC of 0.916. These findings highlight the effectiveness of explicit cross-modal interaction modeling and adaptive feature fusion for improving automated early PD screening in real-world clinical settings. Full article
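The class-balanced focal loss mentioned above is, in its standard binary form (Lin et al.), a cross-entropy down-weighted for easy examples; a sketch, with illustrative alpha and gamma values (the paper's exact class-balanced variant is not given in the abstract):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for one sample.
    p: predicted probability of the positive class, y: true label (0 or 1).
    alpha balances the two classes; gamma down-weights easy examples.
    Illustrative sketch; alpha and gamma here are conventional defaults,
    not values from the paper."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(max(p_t, 1e-12))
```

With gamma = 0 and alpha = 0.5 this reduces to half the ordinary cross-entropy, which is a quick sanity check on the formula.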
22 pages, 2321 KB  
Article
A Deployment-Aware Data Processing Approach for Accuracy and Authenticity Evaluation of Artificial Emotional Intelligence in IoT Edge with Deep Learning
by Şükrü Mustafa Kaya
Appl. Sci. 2026, 16(9), 4394; https://doi.org/10.3390/app16094394 - 30 Apr 2026
Abstract
Artificial Emotional Intelligence (AEI) has gained significant attention for enabling machines to recognize and interpret human affective states through modalities such as speech. While deep learning-based speech emotion recognition (SER) models have achieved promising accuracy levels, their practical deployment in resource-constrained IoT edge environments remains insufficiently explored. In particular, there is a lack of systematic evaluation approaches that jointly consider classification performance, computational efficiency, and deployment feasibility under edge-oriented operational constraints. In this study, I address this gap by proposing a deployment-aware evaluation perspective for SER systems operating under IoT edge constraints. Rather than introducing a new model architecture, I focus on establishing a unified and reproducible evaluation framework that reflects practical deployment considerations for edge-based intelligent systems. Within this framework, three widely used deep learning architectures, convolutional neural networks (CNN), long short-term memory (LSTM), and dense neural networks, are systematically analyzed using the EMODB dataset. The experimental results demonstrate that CNN-based models achieve the most consistent classification performance, with peak validation accuracy reaching approximately 84%, while also providing a favorable balance between recognition performance and computational efficiency. To better reflect deployment-oriented evaluation, the study also considers latency-related behavior and computational characteristics relevant to edge computing environments based on benchmark-driven estimations. The findings highlight the importance of deployment-aware evaluation strategies and provide practical insights for selecting suitable model architectures in edge-oriented speech emotion recognition scenarios. 
This study contributes to bridging the gap between theoretical deep learning performance and practical feasibility considerations in IoT-based intelligent systems. Full article
26 pages, 3557 KB  
Article
Short-Term Wind Power Forecasting Using CEEMDAN-CNN-BiLSTM Based on MIC Feature Selection
by Zheng Jiajia, Linjun Zeng, Shuang Liang, Wen Xia, Nuersimanguli Abuduwasiti and Xianhua Zeng
Processes 2026, 14(9), 1456; https://doi.org/10.3390/pr14091456 - 30 Apr 2026
Abstract
To address the issue of insufficient accuracy in wind power forecasting arising from intermittency and volatility, this paper proposes a short-term wind power prediction model integrating MIC (Maximal Information Coefficient) feature selection with adaptive noise-complete set empirical mode decomposition, convolutional neural networks, and a bidirectional long short-term memory network hybrid architecture. The main innovations of this work lie in the following: Firstly, MIC quantifies the strength of the nonlinear correlation between meteorological features and the MAE (Mean Absolute Error) in power generation, thereby enabling the identification of highly correlated features to reduce the input dimensionality. Secondly, CEEMDAN (Complete Ensemble Empirical Mode Decomposition with Adaptive Noise) performs adaptive modal decomposition on raw power sequences. Combining sample entropy with K-means clustering reconstructs IMFs (Intrinsic Mode Functions), while the introduction of VMD (Variational Mode Decomposition) for quadratic optimisation significantly improves the quality of signal decomposition, enabling a more refined separation of fluctuation characteristics across different time scales. Finally, the optimised meteorological features and reconstructed components are input into a CNN (Convolutional Neural Network)-BiLSTM (Bidirectional Long Short-Term Memory) module. Power regression prediction is achieved through the synergistic effect of spatial feature extraction and bidirectional temporal dependency modelling. Case study results demonstrate that compared to the TCN (Temporal Convolutional Network)-Transformer, the proposed method achieves a 0.4022 improvement in the coefficient of determination R2, a 13.2598 reduction in MAE, a 19.864 decrease in RMSE (Root Mean Square Error). At the same time, it maintains stable performance even when faced with unreliable data scenarios involving random missing features, demonstrating excellent generalisation ability. 
Furthermore, the model training time has been reduced to 77.6469 s, with a single prediction response time of just 0.0659 s. Full article
(This article belongs to the Section Energy Systems)
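The MAE, RMSE and R² figures quoted above follow the standard regression-metric definitions:

```python
def regression_metrics(y_true, y_pred):
    """MAE, RMSE and coefficient of determination R^2, as commonly
    used to score wind power forecasts (assumes y_true is not constant)."""
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    rmse = mse ** 0.5
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - mse * n / ss_tot  # 1 - SS_res / SS_tot
    return mae, rmse, r2
```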
17 pages, 2031 KB  
Article
AGConvLSTM: An Adaptive Graph Convolutional LSTM Network for Multi-Station Water Quality Classification
by Yali Zhao, Xuecheng Wang, Fansen Meng and Xiaoyan Chen
Water 2026, 18(9), 1073; https://doi.org/10.3390/w18091073 - 30 Apr 2026
Abstract
Water quality classification is essential for freshwater ecosystem protection but faces challenges posed by spatiotemporal dependencies and class imbalance. To address these issues, this paper proposes the Adaptive Graph Convolutional Long Short-Term Memory Network (AGConvLSTM), which integrates adaptive graph convolution into the LSTM gating mechanism to explicitly capture spatiotemporal dependencies. As complementary components, station-wise Principal Component Analysis (PCA) preserves spatial heterogeneity in feature structures, while DTW-SMOTE with adaptive sampling and dynamic denoising mitigates class imbalance. Evaluated on five-year water quality data from 13 stations in the Taihu Basin, China, AGConvLSTM achieves a test accuracy of 69.34% and an F1 score of 69.68%, outperforming baseline models. Station-wise accuracy ranges from 49.12% to 88.48%, reflecting spatial heterogeneity. These results suggest that spatiotemporal fusion within recurrent units provides an effective pathway for multi-station water quality classification and offers practical value for watershed early warning systems. Full article
26 pages, 36181 KB  
Article
A Hybrid U-Net and Attention-Based BiLSTM Framework for Wildfire Prediction Using Multi-Source Remote Sensing and Meteorological Sensor Data
by Zhiyu Chen, Weiwei Song, Xiaoqing Zuo, Siyuan Li, Huyue Chen and Bowen Zuo
Electronics 2026, 15(9), 1893; https://doi.org/10.3390/electronics15091893 - 30 Apr 2026
Abstract
Forest and grassland fires have become increasingly severe under climate change, posing significant threats to ecosystems and human safety. Accurate wildfire prediction using remote sensing data remains challenging due to complex spatiotemporal dynamics and heterogeneous data sources. To address this issue, this study proposes a hybrid deep learning framework integrating U-Net and an attention-enhanced bidirectional long short-term memory network (AUBLSTM) for spatiotemporal wildfire prediction using multi-source remote sensing and meteorological data. The U-Net is employed for spatial feature extraction, while AUBLSTM captures temporal dependencies and improves fire spread modeling with attention mechanisms. An encoder–decoder architecture is adopted to enhance multi-scale feature representation, and meteorological constraints are incorporated to improve physical consistency. Experimental results demonstrate that the proposed model outperforms baseline methods, including convolutional long short-term memory (ConvLSTM) and fully connected networks, achieving superior performance in terms of MSE, RRMSE, PSNR, SSIM, IoU, and F1-Score. The framework is efficient, scalable, and suitable for deployment in electronic monitoring and early warning systems, providing a practical solution for integrating multi-source data into wildfire surveillance applications. Full article
22 pages, 3221 KB  
Article
A Hybrid PSO-GWO-BP Predictive Model for Demand-Driven Scheduling and Energy-Efficient Operation of Building Secondary Water Supply Systems
by Shu-Guang Zhu, Jing-Wen Yu, Xing-Zhao Wang, Bang-Wu Deng, Shuai Jiang, Qi-Lin Wu and Wei Wei
Buildings 2026, 16(9), 1785; https://doi.org/10.3390/buildings16091785 - 30 Apr 2026
Abstract
Accurate forecasting of water demand enables optimized peak-load management, alleviating pressure during high-demand periods and improving the operational efficiency of urban secondary water supply systems—a critical component in the energy-efficient and sustainable operation of buildings. However, existing water demand prediction methods in some regions suffer from low accuracy and excessively long prediction cycles, posing challenges for real-time water scheduling in building-scale systems. To address these challenges, this study develops a hybrid predictive framework that integrates a BP neural network with the Gray Wolf Optimizer (GWO) and Particle Swarm Optimization (PSO) algorithms for enhanced parameter optimization. Using hourly water consumption data from a representative residential district, the proposed model is compared against standalone machine learning models—Extreme Learning Machines (ELM), Support Vector Machines (SVM), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU). Model performance is rigorously evaluated using the coefficient of determination, mean absolute error (MAE), mean squared error (MSE), mean absolute percentage error (MAPE), root mean square error (RMSE), and Nash–Sutcliffe efficiency coefficient (NSE). The PSO-GWO-BP hybrid model achieves a predictive accuracy of 97.06%, yielding the lowest MAE, MSE, RMSE, and MAPE, as well as the highest R among all models considered, thereby significantly outperforming the benchmark standalone models. Furthermore, the high-precision short-term prediction outputs enable dynamic regulation of secondary water tank refill thresholds, facilitating refined water allocation and enhanced operational management of building water supply systems. 
These findings demonstrate the considerable application potential of the proposed hybrid model in enhancing both water resource efficiency and energy utilization performance in the daily operation of green buildings, providing reliable technical support for intelligent and low-carbon building water supply management. Full article
15 pages, 2402 KB  
Article
Research on Data-Driven Modeling of Solid Rocket Motor Plume Temperature Distribution with Physics Guidance
by Bo Cheng, Chengyuan Qian, Xinxin Chen and Chengfei Zhang
Appl. Sci. 2026, 16(9), 4373; https://doi.org/10.3390/app16094373 - 29 Apr 2026
Abstract
Aiming at the problems of the large prediction error of model-driven algorithms and poor interpretability (even potential violation of physical laws) of pure data-driven algorithms in the prediction of aerospace vehicle plume characteristics, a physics mechanism-guided prediction algorithm for aerospace vehicle plume characteristics was proposed. Taking the long short-term memory (LSTM) network as the backbone, this algorithm constructed a hybrid physics–data model by embedding the prior knowledge of physical laws and empirical rules into the neural network, and designed a loss function combined with physical mechanisms to guide network training. The aerospace vehicle plume dataset was preprocessed through characteristic parameter extraction, extended physical parameter calculation, data splicing and sliding window operation, and the LSTM network structure was optimized by adjusting hyperparameters such as the number of hidden layers and neurons. Experimental results show that the proposed algorithm achieves a Mean Absolute Error (MAE) of 31.89 and a Physical Inconsistency of 0.1723 on the test set, with MAE reduced by 14% and Physical Inconsistency reduced by 7.5% compared with traditional machine learning models such as Random Forest. Ablation experiments verify that the introduction of physical mechanisms can improve the prediction accuracy of the model by about 25%. This algorithm makes up for the defects of traditional prediction algorithms, has good generalization ability and physical consistency, and provides an effective method for the prediction of engine exhaust plume temperature distribution. Full article
(This article belongs to the Section Aerospace Science and Engineering)
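The sliding-window operation mentioned in the preprocessing step is the standard way to turn one sequence into supervised (input, target) samples for an LSTM; a minimal sketch (window length and horizon are illustrative, not the paper's settings):

```python
def sliding_windows(series, window, horizon=1):
    """Split a sequence into (input window, target) pairs for supervised
    training of a sequence model such as an LSTM. `window` is the input
    length; `horizon` is how many steps ahead the target lies."""
    samples = []
    for i in range(len(series) - window - horizon + 1):
        x = series[i:i + window]
        y = series[i + window + horizon - 1]
        samples.append((x, y))
    return samples
```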
34 pages, 13121 KB  
Article
Mortality Forecasting Using LSTM-CNN Model
by Ning Zhang, Jingyang Chen, Hao Chen and Jingzhen Liu
Axioms 2026, 15(5), 324; https://doi.org/10.3390/axioms15050324 - 29 Apr 2026
Abstract
Accurate mortality prediction is essential to actuarial practice as it is directly linked to insurance pricing, reserving, and the management of longevity risk. This study proposes a deep neural network (DNN) model for the mortality rates of multiple populations; it is composed of long short-term memory (LSTM) and convolutional neural network (CNN) components. As mortality trends evolve over long time horizons, and as capturing the complex dependencies among mortality rates across countries or regions with a linear model is challenging, the LSTM and CNN were applied to mortality modeling. The former can automatically learn long-term dependencies of sequential data, whereas the latter can extract local features from grid or sequential data. Formulated as a nonlinear generalization of the Lee–Carter decomposition, the model maps the log-mortality matrix logM to future logm(x,t) end-to-end and generates multi-step forecasts through dynamic recursive prediction. Then, the DNN and baseline models were used to fit mortality data of 21 countries from the Human Mortality Database (HMD), which were divided into training and test sets with the year 2000 as the split point. Extensive numerical experiments from the perspectives of accuracy, stability, and reliability of long-term forecasting revealed that DNN models yield better predictive performance, particularly the LSTM-CNN model. It combines the LSTM, CNN, and fully connected network (FCN) layers and thus exploits each deep neural network to fit nonlinear age, period, and cohort effects as well as their interactive terms to achieve better predictive performance. However, the CNN still outperformed other models for certain groups. In addition, the conclusions hold for remaining life expectancy. Full article
(This article belongs to the Special Issue Financial Mathematics and Econophysics)
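The Lee–Carter decomposition the model generalizes writes log m(x,t) = a_x + b_x·k_t. A basic first-order estimate under the usual identification constraints (the classical fit refines k_t via SVD and re-estimation) might look like:

```python
def lee_carter(log_m):
    """First-order Lee-Carter estimate from a log-mortality matrix
    (rows = ages x, columns = years t): log m(x,t) ~ a_x + b_x * k_t.
    a_x: row means; k_t: column sums of the centered matrix (this imposes
    sum_x b_x = 1); b_x: least-squares regression against k_t.
    Illustrative sketch, not the paper's full fitting procedure."""
    n_age = len(log_m)
    n_year = len(log_m[0])
    a = [sum(row) / n_year for row in log_m]
    centered = [[log_m[x][t] - a[x] for t in range(n_year)]
                for x in range(n_age)]
    k = [sum(centered[x][t] for x in range(n_age)) for t in range(n_year)]
    denom = sum(kt * kt for kt in k)
    b = [sum(centered[x][t] * k[t] for t in range(n_year)) / denom
         for x in range(n_age)]
    return a, b, k
```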
22 pages, 5221 KB  
Article
Hybrid Deep Neural Network with Natural Language Processing Techniques to Analyze Customer Satisfaction with Delivery Platform Manager Responses
by Salihah Alotaibi
Appl. Sci. 2026, 16(9), 4359; https://doi.org/10.3390/app16094359 - 29 Apr 2026
Abstract
Delivery services have drawn much attention and become of topmost significance in urban areas by presenting online food delivery selections for a diversity of dishes from a wide range of restaurants, decreasing both travel and waiting times. Customer data analysis acts as a cornerstone in corporate strategy, allowing enterprises to gather and interpret user feedback and helping them to make informed decisions that drive future business development. However, major knowledge gaps remain due to the scarcity of literature review studies on these delivery services, hindering a complete understanding of customer satisfaction in this sector. Furthermore, there has been little systematic research on managerial response tactics to online consumer complaints and negative reviews. Researchers have contributed by applying artificial intelligence, including deep learning and machine learning models, to analyze customer sentiment and understand customer brand perceptions. This study presents a Hybrid Deep Neural Network Model for Customer Satisfaction Analysis (HDNNM-CSA), with the aim of developing an efficient model which is capable of accurately classifying customer satisfaction levels in delivery apps based on textual responses provided by customer experience managers. To achieve this, the model initially pre-processes text data using text cleaning, emoji removal, normalization, tokenization, stop word removal, and stemming to clean and standardize the unstructured text data for further analysis. Following this, term frequency–inverse document frequency-based word embedding is utilized to transform the pre-processed text into meaningful feature representations. Lastly, an ensemble architecture involving bidirectional long short-term memory, temporal convolutional, and graph convolutional networks is deployed to classify customer satisfaction levels with managers’ responses. A series of experimental analyses are performed, and the results are examined for numerous features. 
A comparative analysis demonstrates the enhanced performance of the HDNNM-CSA technique with respect to existing approaches. Full article
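The term frequency–inverse document frequency embedding mentioned above can be sketched as follows; this uses a common smoothed-idf variant, not necessarily the exact weighting used in the paper:

```python
import math

def tf_idf(docs):
    """docs: list of token lists. Returns one dict per document mapping
    term -> tf-idf weight (raw term frequency times log(N/df) + 1,
    one common smoothing; libraries differ in the exact formula)."""
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        w = {}
        for term in doc:
            w[term] = w.get(term, 0) + 1  # raw term frequency
        for term in w:
            w[term] *= math.log(n / df[term]) + 1.0
        weights.append(w)
    return weights
```

A term that appears in every document keeps only its frequency (idf factor 1), while rarer terms are up-weighted, which is the property the feature representation relies on.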
25 pages, 2185 KB  
Article
A Bidirectional Spatiotemporal Deep Learning Model with Integrated Vegetation–Thermal Features for Wildfire Detection
by Han Luo, Ming Wang, Lei He, Bin Liu, Yuxia Li and Dan Tang
Remote Sens. 2026, 18(9), 1376; https://doi.org/10.3390/rs18091376 - 29 Apr 2026
Abstract
Faster detection capabilities are required due to the rising frequency and severity of wildfires. Polar-orbiting satellites with medium and high spatial resolution can accurately identify wildfires, and the majority of available fire detection images originate from such platforms; however, their low temporal revisit rates limit their potential for early warning. Geostationary satellites provide minute-level, continuous monitoring that matches the rapid onset of wildfires, but their dependence on conventional threshold methods and coarse spatial resolution results in notable detection errors. To overcome these limitations, this study developed an integrated deep learning framework for accurate wildfire detection in low-resolution geostationary imagery. A novel dynamic index, the Dynamic Normalized Burn Ratio–Thermal (DNBRT), was proposed to characterize wildfire progression by integrating instantaneous thermal anomalies with dynamic vegetation signals. Building on this index, a Fire Spatiotemporal Network (FST-Net) was designed, with an efficient residual backbone, a Convolutional Block Attention Module (CBAM) for feature refinement, and a Bidirectional Long Short-Term Memory (BiLSTM) network to capture temporal evolution. Trained and evaluated on an FY-4B-based fire/non-fire dataset, the proposed framework demonstrated superior performance. FST-Net outperformed benchmark models, improving accuracy and recall by averages of 10.30% and 9.32%, respectively, while achieving faster inference speed. An ablation experiment confirmed the critical role of fusing thermal and vegetation features in DNBRT, with 92.7% accuracy and 94.9% recall. Compared to the FY-4B fire product, the proposed framework enables earlier detection, maintains more complete tracking of fire progression, and exhibits greater robustness under complex burning conditions, while achieving sub-hectare (0.36 ha) detection sensitivity at the 2 km resolution. By synergizing a discriminative dynamic index with an efficient spatiotemporal architecture, this work provides an effective solution for operational, real-time monitoring of small and early-stage wildfires from geostationary satellites. Full article
(This article belongs to the Special Issue Remote Sensed Image Processing and Geospatial Intelligence)
27 pages, 3810 KB  
Article
Real-Time Energy Management of a Series Hybrid Wheel Loader Using Operating-Stage Recognition and ISSA-Optimized ECMS
by Tao Yu, Zhiguo Lei, Yubo Xiao and Xuesheng Shen
Energies 2026, 19(9), 2149; https://doi.org/10.3390/en19092149 - 29 Apr 2026
Abstract
Driven by increasingly stringent requirements for energy saving and emission reduction in non-road machinery, hybrid wheel loaders have attracted growing attention as a practical pathway toward cleaner construction equipment. However, conventional energy management strategies often show limited adaptability to highly transient operating cycles and struggle to balance fuel economy, real-time applicability, and battery charge sustainability. To address these issues, this study proposes an improved sparrow-search-algorithm-based equivalent consumption minimization strategy (ISSA-ECMS) for a series hybrid wheel loader. A quasi-static powertrain model was established, while ISSA was used to optimize both the hyperparameters of a Convolutional Neural Network-Long Short-Term Memory (CNN–LSTM) stage-recognition model and the stage-dependent ECMS parameters. A hidden Markov model (HMM)-based post-processing framework was further introduced to improve temporal consistency in operating-stage recognition. The results show that the optimized ISSA-CNN–LSTM achieved 93.22% accuracy, 93.08% Macro-F1, and 93.21% Weighted-F1, while HMM refinement further improved recognition accuracy from 94.02% to 97.92%. In energy management simulations, ISSA-ECMS maintained the terminal state of charge (SOC) at 50.0069%, reduced fuel consumption by 2.1% and 1.4% compared with conventional ECMS and A-ECMS, respectively, and increased the proportion of engine operating points in the economical region to 77.549%. Compared with dynamic programming, its fuel-consumption increase was only 0.28%, while retaining online applicability. These results demonstrate that the proposed method provides an effective and practical solution for real-time energy management of series hybrid wheel loaders. Full article
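ECMS itself reduces to a pointwise minimization: at each instant, choose the engine power that minimizes fuel consumption plus an equivalence factor s times the power drawn from the battery. A toy grid-search sketch, in which the quadratic fuel-rate curve and the value of s are made-up placeholders rather than the paper's calibrated, stage-dependent parameters:

```python
def ecms_split(p_demand, p_eng_max, s=2.5, step=1.0):
    """Pick the engine power minimizing equivalent consumption:
    fuel(p_eng) + s * p_batt, where p_batt = p_demand - p_eng.
    fuel() is a made-up convex fuel-rate curve for illustration only."""
    def fuel(p_eng):
        return 0.05 * p_eng ** 2 + 1.0 * p_eng + 2.0 if p_eng > 0 else 0.0

    best_p, best_cost = 0.0, float("inf")
    p = 0.0
    while p <= p_eng_max:
        cost = fuel(p) + s * (p_demand - p)  # battery use converted to fuel
        if cost < best_cost:
            best_p, best_cost = p, cost
        p += step
    return best_p
```

Raising s makes battery energy more "expensive", pushing more of the demand onto the engine, which is how stage-dependent s values adapt the split to the recognized operating stage.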
37 pages, 2045 KB  
Article
A Hybrid Artificial Intelligence Framework for Reliable and Seamless Vertical Handover in Next-Generation Heterogeneous Networks
by Sunisa Kunarak
Big Data Cogn. Comput. 2026, 10(5), 139; https://doi.org/10.3390/bdcc10050139 - 29 Apr 2026
Abstract
Next-generation heterogeneous wireless networks (HetNets) comprising LTE macro-cells, 5G New Radio (NR) small cells, and WiFi 6 access points aim to provide seamless connectivity under diverse mobility scenarios. However, vertical handover (VHO) remains a performance bottleneck because of the highly variable radio environments, dynamic user mobility, stringent quality of service (QoS) requirements, and the coexistence of multi-tier access technologies. Existing handover approaches based on deep learning and deep reinforcement learning (DRL) suffer from limitations: deep learning models lack decision-making capabilities, whereas DRL models, particularly deep Q-network (DQN)-based policies, face Q-value overestimation and unstable convergence. To overcome these limitations, this paper introduces a hybrid double deep Q-network (DDQN)–bidirectional long short-term memory (Bi-LSTM) framework that integrates bidirectional mobility prediction and DRL-based adaptive decision-making. The Bi-LSTM module captures forward and backward temporal dependencies and predicts future Received Signal Strength (RSS) trajectories, mobility dynamics, and cell-edge transitions. The DDQN module stabilizes action-value estimation, mitigates overestimation bias, and enables context-aware handover decisions. A multi-tier simulation environment consisting of LTE, 5G NR, and WiFi 6 networks was developed using realistic path loss, shadowing, interference, and mobility models. Extensive evaluations demonstrated substantial improvements in mobility prediction accuracy, handover stability, radio link reliability, throughput efficiency, and latency reduction compared to conventional RSS-based and DQN-based schemes. The findings highlight the effectiveness of integrating predictive intelligence with reinforcement learning for reliable mobility management in 5G-Advanced and emerging 6G networks. Full article
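The overestimation fix that DDQN contributes shows up in how the bootstrap target is formed: the online network selects the greedy next action, but the lagged target network supplies its value. A minimal sketch with hypothetical Q-values (not taken from the paper's simulation):

```python
def ddqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double-DQN bootstrap target for one transition.

    The online net picks the greedy next action; the target net values it.
    Taking the max over a single net, as in vanilla DQN, systematically
    overestimates action values under estimation noise.
    """
    if done:
        return reward
    a_star = max(range(len(q_online_next)), key=q_online_next.__getitem__)
    return reward + gamma * q_target_next[a_star]
```

Contrast with the vanilla DQN target `reward + gamma * max(q_target_next)`: for `q_online_next = [1.0, 3.0, 2.0]` and `q_target_next = [0.5, 1.5, 4.0]`, the online net prefers action 1, which the target net values at only 1.5, whereas a plain max over the target net would optimistically pick 4.0.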
36 pages, 14306 KB  
Article
Enhancing SDN Intrusion Detection via Multi-Hybrid Deep Learning Fusion and Explainable AI
by Usman Ahmed and Muhammad Tariq Sadiq
Mathematics 2026, 14(9), 1498; https://doi.org/10.3390/math14091498 - 29 Apr 2026
Abstract
Software-defined networking (SDN) represents a paradigm shift in network management, but its centralized control plane introduces new and severe security vulnerabilities. Conventional intrusion detection systems, including signature- and rule-based methods, lack adaptability and interpretability in the face of evolving threats. This paper proposes a multi-hybrid deep learning fusion ensemble (MHDLFE) to enhance intrusion detection in SDN environments. The framework integrates Deep Neural Networks (DNNs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Long Short-Term Memory (LSTM) models via feature fusion and a meta-classifier, thereby improving both detection performance and robustness. To address the critical need for transparency in security systems, the proposed approach incorporates Explainable AI techniques, specifically Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), providing interpretable insights into model decisions. The proposed model achieves strong performance on the NSL-KDD and CIC-IDS2017 datasets, attaining binary classification scores of 97.91% and 93.30%, and multiclass accuracies of 98.61% and 97.91%, respectively. These results demonstrate that the proposed framework delivers an effective and trustworthy SDN intrusion detection system by combining deep learning, ensemble fusion, and explainable AI to support accurate, transparent, and reliable cybersecurity decision-making. Full article
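The fusion step described in the abstract, where base-model outputs are combined by a meta-classifier, can be sketched as a simple stacking layer. This is a hedged illustration rather than the paper's implementation: the weights, bias, and probabilities below are hypothetical, and in practice the meta-classifier is trained on held-out base-model outputs.

```python
import math

def stacked_decision(base_probs, weights, bias):
    """Logistic meta-classifier over base models' P(attack) outputs.

    base_probs: per-model attack probabilities (e.g. from DNN, CNN,
    RNN, and LSTM heads); weights/bias: learned meta-parameters.
    Returns the fused attack probability.
    """
    z = bias + sum(w * p for w, p in zip(weights, base_probs))
    return 1.0 / (1.0 + math.exp(-z))

# Four base detectors mostly agree an attack is present; the meta layer
# fuses their scores into a single calibrated decision value.
fused = stacked_decision([0.90, 0.80, 0.95, 0.70], [2.0] * 4, bias=-4.0)
```

Stacking lets the meta-layer learn which base model to trust for which traffic patterns, which is also what makes per-model SHAP/LIME attributions useful downstream.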
22 pages, 11482 KB  
Article
Deployment-Oriented Lithium-Ion Battery Remaining Useful Life Prediction with Adaptive History Selection and Parameter-Efficient Updating
by Dongxiao Ren, Xinyu Zhong, Zixiang Ye and Xing-Liang Xu
Energies 2026, 19(9), 2135; https://doi.org/10.3390/en19092135 - 29 Apr 2026
Abstract
For battery management systems, accurate remaining useful life (RUL) prediction is important, yet models trained offline may not remain well matched to individual cells during operation, because degradation trajectories differ across cells and evolve over aging stages. This study examines a lightweight online personalization strategy under a representative convolutional neural network–long short-term memory (CNN–LSTM) online-transfer setting while keeping the backbone architecture and fixed input length unchanged. The proposed method restricts online updates to a small adaptation path and adjusts the effective history span according to recent degradation behavior. Experiments on 22 test cells under unseen protocols show that the method improves average post-adaptation RUL performance relative to the representative baseline, reducing the root mean square error (RMSE) from 186.00 to 160.58. The number of trainable parameters involved in online updating is reduced from 74,880 to 2193, while the average update time per step decreases slightly from 2.54 s to 2.29 s. Cell-level analysis further shows that the benefit is not uniform across all cells, motivating more selective updating for safer deployment. Overall, the results indicate that lightweight online personalization can improve the accuracy–cost trade-off of deployment-oriented battery prognostics. Full article
(This article belongs to the Section F5: Artificial Intelligence and Smart Energy)
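The parameter-efficient updating idea in the entry above, freezing the CNN–LSTM backbone and adapting only a small path on recent data, can be illustrated with an extreme two-parameter version. The linear correction head, learning rate, and data below are hypothetical simplifications; the paper's adaptation path updates 2193 parameters, not 2.

```python
def adapt_head(history, w=1.0, b=0.0, lr=0.1, epochs=500):
    """Online SGD on a tiny linear correction head; backbone stays frozen.

    history: (backbone_prediction, observed_target) pairs drawn from the
    adaptively selected recent window; only w and b are updated, so each
    online step touches 2 parameters instead of the full network.
    """
    for _ in range(epochs):
        for pred, target in history:
            err = (w * pred + b) - target
            w -= lr * err * pred
            b -= lr * err
    return w, b
```

If the frozen backbone consistently under-predicts late-life degradation, the head learns an affine correction from the recent window, which is the accuracy–cost trade-off the abstract refers to: far fewer trainable parameters per online update, at the price of a less expressive adaptation.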