Search Results (93)

Search Parameters:
Keywords = BLSTM

18 pages, 4452 KiB  
Article
Upper Limb Joint Angle Estimation Using a Reduced Number of IMU Sensors and Recurrent Neural Networks
by Kevin Niño-Tejada, Laura Saldaña-Aristizábal, Jhonathan L. Rivas-Caicedo and Juan F. Patarroyo-Montenegro
Electronics 2025, 14(15), 3039; https://doi.org/10.3390/electronics14153039 - 30 Jul 2025
Viewed by 398
Abstract
Accurate estimation of upper-limb joint angles is essential in biomechanics, rehabilitation, and wearable robotics. While inertial measurement units (IMUs) offer portability and flexibility, systems requiring multiple inertial sensors can be intrusive and complex to deploy. In contrast, optical motion capture (MoCap) systems provide precise tracking but are constrained to controlled laboratory environments. This study presents a deep learning-based approach for estimating shoulder and elbow joint angles using only three IMU sensors positioned on the chest and both wrists, validated against reference angles obtained from a MoCap system. The input data include Euler angles and accelerometer and gyroscope measurements, synchronized and segmented into sliding windows. Two recurrent neural network architectures, a Convolutional Neural Network with Long Short-Term Memory (CNN-LSTM) and a Bidirectional LSTM (BLSTM), were trained and evaluated under identical conditions. The CNN component extracted spatial features that enhanced the LSTM's sequential pattern learning, improving angle reconstruction. Both models achieved accurate estimation performance: the CNN-LSTM yielded lower Mean Absolute Error (MAE) on smooth trajectories, while the BLSTM produced smoother predictions but underestimated some peak movements, especially about the primary axes of rotation. These findings support the development of scalable, deep learning-based wearable systems and contribute to future applications in clinical assessment, sports performance analysis, and human motion research.
(This article belongs to the Special Issue Wearable Sensors for Human Position, Attitude and Motion Tracking)
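
The abstract's comparison of the two architectures can be made concrete with a small sketch. Below is a minimal Keras version of a CNN-LSTM and a BLSTM regressor for windowed IMU data; the window length, channel count, and number of output angles are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the two compared architectures: a CNN-LSTM and a BLSTM
# mapping sliding windows of IMU data to joint angles. Shapes are assumed:
# 100-sample windows, 3 IMUs x 9 channels (Euler + accel + gyro), 4 angles.
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW, CHANNELS, N_ANGLES = 100, 27, 4  # illustrative assumptions

def build_cnn_lstm():
    return models.Sequential([
        layers.Input(shape=(WINDOW, CHANNELS)),
        layers.Conv1D(64, kernel_size=5, activation="relu"),  # spatial features
        layers.MaxPooling1D(2),
        layers.LSTM(64),                                      # temporal dynamics
        layers.Dense(N_ANGLES),                               # joint angles
    ])

def build_blstm():
    return models.Sequential([
        layers.Input(shape=(WINDOW, CHANNELS)),
        layers.Bidirectional(layers.LSTM(64)),  # past + future context per window
        layers.Dense(N_ANGLES),
    ])

for model in (build_cnn_lstm(), build_blstm()):
    model.compile(optimizer="adam", loss="mae")  # MAE matches the reported metric
```

Compiling both with the same optimizer and loss mirrors the study's "identical conditions" setup, which is what makes the MAE comparison between the two architectures fair.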

12 pages, 351 KiB  
Article
A Combined Method for Localizing Two Overlapping Acoustic Sources Based on Deep Learning
by Alexander Lyapin, Ghiath Shahoud and Evgeny Agafonov
Appl. Sci. 2025, 15(12), 6768; https://doi.org/10.3390/app15126768 - 16 Jun 2025
Viewed by 483
Abstract
Deep learning approaches for multi-source sound localization face significant challenges, particularly the need for extensive training datasets encompassing diverse spatial configurations to achieve robust generalization. This requirement leads to substantial computational demands, which are further exacerbated when localizing overlapping sources in complex acoustic environments with reverberation and noise. In this paper, a new methodology is proposed for the simultaneous localization of two overlapping sound sources in the time–frequency domain in a closed, reverberant environment with a spatial resolution of 10° using a small-sized microphone array. The proposed methodology integrates a sound source separation method with a single-source sound localization model. A hybrid model is proposed to separate the sound source signals received by each microphone in the array. The model was built using a bidirectional long short-term memory (BLSTM) network and trained on a dataset using the ideal binary mask (IBM) as the training target. The modeling results show that the proposed localization methodology is effective at determining the directions of two overlapping sources simultaneously, with an average localization accuracy of 86.1% on a test dataset containing short-term signals of 500 ms duration with different signal-to-signal ratio values.
(This article belongs to the Section Acoustics and Vibrations)
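
As a rough illustration of the separation stage, the sketch below computes an ideal binary mask (IBM) training target from two sources' magnitude spectrograms and defines a BLSTM mask estimator; the STFT size and layer widths are assumptions, not the paper's settings.

```python
# Minimal sketch: IBM target from two sources' spectrograms, plus a BLSTM that
# predicts the mask from the mixture spectrogram. Shapes are assumed.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def ideal_binary_mask(mag_s1, mag_s2):
    """IBM: 1 where source 1 dominates a time-frequency bin, else 0."""
    return (mag_s1 > mag_s2).astype(np.float32)

N_FREQ = 257  # e.g. 512-point STFT -> 257 frequency bins (assumed)

mask_net = models.Sequential([
    layers.Input(shape=(None, N_FREQ)),          # (frames, frequency bins)
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(N_FREQ, activation="sigmoid")),
])
mask_net.compile(optimizer="adam", loss="binary_crossentropy")
# Applying the thresholded mask to the mixture STFT recovers each source's
# bins, which then feed the single-source localization model per direction.
```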

23 pages, 3006 KiB  
Article
Enhancing Upper Limb Exoskeletons Using Sensor-Based Deep Learning Torque Prediction and PID Control
by Farshad Shakeriaski and Masoud Mohammadian
Sensors 2025, 25(11), 3528; https://doi.org/10.3390/s25113528 - 3 Jun 2025
Viewed by 720
Abstract
Upper limb assistive exoskeletons help stroke patients by assisting arm movement in impaired individuals. However, effective control of these systems is a complex task. In this paper, a novel approach is proposed to enhance the control of upper limb assistive exoskeletons by using torque estimation and prediction in a proportional–integral–derivative (PID) controller loop to more optimally integrate the torque of the exoskeleton robot, with the aim of eliminating system uncertainties. First, a torque estimation model based on electromyography (EMG) signals and a predictive torque model for the elbow joint of the upper limb exoskeleton robot are trained. The training data consisted of two-dimensional high-density surface EMG (HD-sEMG) recordings of myoelectric activity from five upper limb muscles (biceps brachii, triceps brachii, anconeus, brachioradialis, and pronator teres) during voluntary isometric contractions, collected from twelve healthy subjects performing four different isometric tasks (supination/pronation and elbow flexion/extension) for one minute each. These data were used to train long short-term memory (LSTM), bidirectional LSTM (BLSTM), and gated recurrent unit (GRU) deep neural network models, which estimate and predict torque requirements. Finally, the estimated and predicted torque from the trained network is used online as input to a PID control loop and the robot dynamics, with the aim of controlling the robot optimally. The results showed that the proposed method constitutes a robust and innovative approach toward greater independence and improved rehabilitation.
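
The control idea, predicted torque used as a feedforward term inside a PID loop, can be sketched in a few lines. Everything numeric below (gains, timestep, the one-line plant) is an illustrative assumption.

```python
# Hedged sketch: the network's predicted torque enters the loop as feedforward
# alongside a PID correction on joint-angle error. Gains and plant are toys.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=40.0, ki=2.0, kd=1.5, dt=0.01)
angle = 0.0
for t in range(500):
    predicted_torque = 0.8          # stand-in for the LSTM/BLSTM/GRU output (N·m)
    error = 1.0 - angle             # target elbow angle (rad) minus measured angle
    command = predicted_torque + pid.step(error)  # feedforward + feedback
    angle += 0.001 * command        # toy first-order stand-in for robot dynamics
```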

16 pages, 1174 KiB  
Article
Natural Language Processing for Aviation Safety: Predicting Injury Levels from Incident Reports in Australia
by Aziida Nanyonga, Keith Joiner, Ugur Turhan and Graham Wild
Modelling 2025, 6(2), 40; https://doi.org/10.3390/modelling6020040 - 28 May 2025
Viewed by 1247
Abstract
This study investigates the application of advanced deep learning models to the classification of aviation safety incidents, focusing on four models: a Simple Recurrent Neural Network (sRNN), a Gated Recurrent Unit (GRU), a Bidirectional Long Short-Term Memory (BLSTM) network, and DistilBERT. The models were evaluated on key performance metrics, including accuracy, precision, recall, and F1-score. DistilBERT achieved perfect performance, with a score of 1.00 on all metrics, while BLSTM demonstrated the highest performance among the recurrent models, with an accuracy of 0.9896, followed by GRU (0.9893) and sRNN (0.9887). Class-wise evaluations revealed that DistilBERT excelled across all injury categories, with BLSTM outperforming the other recurrent models, particularly in detecting fatal injuries, achieving a precision of 0.8684 and an F1-score of 0.7952. The study also addressed the challenge of class imbalance by applying class weighting, although the use of more sophisticated techniques, such as focal loss, is recommended for future work. This research highlights the potential of transformer-based models for aviation safety classification and provides a foundation for future research to improve model interpretability and generalizability across diverse datasets. These findings contribute to the growing body of research on applying deep learning techniques to aviation safety and underscore opportunities for further exploration.
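
The class-weighting step mentioned for the imbalance problem is standard and easy to sketch; the label counts, vocabulary size, and sequence length below are assumptions, and the real tokenization pipeline is omitted.

```python
# Sketch of class weighting against imbalance, with a tiny BLSTM text
# classifier as context. Injury-level counts and model sizes are assumed.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight
import tensorflow as tf
from tensorflow.keras import layers, models

y_train = np.array([0] * 900 + [1] * 80 + [2] * 20)  # assumed injury-level counts
weights = compute_class_weight("balanced", classes=np.unique(y_train), y=y_train)
class_weight = dict(enumerate(weights))  # rare classes get larger loss weights

model = models.Sequential([
    layers.Input(shape=(200,), dtype="int32"),     # assumed max report length
    layers.Embedding(input_dim=20000, output_dim=64),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(3, activation="softmax"),         # injury-level classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(x_train, y_train, class_weight=class_weight)  # weighting applied here
```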

17 pages, 2886 KiB  
Article
Online Pre-Diagnosis of Multiple Faults in Proton Exchange Membrane Fuel Cells by Convolutional Neural Network Based Bi-Directional Long Short-Term Memory Parallel Model with Attention Mechanism
by Junyi Chen, Huijun Ran, Ziyang Chen, Trevor Hocksun Kwan and Qinghe Yao
Energies 2025, 18(10), 2669; https://doi.org/10.3390/en18102669 - 21 May 2025
Viewed by 476
Abstract
Proton exchange membrane fuel cell (PEMFC) fault diagnosis faces two critical limitations: conventional offline methods lack real-time predictive capability, while existing prediction approaches are confined to single fault types. To address these gaps, this study proposes an online multi-fault prediction framework integrating three novel contributions: (1) a sensor fusion strategy leveraging existing thermal/electrochemical measurements (voltage, current, temperature, humidity, and pressure) without requiring embedded stack sensors; (2) a real-time sliding window mechanism enabling dynamic prediction updates every 1 s under variable load conditions; and (3) a modified CNN-based Bi-LSTM parallel model with attention mechanism (ConvBLSTM-PMwA) featuring multi-input multi-output (MIMO) capability for simultaneous flooding/air-starvation detection. Through comparative analysis of different neural architectures on experimental datasets, the optimized ConvBLSTM-PMwA achieved 96.49% accuracy in predicting dual faults 64.63 s before their occurrence, outperforming conventional LSTM models in both temporal resolution and long-term forecast reliability.
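
One plausible reading of a CNN/Bi-LSTM parallel model with attention and two fault heads is sketched below; the window size, channel count, and layer widths are assumptions, not the published ConvBLSTM-PMwA configuration.

```python
# Hedged sketch in the spirit of ConvBLSTM-PMwA: parallel CNN and BLSTM
# branches over the five named measurements, fused, attended, and split into
# two fault outputs (MIMO). All sizes are assumptions from the abstract.
import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(60, 5))                 # 60 x 1 s sliding window (assumed)
conv = layers.Conv1D(32, 3, padding="same", activation="relu")(inp)        # local patterns
blstm = layers.Bidirectional(layers.LSTM(32, return_sequences=True))(inp)  # dynamics
fused = layers.Concatenate()([conv, blstm])       # parallel branches, fused
att = layers.Attention()([fused, fused])          # self-attention over time steps
pooled = layers.GlobalAveragePooling1D()(att)
flood = layers.Dense(1, activation="sigmoid", name="flooding")(pooled)
starve = layers.Dense(1, activation="sigmoid", name="air_starvation")(pooled)
model = Model(inp, [flood, starve])               # MIMO: two fault heads
model.compile(optimizer="adam", loss="binary_crossentropy")
```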

25 pages, 5774 KiB  
Article
A Novel Integrated Fault Diagnosis Method Based on Digital Twins
by Xiangrui Hu, Linglin Liu, Zhengyu Quan, Jinguo Huang and Jing Liu
Signals 2025, 6(2), 18; https://doi.org/10.3390/signals6020018 - 3 Apr 2025
Viewed by 1201
Abstract
Fault diagnosis is essential in industrial production. With the advancement of IoT technology, real-time data acquisition and storage have become feasible, enabling deep learning-based fault diagnosis methods to achieve remarkable results. However, existing approaches often overlook the temporal characteristics of fault occurrences and struggle with the data imbalance between normal and faulty conditions, degrading diagnostic performance. To address these challenges, this paper proposes an integrated fault diagnosis method that incorporates data balancing, feature extraction, and temporal information analysis. The approach consists of two key components: (1) dataset construction using digital twin technology and (2) an integrated fault diagnosis model (CNN-BLSTM-attention). Digital twin technology generates virtual data under various operating conditions, mitigating the small-sample issue. The proposed model leverages a sliding window mechanism to capture both feature and temporal information, enhancing fault pattern recognition. Experimental results demonstrate that, compared to traditional methods, this approach effectively reduces noise interference and achieves a high diagnostic accuracy of 96.46%, validating its robustness in complex industrial settings. This research provides valuable theoretical and practical insights for improving fault diagnosis in industrial equipment such as screw presses.
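
The sliding-window mechanism is easy to make concrete; the sketch below segments a multivariate stream into overlapping labeled windows, with window and stride sizes as illustrative assumptions.

```python
# Minimal sketch of the sliding-window step: segmenting a multivariate sensor
# stream into overlapping windows labeled by condition. Sizes are assumed.
import numpy as np

def sliding_windows(signal, labels, window=128, stride=32):
    """signal: (T, channels); labels: (T,). Returns stacked windows + labels."""
    xs, ys = [], []
    for start in range(0, len(signal) - window + 1, stride):
        xs.append(signal[start:start + window])
        ys.append(labels[start + window - 1])  # label at the window's end
    return np.stack(xs), np.array(ys)

# Virtual (digital twin) data and measured data can be concatenated before
# windowing to offset the scarcity of real fault samples.
stream = np.random.randn(10_000, 6)              # placeholder 6-channel stream
conds = np.random.randint(0, 4, size=10_000)     # placeholder fault classes
X, y = sliding_windows(stream, conds)            # X has shape (309, 128, 6)
```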

20 pages, 3519 KiB  
Article
Attention-Based Hybrid Deep Learning Models for Classifying COVID-19 Genome Sequences
by A. M. Mutawa
AI 2025, 6(1), 4; https://doi.org/10.3390/ai6010004 - 2 Jan 2025
Cited by 4 | Viewed by 1702
Abstract
Background: Research on COVID-19 genetic sequences remains crucial despite immunization and pandemic control efforts. SARS-CoV-2, the virus that causes COVID-19, must be understood genomically for several reasons. New viral strains may resist vaccines, and categorizing genetic sequences helps researchers track changes and assess immunization efficacy. Classifying COVID-19 genome sequences against those of other viruses also helps in understanding its evolution and its interactions with other illnesses. Methods: The proposed study introduces a deep learning-based approach to COVID-19 genomic sequence categorization. Attention-based hybrid deep learning (DL) models categorize 1423 COVID-19 and 11,388 other viral genome sequences. An unknown dataset is also used to assess the models. The five models are evaluated on accuracy, F1-score, area under the curve (AUC), precision, Matthews correlation coefficient (MCC), and recall. Results: The results indicate that the convolutional neural network (CNN) with bidirectional long short-term memory (BLSTM) and an attention layer (CNN-BLSTM-Att) achieved an accuracy of 99.99%, outperforming the other models. In external validation, the model shows an accuracy of 99.88%. This reveals that DL-based approaches with an attention layer can classify COVID-19 genomic sequences with a high degree of accuracy, and the method might assist in identifying and classifying COVID-19 virus strains in clinical situations. Immunizations have reduced the danger posed by COVID-19, but categorizing its genetic sequences remains crucial for global health efforts to prepare for recurrence or future viral threats.
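
As one plausible front end for sequence classification of this kind, the sketch below one-hot encodes nucleotides and stacks a CNN, BLSTM, and attention layer; the encoding and layer sizes are assumptions, and the paper's exact preprocessing may differ.

```python
# Hedged sketch: one-hot nucleotide encoding feeding a CNN-BLSTM stack with an
# attention layer, loosely mirroring the CNN-BLSTM-Att naming. Sizes assumed.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq, length=1000):
    """Map a nucleotide string to a (length, 4) one-hot array, padded/truncated."""
    arr = np.zeros((length, 4), dtype=np.float32)
    for i, b in enumerate(seq[:length]):
        if b in BASES:
            arr[i, BASES[b]] = 1.0
    return arr

inp = layers.Input(shape=(1000, 4))
x = layers.Conv1D(64, 7, activation="relu")(inp)  # local motif detectors
x = layers.MaxPooling1D(4)(x)
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
x = layers.Attention()([x, x])                    # attend over sequence positions
x = layers.GlobalAveragePooling1D()(x)
out = layers.Dense(1, activation="sigmoid")(x)    # COVID-19 vs. other viruses
Model(inp, out).compile(optimizer="adam", loss="binary_crossentropy")
```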

24 pages, 4002 KiB  
Article
An Intelligent Approach for Early and Accurate Predication of Cardiac Disease Using Hybrid Artificial Intelligence Techniques
by Hazrat Bilal, Yibin Tian, Ahmad Ali, Yar Muhammad, Abid Yahya, Basem Abu Izneid and Inam Ullah
Bioengineering 2024, 11(12), 1290; https://doi.org/10.3390/bioengineering11121290 - 19 Dec 2024
Cited by 17 | Viewed by 1913
Abstract
This study proposes a new hybrid machine learning (ML) model for the early and accurate diagnosis of heart disease. The proposed model is a combination of two powerful ensemble ML models, the ExtraTreeClassifier (ETC) and XGBoost (XGB), resulting in a hybrid model named ETCXGB. First, all the features of the heart disease dataset were given as input to the ETC model, which produced predicted class probabilities as output. These probabilities were then added to the original feature space, producing an enriched feature matrix that was used as input for training the XGB model, which produces the final prediction of whether a person has cardiac disease, yielding high diagnostic accuracy. In addition to the proposed model, three other hybrid DL models were investigated: convolutional neural network + recurrent neural network (CNN-RNN), convolutional neural network + long short-term memory (CNN-LSTM), and convolutional neural network + bidirectional long short-term memory (CNN-BLSTM). The proposed ETCXGB model improved the prediction accuracy by 3.91%, while CNN-RNN, CNN-LSTM, and CNN-BLSTM enhanced the prediction accuracy by 1.95%, 2.44%, and 2.45%, respectively, for the diagnosis of cardiac disease. The simulation outcomes illustrate that the proposed ETCXGB hybrid ML model outperformed the classical ML and DL models on all performance measures. Using the proposed hybrid model for the diagnosis of cardiac disease can therefore help medical practitioners make an accurate diagnosis and help the healthcare community decrease the mortality rate caused by cardiac disease.
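
The stacking scheme described, ETC probabilities appended to the original features before XGBoost, maps directly onto scikit-learn and xgboost. The dataset and hyperparameters below are placeholders.

```python
# Sketch of the ETCXGB stacking idea as described: ExtraTrees class
# probabilities are appended to the original features, and XGBoost is trained
# on the enriched matrix. Data and hyperparameters are placeholders.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X = np.random.rand(500, 13)                  # e.g. 13 clinical features (assumed)
y = np.random.randint(0, 2, 500)             # cardiac disease yes/no
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

etc = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
# Enrich the feature space with the ETC's predicted probabilities.
X_tr_enriched = np.hstack([X_tr, etc.predict_proba(X_tr)])
X_te_enriched = np.hstack([X_te, etc.predict_proba(X_te)])

xgb = XGBClassifier(n_estimators=200, eval_metric="logloss").fit(X_tr_enriched, y_tr)
print("accuracy:", xgb.score(X_te_enriched, y_te))
```

In practice, out-of-fold probabilities would be used for the training split so the ETC's fit does not leak into the XGB features; the sketch skips that for brevity.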

25 pages, 8887 KiB  
Article
A Gaussian Process-Enhanced Non-Linear Function and Bayesian Convolution–Bayesian Long Term Short Memory Based Ultra-Wideband Range Error Mitigation Method for Line of Sight and Non-Line of Sight Scenarios
by A. S. M. Sharifuzzaman Sagar, Samsil Arefin, Eesun Moon, Md Masud Pervez Prince, L. Minh Dang, Amir Haider and Hyung Seok Kim
Mathematics 2024, 12(23), 3866; https://doi.org/10.3390/math12233866 - 9 Dec 2024
Viewed by 1325
Abstract
Relative positioning accuracy between two devices depends on precise range measurements. Ultra-wideband (UWB) technology is a popular and widely used technology for achieving centimeter-level accuracy in range measurement. Nevertheless, harsh indoor environments, multipath issues, reflections, and bias due to antenna delay degrade range measurement performance in line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios. This article proposes an efficient and robust method to mitigate range measurement error in LOS and NLOS conditions by combining recent artificial intelligence techniques. A Gaussian process (GP)-enhanced non-linear function is proposed to mitigate the range bias in LOS scenarios. Moreover, NLOS identification based on a sliding window and a Bayesian Conv-BLSTM method is utilized to mitigate range error due to non-line-of-sight conditions. A novel spatial–temporal attention module is proposed to improve the performance of the proposed model. An epistemic and aleatoric uncertainty estimation method is also introduced to assess the robustness of the proposed model under environmental variation. Furthermore, moving-average and min-max removal methods are utilized to minimize the standard deviation of the range measurements in both scenarios. Extensive experimentation with different settings and configurations has proven the effectiveness of our methodology and demonstrated the feasibility of our robust UWB range error mitigation for LOS and NLOS scenarios.
(This article belongs to the Special Issue Modeling and Simulation in Engineering, 3rd Edition)
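
The LOS stage, a learned non-linear bias correction, can be approximated with an off-the-shelf Gaussian process regressor, as sketched below on synthetic data; the paper's actual GP-enhanced function is more involved.

```python
# Hedged sketch of the LOS stage only: a GP learns the range bias as a
# non-linear function of the measured range, and the fitted correction is
# subtracted from new measurements. Data here are synthetic placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
measured = rng.uniform(1.0, 20.0, size=(200, 1))              # UWB ranges (m)
bias = 0.05 * np.sin(measured[:, 0]) + 0.02 * measured[:, 0]  # synthetic bias
observed_error = bias + rng.normal(0, 0.01, 200)              # measured - true

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(measured, observed_error)

new_range = np.array([[7.5]])
corrected = new_range - gp.predict(new_range).reshape(-1, 1)  # mitigated range
```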

14 pages, 1739 KiB  
Article
Older Adult Fall Risk Prediction with Deep Learning and Timed Up and Go (TUG) Test Data
by Josu Maiora, Chloe Rezola-Pardo, Guillermo García, Begoña Sanz and Manuel Graña
Bioengineering 2024, 11(10), 1000; https://doi.org/10.3390/bioengineering11101000 - 5 Oct 2024
Cited by 1 | Viewed by 3253
Abstract
Falls are a major health hazard for older adults; therefore, in the context of an aging population, predicting the risk that a patient will suffer falls in the near future is of great impact for health care systems. Currently, the standard prospective fall risk assessment instrument relies on a set of clinical and functional mobility assessment tools, one of them being the Timed Up and Go (TUG) test. Recently, wearable inertial measurement units (IMUs) have been proposed to capture motion data that would allow for the building of estimates of fall risk. The hypothesis of this study is that data gathered from IMU readings while the patient performs the TUG test can be used to build a predictive model that provides an estimate of the probability of suffering a fall in the near future, i.e., assessing prospective fall risk. This study applies deep learning convolutional neural networks (CNN) and recurrent neural networks (RNN) to build such predictive models based on features extracted from IMU data acquired during TUG test realizations. Data were obtained from a cohort of 106 older adults wearing wireless IMU sensors at a sampling frequency of 100 Hz while performing the TUG test. The dependent variable is a binary variable that is true if the patient suffered a fall in the six-month follow-up period. This variable was used as the output variable for the supervised training and validation of the deep learning architectures and competing machine learning approaches. A hold-out validation process using 75 subjects for training and 31 subjects for testing was repeated one hundred times to obtain robust estimates of model performance. At each repetition, 5-fold cross-validation was carried out to select the best model over the training subset. The best results were achieved by a bidirectional long short-term memory (BLSTM) network, obtaining an accuracy of 0.83 and an AUC of 0.73 with good sensitivity and specificity values.
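
The repeated hold-out protocol (75 training subjects, 31 test subjects, 100 repetitions) is sketched below, with a logistic regression standing in for the BLSTM purely to keep the example self-contained and fast.

```python
# Sketch of the evaluation protocol: a 75/31 hold-out split repeated 100
# times, accumulating accuracy and AUC. Features and labels are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

X = np.random.rand(106, 20)                 # placeholder features per subject
y = np.random.randint(0, 2, 106)            # fall within 6-month follow-up?

accs, aucs = [], []
for seed in range(100):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=31, random_state=seed, stratify=y)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # BLSTM stand-in
    accs.append(accuracy_score(y_te, clf.predict(X_te)))
    aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

print(f"accuracy {np.mean(accs):.2f}, AUC {np.mean(aucs):.2f}")
```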

20 pages, 933 KiB  
Article
Improving Volatility Forecasting: A Study through Hybrid Deep Learning Methods with WGAN
by Adel Hassan A. Gadhi, Shelton Peiris and David E. Allen
J. Risk Financial Manag. 2024, 17(9), 380; https://doi.org/10.3390/jrfm17090380 - 23 Aug 2024
Cited by 1 | Viewed by 2065
Abstract
This paper examines the predictability of volatility in time series and investigates the effect of blending traditional learning methods with the Wasserstein generative adversarial network with gradient penalty (WGAN-GP). Using Brent crude oil returns price volatility and environmental temperature for the city of Sydney, Australia, we show that the corresponding forecasts improve when the methods are combined with WGAN-GP models (i.e., ANN-(WGAN-GP), LSTM-ANN-(WGAN-GP), and BLSTM-ANN-(WGAN-GP)). We conclude that incorporating WGAN-GP significantly improves the volatility forecasting capabilities of standard econometric models and deep learning techniques.
(This article belongs to the Section Financial Markets)
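
The gradient penalty is what distinguishes WGAN-GP from a plain WGAN, so a sketch of just that term may help; the critic below is a stand-in, and the 30-step input window and lambda = 10 are conventional assumptions.

```python
# Hedged sketch of the WGAN-GP gradient penalty. The critic is a stand-in;
# the paper pairs such a generator/critic with ANN/LSTM/BLSTM forecasters.
import tensorflow as tf
from tensorflow.keras import layers, models

critic = models.Sequential([
    layers.Input(shape=(30,)),            # e.g. 30-day volatility window (assumed)
    layers.Dense(64, activation="relu"),
    layers.Dense(1),                      # Wasserstein critic score
])

def gradient_penalty(critic, real, fake):
    """Penalize deviation of the critic's gradient norm from 1 on interpolates."""
    eps = tf.random.uniform([tf.shape(real)[0], 1], 0.0, 1.0)
    interp = eps * real + (1.0 - eps) * fake
    with tf.GradientTape() as tape:
        tape.watch(interp)
        scores = critic(interp)
    grads = tape.gradient(scores, interp)
    norms = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=1) + 1e-12)
    return tf.reduce_mean(tf.square(norms - 1.0))

real = tf.random.normal([16, 30])
fake = tf.random.normal([16, 30])
loss_gp = 10.0 * gradient_penalty(critic, real, fake)  # lambda = 10 is customary
```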

26 pages, 11930 KiB  
Article
Runoff Simulation in Data-Scarce Alpine Regions: Comparative Analysis Based on LSTM and Physically Based Models
by Jiajia Yue, Li Zhou, Juan Du, Chun Zhou, Silang Nimai, Lingling Wu and Tianqi Ao
Water 2024, 16(15), 2161; https://doi.org/10.3390/w16152161 - 31 Jul 2024
Cited by 5 | Viewed by 2232
Abstract
Runoff simulation is essential for effective water resource management and plays a pivotal role in hydrological forecasting. Improving the quality of runoff simulation and forecasting continues to be a highly relevant research area. The complexity of the terrain and the scarcity of long-term runoff observation data have significantly limited the application of Physically Based Models (PBMs) in the Qinghai–Tibet Plateau (QTP). Recently, the Long Short-Term Memory (LSTM) network has been found to be effective in learning the dynamic hydrological characteristics of watersheds and to outperform some traditional PBMs in runoff simulation. However, the extent to which LSTM works in data-scarce alpine regions remains unclear. This study aims to evaluate the applicability of LSTM in alpine basins in the QTP, as well as the simulation performance of transfer-based LSTM (T-LSTM) in data-scarce alpine regions. The Lhasa River Basin (LRB) and Nyang River Basin (NRB) were the study areas, and the performance of the LSTM model was compared to that of PBMs relying solely on meteorological inputs. The results show that the average values of Nash–Sutcliffe efficiency (NSE), Kling–Gupta efficiency (KGE), and Relative Bias (RBias) for B-LSTM were 0.80, 0.85, and 4.21%, respectively, while the corresponding values for G-LSTM were 0.81, 0.84, and 3.19%. In comparison to a PBM, the Block-wise use of TOPMODEL (BTOP), the LSTM models show average improvements of 0.23, 0.36, and −18.36% in these metrics, respectively. In both basins, LSTM significantly outperforms the BTOP model. Furthermore, the transfer learning-based LSTM model (T-LSTM) at the multi-watershed scale demonstrates that, when the input data are reasonably representative, even if the amount of data is limited, T-LSTM can obtain more accurate results than hydrological models specifically calibrated for individual watersheds. This result indicates that LSTM can effectively improve runoff simulation performance in alpine regions and can be applied to runoff simulation in data-scarce regions.
(This article belongs to the Section Hydrology)
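
The three reported metrics have standard closed forms; the sketch below writes them out in NumPy, using the coefficient-of-variation variant of KGE (other variants use the ratio of standard deviations instead).

```python
# Sketch of the three evaluation metrics reported: NSE, KGE, and RBias,
# written out from their standard definitions. Data are placeholders.
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of obs."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency from correlation, bias ratio, variability ratio."""
    r = np.corrcoef(obs, sim)[0, 1]
    beta = sim.mean() / obs.mean()
    gamma = (sim.std() / sim.mean()) / (obs.std() / obs.mean())
    return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

def rbias(obs, sim):
    """Relative bias in percent; positive means the simulation overestimates."""
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

obs = np.array([3.2, 4.1, 5.8, 7.0, 6.1])   # observed runoff (placeholder)
sim = np.array([3.0, 4.5, 5.5, 7.2, 6.0])   # simulated runoff (placeholder)
print(nse(obs, sim), kge(obs, sim), rbias(obs, sim))
```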

23 pages, 9001 KiB  
Article
Remaining Useful Life Prediction of Lithium-Ion Batteries Based on Neural Network and Adaptive Unscented Kalman Filter
by Lingtao Wu, Wenhao Guo, Yuben Tang, Youming Sun and Tuanfa Qin
Electronics 2024, 13(13), 2619; https://doi.org/10.3390/electronics13132619 - 4 Jul 2024
Cited by 9 | Viewed by 2280
Abstract
Accurate prediction of remaining useful life (RUL) plays an important role in maintaining the safe and stable operation of lithium-ion battery management systems. To address the poor prediction stability of single models, this paper combines the advantages of data-driven and model-based methods and proposes an RUL prediction method combining a convolutional neural network (CNN), a bi-directional long short-term memory neural network (Bi-LSTM), an SE attention mechanism (AM), and an adaptive unscented Kalman filter (AUKF). First, three types of indirect features that are highly correlated with RUL decay are selected as inputs to the model to improve the accuracy of RUL prediction. Second, a CNN-BLSTM-AM network is used to further extract, select, and fuse the indirect features to form predictive measurements of the identified degradation metrics. In addition, we introduce the AUKF model to improve the representation of uncertainty in the RUL prediction. Finally, the method is validated on the NASA and CALCE datasets and compared with other methods. The experimental results show that the method achieves an accurate estimation of RUL, with a minimum RMSE of 0.0030 and a minimum MAE of 0.0024, demonstrating high estimation accuracy and robustness.
(This article belongs to the Special Issue Energy Storage, Analysis and Battery Usage)
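
The SE attention mechanism named in the abstract has a compact standard form; below is a sketch of an SE block on a 1D feature map, with the surrounding CNN-BLSTM layers reduced to a minimum and the AUKF stage omitted. Shapes are assumptions.

```python
# Hedged sketch of a squeeze-and-excitation (SE) attention block of the kind
# the abstract names, applied to the channels of a CNN-BLSTM feature map.
import tensorflow as tf
from tensorflow.keras import layers, Model

def se_block(x, reduction=4):
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling1D()(x)               # squeeze over time
    s = layers.Dense(channels // reduction, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)  # per-channel weights
    return layers.Multiply()([x, layers.Reshape((1, channels))(s)])  # excite

inp = layers.Input(shape=(50, 3))         # 3 indirect health indicators (assumed)
x = layers.Conv1D(32, 3, padding="same", activation="relu")(inp)
x = layers.Bidirectional(layers.LSTM(32, return_sequences=True))(x)
x = se_block(x)
x = layers.GlobalAveragePooling1D()(x)
out = layers.Dense(1)(x)                  # degradation measurement estimate
model = Model(inp, out)                   # its output would feed the AUKF update
```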

19 pages, 10556 KiB  
Article
Self-Adaptive Server Anomaly Detection Using Ensemble Meta-Reinforcement Learning
by Bao Rong Chang, Hsiu-Fen Tsai and Guan-Ru Chen
Electronics 2024, 13(12), 2348; https://doi.org/10.3390/electronics13122348 - 15 Jun 2024
Cited by 5 | Viewed by 1719
Abstract
As user behavior changes over time in cloud computing and network services, abnormal server resource utilization traffic can lead to severe service crashes and system downtime. A traditional single anomaly detection model cannot deliver rapid failure prediction far enough in advance. Therefore, this study proposed ensemble learning combined with model-agnostic meta-reinforcement learning, called ensemble meta-reinforcement learning (EMRL), to implement self-adaptive server anomaly detection rapidly and precisely, based on the time series of server resource utilization. The proposed ensemble approach combines a hidden Markov model (HMM), a variational autoencoder (VAE), a temporal convolutional autoencoder (TCN-AE), and a bidirectional long short-term memory (BLSTM) network. The EMRL algorithm trains this combination on several tasks to learn the implicit representation of various anomalous traffic patterns, where each task executes trust region policy optimization (TRPO) to quickly adapt to the time-varying data distribution and make rapid, precise decisions for an agent response. As a result, our proposed approach can improve the precision of anomaly prediction by 2.4 times and reduce model deployment time by a factor of 5.8 on average, because a meta-learner can immediately be applied to new tasks.
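
At its simplest, the ensemble side amounts to fusing anomaly scores from heterogeneous detectors; the sketch below shows such a fusion with placeholder scores, while the HMM/VAE/TCN-AE/BLSTM models and the TRPO meta-learning loop are deliberately left abstract.

```python
# Minimal sketch of the ensemble idea: four heterogeneous detectors each score
# a resource-utilization window, and a weighted fusion flags anomalies. The
# scores, weights, and threshold are placeholder assumptions; EMRL would adapt
# the fusion per task rather than fix it.
import numpy as np

detector_scores = np.array([
    0.21,   # placeholder HMM log-likelihood-based score (already in [0, 1])
    0.74,   # placeholder VAE reconstruction-error score
    0.55,   # placeholder TCN-AE reconstruction-error score
    0.68,   # placeholder BLSTM forecast-error score
])
weights = np.array([0.25, 0.25, 0.25, 0.25])  # EMRL would learn these per task
combined = float(weights @ detector_scores)
is_anomalous = combined > 0.5                 # assumed decision threshold
```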

15 pages, 1505 KiB  
Article
A Deep Learning-Based Framework for Strengthening Cybersecurity in Internet of Health Things (IoHT) Environments
by Sarah A. Algethami and Sultan S. Alshamrani
Appl. Sci. 2024, 14(11), 4729; https://doi.org/10.3390/app14114729 - 30 May 2024
Cited by 11 | Viewed by 2847
Abstract
The increasing use of IoHT devices in healthcare has brought about revolutionary advancements, but it has also exposed critical vulnerabilities, particularly in cybersecurity. IoHT is characterized by interconnected medical devices sharing sensitive patient data, which amplifies the risk of cyber threats. Therefore, ensuring the integrity, confidentiality, and availability of healthcare data is essential. This study proposes a hybrid deep learning-based intrusion detection system that uses an Artificial Neural Network (ANN) with Bidirectional Long Short-Term Memory (BLSTM) and Gated Recurrent Unit (GRU) architectures to address critical cybersecurity threats in IoHT. The model was tailored to meet the complex security demands of IoHT and was rigorously tested on the ECU-IoHT dataset. The system achieved 100% accuracy, precision, recall, and F1-score in binary classification and maintained exceptional performance in multiclass scenarios. These findings demonstrate the potential of advanced AI methodologies in safeguarding IoHT environments, providing high-fidelity detection while minimizing false positives.
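
A hybrid of the named components could be wired as parallel BLSTM and GRU branches feeding dense (ANN) layers, as sketched below; the feature count, window length, and layer sizes are assumptions, and the ECU-IoHT preprocessing is omitted.

```python
# Hedged sketch of one plausible ANN + BLSTM + GRU hybrid for the binary
# intrusion-detection case. All shapes and layer sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(20, 8))               # 20 packets x 8 features (assumed)
b = layers.Bidirectional(layers.LSTM(32))(inp)  # BLSTM branch
g = layers.GRU(32)(inp)                         # GRU branch
x = layers.Concatenate()([b, g])
x = layers.Dense(64, activation="relu")(x)      # ANN head
out = layers.Dense(1, activation="sigmoid")(x)  # attack vs. benign
model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
```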