Search Results (2,064)

Search Parameters:
Keywords = RNN

26 pages, 2242 KB  
Article
A Multi-Source Feedback-Driven Framework for Generating WAF Test Cases
by Pengcheng Lu, Xiaofeng Zhong, Wenbo Xu and Yongjie Wang
Future Internet 2026, 18(3), 167; https://doi.org/10.3390/fi18030167 (registering DOI) - 20 Mar 2026
Abstract
Web application firewalls (WAFs) are critical defenses against persistent threats to web applications, yet their security evaluation remains challenging. Traditional manual testing methods are often inefficient and resource-intensive, while existing reinforcement learning (RL)-based automated approaches face two key limitations: (1) attackers cannot perceive opaque WAF rule logic; (2) Boolean feedback from WAFs results in sparse/delayed rewards—sparse rewards trap agents in blind exploration, and delayed rewards hinder the association between early actions and final outcomes, adversely affecting learning efficiency. To address these challenges, we propose Ouroboros—a framework integrating genetic algorithm-based symbolic rule reconstruction (translating WAF rules into interpretable RNNs for fine-grained confidence scoring), timing side-channel analysis (evaluating rule-matching depth), and a multi-tiered reward mechanism to enable self-evolving RL testing. Experiments show that the framework reaches an 89.2% bypass success rate on signature-based WAFs. This paper presents an efficient solution for automated WAF testing and delivers insights for optimizing rule logic and anomaly detection mechanisms.
(This article belongs to the Special Issue Adversarial Attacks and Cyber Security)
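To make the multi-tiered reward idea concrete, here is a minimal, hypothetical shaping function assuming the three feedback signals the abstract names: the Boolean block/pass verdict, the surrogate RNN's confidence score, and the timing-derived rule-matching depth. All names, tiers, and weights are illustrative, not the authors' implementation.

```python
def waf_reward(blocked: bool, rule_confidence: float, match_depth: float) -> float:
    """Hypothetical multi-tiered reward that densifies sparse Boolean WAF feedback.

    blocked:         the WAF's Boolean verdict for the mutated payload
    rule_confidence: score in [0, 1] from the surrogate RNN mimicking the
                     reconstructed rules (lower = closer to a bypass)
    match_depth:     normalized rule-matching depth in [0, 1] inferred from
                     response timing (deeper matching = partial evasion)
    """
    if not blocked:
        return 10.0                              # top tier: successful bypass
    # intermediate tiers: reward measurable progress even on blocked payloads
    evasion_progress = 1.0 - rule_confidence     # surrogate says "almost passes"
    return -1.0 + 0.5 * evasion_progress + 0.5 * match_depth
```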

29 pages, 6240 KB  
Article
Explainable Prediction of Power Generation for Cascaded Hydropower Systems Under Complex Spatiotemporal Dependencies
by Zexin Li, Xiaodong Shen, Yuhang Huang and Yuchen Ren
Energies 2026, 19(6), 1540; https://doi.org/10.3390/en19061540 - 20 Mar 2026
Abstract
Hydropower plays a key regulating role in new-type power systems, and both forecasting accuracy and interpretability are critical for power dispatch. However, cascade hydropower forecasting is constrained by strong spatiotemporal coupling among multi-dimensional features, flow propagation delays, and the limited transparency of deep learning models. To tackle these issues, this paper develops a hybrid framework integrating the Maximal Information Coefficient (MIC), the Long- and Short-term Time-series Network (LSTNet), and the SHapley Additive exPlanations (SHAP) interpretability method. First, an MIC-based nonlinear screening mechanism is employed to remove redundant noise and construct a high-quality input space. Second, an LSTNet model is developed to deeply extract spatiotemporal coupling features among cascade stations and flow evolution patterns, achieving high-accuracy forecasting of both system-level and station-level outputs. Finally, SHAP is used for global and local interpretability analysis to verify the physical consistency of the model's decision-making rationale. Experimental results indicate that the proposed approach achieves low errors in total output forecasting, reducing error levels by approximately 57–88% compared with Recurrent Neural Network (RNN), Gated Recurrent Unit (GRU), and Informer baselines. Moreover, SHAP feature-dependence analysis reveals a nonlinear response change for station D around 7.8 MW, providing evidence for the physical consistency of the model outputs and improving model interpretability.
(This article belongs to the Section F5: Artificial Intelligence and Smart Energy)
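A sketch of the MIC-based screening stage, assuming the minepy implementation of the Maximal Information Coefficient and an illustrative cutoff (the paper's exact tooling and threshold are not stated here):

```python
import numpy as np
from minepy import MINE  # one common MIC implementation; an assumption, not the paper's stated tooling

def mic_screen(X: np.ndarray, y: np.ndarray, names: list[str], threshold: float = 0.3):
    """Keep features whose MIC with the target exceeds a threshold.

    X: (samples, features) candidate inputs; y: target output series.
    The 0.3 cutoff is illustrative, not the authors' value.
    """
    mine = MINE(alpha=0.6, c=15)           # default MINE parameters
    kept = []
    for j, name in enumerate(names):
        mine.compute_score(X[:, j], y)
        score = mine.mic()
        if score >= threshold:
            kept.append((name, score))
    return kept
```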

33 pages, 2201 KB  
Review
Machine Learning Models for Non-Intrusive Load Monitoring: A Systematic Review and Meta-Analysis
by Herman Cristiano Jaime, Adler Diniz de Souza, Raphael Carlos Santos Machado and Otávio de Souza Martins Gomes
Inventions 2026, 11(2), 29; https://doi.org/10.3390/inventions11020029 - 19 Mar 2026
Abstract
Non-Intrusive Load Monitoring (NILM) systems are increasingly applied in residential and commercial environments to disaggregate energy consumption without requiring additional hardware sensors. The integration of Machine Learning (ML) techniques has enhanced the accuracy and efficiency of load identification and classification in smart meter-based systems. This study presents a systematic review and meta-analysis aimed at identifying, classifying, and quantitatively evaluating ML models applied to NILM. Searches were conducted in the IEEE Xplore and Scopus databases, restricted to peer-reviewed publications from 2017 to 2024. Thirty studies met the eligibility criteria and were included in the quantitative synthesis using a random-effects meta-analysis model (DerSimonian–Laird estimator). The primary effect measure was the F1-score. Statistical analyses were performed using R (version 4.5.0) and Python (version 3.10.0), including heterogeneity assessment and subgroup analyses according to model type. Hybrid models, such as SVDT-KNN-MLP, LE-CRNN, and RBFNN-MOGA, achieved the highest pooled F1-scores, although supported by a limited number of studies. Traditional approaches, including CNN, KNN, and Random Forest, demonstrated consistently strong performance and broader validation, whereas Boosted Trees and RNN-based models showed lower or more variable results. Substantial heterogeneity was observed across studies, highlighting the need for dataset standardization, reproducible evaluation frameworks, and further validation of emerging hybrid architectures in diverse operational scenarios. This study contributes by providing a quantitative synthesis of machine learning models applied to NILM using a structured PRISMA-based methodology and subgroup analysis by model architecture. Unlike previous narrative reviews, this work integrates scientometric analysis with meta-analytic performance aggregation, offering a consolidated and comparative evidence base for future NILM research.
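The pooling step can be shown directly. A minimal NumPy sketch of the DerSimonian–Laird random-effects estimator applied to per-study effect sizes (the review ran its analyses with R and Python packages; this only illustrates the estimator itself):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study effects (e.g., transformed F1-scores) under a
    DerSimonian-Laird random-effects model.

    effects:   per-study effect sizes
    variances: their within-study sampling variances
    Returns the pooled effect, its standard error, and tau^2.
    """
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                  # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)             # fixed-effect pooled mean
    q = np.sum(w * (y - y_fe) ** 2)              # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-study variance (DL)
    w_re = 1.0 / (v + tau2)                      # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = float(np.sqrt(1.0 / np.sum(w_re)))
    return pooled, se, tau2
```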

26 pages, 2033 KB  
Article
AI-Driven Dynamic Resource Allocation for Energy-Efficient Optical Fiber Communication Networks: Modeling, Algorithms, and Performance Evaluation
by Askar Abdykadyrov, Gulzada Mussapirova, Nurzhigit Smailov, Zhanna Seissenbiyeva, Gulbakhar Yussupova, Ainur Tasieva, Ainur Kuttybayeva, Altyngul Turebekova, Rizat Kenzhegaliyev and Nurlan Kystaubayev
J. Sens. Actuator Netw. 2026, 15(2), 28; https://doi.org/10.3390/jsan15020028 - 17 Mar 2026
Viewed by 143
Abstract
This research examines resource management and energy consumption processes in optical fiber communication networks with access–metro–core architectures. The study addresses the problem that conventional static and semi-dynamic control methods are unable to simultaneously ensure energy efficiency and QoS stability under conditions of exponentially growing and highly variable traffic. To solve this problem, an AI-based integrated control model was developed that combines traffic prediction, dynamic resource allocation, spectrum management, and power optimization within a unified framework. Traffic prediction is performed using LSTM–BiRNN neural networks (1.2–1.8 million parameters, 300–500 thousand records), while control decisions are generated by an Actor–Critic reinforcement learning algorithm. Simulation results obtained in the Python 3.12 and OptiSystem 17.0 environments demonstrate that, in the Access segment (1–10 Gb/s), latency is stabilized within 1–10 ms; in the Metro segment (40–120 Gb/s), energy consumption is reduced by 18–27%; and in the Core segment (400–1000 Gb/s), the efficiency of RSA algorithms increases by 22–35%. When the EDFA output power is maintained within +17 to +23 dBm, amplifier power consumption decreases by 10–15%, resulting in overall network energy savings of 20–40%. These results are explained by the synergy of accurate traffic prediction provided by the LSTM–BiRNN model and proactive real-time decision-making enabled by the Actor–Critic algorithm. The distinctive feature of the proposed approach is the simultaneous optimization of energy efficiency and QoS across the access, metro, and core segments within a single integrated architecture. The results can be applied in the design and modernization of optical fiber communication networks, as well as in the deployment of energy-efficient intelligent network management systems.
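For the control side, a compact PyTorch sketch of an actor-critic network with one advantage-based update step. The dimensions, two-layer design, and hyperparameters are assumptions for illustration, not the paper's reported architecture.

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Shared trunk with a policy head (e.g., over spectrum/resource actions)
    and a state-value head."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, n_actions)
        self.critic = nn.Linear(hidden, 1)

    def forward(self, state):
        z = self.shared(state)
        return torch.distributions.Categorical(logits=self.actor(z)), self.critic(z)

def a2c_step(model, optimizer, state, action, reward, next_value, gamma=0.99):
    """One advantage actor-critic update on a single transition;
    next_value is the (detached) bootstrapped value of the next state."""
    dist, value = model(state)
    target = reward + gamma * next_value         # TD target
    advantage = (target - value).detach()        # drives the policy gradient
    policy_loss = -dist.log_prob(action) * advantage
    value_loss = (target - value).pow(2)         # critic regression to target
    optimizer.zero_grad()
    (policy_loss + value_loss).mean().backward()
    optimizer.step()
```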

13 pages, 1027 KB  
Article
Predicting Cybersickness in Virtual Reality from Head–Torso Kinematics Using a Hybrid Convolutional–Recurrent Network Model
by Ala Hag, Houshyar Asadi, Mohammad Reza Chalak Qazani, Thuong Hoang, Ambarish Kulkarni, Stefan Greuter and Saeid Nahavandi
Computers 2026, 15(3), 193; https://doi.org/10.3390/computers15030193 - 17 Mar 2026
Viewed by 125
Abstract
Motion sickness (MS) is a prevalent condition that can significantly degrade user comfort and immersion, particularly in virtual reality (VR) environments. Accurate prediction models are essential for early detection and mitigation of MS symptoms, thereby improving the overall VR experience. Most existing approaches rely on bio-physiological data acquired through body-mounted sensors, which may restrict user mobility and diminish immersion. This study proposes a less intrusive alternative, leveraging head and torso kinematic data for MS prediction. We introduce a hybrid Convolutional–Recurrent Neural Network (C-RNN) designed to capture both spatial and temporal features for enhanced classification accuracy. Using a dataset of 40 participants, the proposed C-RNN outperformed traditional machine learning models—including Support Vector Machines (SVMs), k-Nearest Neighbors (KNN), Decision Trees (DT), and a baseline Recurrent Neural Network (RNN)—across multiple evaluation metrics. The C-RNN achieved 85.63% accuracy, surpassing SVM (60%), KNN (73.75%), DT (74.38%), and RNN (81.88%), with corresponding gains in precision, recall, F1-score, and ROC AUC. These results demonstrate that head–torso motion patterns provide sufficient predictive signal for accurate MS detection, offering a non-intrusive, efficient alternative to physiological sensing that supports improved comfort and sustained immersion in VR.
(This article belongs to the Special Issue Innovative Research in Human–Computer Interactions)
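A minimal PyTorch sketch of the convolutional-recurrent pattern described here: a 1-D convolution extracts local features from the kinematic channels, and a GRU models temporal dependencies. Layer sizes are illustrative assumptions, not the paper's exact C-RNN.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_channels: int, n_classes: int):
        super().__init__()
        self.conv = nn.Sequential(                    # local feature extraction
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.rnn = nn.GRU(32, 64, batch_first=True)   # temporal dependencies
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):           # x: (batch, channels, time)
        z = self.conv(x)            # (batch, 32, time // 2)
        z = z.transpose(1, 2)       # (batch, time // 2, 32) for the GRU
        _, h = self.rnn(z)          # h: (1, batch, 64), last hidden state
        return self.head(h[-1])     # class logits
```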

17 pages, 2631 KB  
Article
Monitoring of Liquid Metal Reactor Heater Zones with Recurrent Neural Network Learning of Temperature Time Series
by Maria Pantopoulou, Derek Kultgen, Lefteri Tsoukalas and Alexander Heifetz
Energies 2026, 19(6), 1462; https://doi.org/10.3390/en19061462 - 14 Mar 2026
Viewed by 160
Abstract
Advanced high-temperature fluid reactors (ARs), such as sodium fast reactors (SFRs) and molten salt cooled reactors (MSCRs), utilize high-temperature fluids at ambient pressure. To melt the fluid during reactor startup and prevent fluid freezing during cooldown, the thermal–hydraulic systems of such ARs include heater zones consisting of dedicated heaters with controllers, temperature sensors, and thermal insulation. The failure of heater zones due to insulation material degradation or improper installation, resulting in parasitic heat losses, can lead to fluid freezing. The detection of faults using a heat-transfer model is difficult because of a lack of knowledge of the experimental details. Data-driven machine learning of heater zone temperature time series offers a viable alternative. In this study, we benchmarked the performance of recurrent neural networks (RNNs) in an analysis of heat-up transient temperature time series of heater zones installed on a liquid sodium vessel. The RNN models include long short-term memory (LSTM) and gated recurrent unit (GRU) networks, as well as their bi-directional variants, BiLSTM and BiGRU. Anomalous temperature points were designated using a percentile-based threshold applied to residual fluctuations in the detrended temperature time series. Additionally, the impact of the exponentially weighted moving average (EWMA) method on detection accuracy was examined. The RNN models' performance was assessed using precision, recall, and F1 score metrics. Results demonstrated that RNN models effectively detect anomalies in temperature time series, with the best models for each heater zone achieving F1 scores of over 93%. To explain the variations in RNN model performance across different heater zones, we used Kullback–Leibler (KL) divergence to quantify the relative entropy between training and testing data, and Detrended Fluctuation Analysis (DFA) to assess long-range temporal correlations. For datasets with strong long-range correlations and minimal relative entropy between training and testing data, GRU is the best-performing model. When the data exhibit weaker long-term correlations and a significant relative entropy between training and testing distributions, BiGRU shows the best performance. For datasets with intermediate values of both KL divergence and DFA, the best performance is obtained with LSTM and BiLSTM, respectively.
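The anomaly-designation step can be sketched directly: residuals between measured temperatures and the model's predictions are optionally EWMA-smoothed, and points above a high percentile of the absolute-residual distribution are flagged. The 99th percentile and span of 10 are illustrative values, not the authors' settings.

```python
import numpy as np
import pandas as pd

def flag_anomalies(measured: pd.Series, predicted: pd.Series,
                   pct: float = 99.0, ewma_span: int = 10) -> pd.Series:
    """Percentile-based anomaly designation on prediction residuals."""
    resid = (measured - predicted).abs()          # residual fluctuations
    resid = resid.ewm(span=ewma_span).mean()      # optional EWMA smoothing
    threshold = np.percentile(resid.dropna(), pct)
    return resid > threshold                      # Boolean anomaly mask
```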

25 pages, 5501 KB  
Article
VMRNN-DMSA: A Spatiotemporal Prediction Model for Shiitake Mushroom Fruiting Body Growth
by Xingmei Xu, Shujuan Wei, Zuocheng Jiang, Jiali Wang, Jinying Li and Jing Zhou
Agriculture 2026, 16(6), 642; https://doi.org/10.3390/agriculture16060642 - 11 Mar 2026
Viewed by 171
Abstract
In traditional time-series image prediction tasks, both accuracy and image quality tend to deteriorate as the prediction horizon extends. To address this challenge in Shiitake mushroom fruiting body growth prediction, this study selected Shiitake mushroom strain No. 509, cultivated by the Shanghai Academy of Agricultural Sciences, as the experimental subject and proposed an enhanced model, VMRNN-DMSA, based on the Vision Mamba RNN Depth architecture. This model integrates a skip-connection mechanism with a Max Feature Map module to effectively filter and fuse features, enhancing feature representation and prediction accuracy. Additionally, a Spatial Attention Mechanism was introduced to strengthen the perception of key regions and improve spatial modeling. Furthermore, an Adaptive Kernel Convolution module with irregular context convolution kernels was incorporated to extract fine-grained local features and enhance visual quality. A weighted loss function was used to balance pixel-level accuracy, structural fidelity, and perceptual quality. This function combines Mean Squared Error Loss, Multi-Scale Structural Similarity, and Perceptual Loss. Experimental results showed that the proposed method achieved an MSE of 39.4255, an SSIM of 0.8579, and a PSNR of 22.0774. Compared with baseline models, MSE decreased by 29.05%, while SSIM and PSNR increased by 19.34% and 14.52%, respectively. These results indicate that VMRNN-DMSA significantly improves both prediction accuracy and image quality in long-term forecasting tasks, providing a reliable reference for the growth prediction of other edible fungi.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
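A minimal sketch of the weighted loss described above, combining a pixel-level MSE term, a multi-scale structural-similarity term, and a perceptual term. The weights are illustrative, and pytorch_msssim plus a caller-supplied frozen feature extractor (e.g., truncated VGG features) are assumed stand-ins for the paper's exact components.

```python
import torch.nn.functional as F
from pytorch_msssim import ms_ssim  # assumed dependency for the MS-SSIM term

def composite_loss(pred, target, feat_extractor,
                   w_mse=1.0, w_ssim=0.5, w_perc=0.1):
    """Weighted loss balancing pixel accuracy, structural fidelity, and
    perceptual quality; pred/target are image batches scaled to [0, 1]."""
    l_mse = F.mse_loss(pred, target)                          # pixel-level accuracy
    l_ssim = 1.0 - ms_ssim(pred, target, data_range=1.0)      # structural fidelity
    l_perc = F.mse_loss(feat_extractor(pred),                 # perceptual quality
                        feat_extractor(target))
    return w_mse * l_mse + w_ssim * l_ssim + w_perc * l_perc
```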

20 pages, 10594 KB  
Review
Review of Polymer Drug Therapy for Cancer Driven by Artificial Intelligence
by Jie Zheng and Yuanlv Ye
Polymers 2026, 18(6), 677; https://doi.org/10.3390/polym18060677 - 11 Mar 2026
Viewed by 211
Abstract
This review systematically evaluates the interdisciplinary convergence of artificial intelligence (AI) and polymer science in cancer therapy. Beyond mere description, we provide an integrated framework spanning synthetic optimization, biocompatibility prediction, and the design of tumor microenvironment (TME)-responsive carriers. We highlight how AI algorithms (ML, DL, and RNNs) transform traditional trial-and-error methods into a data-driven paradigm, enabling precise spatiotemporal drug release and individualized pharmacokinetic modeling. Crucially, this work addresses the critical gap between computational modeling and clinical realization by providing a balanced critical analysis of current bottlenecks, including the “small data” challenge, publication bias, and regulatory hurdles. We conclude with a roadmap for AI-guided precision oncology, shifting the focus from predictive accuracy to mechanistic interpretability and prospective in vivo validation.
(This article belongs to the Section Artificial Intelligence in Polymer Science)

13 pages, 2079 KB  
Article
Trend Prediction of Distribution Network Fault Symptoms Based on XLSTM-Informer Fusion Model
by Zhen Chen, Lin Gao and Yuanming Cheng
Energies 2026, 19(6), 1389; https://doi.org/10.3390/en19061389 - 10 Mar 2026
Viewed by 190
Abstract
Accurate prediction of distribution network operating states is essential for implementing proactive fault warning systems. However, with the high penetration of distributed energy resources, measurement data exhibit strong nonlinearity and multi-scale temporal characteristics, posing significant challenges to existing prediction methods. Current mainstream approaches face a critical dilemma: traditional recurrent neural network (RNN) models (e.g., LSTM) suffer from vanishing gradients and memory bottlenecks in long-sequence forecasting, making it difficult to capture long-term evolutionary trends. In contrast, while standard Transformer models excel at global modeling, their smoothing effect renders them insensitive to subtle transient abrupt changes such as voltage sags, and they incur high computational complexity. To address the dual challenges of “difficulty in capturing transient abrupt changes” and “inability to simultaneously handle long-term trends,” this paper proposes a fault precursor trend prediction model that integrates Extended Long Short-Term Memory (XLSTM) with Informer, termed XLSTM-Informer. To tackle the challenge of extracting transient features, an XLSTM-based local encoder is constructed. By replacing the conventional sigmoid activation with an improved exponential gating mechanism, the model achieves significantly enhanced sensitivity to instantaneous fluctuations in voltage and current. Additionally, a matrix memory structure is introduced to effectively mitigate information forgetting during long-sequence training. To overcome the challenge of modeling long-term dependencies, Informer is employed as the global decoder. Leveraging its ProbSparse self-attention mechanism, the model substantially reduces computational complexity while accurately capturing long-range temporal dependencies. Experimental results on a real-world distribution network dataset demonstrate that the proposed model achieves substantially lower Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE) compared to standalone CNN, LSTM, and other baseline models, as well as conventional LSTM–Informer hybrid approaches. Particularly under extreme operating conditions—such as sustained high summer loads and winter heating peak loads—the model successfully overcomes the trade-off limitations of traditional methods, enabling simultaneous and accurate prediction of both local precursors and global trends. This provides a reliable technical foundation for proactive warning systems in distribution networks.
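The exponential gating idea can be illustrated with a single stabilized cell step, in the spirit of the sLSTM cell from the xLSTM literature. This is a sketch of the mechanism only, not the authors' exact formulation.

```python
import torch

def exp_gated_step(z, i_tilde, f_tilde, c_prev, n_prev, m_prev):
    """One exponential-gating update with log-domain stabilization.

    z:                 candidate cell input
    i_tilde, f_tilde:  input/forget gate pre-activations
    c, n, m:           cell state, normalizer state, and stabilizer state
    """
    m = torch.maximum(f_tilde + m_prev, i_tilde)  # keeps exp() arguments bounded
    i = torch.exp(i_tilde - m)                    # exponential input gate
    f = torch.exp(f_tilde + m_prev - m)           # exponential forget gate
    c = f * c_prev + i * z                        # cell state update
    n = f * n_prev + i                            # normalizer update
    h = c / n                                     # normalized state (before output gate)
    return h, c, n, m
```

Compared with a sigmoid gate, the unbounded exponential response lets large pre-activations dominate the update, which is the sensitivity to abrupt changes the abstract describes.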

16 pages, 2031 KB  
Article
A Comparative Study of Transformer-Based and Classical Models for Financial Time-Series Forecasting
by Ting Liu
J. Risk Financial Manag. 2026, 19(3), 203; https://doi.org/10.3390/jrfm19030203 - 9 Mar 2026
Viewed by 406
Abstract
This study compares classical and deep learning models (ARIMA, Random Forest, RNN, LSTM, CNN, and Transformer) for forecasting one-day-ahead log returns r_{t+1} = ln(P_{t+1}/P_t) using daily data for six U.S.-listed equities (NVDA, TSLA, SMCI, GOOGL, PYPL, SNAP) from 2014 to 2024. Predictors include lagged price/return information, lagged macroeconomic variables (CPI, policy rate, GDP) to reflect information availability, and technical indicators (SMA, RSI, MACD) computed using rolling windows ending at day t to avoid look-ahead bias. Performance is evaluated in a walk-forward out-of-sample design, with hyperparameters selected using time-series validation within each training window. Empirically, results are asset-dependent: ARIMA and Random Forest remain strong baselines, while deep learning performance varies by asset, with LSTM occasionally competitive and the Transformer competitive but not uniformly dominant. For context, this study also reports a rule-based SMA(10/50) crossover benchmark evaluated net of transaction costs. Overall, the findings suggest that predictive signals in daily equity returns, when present, are modest and must be assessed under strict leakage controls and realistic evaluation protocols.
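A short pandas sketch of the leakage-safe setup the abstract describes: the target is the next day's log return r_{t+1} = ln(P_{t+1}/P_t), and every predictor uses only information available through day t. The SMA window and split sizes are illustrative.

```python
import numpy as np
import pandas as pd

def make_features(prices: pd.Series, sma_window: int = 10) -> pd.DataFrame:
    """Target/feature table with no look-ahead: the row at day t only uses
    prices up to and including day t."""
    r = np.log(prices / prices.shift(1))           # r_t = ln(P_t / P_{t-1})
    return pd.DataFrame({
        "r_lag1": r,                               # today's return, known at t
        "sma": prices.rolling(sma_window).mean(),  # rolling window ending at day t
        "target": r.shift(-1),                     # r_{t+1}, the quantity forecast
    }).dropna()

def walk_forward_splits(n: int, train_size: int, test_size: int):
    """Yield (train, test) index ranges: fit on the past, test on the next block."""
    start = 0
    while start + train_size + test_size <= n:
        yield (range(start, start + train_size),
               range(start + train_size, start + train_size + test_size))
        start += test_size
```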

34 pages, 8947 KB  
Article
Lightweight Evidential Time Series Imputation Method for Bridge Structural Health Monitoring
by Die Liu, Jianxi Yang, Lihua Chen, Tingjun Xu, Youjia Zhang, Lei Zhou and Jingyuan Shen
Buildings 2026, 16(5), 1076; https://doi.org/10.3390/buildings16051076 - 9 Mar 2026
Viewed by 279
Abstract
Long-term data loss resulting from sensor malfunctions, communication interruptions, and other factors in Structural Health Monitoring (SHM) significantly undermines the reliability of damage identification and safety assessment. Existing methods—ranging from statistical approaches and low-rank matrix completion to traditional machine learning and deep learning imputation techniques—often suffer from either limited accuracy or excessive model size and slow inference, making deployment in resource-constrained scenarios difficult. To address these challenges, this paper proposes TEFN–Imputation, a lightweight and efficient time-series imputation model. This model utilizes observation-driven non-stationary normalization to mitigate the impact of time-varying characteristics and dimensional discrepancies. It employs linear projection for temporal length alignment and constructs BPA-style mass representations from the dual perspectives of time and channel. Furthermore, it replaces strict Dempster–Shafer belief combination with an expectation-based evidential aggregation (readout), thereby significantly reducing computational overhead while enabling uncertainty-aware evidential indicators for interpretation rather than claiming a direct accuracy gain from uncertainty modeling. The observed accuracy and robustness improvements are primarily attributed to the normalization and dual temporal–channel modeling design under the same lightweight readout. Systematic experiments on two real-world bridge monitoring datasets, Z24 and Hell Bridge, demonstrate that TEFN consistently maintains low Mean Absolute Error (MAE) and minimal volatility across various combinations of training and testing missing rates, exhibiting high robustness against variations in missing rates and train–test mismatches. Concurrently, compared to RNN and large-scale Transformer baselines, TEFN reduces parameter count and CPU inference time by one to two orders of magnitude. Thus, it achieves a superior trade-off among accuracy, efficiency, and model scale, making it highly suitable for online SHM and imputation tasks in practical engineering applications. Across the settings on Z24, TEFN achieves a mean MAE of 0.218 with a standard deviation of 0.002, while using only 0.02 MB of parameters and 2.73 ms per-batch CPU inference.
(This article belongs to the Section Building Structures)
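The observation-driven normalization can be sketched in NumPy: per-channel statistics are computed from observed entries only, so missing values do not distort the scale. This illustrates the idea, not TEFN's exact operator.

```python
import numpy as np

def nonstationary_normalize(x: np.ndarray, mask: np.ndarray, eps: float = 1e-5):
    """Normalize (time, channels) data using observed entries only.

    x:    float array with arbitrary placeholder values at missing points
    mask: 1 where observed, 0 where missing
    Returns the normalized array plus the statistics needed to de-normalize
    the model's imputed outputs.
    """
    obs = np.where(mask == 1, x, np.nan)            # hide missing entries
    mu = np.nanmean(obs, axis=0, keepdims=True)     # per-channel observed mean
    sigma = np.nanstd(obs, axis=0, keepdims=True)   # per-channel observed std
    return (x - mu) / (sigma + eps), mu, sigma
```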

17 pages, 2386 KB  
Article
Comparative Evaluation of Deep Learning Models for Respiratory Rate Estimation Using PPG-Derived Numerical Features
by Syed Mahedi Hasan, Mercy Golda Sam Raj and Kunal Mitra
Electronics 2026, 15(5), 1108; https://doi.org/10.3390/electronics15051108 - 7 Mar 2026
Viewed by 261
Abstract
Respiratory rate (RR) is a critical vital sign for the early detection of hypoxia and respiratory deterioration, yet its continuous monitoring remains challenging in clinical environments. Photoplethysmography (PPG) provides a non-invasive source of physiological information from which respiratory dynamics can be inferred. In this study, numerical physiological features derived from PPG data were used to comparatively evaluate multiple deep learning models for respiratory rate estimation. Fixed-length sliding windows were constructed from the dataset and used to train five neural network architectures: a Deep Feedforward Neural Network (DFNN), unidirectional and bidirectional Recurrent Neural Networks (RNN, Bi-RNN), and unidirectional and bidirectional Long Short-Term Memory networks (LSTM, Bi-LSTM). Model performance was assessed using mean absolute error (MAE), root mean squared error (RMSE), coefficient of determination (R²), and computational runtime. Results indicate that models incorporating temporal dependencies outperform the static feedforward baseline, achieving MAE values as low as 0.521 breaths/min, making them competitive with or lower than previously reported PPG-based approaches. These findings highlight the effectiveness of temporal deep learning models for respiratory rate estimation from PPG-derived numerical features and provide insight into accuracy–efficiency trade-offs relevant to real-time monitoring applications.
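A NumPy sketch of the fixed-length sliding-window construction; the window length, stride, and label alignment are assumptions, since the abstract does not state them.

```python
import numpy as np

def sliding_windows(features: np.ndarray, labels: np.ndarray,
                    win_len: int = 32, stride: int = 1):
    """Build supervised windows from PPG-derived numerical features.

    features: (time, n_features); labels: per-step reference RR (breaths/min).
    Each window is paired with the RR at its final time step.
    """
    X, y = [], []
    for start in range(0, len(features) - win_len + 1, stride):
        X.append(features[start:start + win_len])
        y.append(labels[start + win_len - 1])
    return np.stack(X), np.asarray(y)   # X: (n_windows, win_len, n_features)
```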

19 pages, 2002 KB  
Article
Application of Machine Learning Approach to Classify Human Activity Level Based on Lifelog Data
by Si-Hwa Jeong, Woomin Nam and Keon Chul Park
Sensors 2026, 26(5), 1612; https://doi.org/10.3390/s26051612 - 4 Mar 2026
Viewed by 260
Abstract
This paper presents a human activity-level classification model based on patient lifelog data collected from wearable devices. Over approximately two months, heart rate, step count, and calorie consumption were collected from wearable devices for 182 patients. Using these lifelog data, machine learning models were developed to classify patients' physical activity status into five levels. The three wearable data streams (heart rate, step count, and calorie consumption) were pre-processed into an integrated time series. A total of 80% of the integrated data was used as the training dataset, and the remaining 20% was used as the test dataset. Sixteen algorithms were evaluated, including 12 traditional machine learning models (SVM, KNN, RF, etc.) and 4 deep learning models (CNN, RNN, etc.), and cross-validation was performed by dividing the training dataset into 5 folds. By varying the training parameters, models with optimal parameters were derived. The final models were evaluated on new patient lifelog data, showing that human activity level can be classified with high accuracy from heart rate and step count.
(This article belongs to the Special Issue Sensors for Human Activity Recognition: 3rd Edition)
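A scikit-learn sketch of the selection loop: 5-fold cross-validation on the 80% training split for each candidate classifier. Three of the sixteen algorithms are shown, with default rather than the tuned hyperparameters the study derives.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def evaluate_candidates(X_train, y_train):
    """Mean 5-fold CV accuracy per candidate model on the training split."""
    candidates = {
        "SVM": SVC(),
        "KNN": KNeighborsClassifier(),
        "RF": RandomForestClassifier(),
    }
    return {name: cross_val_score(model, X_train, y_train, cv=5).mean()
            for name, model in candidates.items()}
```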

23 pages, 760 KB  
Article
Trajectory Data Publishing Scheme Based on Transformer Decoder and Differential Privacy
by Haiyong Wang and Wei Huang
ISPRS Int. J. Geo-Inf. 2026, 15(3), 106; https://doi.org/10.3390/ijgi15030106 - 3 Mar 2026
Viewed by 232
Abstract
The proliferation of Location-Based Services (LBSs) has generated vast trajectory datasets that offer immense analytical value but pose critical privacy risks. Achieving an optimal balance between data utility and privacy preservation remains a challenge, a difficulty compounded by the limitations of existing methods in modeling complex, long-term spatiotemporal dependencies. To address this, this paper proposes a trajectory data publishing scheme combining a Transformer decoder with differential privacy. Unlike traditional single-layer approaches, the proposed method establishes a systematic generation–generalization framework. First, a Transformer decoder is integrated into a Generative Adversarial Network (GAN). This architecture mitigates the gradient vanishing issues common in RNN-based models, generating high-fidelity synthetic trajectories that capture long-range correlations while decoupling them from sensitive source data. Second, to provide rigorous privacy guarantees, a clustering-based generalization strategy is implemented, utilizing Exponential and Laplace mechanisms to ensure ϵ-differential privacy. Experiments on the Geolife and Foursquare NYC datasets demonstrate that the scheme significantly outperforms leading baselines, achieving a superior trade-off between privacy protection and data utility.
(This article belongs to the Topic Recent Advances in Security, Privacy, and Trust)
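The Laplace mechanism at the core of the generalization stage is short enough to show directly; the sensitivity and ϵ values in the example are illustrative.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Add Laplace(sensitivity / epsilon) noise to a numeric query, e.g., a
    cluster-centroid coordinate, to satisfy epsilon-differential privacy."""
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: perturb a centroid latitude with sensitivity 0.01 and epsilon 1.0.
noisy_lat = laplace_mechanism(40.7128, sensitivity=0.01, epsilon=1.0)
```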

23 pages, 13416 KB  
Article
An Adaptive Ensemble Model Based on Deep Reinforcement Learning for the Prediction of Step-like Landslide Displacement
by Tengfei Gu, Lei Huang, Shunyao Tian, Zhichao Zhang, Huan Zhang and Yanke Zhang
Remote Sens. 2026, 18(5), 761; https://doi.org/10.3390/rs18050761 - 3 Mar 2026
Viewed by 261
Abstract
Accurate prediction of landslide displacement is crucial for hazard prevention. However, recurrent neural network (RNN) models have limitations in simultaneously capturing lag time and feature importance, and their black-box nature limits their interpretability. Moreover, the performance of single models varies across different deformation stages, especially during acceleration. To address these challenges, we propose an interpretable deep reinforcement learning-based adaptive ensemble (DRL-AE) framework. The method employs Seasonal and Trend decomposition using Loess (STL) to separate cumulative displacement into trend and periodic components. Trend and periodic sequences are predicted using double exponential smoothing and three RNN variants, respectively. An improved Convolutional Block Attention Module (ICBAM) enhances periodic feature extraction and provides temporal–spatial interpretability. The Deep Deterministic Policy Gradient algorithm adaptively integrates multi-model predictions in response to evolving environmental conditions. To validate the DRL-AE, a case study is conducted on the Baijiabao landslide in Zigui County, China. The results indicate that the DRL-AE substantially enhances prediction accuracy. For periodic displacement, it reduces MAE by 10.02% and RMSE by 6.65%, and increases R² by 4.27% compared with the ICBAM-GRU model. The results also confirm the effectiveness of ICBAM in feature extraction, and the generated heatmaps provide intuitive interpretability of the relevant triggering factors.
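The first two stages can be sketched with statsmodels: STL separates cumulative displacement into trend and periodic parts, and Holt's double exponential smoothing extrapolates the trend. The period and horizon are illustrative; in the full DRL-AE pipeline the periodic component would feed the three RNN variants.

```python
import pandas as pd
from statsmodels.tsa.holtwinters import Holt
from statsmodels.tsa.seasonal import STL

def decompose_and_forecast_trend(displacement: pd.Series,
                                 period: int = 12, steps: int = 3):
    """STL decomposition of cumulative displacement, then a Holt
    (double exponential smoothing) forecast of the trend component."""
    result = STL(displacement, period=period).fit()
    trend, periodic = result.trend, result.seasonal
    trend_forecast = Holt(trend).fit().forecast(steps)   # trend extrapolation
    return trend_forecast, periodic                      # periodic part -> RNN models
```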
