Search Results (6,590)

Search Parameters:
Keywords = long short-term memory network

16 pages, 1648 KB  
Article
Application of Recurrent Neural Networks for Time-Series Analysis of Low-Frequency Signals Generated by Power Transformers
by Daniel Jancarczyk, Marcin Bernas and Tomasz Boczar
Appl. Sci. 2026, 16(9), 4295; https://doi.org/10.3390/app16094295 - 28 Apr 2026
Abstract
Traditional diagnostics of power transformers heavily rely on signal transformations, such as Welch’s method, to analyze low-frequency noise signals. This study proposes a novel approach using Recurrent Neural Networks (RNNs), specifically Long Short-Term Memory (LSTM) networks, for direct time-series analysis of raw low-frequency signals without frequency-domain transformation. By training and testing multiple LSTM architectures on transformer vibroacoustic data, the proposed approach achieved approximately 86% accuracy in the fine-grained multi-class benchmark and up to 95.54% in the broader grouped categorization scenario. The model further demonstrated near-perfect classification accuracy in distinguishing transformer types (normal vs. overload) using a simplified RNN architecture. These findings illustrate that RNN-based models can streamline transformer diagnostics and improve accuracy in identifying operational states and types, potentially advancing non-invasive monitoring techniques in power system infrastructure. Full article
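A minimal Keras sketch of the general pattern this abstract describes, classifying raw 1-D signal windows with stacked LSTMs and no frequency-domain transform; the window length, layer sizes, and class count are illustrative assumptions, not the paper's architecture:

import numpy as np
from tensorflow.keras import layers, models

n_classes = 5      # hypothetical number of transformer states/types
window = 1024      # hypothetical samples per raw signal window

model = models.Sequential([
    layers.Input(shape=(window, 1)),        # raw time series, no Welch/FFT step
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(32),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data only to show the expected shapes.
x = np.random.randn(16, window, 1).astype("float32")
y = np.random.randint(0, n_classes, size=16)
model.fit(x, y, epochs=1, verbose=0)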
47 pages, 1732 KB  
Review
Multi-Temporal InSAR and Machine Learning for Geohazard Monitoring: A Systematic Review with Emphasis on Noise Mitigation and Model Transferability
by Alex Alonso-Díaz, Miguel Fontes, Ana Cláudia Teixeira, Shimon Wdowinski and Joaquim J. Sousa
Remote Sens. 2026, 18(9), 1356; https://doi.org/10.3390/rs18091356 - 28 Apr 2026
Abstract
Interferometric Synthetic Aperture Radar (InSAR) enables regional monitoring of ground deformation, but operational geohazard analysis remains challenged by atmospheric artefacts, temporal decorrelation, and the need for scalable interpretation of multi-temporal products. A systematic review was conducted through searches in Scopus and Web of Science, resulting in 135 peer-reviewed scientific articles on the integration of Machine Learning (ML) and Deep Learning (DL) with multi-temporal InSAR (MT-InSAR). The literature is dominated by applications to landslides and land subsidence, with additional studies addressing volcanic unrest and other deformation-related hazards. Persistent Scatterer (PS) and Small-Baseline Subset (SBAS) approaches are frequently used to derive deformation time series, which are then coupled with ML/DL for the detection and mapping of active phenomena and for short-horizon forecasting. Convolutional architectures, such as Convolutional Neural Networks (CNNs), are commonly reported for spatial recognition tasks, while recurrent models like Long Short-Term Memory (LSTM) networks are often applied to time-series prediction. Reported benefits include improved automation and predictive performance, although sensitivity to noise sources remains a challenge. Overall, the evidence supports AI-enabled InSAR workflows for scalable geohazard monitoring, while highlighting the need for standardized benchmarks and systematic transferability assessment. This review provides a roadmap for transitioning from research prototypes to operational early-warning systems. Full article
20 pages, 3466 KB  
Review
AI-Driven Hybrid Detection and Classification Framework for Secure Sleep Health IoT Networks
by Prajoona Valsalan and Mohammad Maroof Siddiqui
Clocks & Sleep 2026, 8(2), 23; https://doi.org/10.3390/clockssleep8020023 - 28 Apr 2026
Abstract
Sleep disorders, such as insomnia, obstructive sleep apnea (OSA), narcolepsy, REM sleep behavior disorder, and circadian rhythm disturbances, represent a rapidly expanding global health burden that is strongly associated with cardiovascular, metabolic, neurological, and psychiatric diseases. Advancements in wearable sensing technologies and Internet of Medical Things (IoMT) infrastructures have expanded the possibilities for continuous, home-based sleep assessment beyond conventional polysomnography laboratories. These Sleep Health Internet of Things (S-HIoT) systems combine multimodal physiological sensing (EEG, ECG, SpO2, respiratory effort, and actigraphy) with wireless communication and cloud-based analytics for automated sleep-stage classification and disorder detection. Nonetheless, the digitization of sleep medicine raises significant cybersecurity concerns. The constant transmission of sensitive biomedical information exposes S-HIoT networks to anomalous traffic flows, signal manipulation, replay attacks, spoofing, and data integrity violations. Existing studies mostly analyze physiological signals and network intrusion detection independently, leaving cyber–physical sleep monitoring ecosystems systemically vulnerable. To address this gap, this review synthesizes recent advances (2022–2026) in AI-assisted sleep-stage classification and IoMT anomaly detection, with a detailed analysis of CNN, LSTM/BiLSTM, and Transformer-based systems as well as federated schemes and lightweight, edge-deployable intrusion detection models. The review identifies a key gap in the literature: integrated architectures that balance the fidelity of physiological modeling with communication-layer security. To close it, we present a unified framework combining CNN-based spatial feature extraction, Bidirectional Long Short-Term Memory (BiLSTM)-based temporal modeling, and Random Forest-based ensemble classification in a dual-task learning approach. We propose a multi-objective optimization framework to jointly optimize sleep-stage prediction and network anomaly detection. Results on publicly available datasets (Sleep-EDF and CICIoMT2024) confirm that the hybrid integration achieves high accuracy (99.8% for sleep staging; 98.6% for anomaly detection) with low inference latency (<45 ms), indicating feasibility for real-time deployment on edge devices. This work presents a comprehensive framework for developing secure, intelligent, and clinically robust digital sleep health ecosystems by bridging chronobiological signal modeling with cybersecurity mechanisms. Furthermore, it highlights future research directions, including explainable AI, federated secure learning, adversarial robustness, and energy-aware edge optimization. Full article
(This article belongs to the Section Computational Models)
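A hedged sketch of the CNN-to-BiLSTM pattern named in this abstract: convolutional layers extract features from multichannel epochs and a bidirectional LSTM models their temporal order (the Random Forest ensemble stage is omitted). Epoch length, channel count, and layer sizes are assumptions:

from tensorflow.keras import layers, models

epoch_len, n_channels, n_stages = 3000, 4, 5   # hypothetical multimodal epoch setup

model = models.Sequential([
    layers.Input(shape=(epoch_len, n_channels)),
    layers.Conv1D(32, kernel_size=7, activation="relu"),   # spatial feature extraction
    layers.MaxPooling1D(4),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Bidirectional(layers.LSTM(64)),                 # temporal modeling
    layers.Dense(n_stages, activation="softmax"),          # sleep stages
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])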
17 pages, 2618 KB  
Article
Improving Coastal Bottom Dissolved Oxygen Forecasting Using Tide-Derived Features with an LSTM-Based Model
by Eun-Joo Lee, Sung-Eun Park, Junmo Jo, Jong-Hong Kim, Chung-Sook Kim, Jiyoung Lee and Wol-Ae Lim
Water 2026, 18(9), 1045; https://doi.org/10.3390/w18091045 - 28 Apr 2026
Abstract
Coastal bottom dissolved oxygen (DO) depletion poses a serious threat to marine ecosystems and aquaculture, and hypoxic events in the semi-enclosed Jinhae Bay, Korea, repeatedly cause large-scale damage to fish farms. Accurate DO prediction models are therefore crucial for ecosystem management and loss mitigation. This study analyzes how different tidal input representations affect the performance of data-driven DO prediction models in a tide-dominated coastal environment. Using time-series data of oceanographic and meteorological variables from nearby observation sites, we develop a long short-term memory (LSTM)-based neural network ensemble model with four experimental configurations. These include not only water level but also the tidal envelope, a tidal-intensity proxy, and temporal differences in water level and DO (Δtide, ΔDO) as additional inputs. Compared with the baseline configuration, the full tide-informed input case reduced the 72 h mean root mean square error (RMSE) from 1.16 to 1.12 and increased the Pearson correlation coefficient from 0.873 to 0.883. It also improved the representation of intraday variability and prediction stability. These results show that tide-derived variables help the model more effectively capture tidal-phase-locked DO fluctuations, while temporal-difference inputs further strengthen its representation of short-term variability and its sensitivity to DO changes. Overall, the findings indicate that properly representing tidal forcing is essential for learning the temporal structure and variability of coastal bottom DO. Full article
(This article belongs to the Section Oceans and Coastal Zones)
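A small sketch of the temporal-difference inputs (Δtide, ΔDO) described above, appended as extra features before windowing for an LSTM; column names, window length, and the dummy series are hypothetical:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "water_level": np.random.randn(500),
    "do_bottom":   np.random.randn(500),
})
df["d_tide"] = df["water_level"].diff()   # Δtide: short-term tidal change
df["d_do"]   = df["do_bottom"].diff()     # ΔDO: short-term oxygen change
df = df.dropna()

def windows(a, length=72):
    # stack overlapping windows as (samples, timesteps, features) for an LSTM
    return np.stack([a[i:i + length] for i in range(len(a) - length)])

X = windows(df[["water_level", "d_tide", "d_do"]].to_numpy())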
16 pages, 919 KB  
Article
A Comparative Performance Study of Host-Based Intrusion Detection Using TextRank-Based System Call Preprocessing and Deep Learning Models
by Hyunwook You, Chulgyun Park, Dongkyoo Shin and Dongil Shin
Electronics 2026, 15(9), 1856; https://doi.org/10.3390/electronics15091856 - 27 Apr 2026
Abstract
Host-based intrusion detection systems (HIDSs) can address the limitations of network-based detection by analyzing system calls and other low-level events. Many existing benchmark datasets remain inadequate for evaluating modern attacks because they were built in outdated environments and cover only a limited set of attack behaviors. To address this gap, this study builds a TextRank-based preprocessing pipeline on the LID-DS 2021 dataset and compares five end-to-end pipelines: Random Forest (RF), Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN) + LSTM, Bidirectional LSTM (BiLSTM), and CNN + Bidirectional Gated Recurrent Unit (BiGRU). Of the 15 scenarios in the dataset, six multi-stage attacks were excluded, and three representative scenarios were selected based on attack-category coverage and suitability for single-chunk host-level detection. Within these three selected scenarios and same-scenario file-level splits, the deep learning pipelines achieved F1-scores of 0.90–0.94, whereas RF ranged from 0.55 to 0.63. Among the evaluated pipelines, CNN + BiGRU produced the strongest overall results. These findings indicate that, under this constrained evaluation setting, sequential deep learning pipelines can be effective for scenario-specific system-call-based HIDS; however, broader generalization to unseen attacks or to the full LID-DS 2021 scenario set remains unverified. Full article
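A minimal sketch of the best-performing CNN + BiGRU pipeline named above: embedded system-call IDs pass through a 1-D convolution and a bidirectional GRU to a binary attack/normal output. The vocabulary size, chunk length, and layer widths are assumptions, and the TextRank preprocessing is omitted:

from tensorflow.keras import layers, models

vocab, seq_len = 400, 256   # hypothetical syscall vocabulary and chunk size

model = models.Sequential([
    layers.Input(shape=(seq_len,)),
    layers.Embedding(vocab, 32),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Bidirectional(layers.GRU(64)),
    layers.Dense(1, activation="sigmoid"),   # attack vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])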
27 pages, 6230 KB  
Article
A Digital Twin Prototype for a Deep-Sea Observation Network: Virtual Environment Reconstruction and Data-Driven Predictive Analytics
by Xinya Zhang, Ruixin Chen and Rufu Qin
J. Mar. Sci. Eng. 2026, 14(9), 800; https://doi.org/10.3390/jmse14090800 - 27 Apr 2026
Abstract
Effective operation and maintenance (O&M) of deep-sea observation networks are challenged by complex environments and energy limitations. While digital twin (DT) technology offers promising solutions, existing frameworks struggle with high-fidelity, multi-platform orchestration and predictions of electrical energy state. This study proposes a DT framework for a deep-sea observation network (DSON-DT), encompassing telemetry acquisition, predictive analytics, and feedback control to realize a closed-loop workflow for monitoring and managing platform states within virtual scenes. Powered by real-time Internet of underwater things (IoUT) data, a high-fidelity virtual environment is constructed in the Unreal Engine 5 game engine, accurately mapping ambient marine environments and reconstructing platform dynamic behaviors via data-driven approaches and geometric constraints. An improved auto-regressive long short-term memory (AR-LSTM) network is proposed to forecast the battery state of charge (SoC). Experimental results show that this algorithm effectively mitigates the impacts of severe deep-sea noise and the flat open-circuit voltage plateau, suppressing state oscillations to provide reliable references for proactive endurance management. The Vue.js-based web prototype, deployed via pixel streaming, offers seamless interfaces for interactive visualization, analysis, and remote operation. This research achieves comprehensive situational awareness for deep-sea platforms, providing validated technical support for the holistic evaluation and intelligent O&M of heterogeneous marine infrastructures. Full article
(This article belongs to the Special Issue Advances in Ocean Observing Technology and System)
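A sketch of an auto-regressive LSTM forecast loop in the spirit of the AR-LSTM named above: each predicted SoC value is fed back as the next input step. The model size, horizon, and omission of training are illustrative assumptions, not the paper's design:

import numpy as np
from tensorflow.keras import layers, models

# Toy single-feature model (training on historical SoC windows is omitted).
model = models.Sequential([
    layers.Input(shape=(None, 1)),   # variable-length SoC history
    layers.LSTM(32),
    layers.Dense(1),
])

def forecast(history, steps=24):
    seq = [float(v) for v in history]
    out = []
    for _ in range(steps):
        x = np.array(seq, dtype="float32").reshape(1, -1, 1)
        nxt = float(model.predict(x, verbose=0)[0, 0])
        out.append(nxt)
        seq.append(nxt)   # feed the prediction back in (auto-regression)
    return out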
35 pages, 3140 KB  
Article
An LSTM Autoencoder-Based Approach for Monitoring Railway Bridges
by Viviana Giorgi, Ciro Tordela, Lorenzo Bernardini, Pablo Alex Ramírez Balbiano, Claudio Somaschini, Salvatore Strano and Mario Terzo
Appl. Sci. 2026, 16(9), 4272; https://doi.org/10.3390/app16094272 - 27 Apr 2026
Abstract
Continuous monitoring of railway bridges is essential for ensuring safety and operational reliability, considering aging mechanisms, rising traffic, and elevated speeds of railway vehicles. Traditional vibration-based approaches, including modal identification and data-driven diagnostic strategies, are frequently strongly influenced by environmental and operational variability and require labeled damaged datasets or numerical simulations to provide reliable outcomes. However, the acquisition of complete and representative datasets for training neural networks in structural health monitoring remains a challenging task, particularly for large-scale civil structures such as bridges. In these cases, unsupervised learning approaches represent promising solutions. The present work proposes an unsupervised anomaly detection methodology for railway bridge monitoring based on a long short-term memory (LSTM) autoencoder (AE) trained exclusively on bridge accelerations under healthy structural conditions. Specifically, the acceleration responses are obtained from simulations on a calibrated finite element model of the bridge, reproducing realistic train–bridge interaction scenarios. The multi-channel acceleration signals are reconstructed by the proposed LSTM AE, and the Root Mean Square Error (RMSE) between measured and reconstructed acceleration responses serves as an indicator of potential structural anomalies. A dual-threshold strategy is adopted for damage detection: a global threshold identifies anomalies in the overall dynamic response, while per-sensor thresholds derived from the healthy-condition RMSE distribution detect localized damage. Only healthy-condition data are required, avoiding labeled damaged data for training purposes. The obtained results demonstrate that the LSTM AE constitutes an effective and computationally efficient tool for anomaly detection and continuous structural health monitoring of railway bridges, representing a promising alternative to classical modal-based approaches and existing deep learning-based methods. Full article
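A hedged sketch of an LSTM autoencoder with the dual-threshold idea described above: reconstruct healthy multi-channel accelerations, then derive global and per-sensor RMSE thresholds. Window length, sensor count, and the 99th-percentile rule are assumptions:

import numpy as np
from tensorflow.keras import layers, models

steps, sensors = 200, 6   # hypothetical window and channel counts

model = models.Sequential([
    layers.Input(shape=(steps, sensors)),
    layers.LSTM(32),                          # encoder -> latent vector
    layers.RepeatVector(steps),
    layers.LSTM(32, return_sequences=True),   # decoder
    layers.TimeDistributed(layers.Dense(sensors)),
])
model.compile(optimizer="adam", loss="mse")

def thresholds(healthy):
    # healthy: (windows, steps, sensors) accelerations from the healthy state
    err = model.predict(healthy, verbose=0) - healthy
    rmse = np.sqrt((err ** 2).mean(axis=1))           # (windows, sensors)
    return (np.percentile(rmse.mean(axis=1), 99),     # global threshold
            np.percentile(rmse, 99, axis=0))          # per-sensor thresholds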
42 pages, 10246 KB  
Article
Enhancing Karst Spring Discharge Simulation Through a Hybrid XGBoost–BiLSTM Machine Learning Framework
by Mohamed Hamdy Eid, Attila Kovács and Péter Szűcs
Water 2026, 18(9), 1038; https://doi.org/10.3390/w18091038 - 27 Apr 2026
Abstract
Accurate simulation of karst spring discharge is critical for sustainable water resource management, yet it remains a significant challenge due to the inherent complexity, heterogeneity, and non-linearity of karst systems. While machine learning models have been increasingly applied to this problem, standalone algorithms often struggle to simultaneously capture complex temporal dependencies and maintain robust generalization. This study provides a comprehensive comparative assessment of five state-of-the-art machine learning (ML) models for forecasting the daily discharge of the Jósva Spring, located in the World Heritage Aggtelek karst area. The main goal of the study is to determine which modern machine learning approach can most accurately forecast the daily discharge of the Jósva Spring using meteorological data and the discharge of a hydraulically connected upstream spring. This is motivated by the need for a reliable operational prediction tool for complex karst aquifers, by the demands of water-resource management in a climate-sensitive region, and by the lack of comparative studies evaluating multiple ML paradigms on the same karst system. The study also aimed to compare the predictive performance of five state-of-the-art ML models to identify the most accurate and robust model and to understand the predictability of the karst system by analyzing feature importance, lag effects, and temporal dependencies. Three tree-based ensemble models (Random Forest, XGBoost, and Extra Trees) and two deep learning architectures (a Bidirectional Long Short-Term Memory network, BiLSTM, and a novel Hybrid XGBoost–BiLSTM model) were trained using a five-year (2015–2019) daily dataset comprising rainfall, temperature, and upstream discharge. The modeling framework was designed for synchronous simulation (lead time = 0 days), estimating concurrent downstream discharge using upstream and meteorological measurements from the same time step. A rigorous feature-engineering workflow was implemented based on statistical characterization, correlation analysis, and time-series diagnostics. Models were trained on 80% of the dataset and evaluated on an independent 20% test set. The results demonstrate that the proposed Hybrid XGBoost–BiLSTM model achieved the highest predictive accuracy on the unseen test data (R2 = 0.74, NSE = 0.74, RMSE = 716.35 L/min). While the standalone tree-based models, particularly XGBoost (R2 = 0.66), also exhibited strong and competitive performance, the hybrid architecture provided a consistent and measurable improvement across all evaluation metrics. The hybrid model’s success is attributed to its synergistic design, which leverages the powerful feature extraction and refinement capabilities of XGBoost to provide a more informative input space for the BiLSTM, thereby enhancing its ability to capture complex temporal dependencies while mitigating overfitting. Feature importance analysis confirmed that upstream discharge at a 3-day lag was the most critical predictor, highlighting the system’s hydraulic connectivity. This research provides clear, evidence-based guidance showing that hybrid machine learning architectures, which integrate the strengths of different modeling paradigms, represent the most effective approach for developing robust and reliable operational prediction tools for complex karst aquifers. Full article
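A minimal sketch of the hybrid idea described above: an XGBoost regressor is fit first and its prediction is appended as an extra feature for a BiLSTM that models the temporal dependencies. All shapes, the 7-day window, and the dummy data are hypothetical:

import numpy as np
import xgboost as xgb
from tensorflow.keras import layers, models

X_tab = np.random.randn(1000, 5)   # e.g., rainfall, temperature, lagged discharge
y = np.random.randn(1000)

booster = xgb.XGBRegressor(n_estimators=50).fit(X_tab, y)
X_aug = np.hstack([X_tab, booster.predict(X_tab).reshape(-1, 1)])  # refined input space

def windows(a, t, length=7):
    idx = range(len(a) - length)
    return np.stack([a[i:i + length] for i in idx]), t[length:]

Xs, ys = windows(X_aug, y)
net = models.Sequential([
    layers.Input(shape=(7, X_aug.shape[1])),
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dense(1),   # daily spring discharge
])
net.compile(optimizer="adam", loss="mse")
net.fit(Xs, ys, epochs=1, verbose=0)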
20 pages, 1515 KB  
Article
A Study on the Prediction Model of Corrosion Rate of Different Metal Pipe Sleeves Based on CNN-LSTM Hybrid Deep Learning Model
by Yanyongxu Bai, Haoyu Mao, Shaoxuan Sun and Yu Suo
Processes 2026, 14(9), 1399; https://doi.org/10.3390/pr14091399 - 27 Apr 2026
Abstract
The phenomenon of CO2 corrosion of downhole tubing is widespread in oil and gas extraction. Currently, there is a lack of applicable prediction methods for the corrosion rates of different metal tubing in the liquid-phase CO2 environment. To address this issue, this paper systematically investigates the anti-corrosion mechanisms and influencing factors of different metal casings and proposes a deep learning model combining convolutional neural networks and long short-term memory networks. Based on laboratory corrosion experimental data, the model extracts spatial features of the parameters affecting the corrosion rate through the CNN and captures their temporal dependencies through the LSTM. This paper builds a pipe corrosion rate prediction model based on the TensorFlow framework and compares the prediction results with those of the traditional D-W empirical model and the SVR machine learning model. The results showed that the CNN-LSTM model maintained high prediction accuracy regardless of high or low chromium content, with R2 reaching 0.83 and 0.94, respectively, addressing the difficulty existing models have in effectively simulating complex corrosion behavior under flowing corrosive media conditions. The model was verified against the remaining wall thickness of casings in actual field service, and its accuracy exceeded 80%. The established prediction method can be extended to predict the corrosion rate of pipes under similar corrosion conditions. Full article
(This article belongs to the Section Chemical Processes and Systems)
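A hedged TensorFlow sketch of the CNN-LSTM pattern this abstract names: Conv1D layers extract feature interactions at each time step, an LSTM captures their temporal dependence, and a linear head outputs the corrosion rate. Input dimensions and layer sizes are illustrative:

from tensorflow.keras import layers, models

timesteps, n_params = 30, 8   # hypothetical: temperature, pressure, Cr content, flow, ...

model = models.Sequential([
    layers.Input(shape=(timesteps, n_params)),
    layers.Conv1D(32, kernel_size=3, activation="relu", padding="same"),
    layers.Conv1D(32, kernel_size=3, activation="relu", padding="same"),
    layers.LSTM(64),
    layers.Dense(1),   # corrosion rate (regression)
])
model.compile(optimizer="adam", loss="mse")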
20 pages, 8588 KB  
Article
Robust SOH Estimation for Batteries via Deep Learning Under Incomplete Measurements
by Jenhao Teng, Kuanyu Lin and Pingtse Lee
Energies 2026, 19(9), 2100; https://doi.org/10.3390/en19092100 - 27 Apr 2026
Abstract
Battery state-of-health (SOH) estimation is essential for the safety and reliability of energy storage systems. However, incomplete measurements due to sensor or communication failures pose significant challenges for accurate prediction. This paper proposes a robust SOH estimation framework using a minimal 5 min observation window to handle high data sparsity in both random and latter-half missing scenarios. Three Deep Learning (DL) architectures—Long Short-Term Memory (LSTM), Bidirectional LSTM (BiLSTM), and Transformer—are evaluated for data imputation and SOH estimation against traditional polynomial fitting. Simulation results on the NASA benchmark dataset demonstrate that the proposed LSTM model achieves high accuracy, with an RMSE of 0.8522 on complete data. For imperfect data scenarios, BiLSTM-based imputation effectively suppresses extreme deviations, reducing the Maximum Error (MxE) by 44% (from 14.04 to 7.85) compared to traditional polynomial methods. Furthermore, in challenging terminal missing-data cases, a hybrid LSTM-Transformer strategy maintains physical consistency, achieving a superior RMSE of 1.0026. These findings confirm that the proposed DL-based framework significantly outperforms conventional techniques, providing a robust and reliable solution for real-time battery health monitoring under unpredictable data conditions. Full article
(This article belongs to the Section D: Energy Storage and Application)
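A sketch of BiLSTM-based imputation in the spirit described above: a bidirectional LSTM is trained to reconstruct complete windows, then its output fills the masked points of a gappy window. The window size, masking convention, and untrained model are assumptions:

import numpy as np
from tensorflow.keras import layers, models

win = 60   # hypothetical 5 min window at 5 s sampling

model = models.Sequential([
    layers.Input(shape=(win, 1)),
    layers.Bidirectional(layers.LSTM(32, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(1)),
])
model.compile(optimizer="adam", loss="mse")   # train on complete windows

def impute(window, missing_mask):
    # window: (win,) with missing points zeroed; missing_mask: True where missing
    filled = model.predict(window[None, :, None], verbose=0)[0, :, 0]
    return np.where(missing_mask, filled, window)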
27 pages, 1862 KB  
Article
A Fine-Grained Sentiment Classification Metric for Dynamic E-Commerce Content Relationships
by Ahad AlQabasani and Hana Al-Nuaim
Information 2026, 17(5), 419; https://doi.org/10.3390/info17050419 - 27 Apr 2026
Abstract
E-commerce web content is dynamic and diverse, necessitating continuous monitoring and adaptation. This presents researchers with the challenge of discovering methods to improve delivered services. Hence, integrating natural language processing (NLP), Machine Learning (ML), Deep Learning (DL), and sentiment analysis (SA) provides businesses with robust frameworks to utilize customer feedback and enhance decision-making. Therefore, we introduce a novel dataset collection methodology that captures the dynamic relationships between e-commerce web content and consumer sentiment. Additionally, we introduce a novel, real-consumer-based quality metric on product content through FG-CSrP, extending SA into a new Fine-Grained Consumer Sentiment related to the Product. We evaluated our dataset using baseline models: Deep Neural Network (DNN), Long Short-Term Memory (LSTM), DistilBERT, and twelve automatically optimized models created by AutoGluon-Tabular across three scenarios, each with varying feature inputs (numerical, textual, and both). We then applied Explainable Artificial Intelligence (XAI) to the DNN model to explain feature importance in prediction. Our findings showed that LightGBMXT outperformed the other models in two out of three scenarios, and XAI interpretations highlighted the significant role of vendor-provided web content details in consumer sentiment. Overall, our approach provides actionable insights that can help vendors improve e-commerce strategies and strengthen customer engagement. Full article
(This article belongs to the Section Information Applications)
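A minimal sketch of an LSTM baseline of the kind listed above for classifying review text; the vocabulary, sequence length, and three-way label scheme are assumptions, not the paper's configuration:

from tensorflow.keras import layers, models

vocab, seq_len, n_labels = 20000, 128, 3   # hypothetical tokenizer and label setup

model = models.Sequential([
    layers.Input(shape=(seq_len,)),          # token IDs from a fitted tokenizer
    layers.Embedding(vocab, 64),
    layers.LSTM(64),
    layers.Dense(n_labels, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])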
32 pages, 4668 KB  
Article
Aggressive Guided Exploitation Optimized Sparse-Dual Attention Enabled Meta-Learning-Based Deep Learning Model for Quantum Error Correction
by Umesh Uttamrao Shinde, Ravi Kumar Bandaru and Amal S. Alali
Mathematics 2026, 14(9), 1459; https://doi.org/10.3390/math14091459 - 26 Apr 2026
Abstract
Quantum error-correcting codes are essential for achieving fault-tolerant quantum computing. Heavy hexagonal code is a type of topological code that leverages the arrangement of qubits to find and correct errors. The heavy hexagonal code is suitable for superconducting architectures, specifically graph layouts with a limited number of connections. Topological error correction methods work well, but they require many qubits, scale poorly to quantum systems of different sizes, are less reliable, and adapt poorly to changing quantum distributions. Thus, the research proposes an Ardea-guided exploit optimized sparse-dual attention enabled meta-learning-based convolutional neural network with bi-directional long short-term memory model (AGuESD-MCBiTM). The method achieves effective correction in dynamic environments by utilizing meta-learning and extracting statistical information that provides a detailed representation of the qubit patterns. The Ardea-guided exploit optimized (AGuEO) algorithm tunes the weights of MCBiTM and acquires optimal solutions with faster convergence. Moreover, the sparse-dual attention module and the meta-learning-based MCBiTM model together provide scalable, real-time identification of non-linear qubit noise fluctuations at lower computational cost. Comparatively, the proposed AGuESD-MCBiTM exhibits superior error correction in circuit 2, with a higher correlation of 0.97, accuracy of 98.93%, and R-squared value of 0.93, as well as a lower root mean square error of 1.87, mean absolute error of 1.20, bit error rate of 1.85, logical error rate of 3.82, and mean square error of 3.49. Full article
(This article belongs to the Special Issue Recent Advances in Quantum Information and Quantum Computing)
28 pages, 3444 KB  
Article
A Lightweight Method for Power Quality Disturbance Recognition Based on Optimized VMD and CNN–Transformer
by Dongya Xiao, Jiaming Liu, Haining Liu and Yang Zhao
Electronics 2026, 15(9), 1832; https://doi.org/10.3390/electronics15091832 - 26 Apr 2026
Abstract
To address the low recognition accuracy and high computational complexity of power quality disturbance (PQD) recognition in strong-noise environments, this paper proposes a novel lightweight PQD-recognition method that integrates a hybrid architecture of variational mode decomposition (VMD), a convolutional neural network (CNN), and a transformer. Firstly, a hybrid optimization algorithm named the monkey–genetic hybrid optimization algorithm (MGHOA) is proposed to optimize VMD parameters for denoising disturbance signals, thereby enhancing recognition accuracy in noisy environments. Secondly, to fully extract disturbance signal features and reduce the computational complexity of the model, a lightweight CNN–transformer model is designed. Depthwise separable convolution (DSC) is employed to extract local features, and the multi-head attention mechanism of the transformer is utilized to capture long-range dependencies and global features, thereby enhancing the feature representation. Thirdly, a multitask joint-learning method is proposed to collaboratively optimize the classification and temporal localization tasks, enhancing the discrimination of similar disturbances. Additionally, a dual-pooling global feature fusion strategy is designed to further enhance the model’s ability to discriminate complex disturbances. Comparative experiments on 16 typical PQD types demonstrate that the proposed method achieves excellent performance in recognition accuracy, model robustness, and computational efficiency. The integration of the MGHOA–VMD module improves recognition accuracy by 1.08%, while the multitask joint-learning method contributes an additional 0.55% improvement. While achieving recognition accuracy comparable to that of complex models, the proposed method requires only 36.51% of the training time of DeepCNN and merely 5.90% of that of a bidirectional long short-term memory (BiLSTM) network, with a 31.22% reduction in parameter scale. This work provides a novel solution for intelligent power quality disturbance recognition. Full article
(This article belongs to the Section Power Electronics)
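A minimal sketch of the lightweight CNN–transformer combination described above: depthwise separable convolutions extract local features, multi-head self-attention captures long-range structure, and dual (average + max) pooling fuses the global features. Sizes are illustrative; the VMD front end and the localization task are omitted:

from tensorflow.keras import layers, models

inp = layers.Input(shape=(640, 1))   # hypothetical PQD signal window
x = layers.SeparableConv1D(32, 5, activation="relu", padding="same")(inp)
x = layers.MaxPooling1D(4)(x)
x = layers.SeparableConv1D(64, 3, activation="relu", padding="same")(x)
att = layers.MultiHeadAttention(num_heads=4, key_dim=16)(x, x)
x = layers.Add()([x, att])                       # local + global features
x = layers.Concatenate()([layers.GlobalAveragePooling1D()(x),
                          layers.GlobalMaxPooling1D()(x)])   # dual pooling
out = layers.Dense(16, activation="softmax")(x)  # 16 disturbance types
model = models.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])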
21 pages, 2612 KB  
Article
A Hybrid LSTM Framework for Short-Term Regional Wind Speed Forecasting Based on PCA and SSA-Optimized VMD
by Huachen Li, Zhengzheng Ma, Liang Chen, Qinglin Zhu, Xiang Dong, Bin Xu, Yuanming Li and Mantong Zhang
Appl. Sci. 2026, 16(9), 4225; https://doi.org/10.3390/app16094225 - 26 Apr 2026
Abstract
Accurate regional wind speed forecasting is critical yet challenging due to inherent spatiotemporal correlations and data non-stationarity. This paper proposes a hybrid framework combining Principal Component Analysis (PCA), Variational Mode Decomposition (VMD), and Long Short-Term Memory (LSTM) networks. First, PCA extracts dominant spatial features from a regional wind field (9 × 9 grid), retaining 99.5% of the information to reduce redundancy. Next, an adaptive VMD strategy, optimized by the Sparrow Search Algorithm (SSA), decomposes these components to mitigate temporal non-stationarity. High-correlation sub-signals are then fed into the LSTM predictor. Experimental results demonstrate that the framework achieves an average coefficient of determination (R2) of approximately 0.41 in the first forecasting step. Crucially, it significantly mitigates error accumulation in multi-step forecasting, maintaining a stable R2 of 0.39 in the third step. In contrast, complex spatiotemporal models like ConvLSTM achieve high initial accuracy but suffer severe degradation (R2 dropping from 0.70 to 0.24) alongside significantly higher computational overhead. The proposed strategy effectively prevents overfitting to high-frequency noise, ensuring a computationally efficient and robust solution for multi-step regional wind forecasting. Full article
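A short sketch of the PCA front end described above: project the flattened 9 × 9 gridded wind field onto the components retaining 99.5% of the variance before windowing for the LSTM; the SSA-optimized VMD stage is omitted and the dummy data are hypothetical:

import numpy as np
from sklearn.decomposition import PCA

field = np.random.randn(2000, 81)   # 9 x 9 grid flattened per time step
pca = PCA(n_components=0.995)       # keep components explaining 99.5% of variance
scores = pca.fit_transform(field)   # (time, components), the LSTM's input series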
31 pages, 5682 KB  
Article
Developing Artificial Intelligence-Based Car-Following Models Using Improved Permutation Entropy Analysis Results
by Ali Muhssin Shahatha and İsmail Şahin
Appl. Sci. 2026, 16(9), 4224; https://doi.org/10.3390/app16094224 - 25 Apr 2026
Abstract
Vehicle trajectories are time series, and entropy is a powerful tool for testing or quantifying the complexity of a given series. Entropy tools are often applied to variables such as velocity, acceleration, space headway, and time headway, but local position data have not been addressed previously. The novelty of this study is that it uses Improved Permutation Entropy (IPE) for the first time to analyze vehicle position data and convert those data into a limited range (0–0.3317), aiming to understand individual vehicle behavior during car-following and to introduce a new prediction method for developing artificial intelligence-based car-following models. The study uses the IPE analysis results as a new input variable, in addition to existing input variables, to improve the prediction accuracy of these models. Three types of neural networks were adopted to develop the artificial intelligence models: artificial neural networks (ANNs), long short-term memory networks (LSTMs), and Transformer models. The results indicate that all models using the proposed prediction method, which includes the IPE analysis result, outperformed those using the traditional prediction method. The Transformer & IPE model shows the best performance in prediction accuracy for the follower acceleration output; the statistically significant percentage improvements were 2.04%, 1.42%, 1.22%, and 2.62% for RMSE, MAE, MASE, and R2, in that order. Furthermore, all models using the proposed prediction method outperformed the benchmark Intelligent Driver Model (IDM) for the follower acceleration output. Full article
(This article belongs to the Section Transportation and Future Mobility)
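A hedged sketch of ordinary permutation entropy as a trajectory feature, in the spirit of the IPE input described above (the improved variant used in the paper differs in detail, so this is only the baseline idea):

import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    # count the ordinal patterns of each embedded window of the series
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values())) / n
    return -np.sum(p * np.log(p)) / np.log(factorial(order))  # normalized to 0..1

print(permutation_entropy(np.random.randn(500)))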