Search Results (5,935)

Search Parameters:
Keywords = long–short memory network

20 pages, 1056 KB  
Article
Deep Learning Algorithms for Human Activity Recognition in Manual Material Handling Tasks
by Giulia Bassani, Carlo Alberto Avizzano and Alessandro Filippeschi
Sensors 2025, 25(21), 6705; https://doi.org/10.3390/s25216705 - 2 Nov 2025
Abstract
Human Activity Recognition (HAR) is widely used in healthcare, but few works focus on Manual Material Handling (MMH) activities, despite their prevalence and impact on workers' health. We propose four Deep Learning algorithms for HAR in MMH: Bidirectional Long Short-Term Memory (BiLSTM), Sparse Denoising Autoencoder (Sp-DAE), Recurrent Sp-DAE, and Recurrent Convolutional Neural Network (RCNN). We explored different hyperparameter combinations to maximize classification performance (F1-score) using wearable sensor data gathered from 14 subjects. We investigated the three best parameter combinations for each network on the full dataset to select the two best-performing networks, which were then compared using 14 datasets of increasing subject numerosity, a 70–30% split, and Leave-One-Subject-Out (LOSO) validation, to evaluate whether they may perform better with a larger dataset. The benchmark network DeepConvLSTM was tested on the full dataset. BiLSTM performs best in classification and complexity (95.7% with the 70–30% split; 90.3% with LOSO). RCNN performed similarly (95.9%; 89.2%), with a positive trend as subject numerosity grows. DeepConvLSTM achieves similar classification performance (95.2%; 90.3%) but requires ×57.1 and ×31.3 more Multiply and ACcumulate (MAC) operations and ×100.8 and ×28.3 more Multiplication and Addition (MA) operations, which measure the complexity of the network's inference process, than BiLSTM and RCNN, respectively. BiLSTM and RCNN thus perform close to DeepConvLSTM while being computationally lighter, fostering their use in embedded systems. Such lighter algorithms can be readily used in automatic ergonomic and biomechanical risk assessment systems, enabling personalized risk assessment and easing the adoption of safety measures in industrial practices involving MMH.
(This article belongs to the Section Wearables)
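The abstract does not include implementation details, but the core BiLSTM classifier it describes can be sketched as follows. This is a minimal illustration, assuming fixed-length windows of wearable-sensor data (here 100 time steps × 9 IMU channels) and an arbitrary activity count; layer sizes are placeholders, not the authors' configuration.

```python
import torch
import torch.nn as nn

class BiLSTMHAR(nn.Module):
    """Bidirectional LSTM classifier for windowed wearable-sensor data."""
    def __init__(self, n_channels=9, hidden=64, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # 2x: both directions

    def forward(self, x):             # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # classify from the last time step

model = BiLSTMHAR()
window = torch.randn(8, 100, 9)       # 8 windows, 100 steps, 9 channels
logits = model(window)                # (8, 6) activity-class scores
```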

19 pages, 5704 KB  
Article
Rapid and Non-Destructive Assessment of Eight Essential Amino Acids in Foxtail Millet: Development of an Efficient and Accurate Detection Model Based on Near-Infrared Hyperspectral Imaging
by Anqi Gao, Xiaofu Wang, Erhu Guo, Dongxu Zhang, Kai Cheng, Xiaoguang Yan, Guoliang Wang and Aiying Zhang
Foods 2025, 14(21), 3760; https://doi.org/10.3390/foods14213760 - 1 Nov 2025
Abstract
Foxtail millet is a vital grain whose amino acid content affects its nutritional quality. Traditional detection methods are destructive, time-consuming, and inefficient. To address these limitations, this study developed a rapid, non-destructive approach for quantifying eight essential amino acids (lysine, phenylalanine, methionine, threonine, isoleucine, leucine, valine, and histidine) in foxtail millet (variety: Changnong No. 47) using near-infrared hyperspectral imaging. A total of 217 samples were collected and used for model development. The spectral data were preprocessed using Savitzky–Golay smoothing, adaptive iteratively reweighted penalized least squares, and the standard normal variate transform. Key wavelengths were extracted using the competitive adaptive reweighted sampling (CARS) algorithm, and four regression models were constructed: Partial Least Squares Regression (PLSR), Support Vector Regression (SVR), Convolutional Neural Network (CNN), and Bidirectional Long Short-Term Memory (BiLSTM). The results showed that the key wavelengths selected by CARS account for only 2.03–4.73% of the full spectrum. BiLSTM was most suitable for modeling lysine (R² = 0.5862, RMSE = 0.0081, RPD = 1.6417). CNN demonstrated the best performance for phenylalanine, methionine, isoleucine, and leucine. SVR was most effective for predicting threonine (R² = 0.8037, RMSE = 0.0090, RPD = 2.2570), valine, and histidine. This study offers an effective novel approach for intelligent quality assessment of grains.
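As a rough illustration of the preprocessing-plus-regression pipeline described above, the sketch below applies Savitzky–Golay smoothing and the standard normal variate transform to synthetic spectra, then fits an SVR on a small wavelength subset standing in for the CARS selection. All shapes, values, and hyperparameters are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(217, 256))        # 217 samples x 256 synthetic bands
y = rng.normal(0.1, 0.02, size=217)    # stand-in amino acid contents

X = savgol_filter(X, window_length=11, polyorder=2, axis=1)   # SG smoothing
X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)  # SNV

# Placeholder for CARS: keep a small fraction of "informative" wavelengths.
selected = rng.choice(256, size=10, replace=False)
model = SVR(kernel="rbf", C=10.0).fit(X[:, selected], y)
print(model.predict(X[:5, selected]))
```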

24 pages, 2511 KB  
Article
Modeling Hurricane Wave Forces Acting on Coastal Bridges by Artificial Neural Networks
by Hong Xiao, Wenrui Huang and Jiahui Wang
J. Mar. Sci. Eng. 2025, 13(11), 2080; https://doi.org/10.3390/jmse13112080 - 1 Nov 2025
Abstract
Artificial neural networks have been evaluated and compared for modeling extreme wave forces exerted on coastal bridges during hurricanes. Long Short-Term Memory (LSTM) is selected as the deep learning network, and a feedforward neural network (FFNN) represents the shallow learning network for comparison. The two case studies consist of an emerged bridge deck destroyed by Hurricane Ivan and a submerged bridge deck damaged in Hurricane Katrina. Datasets for model training and verification consist of wave elevation and force time series from previously validated numerical wave load modeling studies. Results indicate that both the deep LSTM and the shallow FFNN provide very good predictions of wave forces, with correlation coefficients above 0.98 between model simulations and data. The effects of training algorithms on network performance have been investigated: among several candidates, the adaptive moment estimation (Adam) optimizer leads to the best LSTM performance, while Levenberg–Marquardt (LM) optimized backpropagation is among the most effective training algorithms for FFNNs. In general, the shallow FFNN-LM network yields slightly higher correlation coefficients and lower errors than the LSTM-Adam network. For the sharp variations in nonlinear wave forces in the emerged bridge case study during Hurricane Ivan, FFNN-LM predictions match the quick variations better. FFNN-LM is approximately 4 times faster in model training but about twice as slow in model verification and application than the LSTM-Adam network. Neural network simulations proved substantially faster than CFD wave load modeling in our case studies.
(This article belongs to the Section Coastal Engineering)
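A minimal sequence-to-one LSTM regressor of the kind compared here, trained with the Adam optimizer, might look like the sketch below; the wave-elevation inputs and all dimensions are synthetic placeholders, not the study's data.

```python
import torch
import torch.nn as nn

class WaveForceLSTM(nn.Module):
    """Maps a window of wave elevations to a wave-force estimate."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, eta):            # eta: (batch, time, 1)
        h, _ = self.lstm(eta)
        return self.out(h[:, -1])      # force at the window's end

model = WaveForceLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, as in the study
eta = torch.randn(16, 50, 1)           # synthetic elevation windows
force = torch.randn(16, 1)             # synthetic force targets
for _ in range(100):                   # tiny illustrative training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(eta), force)
    loss.backward()
    opt.step()
```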

24 pages, 10066 KB  
Article
CSLTNet: A CNN-LSTM Dual-Branch Network for Particulate Matter Concentration Retrieval
by Linjun Yao, Zhaobin Wang and Yaonan Zhang
Remote Sens. 2025, 17(21), 3616; https://doi.org/10.3390/rs17213616 - 31 Oct 2025
Abstract
The concentrations of atmospheric particulate matter (PM10 and PM2.5) significantly impact the global environment, human health, and climate change. This study developed a particulate matter concentration retrieval method based on multi-source data, proposing a dual-branch retrieval network architecture named CSLTNet that integrates Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks. The CNN branch is designed to extract spatial features, while the LSTM branch captures temporal characteristics, with attention modules incorporated into both branches to enhance feature extraction. Notably, the model demonstrates robust spatial generalization across different geographical regions. Comprehensive experimental evaluations demonstrate the outstanding performance of the CSLTNet model. For the Beijing–Tianjin–Hebei region in China: in PM10 retrieval, sample-based 10-fold cross-validation achieved R² = 0.9427 (RMSE = 16.47 μg/m³), while station-based validation yielded R² = 0.9213 (RMSE = 19.50 μg/m³); for PM2.5 retrieval, sample-based 10-fold cross-validation reached R² = 0.9579 (RMSE = 6.49 μg/m³), with station-based validation reaching R² = 0.9296 (RMSE = 8.32 μg/m³). For Northwest China: in PM10 retrieval, sample-based 10-fold cross-validation achieved R² = 0.9236 (RMSE = 34.52 μg/m³), while station-based validation yielded R² = 0.9046 (RMSE = 37.24 μg/m³); for PM2.5 retrieval, sample-based 10-fold cross-validation reached R² = 0.9279 (RMSE = 10.56 μg/m³), with station-based validation reaching R² = 0.8787 (RMSE = 13.71 μg/m³).
(This article belongs to the Section Atmospheric Remote Sensing)
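The dual-branch idea (a CNN branch over spatial patches and an LSTM branch over time series, fused for the final estimate) can be sketched as follows. Branch widths, patch size, and sequence length are invented for illustration, and the attention modules are omitted; this is not CSLTNet's actual configuration.

```python
import torch
import torch.nn as nn

class DualBranchPM(nn.Module):
    """CNN branch for spatial patches + LSTM branch for temporal series."""
    def __init__(self, n_vars=8, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(n_vars, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())   # -> (batch, 16)
        self.lstm = nn.LSTM(n_vars, hidden, batch_first=True)
        self.head = nn.Linear(16 + hidden, 1)        # fused PM estimate

    def forward(self, patch, series):
        spatial = self.cnn(patch)                    # (batch, 16)
        temporal, _ = self.lstm(series)
        fused = torch.cat([spatial, temporal[:, -1]], dim=1)
        return self.head(fused)

model = DualBranchPM()
patch = torch.randn(4, 8, 9, 9)     # 9x9 spatial neighborhood, 8 variables
series = torch.randn(4, 24, 8)      # 24 past hours, 8 variables
pm = model(patch, series)           # (4, 1) concentration estimates
```
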
26 pages, 55590 KB  
Article
Advancing Machine Learning-Based Streamflow Prediction Through Event Greedy Selection, Asymmetric Loss Function, and Rainfall Forecasting Uncertainty
by Soheyla Tofighi, Faruk Gurbuz, Ricardo Mantilla and Shaoping Xiao
Appl. Sci. 2025, 15(21), 11656; https://doi.org/10.3390/app152111656 - 31 Oct 2025
Abstract
This paper advances machine learning (ML)-based streamflow prediction by strategically selecting rainfall events, introducing a new loss function, and accounting for rainfall forecast uncertainties. Focusing on the Iowa River Basin, we applied the stochastic storm transposition (SST) method to create realistic rainfall events, which were input into a hydrological model to generate corresponding streamflow data for training and testing deterministic and probabilistic ML models. Long short-term memory (LSTM) networks were employed to predict streamflow up to 12 h ahead. An active learning approach was used to identify the most informative rainfall events, reducing the data generation effort. Additionally, we introduced a novel asymmetric peak loss function to improve peak streamflow prediction accuracy. By incorporating rainfall forecast uncertainties, our probabilistic LSTM model provides uncertainty quantification for streamflow predictions. Performance evaluation across multiple metrics confirmed the accuracy and reliability of our models. These contributions enhance flood forecasting and decision-making while significantly reducing computational time and costs.
(This article belongs to the Topic Data Science and Intelligent Management)
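The paper's asymmetric peak loss is not specified in the abstract, but the general idea of penalizing under-prediction of high flows more than over-prediction can be written as a small custom loss like this sketch; the weighting scheme here is a guess for illustration, not the authors' formulation.

```python
import torch

def asymmetric_peak_loss(pred, target, under_weight=3.0):
    """Squared error that penalizes under-predicting streamflow more
    heavily than over-predicting it (illustrative formulation only)."""
    err = target - pred
    w = torch.where(err > 0,                      # err > 0: under-prediction
                    torch.full_like(err, under_weight),
                    torch.ones_like(err))
    return (w * err ** 2).mean()

pred = torch.tensor([10.0, 50.0, 90.0])
obs = torch.tensor([12.0, 48.0, 120.0])           # last value: a missed peak
print(asymmetric_peak_loss(pred, obs))            # peak miss dominates the loss
```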

18 pages, 1486 KB  
Article
A Deep Learning-Based Ensemble System for Brent and WTI Crude Oil Price Analysis and Prediction
by Yiwen Zhang and Salim Lahmiri
Entropy 2025, 27(11), 1122; https://doi.org/10.3390/e27111122 - 31 Oct 2025
Abstract
Crude oil price forecasting is an important task in energy management and storage. In this regard, deep learning has been applied in the literature to generate accurate forecasts. The main purpose of this study is to design an ensemble prediction system based on various deep learning models. Specifically, in the first stage of our proposed ensemble system, convolutional neural networks (CNNs), long short-term memory networks (LSTMs), bidirectional LSTM (BiLSTM), gated recurrent units (GRUs), bidirectional GRU (BiGRU), and deep feedforward neural networks (DFFNNs) are used as individual predictors of crude oil prices, with their respective parameters fine-tuned by Bayesian optimization (BO). In the second stage, the forecasts from the first stage are weighted using the sequential least squares programming (SLSQP) algorithm. Standard tree-based ensemble models, namely extreme gradient boosting (XGBoost) and random forest (RF), are implemented as baselines. The main findings can be summarized as follows. First, the proposed ensemble system outperforms the individual CNN, LSTM, BiLSTM, GRU, BiGRU, and DFFNN models. Second, it outperforms the standard XGBoost and RF models. Governments and policymakers can use these models to design more effective energy policies and better manage supply in fluctuating markets. For investors, improved predictions of price trends present opportunities for strategic investments, reducing risk while maximizing returns in the energy market.
(This article belongs to the Section Multidisciplinary Applications)
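The second-stage weighting step can be illustrated with SciPy's SLSQP solver: given the individual models' validation forecasts, find non-negative weights summing to one that minimize the ensemble's squared error. This is a generic sketch with random stand-in forecasts, not the paper's data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
y = rng.normal(70, 5, size=100)                  # stand-in oil prices
# Stand-in forecasts from 6 base models (CNN, LSTM, BiLSTM, GRU, ...),
# each with a different noise level.
F = y[:, None] + rng.normal(0, [1, 2, 1.5, 2.5, 1.2, 3], size=(100, 6))

def ensemble_mse(w):
    return np.mean((F @ w - y) ** 2)

w0 = np.full(6, 1 / 6)                           # start from equal weights
res = minimize(ensemble_mse, w0, method="SLSQP",
               bounds=[(0, 1)] * 6,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print(res.x)   # learned weights, favoring the more accurate base models
```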

9 pages, 7778 KB  
Proceeding Paper
Adaptive IoT-Based Platform for CO2 Forecasting Using Generative Adversarial Networks: Enhancing Indoor Air Quality Management with Minimal Data
by Alessandro Leone, Andrea Manni, Andrea Caroppo and Gabriele Rescio
Eng. Proc. 2025, 110(1), 3; https://doi.org/10.3390/engproc2025110003 - 30 Oct 2025
Abstract
Monitoring indoor air quality is vital for health, as CO2 is a major pollutant. An automated system that accurately forecasts CO2 levels can optimize HVAC management, preventing sudden increases and reducing energy waste while maintaining occupant comfort. Traditionally, such systems require extensive datasets collected over months to train the algorithms, making them computationally expensive and inefficient. To address this limitation, an adaptive IoT-based platform has been developed that leverages a limited set of recent data to forecast CO2 trends. Tested in a real-world setting, the system analyzed parameters such as physical activity, temperature, humidity, and CO2 to ensure accurate predictions. Data acquisition was performed using the Smartex WWS T-shirt for physical activity and the UPSense UPAI3-CPVTHA environmental sensor for the other measurements. The chosen sensor devices are wireless and minimally invasive, and data processing was carried out on a low-power embedded PC. The proposed forecasting model adopts an innovative approach: after a 5-day training period, a Generative Adversarial Network augments the dataset by simulating a 10-day training period. The model uses a Generative Adversarial Network with a Long Short-Term Memory network as the generator to predict future CO2 values from historical data, while the discriminator, also a Long Short-Term Memory network, distinguishes between actual and generated CO2 values. This approach, based on Conditional Generative Adversarial Networks, effectively captures the data distribution, enabling more accurate multi-step probabilistic forecasts. The framework maintains a Root Mean Square Error of approximately 8 ppm, matching the performance of our previous approach while reducing the need for real training data from 10 days to just 5. It also achieves accuracy comparable to other state-of-the-art methods that typically require weeks or even months of training data. This advancement significantly enhances computational efficiency and reduces the data requirements for model training, improving the system's practicality for real-world applications.
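A heavily simplified skeleton of the LSTM-generator/LSTM-discriminator pairing described above is sketched below; the conditioning, training loop, and all dimensions are omitted or invented, so this conveys only the structure, not the paper's model.

```python
import torch
import torch.nn as nn

class LSTMGenerator(nn.Module):
    """Generates a future CO2 sequence from a noise-plus-history sequence."""
    def __init__(self, in_dim=5, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, z):               # z: (batch, time, in_dim)
        h, _ = self.lstm(z)
        return self.out(h)              # (batch, time, 1) CO2 values

class LSTMDiscriminator(nn.Module):
    """Scores whether a CO2 sequence is real or generated."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, seq):             # seq: (batch, time, 1)
        h, _ = self.lstm(seq)
        return torch.sigmoid(self.out(h[:, -1]))  # realness score

G, D = LSTMGenerator(), LSTMDiscriminator()
fake = G(torch.randn(4, 48, 5))         # 4 generated 48-step sequences
print(D(fake).shape)                    # (4, 1) discriminator scores
```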

24 pages, 3435 KB  
Article
DAHG: A Dynamic Augmented Heterogeneous Graph Framework for Precipitation Forecasting with Incomplete Data
by Hailiang Tang, Hyunho Yang and Wenxiao Zhang
Information 2025, 16(11), 946; https://doi.org/10.3390/info16110946 - 30 Oct 2025
Abstract
Accurate and timely precipitation forecasting is critical for climate risk management, agriculture, and hydrological regulation. However, the task remains challenging due to the dynamic evolution of atmospheric systems, heterogeneous environmental factors, and frequent missing data in multi-source observations. To address these issues, we propose DAHG, a novel long-term precipitation forecasting framework based on dynamic augmented heterogeneous graphs with reinforced graph generation, contrastive representation learning, and long short-term memory (LSTM) networks. Specifically, DAHG constructs a temporal heterogeneous graph to model the complex interactions among meteorological variables (e.g., precipitation, humidity, wind) and remote sensing indicators (e.g., NDVI). The forecasting task is formulated as a dynamic spatiotemporal regression problem, where predicting future precipitation corresponds to inferring the attributes of target nodes in the evolving graph sequence. To handle missing data, we present a reinforced dynamic graph generation module that leverages reinforcement learning to complete incomplete graph sequences, enhancing the consistency of long-range forecasting. Additionally, a self-supervised contrastive learning strategy is employed to extract robust representations from multi-view graph snapshots (i.e., temporally adjacent frames and stochastically augmented graph views). Finally, DAHG integrates temporal dependencies through LSTM networks to capture evolving precipitation patterns and output future precipitation estimates. Experimental evaluations on multiple real-world meteorological datasets show that DAHG reduces MAE by 3% and improves R² by 0.02 over state-of-the-art baselines (p < 0.01), confirming significant gains in accuracy and robustness, particularly in scenarios with partially missing observations (e.g., due to sensor outages or cloud-covered satellite readings).
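The graph-specific machinery (reinforced generation, contrastive learning) is beyond a short sketch, but the final temporal step, running an LSTM over per-snapshot graph embeddings to regress future precipitation, can be illustrated as follows, with mean-pooled node features standing in for a real heterogeneous-graph encoder.

```python
import torch
import torch.nn as nn

class SnapshotLSTM(nn.Module):
    """LSTM over a sequence of graph-snapshot embeddings -> precipitation."""
    def __init__(self, emb_dim=16, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, snaps):           # snaps: (batch, time, emb_dim)
        h, _ = self.lstm(snaps)
        return self.out(h[:, -1])

# Mean-pooled node features stand in for a learned graph encoder.
node_feats = torch.randn(4, 30, 50, 16)   # batch, time, nodes, features
snapshots = node_feats.mean(dim=2)        # (4, 30, 16): one vector per step
print(SnapshotLSTM()(snapshots).shape)    # (4, 1) precipitation estimates
```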

20 pages, 2066 KB  
Article
Enhanced Single-Point Mass Dynamic Model of Urban Trains for Automatic Train Operation (ATO) Systems
by Hong-Kwan Yoo, Yan Linn Aung and Woo-Seong Che
Appl. Sci. 2025, 15(21), 11600; https://doi.org/10.3390/app152111600 - 30 Oct 2025
Abstract
The accurate prediction of train acceleration is an essential requirement for Automatic Train Operation (ATO) in urban railways. While traditional single-point mass models fail to capture the distributed dynamics of coupled vehicles, multi-point models are rarely practical due to their computational cost. In this paper, we propose an enhanced single-point mass model based on Long Short-Term Memory (LSTM) networks. The model is trained on Train Control and Monitoring System (TCMS) data from Busan Metro Line 3. By averaging the coupled dynamics of the train's cars, we obtain a realistic single-point representation. The input data undergo kinematic preprocessing and feature engineering, including lagged, cross, and statistical features. The key innovation of this paper is a physics-based feedback loop built into the LSTM: the predicted train acceleration at each time step is used to systematically update the acceleration-dependent features before making the next prediction. This maintains physical consistency and causal relationships without requiring future measurements, reflecting real-world ATO operational limits. Results demonstrate very high accuracy (R² = 0.9993, MAE = 0.0083 km/h²) without error accumulation, suggesting benefits for both ATO control accuracy and energy efficiency.
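The feedback mechanism, feeding each predicted acceleration back into the acceleration-dependent input features of the next step, can be sketched as a closed-loop rollout like the one below; the feature layout and model are invented stand-ins for the paper's engineered TCMS features.

```python
import torch
import torch.nn as nn

class ATOModel(nn.Module):
    def __init__(self, n_feats=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_feats, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x, state=None):
        h, state = self.lstm(x, state)
        return self.out(h[:, -1]), state

model = ATOModel()
feats = torch.randn(1, 1, 4)     # [command, speed, grade, lagged accel]
state = None
rollout = []
for _ in range(10):              # closed-loop multi-step prediction
    accel, state = model(feats, state)
    a = accel.item()
    rollout.append(a)
    # Feedback: the predicted acceleration becomes the lagged-acceleration
    # feature of the next step, so no future measurements are needed.
    feats = feats.clone()
    feats[0, 0, 3] = a
print(rollout)
```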

24 pages, 3813 KB  
Article
VMD-SSA-LSTM-Based Cooling, Heating Load Forecasting, and Day-Ahead Coordinated Optimization for Park-Level Integrated Energy Systems
by Lintao Zheng, Dawei Li, Zezheng Zhou and Lihua Zhao
Buildings 2025, 15(21), 3920; https://doi.org/10.3390/buildings15213920 - 30 Oct 2025
Abstract
Park-level integrated energy systems (IESs) are increasingly challenged by rapid electrification and higher penetration of renewable energy, which exacerbate source–load imbalances and scheduling uncertainty. This study proposes a unified framework that couples high-accuracy cooling and heating load forecasting with day-ahead coordinated optimization for an office park in Tianjin. The forecasting module employs correlation-based feature selection and variational mode decomposition (VMD) to capture multi-scale dynamics, together with a sparrow search algorithm (SSA)-driven long short-term memory (LSTM) network whose hyperparameters are globally tuned against root mean square error to improve generalization and robustness. The scheduling module performs day-ahead optimization across source, grid, load, and storage to minimize either (i) the standard deviation (SD) of purchased power, reducing grid impact, or (ii) the total operating cost (OC), achieving economic performance. On the case dataset, the proposed method achieves mean absolute percentage errors (MAPEs) of 8.32% for cooling and 5.80% for heating, outperforming several baselines and validating the benefits of multi-scale decomposition combined with intelligent hyperparameter searching. Embedding the forecasts into day-ahead scheduling substantially reduces external purchases: on representative days, forecast-driven optimization lowers the SD of purchased electricity by 29.6% to 88.1% across the heating and cooling seasons; seasonally, OCs decrease by 6.4% to 15.1% in heating and by 3.8% to 11.6% in cooling. Overall, the framework enhances grid friendliness, peak–valley coordination, and the stability, flexibility, and low-carbon economics of park-level IESs.
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
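The decompose-predict-recombine pattern behind VMD-SSA-LSTM can be outlined as below: each decomposed mode gets its own LSTM forecaster, and the mode forecasts are summed. The decomposition here is faked with sinusoidal components, and the SSA hyperparameter search is omitted; a real pipeline would obtain the modes from a VMD implementation (e.g., the vmdpy package).

```python
import torch
import torch.nn as nn

class ModeLSTM(nn.Module):
    """One-step-ahead forecaster for a single decomposed mode."""
    def __init__(self, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, time, 1)
        h, _ = self.lstm(x)
        return self.out(h[:, -1])

t = torch.linspace(0, 8, 96)
# Stand-in "modes": in practice these would come from VMD on the load series.
modes = [torch.sin(f * t).reshape(1, -1, 1) for f in (1.0, 3.0, 7.0)]

forecasters = [ModeLSTM() for _ in modes]      # one LSTM per mode
next_load = sum(net(m) for net, m in zip(forecasters, modes))
print(next_load)   # recombined load forecast (untrained, illustrative)
```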

22 pages, 18068 KB  
Article
Deep Reinforcement Learning-Based Guidance Law for Intercepting Low–Slow–Small UAVs
by Peisen Zhu, Wanying Xu, Yongbin Zheng, Peng Sun and Zeyu Li
Aerospace 2025, 12(11), 968; https://doi.org/10.3390/aerospace12110968 - 30 Oct 2025
Abstract
Low, slow, and small (LSS) unmanned aerial vehicles (UAVs) pose great challenges to conventional guidance methods. Existing deep reinforcement learning (DRL)-based interception guidance laws have mostly focused on simplified two-dimensional planes and require strict initial launch scenarios (constructing collision triangles). Designing more robust guidance laws has therefore become a key research focus. In this paper, we propose a novel recurrent proximal policy optimization (RPPO)-based guidance law framework. Specifically, we first design initial launch conditions in three-dimensional space that are more applicable and realistic, without requiring a collision triangle to be formed at launch. Then, considering the temporal continuity of the seeker's observations, we introduce long short-term memory (LSTM) networks into the proximal policy optimization (PPO) algorithm to extract hidden temporal information from the observation sequences, thus supporting policy training. Finally, we propose a reward function based on velocity prediction and overload constraints. Simulation experiments show that the proposed RPPO framework achieves an interception rate of 95.3% and a miss distance of 1.2935 m under broader launch conditions. Moreover, the framework demonstrates strong generalization, effectively coping with unknown UAV maneuvers.
(This article belongs to the Section Aeronautics)
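The key architectural change named here, replacing PPO's feedforward policy with an LSTM so the policy conditions on the seeker's observation history, can be sketched as a recurrent actor like the one below; observation and action dimensions are placeholders, and the PPO training loop itself is omitted.

```python
import torch
import torch.nn as nn

class RecurrentActor(nn.Module):
    """LSTM policy: maps an observation sequence to guidance commands."""
    def __init__(self, obs_dim=6, act_dim=2, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, act_dim)     # mean acceleration commands
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, obs, state=None):
        h, state = self.lstm(obs, state)
        dist = torch.distributions.Normal(self.mu(h), self.log_std.exp())
        return dist, state        # state carries memory across time steps

actor = RecurrentActor()
obs = torch.randn(1, 1, 6)        # one seeker observation
dist, state = actor(obs)
action = dist.sample()            # stochastic PPO-style action
```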

19 pages, 134793 KB  
Article
A BERT–LSTM–Attention Framework for Robust Multi-Class Sentiment Analysis on Twitter Data
by Xinyu Zhang, Yang Liu, Tianhui Zhang, Lingmin Hou, Xianchen Liu, Zhen Guo and Aliya Mulati
Systems 2025, 13(11), 964; https://doi.org/10.3390/systems13110964 - 30 Oct 2025
Abstract
This paper proposes a hybrid deep learning model for robust and interpretable sentiment classification of Twitter data. The model integrates Bidirectional Encoder Representations from Transformers (BERT)-based contextual embeddings, a Bidirectional Long Short-Term Memory (BiLSTM) network, and a custom attention mechanism to classify tweets into four sentiment categories: Positive, Negative, Neutral, and Irrelevant. To address the challenges of noisy and multilingual social media content, the model incorporates a comprehensive preprocessing pipeline and data augmentation strategies, including back-translation and synonym replacement. An ablation study demonstrates that combining BERT with BiLSTM improves the model's sensitivity to sequence dependencies, while the attention mechanism enhances both classification accuracy and interpretability. Empirical results show that the proposed model outperforms BERT-only and BERT+BiLSTM baselines, achieving F1-scores above 0.94 across all sentiment classes. Attention weight visualizations further reveal the model's ability to focus on sentiment-bearing tokens, providing transparency in decision-making. The proposed framework is well suited for deployment in real-time sentiment monitoring systems and offers a scalable solution for multilingual and multi-class sentiment analysis in dynamic social media environments. We also include a focused characterization of the dataset via an exploratory data analysis in the Methods section.
(This article belongs to the Special Issue Data-Driven Insights with Predictive Marketing Analysis)
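The stacking described, BERT token embeddings fed to a BiLSTM and pooled by a learned attention layer, can be sketched as below. It loads a public bert-base-uncased checkpoint from the transformers library as a stand-in; the paper's actual checkpoint, attention form, and hyperparameters may differ.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertBiLSTMAttn(nn.Module):
    def __init__(self, hidden=128, n_classes=4):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        self.bilstm = nn.LSTM(768, hidden, batch_first=True,
                              bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # token-level attention scores
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, ids, mask):
        emb = self.bert(input_ids=ids, attention_mask=mask).last_hidden_state
        h, _ = self.bilstm(emb)                # (batch, tokens, 2*hidden)
        scores = self.attn(h).masked_fill(mask.unsqueeze(-1) == 0, -1e9)
        weights = torch.softmax(scores, dim=1) # inspectable for interpretability
        return self.head((weights * h).sum(dim=1))

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["great match today!"], return_tensors="pt")
logits = BertBiLSTMAttn()(batch["input_ids"], batch["attention_mask"])
```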

17 pages, 4959 KB  
Article
A Variational Mode Snake-Optimized Neural Network Prediction Model for Agricultural Land Subsidence Monitoring Based on Temporal InSAR Remote Sensing
by Zhenda Wang, Huimin Huang, Ruoxin Wang, Ming Guo, Longjun Li, Yue Teng and Yuefan Zhang
Processes 2025, 13(11), 3480; https://doi.org/10.3390/pr13113480 - 29 Oct 2025
Abstract
Interferometric Synthetic Aperture Radar (InSAR) technology is crucial for large-scale land subsidence analysis in cultivated areas within hilly and mountainous regions, and accurate prediction of this subsidence is of significant importance for agricultural resource management and planning. Addressing the limitations of existing subsidence prediction methods in terms of accuracy and model selection, this paper proposes a deep neural network prediction model based on Variational Mode Decomposition (VMD) and the Snake Optimizer (SO), termed VMD-SO-CNN-LSTM-MATT. VMD decomposes complex subsidence signals into stable intrinsic components, improving input data quality, while the SO algorithm globally optimizes the model parameters, avoiding local optima and enhancing prediction accuracy. The model takes time-series subsidence data extracted via the SBAS-InSAR technique as input. The original sequence is first decomposed into multiple intrinsic mode functions (IMFs) using VMD; a CNN-LSTM network incorporating a Multi-Head Attention mechanism (MATT) is then employed to model and predict each component, while the SO algorithm performs global optimization of the model hyperparameters. Experimental results demonstrate that the proposed model significantly outperforms the comparison models (a traditional Long Short-Term Memory (LSTM) neural network, VMD-CNN-LSTM-MATT, and a Sparrow Search Algorithm (SSA)-optimized CNN-LSTM) across key metrics, with minimum improvements of 29.85% in Mean Absolute Error (MAE), 8.42% in Root Mean Square Error (RMSE), and 33.69% in Mean Absolute Percentage Error (MAPE). The model effectively enhances the prediction accuracy of land subsidence in cultivated hilly and mountainous areas, validating its reliability and practicality for subsidence monitoring and prediction tasks.
(This article belongs to the Section AI-Enabled Process Engineering)
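The per-IMF predictor, a CNN front end followed by an LSTM and multi-head attention, can be outlined as below; kernel sizes, head counts, and dimensions are illustrative, and the VMD and Snake Optimizer stages are not shown.

```python
import torch
import torch.nn as nn

class CNNLSTMMATT(nn.Module):
    """CNN -> LSTM -> multi-head attention predictor for one IMF."""
    def __init__(self, hidden=32, heads=4):
        super().__init__()
        self.cnn = nn.Conv1d(1, 16, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.matt = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, time, 1)
        c = torch.relu(self.cnn(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.lstm(c)
        a, _ = self.matt(h, h, h)      # self-attention over time steps
        return self.out(a[:, -1])

imf = torch.randn(4, 60, 1)            # one VMD component, 60 epochs long
print(CNNLSTMMATT()(imf).shape)        # (4, 1) subsidence predictions
```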

22 pages, 588 KB  
Article
Hybrid AI-Based Framework for Generating Realistic Attack-Related Network Flow Data for Cybersecurity Digital Twins
by Eider Iturbe, Javier Arcas, Gabriel Gaminde, Erkuden Rios and Nerea Toledo
Appl. Sci. 2025, 15(21), 11574; https://doi.org/10.3390/app152111574 - 29 Oct 2025
Abstract
In cybersecurity digital twin environments, the ability to simulate realistic network traffic is critical for validating and training intrusion detection systems. However, generating synthetic data that accurately reflects the complex, time-dependent nature of network flows remains a significant challenge. This paper presents an AI-based data generation approach designed to produce multivariate temporal network flow data that accurately reflects adversarial scenarios. The proposed method integrates a Long Short-Term Memory (LSTM) architecture trained to capture the temporal dynamics of both normal and attack traffic, ensuring that the synthetic data preserves realistic, sequence-aware behavioral patterns. To further enhance data fidelity, a combination of deep learning-based generative models and statistical techniques is employed to synthesize both numerical and categorical features while maintaining the correct proportions and temporal relationships between attack and normal traffic. A key contribution of the framework is its ability to generate high-fidelity synthetic data that supports the simulation of realistic, production-like cybersecurity scenarios. Experimental results demonstrate the effectiveness of the approach in generating data that supports robust machine learning-based detection systems, making it a valuable tool for cybersecurity validation and training in digital twin environments.
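One way to realize an LSTM that emits both numerical and categorical flow fields, as the framework requires, is a shared recurrent trunk with separate output heads, sketched below with invented feature names and sizes.

```python
import torch
import torch.nn as nn

class FlowGenerator(nn.Module):
    """Shared LSTM trunk with a regression head (e.g., bytes, duration)
    and a categorical head (e.g., protocol) per time step."""
    def __init__(self, in_dim=8, hidden=64, n_numeric=2, n_proto=5):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.numeric = nn.Linear(hidden, n_numeric)    # continuous features
        self.categorical = nn.Linear(hidden, n_proto)  # protocol logits

    def forward(self, z):              # z: (batch, time, in_dim)
        h, _ = self.lstm(z)
        return self.numeric(h), self.categorical(h)

gen = FlowGenerator()
nums, proto_logits = gen(torch.randn(2, 20, 8))
protocols = proto_logits.argmax(-1)    # one categorical value per flow record
print(nums.shape, protocols.shape)     # (2, 20, 2) and (2, 20)
```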

18 pages, 4640 KB  
Article
Cable Outer Sheath Defect Identification Using Multi-Scale Leakage Current Features and Graph Neural Networks
by Musong Lin, Hankun Wei, Xukai Duan, Zhi Li, Qiang Fu and Yong Liu
Energies 2025, 18(21), 5687; https://doi.org/10.3390/en18215687 - 29 Oct 2025
Abstract
The outer sheath of power cables is prone to mechanical damage and environmental stress during long-term operation, and early defects are often difficult to detect accurately with conventional methods. To address this challenge, this paper proposes an outer sheath defect identification method based on leakage current features and graph neural networks. An electro–thermal coupled physical model was first developed to simulate the electric field distribution and thermal effects under typical defects, revealing the mechanisms by which defects influence the leakage current and its harmonic components. A power-frequency high-voltage experimental platform was then constructed to collect leakage current signals under conditions such as scratches, indentations, moisture, and chemical corrosion. Multi-scale frequency band features were extracted using wavelet packet decomposition to construct correlation graphs, which were modeled with a combination of graph convolutional networks and long short-term memory networks for spatiotemporal analysis. Experimental results demonstrate that the proposed method effectively improves the identification of defect type and severity. By integrating physical mechanism analysis with data-driven modeling, this approach provides a feasible pathway for condition monitoring and refined operation and maintenance of cable outer sheaths.
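The multi-scale feature step can be illustrated with PyWavelets: decompose a leakage-current window into wavelet packet nodes and take each band's energy as a feature, from which the correlation graph would then be built. The wavelet choice, depth, and signal here are arbitrary stand-ins for the paper's settings.

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)
t = np.linspace(0, 0.2, 2000)
# Synthetic leakage current: 50 Hz fundamental plus noise.
current = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.normal(size=t.size)

wp = pywt.WaveletPacket(data=current, wavelet="db4", maxlevel=3)
bands = wp.get_level(3, order="freq")     # 8 frequency-ordered sub-bands
energies = np.array([np.sum(np.square(node.data)) for node in bands])
features = energies / energies.sum()      # normalized band energies
print(features)   # per-band energy shares; graph nodes in the full method
```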
