Search Results (4,486)

Search Parameters:
Keywords = long-short term memory neural network

18 pages, 2796 KB  
Article
Leveraging Distributional Symmetry in Credit Card Fraud Detection via Conditional Tabular GAN Augmentation and LightGBM
by Cichen Wang, Can Xie and Jialiang Li
Symmetry 2026, 18(2), 224; https://doi.org/10.3390/sym18020224 - 27 Jan 2026
Abstract
Credit card fraud detection remains a major challenge due to extreme class imbalance and evolving attack patterns. This paper proposes a practical hybrid pipeline that combines conditional tabular generative adversarial networks (CTGANs) for targeted minority-class synthesis with Light Gradient Boosting Machine (LightGBM) for classification. Inspired by symmetry principles in machine learning, we leverage the adversarial equilibrium of CTGAN to generate realistic fraudulent transactions that maintain distributional symmetry with real fraud patterns, thereby preserving the structural and statistical balance of the original dataset. Synthetic fraud samples are merged with real data to form augmented training sets that restore the symmetry of class representation. We evaluate Simple Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) classifiers, and a LightGBM model on a public dataset using stratified 5-fold validation and an independent hold-out test set. Models are compared using sensitivity, precision, F-measure (F1), and area under the precision–recall curve (PR-AUC), which reflects symmetry between detection and false-alarm trade-offs. Results show that CTGAN-based augmentation yields large and consistent gains across architectures. The best-performing configuration, CTGAN + LightGBM, attains sensitivity = 0.986, precision = 0.982, F1 = 0.984, and PR-AUC = 0.918 on the test data, substantially outperforming non-augmented baselines and recent methods. These findings indicate that conditional synthetic augmentation materially improves the detection of rare fraud modes while preserving low false-alarm rates, demonstrating the value of symmetry-aware data synthesis in classification under imbalance. We discuss generation-quality checks, risk of distributional shift, and deployment considerations. Future work will explore alternative generative models with explicit symmetry constraints and time-aware production evaluation.
(This article belongs to the Section Computer)
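
As a rough illustration of the pipeline described in this abstract (not the authors' code), the sketch below oversamples the fraud class with the ctgan package and trains a LightGBM classifier on the augmented set; the DataFrame df and its "Class" label column are assumptions.

```python
# Minimal sketch, assuming a tabular DataFrame `df` with a binary "Class"
# label (1 = fraud). Uses the ctgan and lightgbm packages.
import pandas as pd
from ctgan import CTGAN
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split

def augment_and_train(df: pd.DataFrame, n_synthetic: int = 5000):
    train, test = train_test_split(df, test_size=0.2,
                                   stratify=df["Class"], random_state=0)
    # Fit the generator on real fraud rows only (targeted minority synthesis).
    fraud = train[train["Class"] == 1]
    gan = CTGAN(epochs=300)
    gan.fit(fraud, discrete_columns=["Class"])
    synthetic = gan.sample(n_synthetic)
    augmented = pd.concat([train, synthetic], ignore_index=True)

    clf = LGBMClassifier(n_estimators=500, learning_rate=0.05)
    clf.fit(augmented.drop(columns="Class"), augmented["Class"])
    return clf, test   # evaluate on the untouched hold-out split
```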

21 pages, 3516 KB  
Article
Visual Navigation Using Depth Estimation Based on Hybrid Deep Learning in Sparsely Connected Path Networks for Robustness and Low Complexity
by Huda Al-Saedi, Pedram Salehpour and Seyyed Hadi Aghdasi
Appl. Syst. Innov. 2026, 9(2), 29; https://doi.org/10.3390/asi9020029 - 27 Jan 2026
Abstract
Robot navigation refers to a robot’s ability to determine its position within a reference frame and plan a path to a target location. Visual navigation, which relies on visual sensors such as cameras, is one approach to this problem. Among visual navigation methods, Visual Teach and Repeat (VT&R) techniques are commonly used. To develop an effective robot navigation framework based on the VT&R method, accurate and fast depth estimation of the scene is essential. In recent years, event cameras have garnered significant interest from machine vision researchers due to their numerous advantages and applicability in various environments, including robotics and drones. However, the main open question is how to integrate these cameras into a navigation system. This research uses an attention-based UNET neural network to estimate scene depth from event-camera data; the attention mechanism enables accurate depth detection. This depth information is then used, together with a hybrid deep neural network consisting of a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM), for robot navigation. Simulation results on the DENSE dataset yield an RMSE of 8.15, which is competitive with similar methods. The method not only provides good accuracy but also operates at high speed, making it suitable for real-time applications and VT&R-based visual navigation.
(This article belongs to the Special Issue AI-Driven Decision Support for Systemic Innovation)
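
A minimal sketch of the hybrid CNN + LSTM navigation component, assuming PyTorch; the attention-based UNET depth estimator is taken as an upstream black box, and the layer sizes and three-way action output are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTMNavigator(nn.Module):
    def __init__(self, hidden: int = 128, n_actions: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(              # per-frame depth encoder
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())  # -> 32 * 4 * 4 = 512
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)    # e.g. left / forward / right

    def forward(self, depth_seq):                   # (B, T, 1, H, W) depth maps
        b, t = depth_seq.shape[:2]
        feats = self.encoder(depth_seq.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)                   # temporal context over frames
        return self.head(out[:, -1])                # action logits at last step

logits = CNNLSTMNavigator()(torch.randn(2, 8, 1, 64, 64))
```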

16 pages, 2052 KB  
Article
Modeling Road User Interactions with Dynamic Graph Attention Networks for Traffic Crash Prediction
by Shihan Ma and Jidong J. Yang
Appl. Sci. 2026, 16(3), 1260; https://doi.org/10.3390/app16031260 - 26 Jan 2026
Abstract
This paper presents a novel deep learning framework for traffic crash prediction that leverages graph-based representations to model complex interactions among road users. At its core is a dynamic Graph Attention Network (GAT), which abstracts road users and their interactions as evolving nodes and edges in a spatiotemporal graph. Each node represents an individual road user, characterized by state features such as location and velocity. A node-wise Long Short-Term Memory (LSTM) network is employed to capture the temporal evolution of these features. Edges are dynamically constructed based on spatial and temporal proximity, existing only when distance and time thresholds are met, so that only relevant interactions are modeled. The GAT learns attention-weighted representations of these dynamic interactions, which are subsequently used by a classifier to predict the risk of a crash. Experimental results demonstrate that the proposed GAT-based method achieves 86.1% prediction accuracy, highlighting its effectiveness for proactive collision risk assessment and its potential to inform real-time warning systems and preventive safety interventions.
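
The proximity-thresholded edge construction plus a GAT layer can be sketched as follows, assuming torch_geometric; the node count, feature size, and 20 m threshold are invented for illustration, and the time threshold is omitted for brevity.

```python
import torch
from torch_geometric.nn import GATConv

def proximity_edges(pos: torch.Tensor, d_max: float) -> torch.Tensor:
    """pos: (N, 2) road-user locations -> (2, E) edge index."""
    dist = torch.cdist(pos, pos)                      # pairwise distances
    src, dst = torch.where((dist < d_max) & (dist > 0))
    return torch.stack([src, dst])

x = torch.randn(6, 16)            # per-user features from a node-wise LSTM
pos = torch.rand(6, 2) * 50.0     # locations in metres (synthetic)
edge_index = proximity_edges(pos, d_max=20.0)
gat = GATConv(16, 32, heads=4, concat=False)
h = gat(x, edge_index)            # attention-weighted interaction embeddings
```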

44 pages, 1721 KB  
Systematic Review
Vibration-Based Predictive Maintenance for Wind Turbines: A PRISMA-Guided Systematic Review on Methods, Applications, and Remaining Useful Life Prediction
by Carlos D. Constantino-Robles, Francisco Alberto Castillo Leonardo, Jessica Hernández Galván, Yoisdel Castillo Alvarez, Luis Angel Iturralde Carrera and Juvenal Rodríguez-Reséndiz
Appl. Mech. 2026, 7(1), 11; https://doi.org/10.3390/applmech7010011 - 26 Jan 2026
Abstract
This paper presents a systematic review conducted under the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework, analyzing 286 scientific articles focused on vibration-based predictive maintenance strategies for wind turbines within the context of advanced Prognostics and Health Management (PHM). The review combines international standards (ISO 10816, ISO 13373, and IEC 61400) with recent developments in sensing technologies, including piezoelectric accelerometers, microelectromechanical systems (MEMS), and fiber Bragg grating (FBG) sensors. Classical signal processing techniques, such as the Fast Fourier Transform (FFT) and wavelet-based methods, are identified as key preprocessing tools for feature extraction prior to the application of machine-learning-based diagnostic algorithms. Special emphasis is placed on machine learning and deep learning techniques, including Support Vector Machines (SVM), Random Forest (RF), Convolutional Neural Networks (CNN), Long Short-Term Memory networks (LSTM), and autoencoders, as well as on hybrid digital twin architectures that enable accurate Remaining Useful Life (RUL) estimation and support autonomous decision-making processes. The bibliometric and case study analysis covering the period 2020–2025 reveals a strong shift toward multisource data fusion—integrating vibration, acoustic, temperature, and Supervisory Control and Data Acquisition (SCADA) data—and the adoption of cloud-based platforms for real-time monitoring, particularly in offshore wind farms where physical accessibility is constrained. The results indicate that vibration-based predictive maintenance strategies can reduce operation and maintenance costs by more than 20%, extend component service life up to threefold, and achieve turbine availability levels between 95% and 98%. These outcomes confirm that vibration-driven PHM frameworks represent a fundamental pillar for the development of smart, sustainable, and resilient next-generation wind energy systems.
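
As a small example of the FFT preprocessing step the review identifies, the snippet below extracts a dominant frequency and a band energy from a synthetic vibration signal; the sample rate and fault frequency are made up.

```python
import numpy as np

fs = 10_000                                  # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
# Synthetic bearing signal: a 157 Hz fault tone buried in noise.
signal = np.sin(2 * np.pi * 157 * t) + 0.3 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
peak = freqs[spectrum.argmax()]              # dominant frequency (~157 Hz)
band_energy = spectrum[(freqs > 100) & (freqs < 200)].sum()  # feature for the classifier
```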

35 pages, 5876 KB  
Article
Automatic Sleep Staging Using SleepXLSTM Based on Heterogeneous Representation of Heart Rate Data
by Tianlong Wu, Zisen Mao, Luyang Shi, Huaren Zhou, Chaohua Xie and Bowen Ran
Electronics 2026, 15(3), 505; https://doi.org/10.3390/electronics15030505 - 23 Jan 2026
Abstract
Automatic sleep staging technology based on wearable photoplethysmography can provide a non-invasive and continuous solution for large-scale sleep health monitoring. This study accordingly developed a novel cross-scale dynamically coupled extended long short-term memory network (SleepXLSTM) to realize automatic sleep staging based on heart rate signals collected by wearable devices. SleepXLSTM models the relationship between heart rate fluctuations and sleep stage labels by correlating physiological features with clinical semantics using a knowledge graph neural network. Furthermore, an excitation–inhibition dual-effect regulator is applied in an improved multiplicative long short-term memory network along with memory mixing in a scalar long short-term memory network to extract and strengthen the key heart rate timing features while filtering out noise produced by motion artifacts, thereby facilitating subsequent high-precision sleep staging. The benefits and functions of this comprehensive heart rate feature extraction were demonstrated using sleep staging prediction and ablation experiments. The proposed model exhibited a superior accuracy of 91.25% and Cohen’s kappa coefficient of 0.876 compared to an extant state-of-the-art neural network sleep staging model with an accuracy of 69.80% and kappa coefficient of 0.040. On the ISRUC-Sleep dataset, the model achieved an accuracy of 87.51% and F1 score of 0.8760. The dynamic coupling strategy employed by SleepXLSTM for automatic sleep staging using the heterogeneous temporal representation of heart rate data can promote the development of smart wearable devices to provide early warning of sleep disorders and realize cost-effective technical support for sleep health management.
(This article belongs to the Section Artificial Intelligence)
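
For flavour, a plain-LSTM stand-in for heart-rate sleep staging is sketched below in PyTorch; SleepXLSTM's extended/scalar LSTM variants, knowledge graph coupling, and excitation–inhibition regulator are not reproduced, and the five-stage output is an assumption.

```python
import torch
import torch.nn as nn

class HRSleepStager(nn.Module):
    def __init__(self, hidden: int = 64, n_stages: int = 5):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_stages)   # W / N1 / N2 / N3 / REM

    def forward(self, hr):              # hr: (B, T, 1) beats-per-minute series
        out, _ = self.lstm(hr)
        return self.head(out)           # per-epoch stage logits, (B, T, 5)

stager = HRSleepStager()
logits = stager(torch.randn(4, 120, 1))   # 120 thirty-second epochs per record
```
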
17 pages, 2959 KB  
Article
GABES-LSTM-Based Method for Predicting Draft Force in Tractor Rotary Tillage Operations
by Wenbo Wei, Maohua Xiao, Yue Niu, Min He, Zhiyuan Chen, Gang Yuan and Yejun Zhu
Agriculture 2026, 16(3), 297; https://doi.org/10.3390/agriculture16030297 - 23 Jan 2026
Abstract
During rotary tillage operations, the draft force is jointly affected by operating parameters and soil conditions, exhibiting pronounced nonlinearity, time-varying behavior, and historical dependence, all of which impose higher requirements on tractor operating parameter matching and traction performance analysis. To address the limited prediction accuracy and stability of traditional empirical models and single data-driven approaches under complex field conditions, a draft force prediction method based on a long short-term memory (LSTM) neural network jointly optimized by a genetic algorithm (GA) and the bald eagle search (BES) algorithm, termed GABES-LSTM, is proposed. First, on the basis of the mechanical characteristics of rotary tillage operations, a time-series mathematical description of draft force is established, and the prediction problem is formulated as a multi-input single-output nonlinear temporal mapping driven by operating parameters such as travel speed, rotary speed, and tillage depth. Subsequently, an LSTM-based draft force prediction model is constructed, in which GA is employed for global hyperparameter search and BES is integrated for local fine-grained optimization, thereby improving the effectiveness of model parameter optimization. Finally, a dataset is established using measured field rotary tillage data to train and test the proposed model, and comparative analyses are conducted against LSTM, GA-LSTM, and BES-LSTM models. Experimental results indicate that the GABES-LSTM model outperforms the comparison models in terms of mean absolute percentage error, mean relative error, relative analysis error, and coefficient of determination, effectively capturing the dynamic variation characteristics of draft force during rotary tillage operations while maintaining stable prediction performance under repeated experimental conditions. This method provides effective data support for draft force prediction analysis and operating parameter adjustment during rotary tillage operations.
(This article belongs to the Section Agricultural Technology)
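
A compressed sketch of the GA outer loop over LSTM hyperparameters is shown below; the BES local refinement is omitted, and the search space and the stub fitness function are hypothetical placeholders for "train the LSTM, return validation error".

```python
import random

# Hypothetical hyperparameter space for the draft-force LSTM.
SPACE = {"hidden": [32, 64, 128], "lr": [1e-3, 5e-3, 1e-2], "window": [10, 20, 40]}

def fitness(genome):
    # Placeholder: would train an LSTM with these settings and return
    # validation MAPE (lower is better). Random stub keeps the sketch runnable.
    return random.random()

def genetic_search(pop_size=10, generations=15, mut_rate=0.2):
    pop = [{k: random.choice(v) for k, v in SPACE.items()} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                       # elitist selection
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = {k: random.choice([a[k], b[k]]) for k in SPACE}   # crossover
            if random.random() < mut_rate:                            # mutation
                key = random.choice(list(SPACE))
                child[key] = random.choice(SPACE[key])
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)   # BES would locally refine this winner

best = genetic_search()
```
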
13 pages, 613 KB  
Article
Selective Motor Entropy Modulation and Targeted Augmentation for the Identification of Parkinsonian Gait Patterns Using Multimodal Gait Analysis
by Yacine Benyoucef, Jouhayna Harmouch, Borhan Asadi, Islem Melliti, Antonio del Mastro, Pablo Herrero, Alberto Carcasona-Otal and Diego Lapuente-Hernández
Life 2026, 16(2), 193; https://doi.org/10.3390/life16020193 - 23 Jan 2026
Abstract
Background/Objectives: Parkinsonian gait is characterized by impaired motor adaptability, altered temporal organization, and reduced movement variability. While data augmentation is commonly used to mitigate class imbalance in gait-based machine learning models, conventional strategies often ignore physiological differences between healthy and pathological movements, potentially distorting meaningful motor dynamics. This study explores whether preserving healthy motor variability while selectively augmenting pathological gait signals can improve the robustness and physiological coherence of gait pattern classification models. Methods: Eight patients with Parkinsonian gait patterns and forty-eight healthy participants performed walking tasks on the Motigravity platform under hypogravity conditions. Full-body kinematic data were acquired using wearable inertial sensors. A selective augmentation strategy based on smooth time-warping was applied exclusively to pathological gait segments (×5, σ = 0.2), while healthy gait signals were left unaltered to preserve natural motor variability. Model performance was evaluated using a hybrid convolutional neural network–long short-term memory (CNN–LSTM) architecture across multiple augmentation configurations. Results: Selective augmentation of pathological gait signals achieved the highest classification performance (94.1% accuracy, AUC = 0.97), with balanced sensitivity (93.8%) and specificity (94.3%). Performance decreased when augmentation exceeded an optimal range of variability, suggesting that beneficial augmentation is constrained by physiologically plausible temporal dynamics. Conclusions: These findings demonstrate that physiology-informed, selective data augmentation can improve gait pattern classification under constrained data conditions. Rather than supporting disease-specific diagnosis, this proof-of-concept study highlights the importance of respecting intrinsic differences in motor variability when designing augmentation strategies for clinical gait analysis. Future studies incorporating disease-control cohorts and subject-independent validation are required to assess specificity and clinical generalizability.
(This article belongs to the Section Biochemistry, Biophysics and Computational Biology)
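
The selective smooth time-warping augmentation (×5 copies, σ = 0.2, applied to pathological segments only) might look roughly like the NumPy sketch below; the knot-based warp construction is our assumption, not the authors' exact procedure.

```python
import numpy as np

def time_warp(signal: np.ndarray, sigma: float = 0.2, knots: int = 4) -> np.ndarray:
    """signal: (T, channels) -> one smoothly time-warped copy."""
    T = signal.shape[0]
    # Random local speeds at a few anchor knots, interpolated to full length
    # and kept positive so the warp stays monotonic.
    anchors = 1.0 + sigma * np.random.randn(knots + 2)
    speeds = np.clip(np.interp(np.arange(T),
                               np.linspace(0, T - 1, knots + 2), anchors),
                     0.1, None)
    warped = np.cumsum(speeds)
    warped = (warped - warped[0]) / (warped[-1] - warped[0]) * (T - 1)
    return np.stack([np.interp(np.arange(T), warped, signal[:, c])
                     for c in range(signal.shape[1])], axis=1)

pathological = np.random.randn(200, 6)                   # one IMU gait segment
augmented = [time_warp(pathological) for _ in range(5)]  # x5; healthy data untouched
```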

30 pages, 2761 KB  
Article
HST–MB–CREH: A Hybrid Spatio-Temporal Transformer with Multi-Branch CNN/RNN for Rare-Event-Aware PV Power Forecasting
by Guldana Taganova, Jamalbek Tussupov, Assel Abdildayeva, Mira Kaldarova, Alfiya Kazi, Ronald Cowie Simpson, Alma Zakirova and Bakhyt Nurbekov
Algorithms 2026, 19(2), 94; https://doi.org/10.3390/a19020094 - 23 Jan 2026
Abstract
We propose the Hybrid Spatio-Temporal Transformer with Multi-Branch CNN/RNN and Extreme-Event Head (HST–MB–CREH), a hybrid spatio-temporal deep learning architecture for joint short-term photovoltaic (PV) power forecasting and the detection of rare extreme events, to support the reliable operation of renewable-rich power systems. The model combines a spatio-temporal transformer encoder with three convolutional neural network (CNN)/recurrent neural network (RNN) branches (CNN → long short-term memory (LSTM), LSTM → gated recurrent unit (GRU), CNN → GRU) and a dense pathway for tabular meteorological and calendar features. A multitask output head simultaneously performs the regression of PV power and binary classification of extremes defined above the 95th percentile. We evaluate HST–MB–CREH on the publicly available Renewable Power Generation and Weather Conditions dataset with hourly resolution from 2017 to 2022, using a 5-fold TimeSeriesSplit protocol to avoid temporal leakage and to cover multiple seasons. Compared with tree ensembles (RandomForest, XGBoost), recurrent baselines (Stacked GRU, LSTM), and advanced hybrid/transformer models (Hybrid Multi-Branch CNN–LSTM/GRU with Dense Path and Extreme-Event Head (HMB–CLED) and Spatio-Temporal Multitask Transformer with Extreme-Event Head (STM–EEH)), the proposed architecture achieves the best overall trade-off between accuracy and rare-event sensitivity, with normalized performance of RMSE_z = 0.2159 ± 0.0167, MAE_z = 0.1100 ± 0.0085, mean absolute percentage error (MAPE) = 9.17 ± 0.45%, R² = 0.9534 ± 0.0072, and AUC_ext = 0.9851 ± 0.0051 across folds. Knowledge extraction is supported via attention-based analysis and permutation feature importance, which highlight the dominant role of global horizontal irradiance, diurnal harmonics, and solar geometry features. The results indicate that hybrid spatio-temporal multitask architectures can substantially improve both forecast accuracy and robustness to extremes, making HST–MB–CREH a promising building block for intelligent decision-support tools in smart grids with a high share of PV generation.
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
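
Two concrete pieces of the evaluation protocol can be sketched directly: the 5-fold TimeSeriesSplit (training data always precedes the test window) and the extreme-event labels defined above the 95th percentile of PV power. The data and features below are placeholders.

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

power = np.random.rand(8760)                # one year of hourly PV output (stub)
X = np.arange(power.size).reshape(-1, 1)    # placeholder feature matrix
extreme = (power > np.percentile(power, 95)).astype(int)   # rare-event labels

for fold, (train_idx, test_idx) in enumerate(TimeSeriesSplit(n_splits=5).split(X)):
    # Train the multitask model on (power[train_idx], extreme[train_idx]);
    # every training index precedes the test window, so there is no leakage.
    assert train_idx[-1] < test_idx[0]
```
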
15 pages, 2333 KB  
Article
Prediction of Fatigue Damage Evolution in 3D-Printed CFRP Based on Ultrasonic Testing and LSTM
by Erzhuo Li, Sha Xu, Hongqing Wan, Hao Chen, Yali Yang and Yongfang Li
Appl. Sci. 2026, 16(2), 1139; https://doi.org/10.3390/app16021139 - 22 Jan 2026
Abstract
To address the prediction of fatigue damage in 3D-printed Carbon Fiber Reinforced Polymer (CFRP), this study used 3D-printing technology to fabricate CFRP specimens. Through multi-stage fatigue testing, samples with varying porosity levels were obtained. Based on porosity test results and ultrasonic attenuation coefficient measurements of specimens under different fatigue cycle counts, a quantitative relationship model was established between the porosity and ultrasonic attenuation coefficient of 3D-printed CFRP. Using the porosity and fatigue-loading cycles obtained from tests, the Time-series Generative Adversarial Network (TimeGAN) algorithm was employed for data augmentation to meet the requirements of neural-network training. Subsequently, a Long Short-Term Memory (LSTM) neural network was used to predict the fatigue damage evolution of 3D-printed CFRP specimens. The findings indicate that, by leveraging the established porosity–attenuation relationship, the fatigue damage evolution of the material can be tracked non-destructively from the ultrasonic attenuation coefficient.
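
The porosity–attenuation relationship can be illustrated with a simple least-squares fit that is then inverted to estimate porosity from a new attenuation measurement; the measured pairs below are invented, and the paper's actual model form may differ.

```python
import numpy as np

porosity = np.array([1.2, 2.1, 3.4, 4.8, 6.0])       # % (illustrative values)
attenuation = np.array([0.8, 1.3, 2.0, 2.9, 3.5])    # dB/mm (illustrative values)

# Linear least-squares fit: attenuation ~= slope * porosity + intercept.
slope, intercept = np.polyfit(porosity, attenuation, deg=1)

def porosity_from_attenuation(alpha: float) -> float:
    """Invert the fit to estimate porosity from a new ultrasonic measurement."""
    return (alpha - intercept) / slope

print(porosity_from_attenuation(2.4))   # estimate feeding the LSTM damage model
```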

20 pages, 1124 KB  
Article
Scalable Neural Cryptanalysis of Block Ciphers in Federated Attack Environments
by Ongee Jeong, Seonghwan Park and Inkyu Moon
Mathematics 2026, 14(2), 373; https://doi.org/10.3390/math14020373 - 22 Jan 2026
Abstract
This paper presents an extended investigation into deep learning-based cryptanalysis of block ciphers by introducing and evaluating a multi-server attack environment. Building upon our prior work in centralized settings, we explore the practicality and scalability of deploying such attacks across multiple distributed edge servers. We assess the vulnerability of five representative block ciphers—DES, SDES, AES-128, SAES, and SPECK32/64—under two neural attack models: Encryption Emulation (EE) and Plaintext Recovery (PR), using both fully connected neural networks and Recurrent Neural Networks (RNNs) based on bidirectional Long Short-Term Memory (BiLSTM). Our experimental results show that the proposed federated learning-based cryptanalysis framework achieves performance nearly identical to that of centralized attacks, particularly for ciphers with low round complexity. Even as the number of edge servers increases to 32, the attack models maintain high accuracy in reduced-round settings. We validate our security assessments through formal statistical significance testing using two-tailed binomial tests with 99% confidence intervals. Additionally, our scalability analysis demonstrates that aggregation times remain negligible (<0.01% of total training time), confirming the computational efficiency of the federated framework. Overall, this work provides both a scalable cryptanalysis framework and valuable insights into the design of cryptographic algorithms that are resilient to distributed, deep learning-based threats.
(This article belongs to the Section E: Applied Mathematics)
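
The federated aggregation step amounts to averaging locally trained model weights; a minimal PyTorch state_dict FedAvg is sketched below (equal weighting, local training elided). This is a generic sketch of federated averaging, not the authors' implementation.

```python
import copy
import torch

def fed_avg(server_states: list) -> dict:
    """Average the state_dicts trained on each edge server (equal weighting)."""
    avg = copy.deepcopy(server_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in server_states]).mean(0)
    return avg

# Usage after each local round (edge_models is a list of identically shaped nets):
# global_model.load_state_dict(fed_avg([m.state_dict() for m in edge_models]))
```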

27 pages, 10518 KB  
Article
DL-PCMNet: Distributed Learning Enabled Parallel Convolutional Memory Network for Skin Cancer Classification with Dermatoscopic Images
by Afnan M. Alhassan and Nouf I. Altmami
Diagnostics 2026, 16(2), 359; https://doi.org/10.3390/diagnostics16020359 - 22 Jan 2026
Abstract
Background/Objectives: Globally, skin cancer is one of the most dreaded and rapidly spreading illnesses, acknowledged as a lethal form of cancer caused by the abnormal growth of skin cells. Classifying and diagnosing types of skin lesions is complex, and recognizing tumors from dermoscopic images remains challenging. Existing methods have limitations such as insufficient datasets, computational complexity, class imbalance, and poor classification performance. Methods: This research presents the Distributed Learning enabled Parallel Convolutional Memory Network (DL-PCMNet) model to effectively classify skin cancer while overcoming these limitations. The proposed DL-PCMNet model utilizes a distributed learning framework to provide greater flexibility during the learning process and to increase the reliability of the model. Moreover, the model integrates a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) model in a parallel arrangement, which enhances robustness and accuracy by capturing long-term dependencies. Furthermore, advanced preprocessing and feature extraction techniques increase classification accuracy. Results: The evaluation exhibits an accuracy of 97.28%, precision of 97.30%, sensitivity of 97.17%, and specificity of 97.72% at a 90% training split on the ISIC 2019 skin lesion dataset. Conclusions: The proposed DL-PCMNet model achieved efficient and accurate skin cancer classification compared with other existing models.
(This article belongs to the Special Issue Artificial Intelligence in Dermatology)
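
A hedged sketch of the parallel CNN/LSTM arrangement: both branches process the same dermatoscopic image and their features are concatenated before classification. The layer sizes, the row-sequence LSTM input, and the eight-class output are our assumptions.

```python
import torch
import torch.nn as nn

class ParallelCNNLSTM(nn.Module):
    def __init__(self, n_classes: int = 8):
        super().__init__()
        self.cnn = nn.Sequential(                       # spatial branch
            nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())       # -> (B, 32)
        self.lstm = nn.LSTM(64, 64, batch_first=True)    # rows as a sequence
        self.head = nn.Linear(32 + 64, n_classes)

    def forward(self, img):                 # img: (B, 3, 64, 64)
        spatial = self.cnn(img)
        rows = img.mean(1)                  # (B, 64, 64): greyscale row sequence
        _, (h, _) = self.lstm(rows)         # long-range dependencies across rows
        return self.head(torch.cat([spatial, h[-1]], dim=1))

logits = ParallelCNNLSTM()(torch.randn(2, 3, 64, 64))
```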

19 pages, 3742 KB  
Article
Short-Term Solar and Wind Power Forecasting Using Machine Learning Algorithms for Microgrid Operation
by Vidhi Rajeshkumar Patel, Havva Sena Cakar and Mohsin Jamil
Energies 2026, 19(2), 550; https://doi.org/10.3390/en19020550 - 22 Jan 2026
Abstract
Accurate short-term forecasting of renewable energy sources is essential for stable and efficient microgrid operation. Existing models primarily focus on either solar or wind prediction, often neglecting their combined stochastic behavior within isolated systems. This study presents a comparative evaluation of three machine-learning models (Random Forest, Artificial Neural Network (ANN), and Long Short-Term Memory (LSTM)) for short-term solar and wind forecasting in microgrid environments. Historical meteorological data and power generation records are used to train and validate the models, each optimized to capture nonlinear and rapidly fluctuating weather dynamics. Forecasting performance is quantitatively evaluated using Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Percentage Error (MPE). The predicted values are integrated into a microgrid energy management system to enhance operational decisions such as battery storage scheduling, diesel generator coordination, and load balancing. Among the evaluated models, the ANN achieved the lowest prediction error with an MAE of 64.72 kW on the one-year dataset, outperforming both LSTM and Random Forest. The novelty of this study lies in integrating multi-source data into a unified ML-based predictive framework, enabling improved reliability, reduced fossil fuel usage, and enhanced energy resilience in remote microgrids. Predictions were produced using Orange 3.40 software and Python 3.12 code. By enhancing forecasting accuracy, the project seeks to reduce reliance on fossil fuels, lower operational costs, and improve grid stability, providing scalable insights for remote microgrids transitioning to renewables.
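
The model comparison reduces to applying the same error metrics to each model's hold-out predictions; a minimal NumPy version is below, with placeholder predictions standing in for the three trained models.

```python
import numpy as np

def report(name, y_true, y_pred):
    err = y_pred - y_true
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    mpe = (err / np.clip(y_true, 1e-6, None)).mean() * 100   # signed, in %
    print(f"{name}: MAE={mae:.1f} kW  RMSE={rmse:.1f} kW  MPE={mpe:.1f}%")

y_true = np.random.rand(100) * 500            # measured power in kW (stub)
for name in ("RandomForest", "ANN", "LSTM"):  # stand-ins for the trained models
    report(name, y_true, y_true + np.random.randn(100) * 60)
```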

32 pages, 472 KB  
Review
Electrical Load Forecasting in the Industrial Sector: A Literature Review of Machine Learning Models and Architectures for Grid Planning
by Jannis Eckhoff, Simran Wadhwa, Marc Fette, Jens Peter Wulfsberg and Chathura Wanigasekara
Energies 2026, 19(2), 538; https://doi.org/10.3390/en19020538 - 21 Jan 2026
Abstract
The energy transition, driven by the global shift toward renewables and electrification, necessitates accurate and efficient prediction of electrical load profiles to quantify energy consumption. This systematic literature review (SLR), conducted following PRISMA guidelines, synthesizes hybrid architectures for sequential electrical load profiles, spanning statistical techniques, machine learning (ML), and deep learning (DL) strategies for optimizing performance and practical viability. The findings reveal a dominant trend toward complex hybrid models that leverage the combined strengths of DL architectures such as long short-term memory (LSTM) networks and optimization algorithms such as genetic algorithms and Particle Swarm Optimization (PSO) to capture non-linear relationships. Hybrid models achieve superior performance by synergistically integrating components such as Convolutional Neural Networks (CNNs) for feature extraction and LSTMs for temporal modeling with feature selection algorithms, which collectively capture local trends, cross-correlations, and long-term dependencies in the data. A crucial challenge identified is the lack of an established framework for managing adaptable output lengths in dynamic neural network forecasting. Addressing this, we propose the first explicit idea of decoupling output-length prediction from the core signal prediction task. A key finding is that while models, particularly optimization-tuned hybrid architectures, have demonstrated quantitative superiority over conventional shallow methods, their performance assessment relies heavily on statistical measures such as Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE). For comprehensive performance assessment, there is a crucial need to develop tailored, application-based metrics that integrate system economics and major planning aspects to ensure reliable domain-specific validation.
(This article belongs to the Special Issue Power Systems and Smart Grids: Innovations and Applications)
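
The proposed decoupling of output length from signal prediction is only a concept in the review; one possible reading is a two-head network where one head classifies the forecast horizon and the other emits the load values, as in this purely illustrative sketch.

```python
import torch
import torch.nn as nn

class DecoupledForecaster(nn.Module):
    def __init__(self, hidden: int = 64, max_horizon: int = 96):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.length_head = nn.Linear(hidden, max_horizon)  # classify the horizon
        self.signal_head = nn.Linear(hidden, max_horizon)  # full-length forecast

    def forward(self, load):                 # load: (B, T, 1) history
        _, (h, _) = self.lstm(load)
        h = h[-1]
        horizon = self.length_head(h).argmax(-1) + 1   # steps to keep per sample
        return self.signal_head(h), horizon  # mask the forecast to `horizon` downstream

forecast, horizon = DecoupledForecaster()(torch.randn(2, 168, 1))
```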

17 pages, 2935 KB  
Article
A Hybrid Deep Learning Framework for Non-Intrusive Load Monitoring
by Xiangbin Kong, Zhihang Gui, Minghu Wu, Chuyu Miao and Zhe Luo
Electronics 2026, 15(2), 453; https://doi.org/10.3390/electronics15020453 - 21 Jan 2026
Abstract
In recent years, load disaggregation and non-intrusive load-monitoring (NILM) methods have garnered widespread attention for optimizing energy management systems, becoming crucial tools for achieving energy efficiency and analyzing power consumption. However, existing NILM methods struggle to accurately handle appliances with multiple operational states and suffer from low accuracy and poor computational efficiency, particularly in modeling long-term dependencies and complex appliance load patterns. This article proposes an improved transformer-based NILM model. The model first utilizes a convolutional neural network (CNN) to extract features from the input sequence and employs a bidirectional long short-term memory (BiLSTM) network to model long-term dependencies. Subsequently, multiple transformer blocks capture dependencies within the sequence. To validate the effectiveness of the proposed model, we applied it to two real-world household energy datasets, UK-DALE and REDD, where it improves the F1 score over the next-best models by 24.5% and 22.8%, respectively.
(This article belongs to the Section Artificial Intelligence)
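
The described CNN → BiLSTM → transformer pipeline might be assembled as below in PyTorch; the layer sizes, number of transformer blocks, and per-step regression head are assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class NILMNet(nn.Module):
    def __init__(self, d: int = 64, n_blocks: int = 2):
        super().__init__()
        self.cnn = nn.Conv1d(1, d, kernel_size=5, padding=2)   # local features
        self.bilstm = nn.LSTM(d, d // 2, batch_first=True, bidirectional=True)
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_blocks)
        self.head = nn.Linear(d, 1)          # appliance power per time step

    def forward(self, mains):                # mains: (B, T) aggregate power
        x = self.cnn(mains.unsqueeze(1)).transpose(1, 2)   # (B, T, d)
        x, _ = self.bilstm(x)                              # long-term dependencies
        return self.head(self.blocks(x)).squeeze(-1)       # (B, T) disaggregated

y = NILMNet()(torch.randn(2, 600))
```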

14 pages, 11925 KB  
Technical Note
Detecting Mowed Tidal Wetlands Using Time-Series NDVI and LSTM-Based Machine Learning
by Mayeesha Humaira, Stephen Aboagye-Ntow, Chuyuan Wang, Alexi Sanchez de Boado, Mark Burchick, Leslie Wood Mummert and Xin Huang
Land 2026, 15(1), 193; https://doi.org/10.3390/land15010193 - 21 Jan 2026
Abstract
This study presents the first application of machine learning (ML) to detect and map mowed tidal wetlands in the Chesapeake Bay region of Maryland and Virginia, focusing on emergent estuarine intertidal (E2EM) wetlands. Monitoring human disturbances like mowing is essential because repeated mowing stresses wetland vegetation, reducing habitat quality and diminishing other ecological services wetlands provide, including shoreline stabilization and water filtration. Traditional field-based monitoring is labor-intensive and impractical for large-scale assessments. To address these challenges, this study utilized 2021 and 2022 Sentinel-2 satellite imagery and a time-series analysis of the Normalized Difference Vegetation Index (NDVI) to distinguish between mowed and unmowed (control) wetlands. A bidirectional Long Short-Term Memory (BiLSTM) neural network was created to predict NDVI patterns associated with mowing events, such as rapid decreases followed by slow vegetation regeneration. The training dataset comprised 204 field-verified and desktop-identified samples, accounting for under 0.002% of the research area’s herbaceous E2EM wetlands. The model obtained 97.5% accuracy on an internal test set and was verified at eight separate Chesapeake Bay locations, indicating its promising generality. This work demonstrates the potential of remote sensing and machine learning for scalable, automated monitoring of tidal wetland disturbances to aid in conservation, restoration, and resource management.
(This article belongs to the Section Land – Observation and Monitoring)
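
A compact sketch of the BiLSTM mowing detector: an NDVI time series in, a mowed/unmowed probability out. The sequence length (roughly one season of cloud-free Sentinel-2 observations) and hidden size are assumptions.

```python
import torch
import torch.nn as nn

class NDVIMowingDetector(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.bilstm = nn.LSTM(1, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, ndvi):                 # ndvi: (B, T, 1), values in [-1, 1]
        _, (h, _) = self.bilstm(ndvi)
        h = torch.cat([h[0], h[1]], dim=1)   # forward + backward final states
        return torch.sigmoid(self.head(h))   # P(mowed) per pixel time series

p = NDVIMowingDetector()(torch.rand(4, 40, 1))   # ~40 cloud-free scenes/season
```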
