Search Results (244)

Search Parameters:
Keywords = gated recurrent unit (GRU) RNN

30 pages, 9222 KiB  
Article
Using Deep Learning in Forecasting the Production of Electricity from Photovoltaic and Wind Farms
by Michał Pikus, Jarosław Wąs and Agata Kozina
Energies 2025, 18(15), 3913; https://doi.org/10.3390/en18153913 - 23 Jul 2025
Viewed by 318
Abstract
Accurate forecasting of electricity production is crucial for the stability of the entire energy sector. However, predicting future renewable energy production and its value is difficult due to the complex processes that affect production from renewable energy sources. In this article, we examine the performance of basic deep learning models for electricity forecasting. We designed deep learning models including recurrent neural networks (RNNs), mainly long short-term memory (LSTM) networks and gated recurrent units (GRUs); convolutional neural networks (CNNs); temporal fusion transformers (TFTs); and combined architectures. To achieve this goal, we created our own benchmarks and used tools that automatically select network architectures and parameters. Data were obtained as part of an NCBR grant (the National Center for Research and Development, Poland) and contain daily records of all recorded parameters from individual solar and wind farms over the past three years. The experimental results indicate that the LSTM models significantly outperformed the other models in terms of forecasting accuracy. In this paper, multilayer deep neural network (DNN) architectures are described, and results are provided for all the methods. This publication is based on the results obtained within the research and development project “POIR.01.01.01-00-0506/21”, realized in the years 2022–2023. The project was co-financed by the European Union under the Smart Growth Operational Programme 2014–2020.
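
As an illustration of the kind of baseline benchmarked above, here is a minimal sketch of an LSTM day-ahead production forecaster in PyTorch; the window length, feature count, and layer sizes are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """Minimal LSTM baseline: a window of daily farm measurements -> next-day output."""
    def __init__(self, n_features=8, hidden=64, n_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=n_layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # predict a single production value

    def forward(self, x):                  # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # hidden state of the last day in the window

model = LSTMForecaster()
x = torch.randn(32, 30, 8)                 # 32 samples, 30-day window, 8 features
print(model(x).shape)                      # torch.Size([32, 1])
```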

31 pages, 7723 KiB  
Article
A Hybrid CNN–GRU–LSTM Algorithm with SHAP-Based Interpretability for EEG-Based ADHD Diagnosis
by Makbal Baibulova, Murat Aitimov, Roza Burganova, Lazzat Abdykerimova, Umida Sabirova, Zhanat Seitakhmetova, Gulsiya Uvaliyeva, Maksym Orynbassar, Aislu Kassekeyeva and Murizah Kassim
Algorithms 2025, 18(8), 453; https://doi.org/10.3390/a18080453 - 22 Jul 2025
Viewed by 482
Abstract
This study proposes an interpretable hybrid deep learning framework for classifying attention deficit hyperactivity disorder (ADHD) using EEG signals recorded during cognitively demanding tasks. The core architecture integrates convolutional neural networks (CNNs), gated recurrent units (GRUs), and long short-term memory (LSTM) layers to jointly capture spatial and temporal dynamics. In addition to the final hybrid architecture, the CNN–GRU–LSTM model alone demonstrates excellent accuracy (99.63%) with minimal variance, making it a strong baseline for clinical applications. To evaluate the role of global attention mechanisms, transformer encoder models with two and three attention blocks, along with a spatiotemporal transformer employing 2D positional encoding, are benchmarked. A hybrid CNN–RNN–transformer model is introduced, combining convolutional, recurrent, and transformer-based modules into a unified architecture. To enhance interpretability, SHapley Additive exPlanations (SHAP) are employed to identify key EEG channels contributing to classification outcomes. Experimental evaluation using stratified five-fold cross-validation demonstrates that the proposed hybrid model achieves superior performance, with average accuracy exceeding 99.98%, F1-scores above 0.9999, and near-perfect AUC and Matthews correlation coefficients. In contrast, transformer-only models, despite high training accuracy, exhibit reduced generalization. SHAP-based analysis confirms the hybrid model’s clinical relevance. This work advances the development of transparent and reliable EEG-based tools for pediatric ADHD screening.
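
To give a concrete picture of such a stack, below is a minimal CNN–GRU–LSTM pipeline over multichannel EEG windows in PyTorch; the channel count, layer sizes, and kernel width are assumed for illustration and are not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class CNNGRULSTM(nn.Module):
    """Sketch of a CNN -> GRU -> LSTM stack for EEG classification (hypothetical sizes)."""
    def __init__(self, n_channels=19, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(                    # local spatial-temporal features
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.gru = nn.GRU(32, 64, batch_first=True)   # short-range temporal dynamics
        self.lstm = nn.LSTM(64, 64, batch_first=True) # longer-range dependencies
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        h = self.conv(x).transpose(1, 2)   # -> (batch, time, 32)
        h, _ = self.gru(h)
        h, _ = self.lstm(h)
        return self.fc(h[:, -1])           # logits from the final time step

model = CNNGRULSTM()
print(model(torch.randn(4, 19, 256)).shape)   # torch.Size([4, 2])
```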

24 pages, 26672 KiB  
Article
Short-Term Electric Load Forecasting Using Deep Learning: A Case Study in Greece with RNN, LSTM, and GRU Networks
by Vasileios Zelios, Paris Mastorocostas, George Kandilogiannakis, Anastasios Kesidis, Panagiota Tselenti and Athanasios Voulodimos
Electronics 2025, 14(14), 2820; https://doi.org/10.3390/electronics14142820 - 14 Jul 2025
Viewed by 598
Abstract
The increasing volatility in energy markets, particularly in Greece where electricity costs peaked at 236 EUR/MWh in 2022, underscores the urgent need for accurate short-term load forecasting models. In this study, the application of deep learning techniques, specifically Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) models, to forecast hourly electricity demand is investigated. The proposed models were trained on historical load data from the Greek power system spanning the years 2013 to 2016. Various deep learning architectures were implemented, and their forecasting performance was evaluated using statistical metrics such as Root Mean Squared Error (RMSE) and Mean Absolute Percentage Error (MAPE). The experiments utilized multiple time horizons (1 h, 2 h, 24 h) and input sequence lengths (6 h to 168 h) to assess model accuracy and robustness. The best-performing GRU model achieved an RMSE of 83.2 MWh and a MAPE of 1.17% for 1 h ahead forecasting, outperforming both LSTM and RNN in terms of accuracy and computational efficiency. The predicted values were integrated into a dynamic Power BI dashboard to enable real-time visualization and decision support. These findings demonstrate the potential of deep learning architectures, particularly GRUs, for operational load forecasting and their applicability to intelligent energy systems in a market-strained environment.
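
The study's horizons and input lengths reduce to how the supervised windows are cut from the hourly series; a minimal sketch of that windowing in NumPy (window and horizon values are illustrative) is:

```python
import numpy as np

def make_windows(series, window=24, horizon=1):
    """Slice an hourly load series into (input window, target) pairs.

    window  : number of past hours fed to the model (6 to 168 in the study)
    horizon : how many hours ahead the target lies (1, 2, or 24 in the study)
    """
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return np.array(X), np.array(y)

load = np.random.rand(8760)            # one year of hourly demand (placeholder data)
X, y = make_windows(load, window=168, horizon=24)
print(X.shape, y.shape)                # (8569, 168) (8569,)
```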

13 pages, 531 KiB  
Article
Adaptive Motion Planning Leveraging Speed-Differentiated Prediction for Mobile Robots in Dynamic Environments
by Tengfei Liu, Zihe Wang, Jiazheng Hu, Shuling Zeng, Xiaoxu Liu and Tan Zhang
Appl. Sci. 2025, 15(13), 7551; https://doi.org/10.3390/app15137551 - 4 Jul 2025
Viewed by 305
Abstract
This paper presents a novel motion planning framework for mobile robots operating in dynamic and uncertain environments, with an emphasis on accurate trajectory prediction and safe, efficient obstacle avoidance. The proposed approach integrates search-based planning with deep learning techniques to improve both robustness and interpretability. A multi-sensor perception module is designed to classify obstacles as either static or dynamic, thereby enhancing environmental awareness and planning reliability. To address the challenge of motion prediction, we introduce the K-GRU Kalman method, which first applies K-means clustering to distinguish between high-speed and low-speed dynamic obstacles, then models their trajectories using a combination of Kalman filtering and gated recurrent units (GRUs). Compared to state-of-the-art RNN- and LSTM-based predictors, the proposed method achieves superior accuracy and generalization. Extensive experiments in both simulated and real-world scenarios of varying complexity demonstrate the effectiveness of the framework. The results show an average planning success rate exceeding 60%, along with notable improvements in path safety and smoothness, validating the contribution of each module within the system.
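
As a rough illustration of the speed-differentiated idea (not the authors' implementation), obstacles can be partitioned by estimated speed with K-means and routed to regime-specific predictors:

```python
import numpy as np
from sklearn.cluster import KMeans

# Estimated speeds (m/s) of tracked dynamic obstacles (placeholder values).
speeds = np.array([[0.3], [0.5], [2.1], [1.9], [0.4], [2.4]])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(speeds)
fast_label = int(np.argmax(km.cluster_centers_))

for speed, label in zip(speeds.ravel(), km.labels_):
    # In the paper's scheme, each group feeds a Kalman-filter + GRU predictor
    # tuned to its speed regime; here we only print the routing decision.
    group = "high-speed" if label == fast_label else "low-speed"
    print(f"obstacle at {speed:.1f} m/s -> {group} predictor")
```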

26 pages, 10233 KiB  
Article
Time-Series Forecasting Method Based on Hierarchical Spatio-Temporal Attention Mechanism
by Zhiguo Xiao, Junli Liu, Xinyao Cao, Ke Wang, Dongni Li and Qian Liu
Sensors 2025, 25(13), 4001; https://doi.org/10.3390/s25134001 - 26 Jun 2025
Viewed by 572
Abstract
In the field of intelligent decision-making, time-series data collected by sensors serve as the core carrier for interaction between the physical and digital worlds. Accurate analysis is the cornerstone of decision-making in critical scenarios such as industrial monitoring and intelligent transportation. However, the inherent spatio-temporal coupling and cross-period long-range dependencies of sensor data cause traditional time-series prediction methods to face performance bottlenecks in feature decoupling and multi-scale modeling. This study proposes a Spatio-Temporal Attention-Enhanced Network (TSEBG). Breaking through traditional structural designs, the model employs a Squeeze-and-Excitation Network (SENet) to reconstruct the convolutional layers of the Temporal Convolutional Network (TCN), strengthening the feature expression of key time steps through dynamic channel weight allocation to address the redundancy of traditional causal convolutions in local pattern capture. A Bidirectional Gated Recurrent Unit (BiGRU) variant based on a global attention mechanism is designed, leveraging the collaboration between gating units and attention weights to mine cross-period long-distance dependencies and effectively alleviate the vanishing-gradient problem of recurrent neural network (RNN)-type models in multi-scale time-series analysis. A hierarchical feature fusion architecture is constructed to achieve multi-dimensional alignment of local spatial and global temporal features; through residual connections and dynamic adjustment of attention weights, hierarchical semantic representations are output. Experiments show that TSEBG outperforms current dominant models in single-step time-series prediction tasks in terms of accuracy and performance, with a cross-dataset R² standard deviation of only 3.7%, demonstrating excellent generalization stability. It provides a novel theoretical framework for feature decoupling and multi-scale modeling of complex time-series data.
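
To make the SENet-based channel reweighting concrete, here is a generic squeeze-and-excitation block for 1D feature maps in PyTorch, offered as a standard formulation rather than TSEBG's exact layer:

```python
import torch
import torch.nn as nn

class SEBlock1d(nn.Module):
    """Squeeze-and-excitation over the channels of a (batch, channels, time) map."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):              # x: (batch, channels, time)
        w = self.fc(x.mean(dim=2))     # squeeze: global average over time
        return x * w.unsqueeze(-1)     # excite: rescale each channel dynamically

x = torch.randn(8, 32, 100)
print(SEBlock1d(32)(x).shape)          # torch.Size([8, 32, 100])
```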

14 pages, 2070 KiB  
Article
Comparative Analysis of Machine/Deep Learning Models for Single-Step and Multi-Step Forecasting in River Water Quality Time Series
by Hongzhe Fang, Tianhong Li and Huiting Xian
Water 2025, 17(13), 1866; https://doi.org/10.3390/w17131866 - 23 Jun 2025
Viewed by 562
Abstract
There is a lack of a systematic comparison framework that can assess models in both single-step and multi-step forecasting settings while balancing accuracy, training efficiency, and prediction horizon. This study aims to evaluate the predictive capabilities of machine learning and deep learning models in water quality time-series forecasting. It used 22 months of data at a 4 h interval from two monitoring stations located in a tributary of the Pearl River. Seven models, specifically Support Vector Regression (SVR), XGBoost, K-Nearest Neighbors (KNN), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM) Network, Gated Recurrent Unit (GRU), and PatchTST, were employed in this study. In single-step forecasting, the LSTM Network achieved superior accuracy for a univariate feature set, attaining an overall 22.0% (Welch’s t-test, p = 3.03 × 10⁻⁷) reduction in Mean Squared Error (MSE) compared with the machine learning models (SVR, XGBoost, KNN), while the RNN demonstrated significantly reduced training time. For a multivariate feature set, the deep learning models exhibited comparable accuracy, but no model achieved a significant increase in accuracy compared to the univariate scenario. The KNN model underperformed across error evaluation metrics, with the lowest accuracy, and the XGBoost model exhibited the highest computational complexity. In multi-step forecasting, the direct multi-step PatchTST model outperformed the iterated multi-step models (RNN, LSTM, GRU), with a reduced time-delay effect and a slower decrease in accuracy with increasing prediction length, but it still required task-specific adjustments to be well suited to river water quality time-series forecasting. The findings provide actionable guidelines for model selection, balancing predictive accuracy, training efficiency, and forecasting horizon requirements in environmental time-series analysis.
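
The iterated-versus-direct distinction drawn here can be stated in a few lines; the sketch below (framework-agnostic, with a stand-in model) shows the iterated strategy used by the RNN, LSTM, and GRU baselines, where each one-step prediction is fed back as input:

```python
import numpy as np

def iterated_forecast(model, history, steps):
    """Recursive multi-step forecasting: feed each prediction back into the window."""
    window = list(history)
    preds = []
    for _ in range(steps):
        yhat = model(np.array(window))    # one-step-ahead model
        preds.append(yhat)
        window = window[1:] + [yhat]      # slide the window forward by one step
    return preds

toy_model = lambda w: float(w.mean())     # stand-in one-step model
print(iterated_forecast(toy_model, [1.0, 2.0, 3.0, 4.0], steps=3))
# A direct model such as PatchTST instead emits all `steps` values in one pass,
# avoiding the error accumulation that iteration introduces.
```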

26 pages, 2845 KiB  
Article
Short-Term Energy Consumption Forecasting Analysis Using Different Optimization and Activation Functions with Deep Learning Models
by Mehmet Tahir Ucar and Asim Kaygusuz
Appl. Sci. 2025, 15(12), 6839; https://doi.org/10.3390/app15126839 - 18 Jun 2025
Viewed by 763
Abstract
Modelling events that change over time is one of the most difficult problems in data analysis, and forecasting time-varying electric power values is an important instance of it. Regression, machine learning, and deep learning methods are used to learn different patterns from data and develop consumption prediction models. The aim of this study is to determine the most successful models for short-term power consumption prediction with deep learning and to achieve the highest prediction accuracy. First, the data were evaluated and organized with exploratory data analysis (EDA) on an existing dataset, and the features of the data were extracted. Experiments were carried out on long short-term memory (LSTM), gated recurrent unit (GRU), simple recurrent neural network (SimpleRNN), and bidirectional long short-term memory (BiLSTM) architectures. The four architectures were first combined with 11 different optimization methods, where a high R² score of 0.9972 was achieved. This experiment was then repeated with different numbers of epochs. Afterwards, the study was extended to 264 separate models produced by combining the four architectures, 11 optimization methods, and six activation functions. The results of all these experiments were evaluated using the root mean square error (RMSE), mean absolute error (MAE), and R² score indexes, and graphs of the R² scores are presented. Finally, the 10 most successful configurations are listed.
(This article belongs to the Section Computing and Artificial Intelligence)
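
The 264-model sweep is simply the Cartesian product of the three design choices; a compact sketch of such a loop (the optimizer and activation names are an illustrative list, not necessarily the study's eleven and six) is:

```python
from itertools import product

architectures = ["LSTM", "GRU", "SimpleRNN", "BiLSTM"]              # 4
optimizers = ["adam", "sgd", "rmsprop", "adagrad", "adadelta",      # 11 (illustrative)
              "adamax", "nadam", "ftrl", "adamw", "lion", "lamb"]
activations = ["relu", "tanh", "sigmoid", "elu", "selu", "swish"]   # 6

configs = list(product(architectures, optimizers, activations))
print(len(configs))   # 4 * 11 * 6 = 264 models, matching the study's count

for arch, opt, act in configs:
    # A build_and_train(arch, opt, act) call would fit one model per combination
    # and log its RMSE, MAE, and R² score on the held-out split.
    pass
```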

21 pages, 4530 KiB  
Article
Advancements in Hydrological Modeling: The Role of bRNN-CNN-GRU in Predicting Dam Reservoir Inflow Patterns
by Erfan Abdi, Mohammad Taghi Sattari, Adam Milewski and Osama Ragab Ibrahim
Water 2025, 17(11), 1660; https://doi.org/10.3390/w17111660 - 29 May 2025
Cited by 2 | Viewed by 938
Abstract
Accurate reservoir inflow predictions are critical for effective flood control and optimizing hydropower generation, thereby enhancing water resource management. This study introduces an advanced hydrological modeling approach that leverages a basic recurrent neural network (bRNN) combined with a convolutional neural network (CNN) and gated recurrent units (GRUs) (bRNN-CNN-GRU), a GRU–long short-term memory hybrid (GRU-LSTM), and a deep neural network (DNN) to predict daily reservoir inflow at the Sefid Roud Dam. Using historical data from 2018 to 2024, this study examined two multivariate scenarios: one incorporating water parameters such as water level, evaporation, and temperature extremes, and another focused solely on inflow delays. The dataset was split into training (80%) and testing (20%) sets. The models were evaluated using the root mean square error (RMSE), correlation coefficient (r), and Nash–Sutcliffe efficiency (NSE). Results demonstrated that, while all models performed better under the scenario incorporating inflow delays, the bRNN-CNN-GRU model achieved the best performance, with an RMSE of 0.71, r of 0.97, and NSE of 0.95, outperforming both the DNN and GRU-LSTM models. These findings highlight significant advancements in hydrological modeling and affirm the applicability of the bRNN-CNN-GRU model for improved reservoir management in diverse settings.
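
Of the three metrics, the Nash–Sutcliffe efficiency is the least standard outside hydrology; a short reference implementation with placeholder data is given below.

```python
import numpy as np

def nse(observed, simulated):
    """Nash–Sutcliffe efficiency: 1 is a perfect fit; 0 matches the mean model."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

obs = np.array([10.0, 12.0, 14.0, 13.0, 11.0])   # observed inflows (placeholder)
sim = np.array([10.5, 11.8, 13.6, 13.2, 11.1])   # model predictions (placeholder)
print(round(nse(obs, sim), 3))                   # 0.95
```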

16 pages, 1174 KiB  
Article
Natural Language Processing for Aviation Safety: Predicting Injury Levels from Incident Reports in Australia
by Aziida Nanyonga, Keith Joiner, Ugur Turhan and Graham Wild
Modelling 2025, 6(2), 40; https://doi.org/10.3390/modelling6020040 - 28 May 2025
Viewed by 1197
Abstract
This study investigates the application of advanced deep learning models for the classification of aviation safety incidents, focusing on four models: Simple Recurrent Neural Network (sRNN), Gated Recurrent Unit (GRU), Bidirectional Long Short-Term Memory (BLSTM), and DistilBERT. The models were evaluated based on key performance metrics, including accuracy, precision, recall, and F1-score. DistilBERT achieved perfect performance with an accuracy of 1.00 across all metrics, while BLSTM demonstrated the highest performance among the deep learning models, with an accuracy of 0.9896, followed by GRU (0.9893) and sRNN (0.9887). Class-wise evaluations revealed that DistilBERT excelled across all injury categories, with BLSTM outperforming the other deep learning models, particularly in detecting fatal injuries, achieving a precision of 0.8684 and an F1-score of 0.7952. The study also addressed the challenges of class imbalance by applying class weighting, although the use of more sophisticated techniques, such as focal loss, is recommended for future work. This research highlights the potential of transformer-based models for aviation safety classification and provides a foundation for future research to improve model interpretability and generalizability across diverse datasets. These findings contribute to the growing body of research on applying deep learning techniques to aviation safety and underscore opportunities for further exploration.
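
The class weighting mentioned above is commonly computed with scikit-learn's utility; a small sketch with made-up injury-level labels (the actual label scheme and counts belong to the dataset and are not shown here):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Hypothetical injury levels (0 = nil, 1 = minor, 2 = serious, 3 = fatal),
# heavily imbalanced as incident-report corpora tend to be.
y = np.array([0] * 900 + [1] * 70 + [2] * 25 + [3] * 5)

classes = np.unique(y)
weights = compute_class_weight(class_weight="balanced", classes=classes, y=y)
print(dict(zip(classes.tolist(), weights.round(2))))
# Rare classes (e.g., fatal) receive proportionally larger loss weights.
```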

30 pages, 6996 KiB  
Article
Time-Series Prediction of Failures in an Industrial Assembly Line Using Artificial Learning
by Mert Can Sen and Mahmut Alkan
Appl. Sci. 2025, 15(11), 5984; https://doi.org/10.3390/app15115984 - 26 May 2025
Viewed by 458
Abstract
This study evaluates the efficacy of six artificial learning (AL) models—nonlinear autoregressive (NAR), long short-term memory (LSTM), adaptive neuro-fuzzy inference system (ANFIS), gated recurrent unit (GRU), multilayer perceptron (MLP), and CNN-RNN hybrid networks—for time-series failure prediction in aerospace assembly lines. The data consist of 45,654 records of failure configurations. The models are trained to predict failures and assessed via error metrics (RMSE, MAE, MAPE), residual analysis, variance analysis, and computational efficiency. The results indicate that the NAR and MLP models achieve, respectively, the lowest residuals (clustered near zero) and minimal variance, demonstrating robust calibration and stability. MLP exhibits strong accuracy (MAE = 2.122, MAPE = 0.876%, RMSE = 1.418, and ME = 1.145) but higher residual variability, while LSTM and CNN-RNN show sensitivity to data noise and computational inefficiency. ANFIS balances interpretability and performance but requires extensive training iterations. The study underscores NAR as optimal for precision-critical aerospace applications, where error minimisation and generalisability are paramount. However, the reliance on a single failure-related variable, “configuration”, and the exclusion of exogenous factors may constrain holistic failure prediction. These findings advance predictive maintenance strategies in high-stakes manufacturing environments, with future work integrating multivariable datasets and domain-specific constraints.

27 pages, 78121 KiB  
Article
Graph-Based Stock Volatility Forecasting with Effective Transfer Entropy and Hurst-Based Regime Adaptation
by Sangheon Lee and Poongjin Cho
Fractal Fract. 2025, 9(6), 339; https://doi.org/10.3390/fractalfract9060339 - 24 May 2025
Viewed by 1020
Abstract
This study proposes a novel hybrid model for stock volatility forecasting by integrating directional and temporal dependencies among financial time series and market regime changes into a unified modeling framework. Specifically, we design a novel Hurst Exponent Effective Transfer Entropy Graph Neural Network (H-ETE-GNN) model that captures directional and asymmetric interactions based on Effective Transfer Entropy (ETE), and incorporates regime change detection using the Hurst exponent to reflect evolving global market conditions. To assess the effectiveness of the proposed approach, we compared the forecast performance of the hybrid GNN model with GNN models constructed using Transfer Entropy (TE), Granger causality, and Pearson correlation—each representing different measures of causality and correlation among time series. The empirical analysis was based on daily price data of 10 major country-level ETFs over a 19-year period (2006–2024), collected via Yahoo Finance. Additionally, we implemented recurrent neural network (RNN)-based models such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) under the same experimental conditions to evaluate their performance relative to the GNN-based models. The effect of incorporating regime changes was further examined by comparing the model performance with and without Hurst-exponent-based detection. The experimental results demonstrated that the hybrid GNN-based approach effectively captured the structure of information flow between time series, leading to substantial improvements in the forecast performance for one-day-ahead realized volatility. Furthermore, incorporating regime change detection via the Hurst exponent enhanced the model’s adaptability to structural shifts in the market. This study highlights the potential of H-ETE-GNN in jointly modeling interactions between time series and market regimes, offering a promising direction for more accurate and robust volatility forecasting in complex financial environments.
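
The regime detector rests on the Hurst exponent; one standard estimator, rescaled-range (R/S) analysis, is sketched below in compact form (not necessarily the estimator the authors use):

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent via rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    sizes, rs = [], []
    n = min_chunk
    while n <= len(x) // 2:
        vals = []
        for i in range(0, len(x) - n + 1, n):     # non-overlapping chunks of size n
            c = x[i:i + n]
            dev = np.cumsum(c - c.mean())
            s = c.std()
            if s > 0:
                vals.append((dev.max() - dev.min()) / s)
        sizes.append(n)
        rs.append(np.mean(vals))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)  # log(R/S) ~ H * log(n)
    return slope

rng = np.random.default_rng(0)
print(round(hurst_rs(rng.standard_normal(4096)), 2))  # ~0.5 for uncorrelated noise
```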

21 pages, 1198 KiB  
Article
Modeling the Ningbo Container Freight Index Through Deep Learning: Toward Sustainable Shipping and Regional Economic Resilience
by Haochuan Wu and Chi Gong
Sustainability 2025, 17(10), 4655; https://doi.org/10.3390/su17104655 - 19 May 2025
Cited by 1 | Viewed by 736
Abstract
With the expansion of global trade, China’s commodity futures market has become increasingly intertwined with regional maritime logistics. The Ningbo Containerized Freight Index (NCFI), as a key regional indicator, reflects freight rate fluctuations and logistics efficiency in real time. However, limited research has explored how commodity futures data can enhance NCFI forecasting accuracy. This study aims to bridge that gap by proposing a hybrid deep learning model that combines recurrent neural networks (RNNs) and gated recurrent units (GRUs) to predict NCFI trends. A comprehensive dataset comprising 28,830 daily observations from March 2017 to August 2022 is constructed, incorporating the futures prices of key commodities (e.g., rebar, copper, gold, and soybeans) and market indices, alongside Clarksons containership earnings. The data undergo standardized preprocessing, feature selection via Pearson correlation analysis, and temporal partitioning into training (80%) and testing (20%) sets. The model is evaluated on both sets using multiple metrics: mean absolute error (MAE), mean squared error (MSE), root mean square error (RMSE), and R². The results show that the RNN–GRU model outperforms standalone RNN and GRU architectures, achieving an R² of 0.9518 on the test set with low MAE and RMSE values. These findings confirm that integrating cross-market financial indicators with deep sequential modeling enhances the interpretability and accuracy of regional freight forecasting. This study contributes to sustainable shipping strategies and provides decision-making tools for logistics firms, port operators, and policymakers seeking to improve resilience and data-driven planning in maritime transport.

20 pages, 3977 KiB  
Article
Investigation of Multiple Hybrid Deep Learning Models for Accurate and Optimized Network Slicing
by Ahmed Raoof Nasser and Omar Younis Alani
Computers 2025, 14(5), 174; https://doi.org/10.3390/computers14050174 - 2 May 2025
Viewed by 644
Abstract
In 5G wireless communication, network slicing is considered one of the key network elements; it aims to provide services with high availability, low latency, maximized data throughput, and ultra-reliability while saving network resources. Due to the exponential growth of cellular networks in both the number of users and new applications, delivering the desired Quality of Service (QoS) requires an accurate and fast network slicing mechanism. In this paper, hybrid deep learning (DL) approaches are investigated using convolutional neural networks (CNNs), Long Short-Term Memory (LSTM), recurrent neural networks (RNNs), and Gated Recurrent Units (GRUs) to provide an accurate network slicing model. The proposed hybrid approaches are CNN-LSTM, CNN-RNN, and CNN-GRU, where a CNN is initially used for effective feature extraction and then LSTM, RNN, and GRU layers are utilized to achieve accurate network slice classification. To optimize model performance in terms of accuracy and model complexity, the hyperparameters of each algorithm are selected using Bayesian optimization. The results illustrate that the optimized hybrid CNN-GRU algorithm provides the best performance, with high slicing accuracy (99.31%) and low model complexity.
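
The Bayesian hyperparameter selection can be sketched with the Optuna library (an assumption for illustration; the paper does not state its tooling), using a dummy objective in place of the real train-and-evaluate loop:

```python
import optuna

def objective(trial):
    # Hypothetical search space for a CNN-GRU slice classifier.
    n_filters = trial.suggest_int("n_filters", 16, 128)
    gru_units = trial.suggest_int("gru_units", 32, 256)
    lr = trial.suggest_float("lr", 1e-4, 1e-2, log=True)
    # In practice, a train_and_eval(n_filters, gru_units, lr) call would return
    # the validation slicing accuracy; a dummy score stands in so the sketch runs.
    return 1.0 / (1.0 + abs(n_filters - 64) + abs(gru_units - 128) + lr)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print(study.best_params)
```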

22 pages, 561 KiB  
Article
Opinion Mining and Analysis Using Hybrid Deep Neural Networks
by Adel Hidri, Suleiman Ali Alsaif, Muteeb Alahmari, Eman AlShehri and Minyar Sassi Hidri
Technologies 2025, 13(5), 175; https://doi.org/10.3390/technologies13050175 - 28 Apr 2025
Viewed by 568
Abstract
Understanding customer attitudes has become a critical component of decision-making due to the growing influence of social media and e-commerce. Text-based opinions are the most structured form of feedback and hence play an important role in sentiment analysis. Most existing methods, which include lexicon-based approaches and traditional machine learning techniques, are insufficient for handling contextual nuance and scalability. While the latter are limited in performance and generalization, deep learning (DL) has brought improvements, especially in capturing semantic relationships with recurrent neural networks (RNNs) and convolutional neural networks (CNNs). The aim of this study is to enhance opinion mining by introducing a hybrid deep neural network model that combines bidirectional gated recurrent unit (BGRU) and long short-term memory (LSTM) layers to improve sentiment analysis, particularly addressing challenges such as contextual nuance, scalability, and class imbalance. To substantiate the efficacy of the proposed model, we conducted comprehensive experiments on benchmark datasets encompassing IMDB movie critiques and Amazon product evaluations. The introduced hybrid BGRU-LSTM (HBGRU-LSTM) architecture attained a testing accuracy of 95%, exceeding the performance of traditional DL frameworks such as LSTM (93.06%), CNN+LSTM (93.31%), and GRU+LSTM (92.20%). Moreover, our model exhibited a noteworthy enhancement in recall for negative sentiments, rising from 86% (unbalanced dataset) to 96% (balanced dataset), thereby ensuring a more balanced sentiment classification. Furthermore, the model reduced misclassification loss from 20.24% on the unbalanced dataset to 13.3% on the balanced one, signifying enhanced generalization and resilience.
(This article belongs to the Section Information and Communication Technologies)

17 pages, 875 KiB  
Article
Should We Reconsider RNNs for Time-Series Forecasting?
by Vahid Naghashi, Mounir Boukadoum and Abdoulaye Banire Diallo
AI 2025, 6(5), 90; https://doi.org/10.3390/ai6050090 - 25 Apr 2025
Viewed by 1119
Abstract
(1) Background: In recent years, Transformer-based models have dominated the time-series forecasting domain, overshadowing recurrent neural networks (RNNs) such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks. While Transformers demonstrate superior performance, their high computational cost limits their practical application in resource-constrained settings. (2) Methods: In this paper, we reconsider RNNs—specifically the GRU architecture—as an efficient alternative for time-series forecasting, leveraging the architecture’s sequential representation capability to capture cross-channel dependencies effectively. Our model also utilizes a feed-forward layer right after the GRU module to represent temporal dependencies and aggregates it with the GRU layers to predict future values of a given time series. (3) Results and conclusions: Our extensive experiments on different real-world datasets show that our inverted GRU (iGRU) model achieves promising results in terms of error metrics and memory efficiency, challenging or surpassing state-of-the-art models on various benchmarks.
(This article belongs to the Section AI Systems: Theory and Applications)
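
On our reading of the abstract, the "inverted" idea treats each channel's whole history as one token, so the GRU runs across channels rather than across time; a minimal sketch of that inversion with assumed sizes (not the authors' implementation) is:

```python
import torch
import torch.nn as nn

class InvertedGRU(nn.Module):
    """Sketch: embed each variate's full history, then run a GRU across variates."""
    def __init__(self, seq_len=96, hidden=128, horizon=24):
        super().__init__()
        self.embed = nn.Linear(seq_len, hidden)    # one token per channel
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x):                          # x: (batch, seq_len, n_channels)
        tokens = self.embed(x.transpose(1, 2))     # (batch, n_channels, hidden)
        h, _ = self.gru(tokens)                    # mixes information across channels
        h = h + self.ff(h)                         # feed-forward for temporal patterns
        return self.head(h).transpose(1, 2)        # (batch, horizon, n_channels)

m = InvertedGRU()
print(m(torch.randn(2, 96, 7)).shape)              # torch.Size([2, 24, 7])
```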
