Search Results (977)

Search Parameters:
Keywords = gated recurrent unit neural network

10 pages, 1464 KB  
Communication
A Signal Detection Method Based on BiGRU for FSO Communications with Atmospheric Turbulence
by Zhenning Yi, Zhiyong Xu, Jianhua Li, Jingyuan Wang, Jiyong Zhao, Yang Su and Yimin Wang
Photonics 2025, 12(10), 980; https://doi.org/10.3390/photonics12100980 - 2 Oct 2025
Abstract
In free space optical (FSO) communications, signals are affected by turbulence when transmitted through the atmosphere. Fluctuations in intensity caused by atmospheric turbulence lead to an increase in the bit error rate of FSO systems. Deep learning (DL), as a current research hotspot, offers a promising approach to improve the accuracy of signal detection. In this paper, we propose a signal detection method based on a bidirectional gated recurrent unit (BiGRU) neural network for FSO communications. The proposed detection method considers the temporal correlation of received signals due to the properties of the BiGRU neural network, which is not available in existing detection methods based on DL. In addition, the proposed detection method does not require channel state information (CSI) for channel estimation, unlike maximum likelihood (ML) detection technology with perfect CSI. Numerical results demonstrate that the proposed BiGRU-based detector achieves significant improvements in bit error rate (BER) performance compared with a multilayer perceptron (MLP)-based detector. Specifically, under weak turbulence conditions, the BiGRU-based detector achieves an approximate 2 dB signal-to-noise ratio (SNR) gain at a target BER of 10⁻⁶ compared to the MLP-based detector. Under moderate turbulence conditions, it achieves an approximate 6 dB SNR gain at the same target BER of 10⁻⁶. Under strong turbulence conditions, the proposed detector obtains a 6 dB SNR gain at a target BER of 10⁻⁴. Additionally, it outperforms conventional methods by more than one order of magnitude in BER under the same turbulence and SNR conditions.
(This article belongs to the Section Optical Communication and Network)
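As a rough illustration of the detector described above, the following is a minimal PyTorch sketch of a bidirectional GRU producing per-symbol hard decisions from received samples. The layer sizes, the binary on/off-style output, and the random input are illustrative assumptions; only the BiGRU-over-received-signal idea comes from the abstract.

```python
import torch
import torch.nn as nn

class BiGRUDetector(nn.Module):
    def __init__(self, input_size=1, hidden_size=32):
        super().__init__()
        self.gru = nn.GRU(input_size, hidden_size,
                          batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_size, 1)  # one logit per time step

    def forward(self, x):              # x: (batch, seq_len, 1) received samples
        h, _ = self.gru(x)             # h: (batch, seq_len, 2 * hidden_size)
        return self.fc(h).squeeze(-1)  # per-symbol logits

model = BiGRUDetector()
rx = torch.randn(8, 64, 1)             # 8 bursts of 64 noisy samples
bits = torch.sigmoid(model(rx)) > 0.5  # hard decisions, no CSI needed
```

The bidirectional pass lets each decision use both past and future samples, which is the temporal-correlation property the abstract credits for the BER gains.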

24 pages, 22010 KB  
Article
Improving the Temporal Resolution of Land Surface Temperature Using Machine and Deep Learning Models
by Mohsen Niroomand, Parham Pahlavani, Behnaz Bigdeli and Omid Ghorbanzadeh
Geomatics 2025, 5(4), 50; https://doi.org/10.3390/geomatics5040050 - 1 Oct 2025
Abstract
Land Surface Temperature (LST) is a critical parameter for analyzing urban heat islands, surface–atmosphere interactions, and environmental management. This study enhances the temporal resolution of LST data by leveraging machine learning and deep learning models. A novel methodology was developed using Landsat 8 thermal data and Sentinel-2 multispectral imagery to predict LST at finer temporal intervals in an urban setting. Although Sentinel-2 lacks a thermal band, its high-resolution multispectral data, when integrated with Landsat 8 thermal observations, provide valuable complementary information for LST estimation. Several models were employed for LST prediction, including Random Forest Regression (RFR), Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM) network, and Gated Recurrent Unit (GRU). Model performance was assessed using the coefficient of determination (R²) and Mean Absolute Error (MAE). The CNN model demonstrated the highest predictive capability, achieving an R² of 74.81% and an MAE of 1.588 °C. Feature importance analysis highlighted the role of spectral bands, spectral indices, topographic parameters, and land cover data in capturing the dynamic complexity of LST variations and directional patterns. A refined CNN model, trained with the features exhibiting the highest correlation with the reference LST, achieved an improved R² of 84.48% and an MAE of 1.19 °C. These results underscore the importance of a comprehensive analysis of the factors influencing LST, as well as the need to consider the specific characteristics of the study area. Additionally, a modified TsHARP approach was applied to enhance spatial resolution, though its accuracy remained lower than that of the CNN model. The study was conducted in Tehran, a rapidly urbanizing metropolis facing rising temperatures, heavy traffic congestion, rapid horizontal expansion, and low energy efficiency. The findings contribute to urban environmental management by providing high-temporal-resolution LST data, essential for mitigating urban heat islands and improving climate resilience.
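The feature-importance step mentioned above can be sketched with scikit-learn's RandomForestRegressor, the paper's RFR baseline. The feature names and synthetic data below are placeholders, not the study's actual Landsat 8/Sentinel-2 inputs.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # stand-ins for spectral, topographic, land-cover features
y = 20 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=500)  # pseudo-LST (deg C)

rfr = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["NDVI", "albedo", "elevation", "land_cover"],
                     rfr.feature_importances_):
    print(f"{name}: {imp:.3f}")  # rank features before retraining on the top ones
```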

19 pages, 2621 KB  
Article
A Lightweight and Efficient Deep Learning Model for Detection of Sector and Region in Three-Level Inverters
by Fatih Özen, Rana Ortaç Kabaoğlu and Tarık Veli Mumcu
Electronics 2025, 14(19), 3876; https://doi.org/10.3390/electronics14193876 - 29 Sep 2025
Abstract
In three-level inverters, high-accuracy, low-latency sector and region detection is of great importance for control and monitoring processes. This study aims to overcome the limitations of traditional methods and develop a model that can work in real time in industrial applications. In this study, various deep learning (DL) architectures are systematically evaluated, and a comprehensive performance comparison is performed to automate sector and region detection for inverter systems. The proposed approach aims to detect sectors (6 classes) and regions (3 classes) with high accuracy using Deep Neural Network (DNN), 1D Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU)-based DL architectures. The performance of the considered DL approaches was systematically evaluated with cross-validation, confusion matrices, and statistical tests. The proposed GRU-based model offers both computational efficiency and high classification performance with a low number of parameters compared to other models. The proposed model achieved 99.27% and 97.62% accuracy in sector and region detection, respectively, and provided a more optimized solution than many heavily structured state-of-the-art DL models. The results show that the GRU model exhibits statistically significant superior performance and support its potential for easy integration into hardware-based systems due to its low computational complexity. The comprehensive results show that DL-based approaches can be effectively used for sector and region detection in inverter systems, and that the GRU architecture in particular is a promising method.
(This article belongs to the Special Issue Application of Machine Learning in Power Electronics)
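A minimal sketch of the evaluation protocol described above (stratified cross-validation plus a confusion matrix) follows. The scikit-learn MLP is a stand-in classifier rather than the paper's GRU model, and the synthetic inverter features are placeholders.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 8))     # placeholder inverter voltage/current features
y = rng.integers(0, 6, size=600)  # 6 sector labels

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=1)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
pred = cross_val_predict(clf, X, y, cv=cv)  # out-of-fold predictions
print(accuracy_score(y, pred))
print(confusion_matrix(y, pred))            # 6 x 6 sector confusion matrix
```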

29 pages, 2068 KB  
Article
Voice-Based Early Diagnosis of Parkinson’s Disease Using Spectrogram Features and AI Models
by Danish Quamar, V. D. Ambeth Kumar, Muhammad Rizwan, Ovidiu Bagdasar and Manuella Kadar
Bioengineering 2025, 12(10), 1052; https://doi.org/10.3390/bioengineering12101052 - 29 Sep 2025
Abstract
Parkinson’s disease (PD) is a progressive neurodegenerative disorder that significantly affects motor functions, including speech production. Voice analysis offers a less invasive, faster, and more cost-effective approach for diagnosing and monitoring PD over time. This research introduces an automated system to distinguish between PD and non-PD individuals based on speech signals using state-of-the-art signal processing and machine learning (ML) methods. A publicly available voice dataset (Dataset 1, 81 samples) containing speech recordings from PD patients and non-PD individuals was used for model training and evaluation. Additionally, a small supplementary dataset (Dataset 2, 15 samples) was created, although excluded from the experiments, to illustrate potential future extensions of this work. Features such as Mel-frequency cepstral coefficients (MFCCs), spectrograms, Mel spectrograms, and waveform representations were extracted to capture key vocal impairments related to PD, including diminished vocal range, weak harmonics, elevated spectral entropy, and impaired formant structures. These extracted features were used to train and evaluate several ML models, including support vector machine (SVM), XGBoost, and logistic regression, as well as deep learning (DL) architectures such as deep neural networks (DNN), convolutional neural networks (CNN) combined with long short-term memory (LSTM), CNN + gated recurrent unit (GRU), and bidirectional LSTM (BiLSTM). Experimental results show that DL models, particularly BiLSTM, outperform traditional ML models, achieving 97% accuracy and an AUC of 0.95. The comprehensive feature extraction from both datasets enabled robust classification of PD and non-PD speech signals. These findings highlight the potential of integrating acoustic features with DL methods for early diagnosis and monitoring of Parkinson’s disease.
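The acoustic features listed above (MFCCs, Mel spectrograms) are commonly extracted with librosa; a hedged sketch is below. The file path, sampling rate, and the mean/std pooling into a fixed-length vector are our assumptions.

```python
import numpy as np
import librosa

y, sr = librosa.load("voice_sample.wav", sr=16000)  # hypothetical recording
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # (13, n_frames)
mel = librosa.feature.melspectrogram(y=y, sr=sr)    # Mel spectrogram
log_mel = librosa.power_to_db(mel)                  # log-scaled, as typically fed to CNNs

# Simple fixed-length summary vector for the classical ML baselines (SVM, XGBoost, ...)
features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
```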

16 pages, 1620 KB  
Article
An Attention-Driven Hybrid Deep Network for Short-Term Electricity Load Forecasting in Smart Grid
by Jinxing Wang, Sihui Xue, Liang Lin, Benying Tan and Huakun Huang
Mathematics 2025, 13(19), 3091; https://doi.org/10.3390/math13193091 - 26 Sep 2025
Abstract
With the large-scale development of smart grids and the integration of renewable energy, the operational complexity and load volatility of power systems have increased significantly, placing higher demands on the accuracy and timeliness of electricity load forecasting. However, existing methods struggle to capture the nonlinear and volatile characteristics of load sequences, often exhibiting insufficient fitting and poor generalization in peak and abrupt change scenarios. To address these challenges, this paper proposes a deep learning model named CGA-LoadNet, which integrates a one-dimensional convolutional neural network (1D-CNN), gated recurrent units (GRUs), and a self-attention mechanism. The model is capable of simultaneously extracting local temporal features and long-term dependencies. To validate its effectiveness, we conducted experiments on a publicly available electricity load dataset. The experimental results demonstrate that CGA-LoadNet significantly outperforms baseline models, achieving the best performance on key metrics with an R² of 0.993, RMSE of 18.44, MAE of 13.94, and MAPE of 1.72, thereby confirming the effectiveness and practical potential of its architectural design. Overall, CGA-LoadNet more accurately fits actual load curves, particularly in complex regions, such as load peaks and abrupt changes, providing an efficient and robust solution for short-term load forecasting in smart grid scenarios.
(This article belongs to the Special Issue AI, Machine Learning and Optimization)
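A CGA-LoadNet-style sketch in PyTorch is shown below: a 1D-CNN front end for local patterns, a GRU for longer dependencies, and self-attention over the encoded sequence. All layer sizes and the single-step forecast head are illustrative assumptions, not the published configuration.

```python
import torch
import torch.nn as nn

class CGALoadNetSketch(nn.Module):
    def __init__(self, n_feat=1, hidden=64):
        super().__init__()
        self.cnn = nn.Conv1d(n_feat, hidden, kernel_size=3, padding=1)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):  # x: (batch, seq_len, n_feat)
        z = torch.relu(self.cnn(x.transpose(1, 2))).transpose(1, 2)  # local features
        h, _ = self.gru(z)         # long-term temporal encoding
        a, _ = self.attn(h, h, h)  # self-attention across time steps
        return self.head(a[:, -1]) # next-step load forecast

print(CGALoadNetSketch()(torch.randn(4, 96, 1)).shape)  # torch.Size([4, 1])
```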

15 pages, 603 KB  
Article
A Hybrid CNN–GRU Deep Learning Model for IoT Network Intrusion Detection
by Kuburat Oyeranti Adefemi, Murimo Bethel Mutanga and Oyeniyi Akeem Alimi
J. Sens. Actuator Netw. 2025, 14(5), 96; https://doi.org/10.3390/jsan14050096 - 26 Sep 2025
Abstract
Internet of Things (IoT) networks are constantly exposed to various security challenges and vulnerabilities, including manipulative data injections and cyberattacks. Traditional security measures are often inadequate, overburdened, and unreliable in adapting to the heterogeneous and diverse nature of IoT networks. This emphasizes the need for intelligent and effective methodologies. In recent times, deep learning models have been extensively used to monitor and detect intrusions in complex applications. Such models can effectively learn the dynamic characteristics of voluminous IoT datasets and deliver prompt, efficient predictions for decision-making. This study proposes a hybrid Convolutional Neural Network and Gated Recurrent Unit (CNN-GRU) algorithm to enhance intrusion detection in IoT environments. The proposed CNN-GRU model is validated using two benchmark datasets: the IoTID20 and BoT-IoT intrusion detection datasets. The proposed model incorporates an effective technique to handle the class imbalance issues common to such voluminous datasets. The results demonstrate superior accuracy, precision, recall, F1-score, and area under the curve, with a reduced false positive rate compared to similar models in the literature. Specifically, the proposed CNN-GRU achieved up to 99.83% and 99.01% accuracy, surpassing baseline models by a margin of 2–3% across both datasets. These findings highlight the model’s potential for real-time cybersecurity applications in IoT networks and general industrial control systems.
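The abstract does not name its class-balancing technique, so the sketch below uses SMOTE from imbalanced-learn purely as one common option for skewed intrusion datasets; the flow features are synthetic.

```python
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 10))            # placeholder network-flow features
y = (rng.random(1000) < 0.05).astype(int)  # ~5% attack traffic, 95% benign

X_bal, y_bal = SMOTE(random_state=2).fit_resample(X, y)  # synthesize minority samples
print(Counter(y), "->", Counter(y_bal))
```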

22 pages, 1269 KB  
Article
LightFakeDetect: A Lightweight Model for Deepfake Detection in Videos That Focuses on Facial Regions
by Sarab AlMuhaideb, Hessa Alshaya, Layan Almutairi, Danah Alomran and Sarah Turki Alhamed
Mathematics 2025, 13(19), 3088; https://doi.org/10.3390/math13193088 - 25 Sep 2025
Abstract
In recent years, the proliferation of forged videos, known as deepfakes, has escalated significantly, primarily due to advancements in technologies such as Generative Adversarial Networks (GANs), diffusion models, and Vision Language Models (VLMs). These deepfakes present substantial risks, threatening political stability, facilitating celebrity impersonation, and enabling tampering with evidence. As the sophistication of deepfake technology increases, detecting these manipulated videos becomes increasingly challenging. Most of the existing deepfake detection methods use Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), or Vision Transformers (ViTs), achieving strong accuracy but exhibiting high computational demands. This highlights the need for a lightweight yet effective pipeline for real-time and resource-limited scenarios. This study introduces a lightweight deep learning model for deepfake detection in order to address this emerging threat. The model incorporates three integral components: MobileNet for feature extraction, a Convolutional Block Attention Module (CBAM) for feature enhancement, and a Gated Recurrent Unit (GRU) for temporal analysis. Additionally, a pre-trained Multi-Task Cascaded Convolutional Network (MTCNN) is utilized for face detection and cropping. The model is evaluated using the Deepfake Detection Challenge (DFDC) and Celeb-DF v2 datasets, demonstrating impressive performance, with 98.2% accuracy and a 99.0% F1-score on Celeb-DF v2 and 95.0% accuracy and a 97.2% F1-score on DFDC, achieving a commendable balance between simplicity and effectiveness.
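For the face detection and cropping stage, a pre-trained MTCNN is available in, for example, the facenet-pytorch package; whether the authors used this particular implementation is not stated, so the library choice and file path below are assumptions.

```python
from facenet_pytorch import MTCNN
from PIL import Image

mtcnn = MTCNN(image_size=160, margin=10)  # returns an aligned, cropped face tensor
frame = Image.open("frame_0001.jpg")      # hypothetical video frame
face = mtcnn(frame)                       # (3, 160, 160) tensor, or None if no face found
if face is not None:
    print(face.shape)                     # cropped face ready for MobileNet features
```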

27 pages, 44538 KB  
Article
Short-Term Load Forecasting in the Greek Power Distribution System: A Comparative Study of Gradient Boosting and Deep Learning Models
by Md Fazle Hasan Shiblee and Paraskevas Koukaras
Energies 2025, 18(19), 5060; https://doi.org/10.3390/en18195060 - 23 Sep 2025
Abstract
Accurate short-term electricity load forecasting is essential for efficient energy management, grid reliability, and cost optimization. This study presents a comprehensive comparison of five supervised learning models—Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), a hybrid (CNN-LSTM) architecture, and Light Gradient Boosting Machine (LightGBM)—using multivariate data from the Greek electricity market between 2015 and 2024. The dataset incorporates hourly load, temperature, humidity, and holiday indicators. Extensive preprocessing was applied, including K-Nearest Neighbor (KNN) imputation, time-based feature extraction, and normalization. Models were trained using a 70:20:10 train–validation–test split and evaluated with standard performance metrics: MAE, MSE, RMSE, NRMSE, MAPE, and R². The experimental findings show that LightGBM outperformed the deep learning (DL) models on all evaluation metrics, achieving the best MAE (69.12 MW), RMSE (101.67 MW), and MAPE (1.20%) and the highest R² (0.9942) on the test set. It also outperformed models in the literature and real-world operational forecasts produced by ENTSO-E. Though LSTM performed well, particularly in capturing long-term dependencies, it performed slightly worse in high-variance periods. CNN, GRU, and hybrid models demonstrated moderate results, but they tended to underfit or overfit in some circumstances. These findings highlight the efficacy of LightGBM in structured time-series forecasting tasks, offering a scalable and interpretable alternative to DL models. This study supports its potential for real-world deployment in smart/distribution grid applications and provides valuable insights into the trade-offs between accuracy, complexity, and generalization in load forecasting models.
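A hedged LightGBM sketch with calendar features, in the spirit of the winning model above, is below; the synthetic data, feature set, and hyperparameters are placeholders, not the study's Greek-market setup.

```python
import numpy as np
import pandas as pd
import lightgbm as lgb

idx = pd.date_range("2023-01-01", periods=2000, freq="h")
df = pd.DataFrame({
    "temp": np.random.normal(15, 8, len(idx)),
    "humidity": np.random.uniform(30, 90, len(idx)),
    "hour": idx.hour, "dow": idx.dayofweek,         # time-based features
}, index=idx)
df["load"] = (3000 + 40 * df["temp"]
              + 200 * np.sin(2 * np.pi * df["hour"] / 24)
              + np.random.normal(0, 50, len(idx)))  # pseudo hourly load (MW)

split = int(0.8 * len(df))
model = lgb.LGBMRegressor(n_estimators=400, learning_rate=0.05)
model.fit(df.iloc[:split].drop(columns="load"), df["load"].iloc[:split])
pred = model.predict(df.iloc[split:].drop(columns="load"))
```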

24 pages, 3544 KB  
Article
A Deep Learning Model Integrating EEMD and GRU for Air Quality Index Forecasting
by Mei-Ling Huang, Netnapha Chamnisampan and Yi-Ru Ke
Atmosphere 2025, 16(9), 1095; https://doi.org/10.3390/atmos16091095 - 18 Sep 2025
Abstract
Accurate prediction of the air quality index (AQI) is essential for environmental monitoring and sustainable urban planning. With rising pollution from industrialization and urbanization, particularly from fine particulate matter (PM₂.₅, PM₁₀), nitrogen dioxide (NO₂), and ozone (O₃), robust forecasting tools are needed to support timely public health interventions. This study proposes a hybrid deep learning framework that combines empirical mode decomposition (EMD) and ensemble empirical mode decomposition (EEMD) with two recurrent neural network architectures: long short-term memory (LSTM) and gated recurrent unit (GRU). A comprehensive dataset from Xitun District, Taichung City—including AQI and 18 pollutant and meteorological variables—was used to train and evaluate the models. Model performance was assessed using root mean square error, mean absolute error, mean absolute percentage error, and the coefficient of determination. Both LSTM and GRU models effectively capture the temporal patterns of air quality data, outperforming traditional methods. Among all configurations, the EEMD-GRU model delivered the highest prediction accuracy, demonstrating strong capability in modeling high-dimensional and nonlinear environmental data. Furthermore, the incorporation of decomposition techniques significantly reduced prediction error across all models. These findings highlight the effectiveness of hybrid deep learning approaches for modeling complex environmental time series. The results further demonstrate their practical value in air quality management and early-warning systems.
(This article belongs to the Section Air Quality)
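The decomposition front end can be sketched with the PyEMD package: EEMD splits the series into intrinsic mode functions (IMFs), which would then feed the GRU. The synthetic AQI-like series and the number of trials are assumptions.

```python
import numpy as np
from PyEMD import EEMD

t = np.linspace(0, 10, 1000)
aqi = 80 + 20 * np.sin(2 * np.pi * t) + 5 * np.random.randn(len(t))  # pseudo-AQI series

eemd = EEMD(trials=50)    # ensemble of noise-assisted EMD runs
imfs = eemd.eemd(aqi, t)  # (n_imfs, len(t)); each IMF becomes a model input
print(imfs.shape)
```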

35 pages, 6406 KB  
Article
Comparative Study of RNN-Based Deep Learning Models for Practical 6-DOF Ship Motion Prediction
by HaEun Lee and Yangjun Ahn
J. Mar. Sci. Eng. 2025, 13(9), 1792; https://doi.org/10.3390/jmse13091792 - 17 Sep 2025
Abstract
Accurate prediction of ship motion is essential for ensuring the safety and efficiency of maritime operations. However, the nonlinear, non-stationary, and environment-dependent nature of ship dynamics presents significant challenges for reliable short-term forecasting. This study uses a simulated dataset designed to reflect realistic maritime variability to evaluate the performance of recurrent neural network (RNN)-based models—including RNN, LSTM, GRU, and Bi-LSTM—under both single and multi-environment conditions. The analysis examines the effects of input sequence length, downsampling intervals, model complexity, and input dimensionality. Results show that Bi-LSTM consistently outperforms unidirectional architectures, particularly in complex multi-environment scenarios. In single-environment settings, the prediction horizon exceeded 40 s, while it decreased to around 20 s under more variable conditions, reflecting generalization challenges. Multi-degree-of-freedom (DOF) inputs enhanced performance by capturing the coupled nature of ship dynamics, whereas incorporating wave height data yielded inconsistent results. A sequence length of 200 timesteps and a downsampling interval of 5 effectively balanced motion feature preservation with high-frequency noise reduction. Increasing model size improved accuracy up to 256 hidden units and 10 layers, beyond which performance gains diminished. Additionally, Peak Matching was introduced as a complementary metric to MSE, emphasizing the importance of accurately predicting motion extrema for practical maritime applications.
(This article belongs to the Special Issue Machine Learning for Prediction of Ship Motion)
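The reported input settings (sequence length 200, downsampling interval 5) translate into a simple windowing pipeline, sketched below. The prediction horizon and the synthetic 6-DOF series are assumptions.

```python
import numpy as np

motion = np.random.randn(20_000, 6)  # surge, sway, heave, roll, pitch, yaw
ds = motion[::5]                     # downsampling interval of 5

def make_windows(x, length=200, horizon=50):
    X, Y = [], []
    for i in range(len(x) - length - horizon):
        X.append(x[i:i + length])              # 200-timestep input window
        Y.append(x[i + length + horizon - 1])  # 6-DOF target after the horizon
    return np.stack(X), np.stack(Y)

X, Y = make_windows(ds)
print(X.shape, Y.shape)              # (N, 200, 6), (N, 6) for the Bi-LSTM
```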

36 pages, 8122 KB  
Article
Human Activity Recognition via Attention-Augmented TCN-BiGRU Fusion
by Ji-Long He, Jian-Hong Wang, Chih-Min Lo and Zhaodi Jiang
Sensors 2025, 25(18), 5765; https://doi.org/10.3390/s25185765 - 16 Sep 2025
Abstract
With the widespread application of wearable sensors in health monitoring and human–computer interaction, deep learning-based human activity recognition (HAR) research faces challenges such as the effective extraction of multi-scale temporal features and the enhancement of robustness against noise in multi-source data. This study proposes the TGA-HAR (TCN-GRU-Attention-HAR) model. TGA-HAR integrates Temporal Convolutional Neural Networks and Recurrent Neural Networks, constructing a hierarchical feature-abstraction architecture by cascading Temporal Convolutional Network (TCN) and Bidirectional Gated Recurrent Unit (BiGRU) layers for complex activity recognition. TCN layers with dilated convolution kernels extract multi-order temporal features, while BiGRU layers capture bidirectional temporal contextual correlations. To further optimize feature representation, TGA-HAR introduces residual connections to enhance the stability of gradient propagation and employs an adaptive weighted attention mechanism to strengthen salient features. The experimental results demonstrate that the model achieved test accuracies of 99.37% on the WISDM dataset, 95.36% on the USC-HAD dataset, and 96.96% on the PAMAP2 dataset. Furthermore, we conducted tests on datasets collected in real-world scenarios. This method provides a highly robust solution for complex human activity recognition tasks.
(This article belongs to the Section Biomedical Sensors)
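The TCN front end described above can be sketched as stacked 1-D convolutions with growing dilation and residual connections; channel counts and kernel size are illustrative.

```python
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    def __init__(self, ch=64, k=3, dilation=1):
        super().__init__()
        pad = (k - 1) * dilation // 2  # keep sequence length unchanged
        self.conv = nn.Conv1d(ch, ch, k, padding=pad, dilation=dilation)
        self.norm = nn.BatchNorm1d(ch)

    def forward(self, x):                               # x: (batch, ch, time)
        return torch.relu(x + self.norm(self.conv(x)))  # residual connection

tcn = nn.Sequential(*[DilatedBlock(dilation=d) for d in (1, 2, 4, 8)])
print(tcn(torch.randn(2, 64, 128)).shape)  # dilation stack widens the receptive field
```

In the TGA-HAR arrangement, a BiGRU layer (nn.GRU with bidirectional=True) and an attention head would follow this stack.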

28 pages, 1812 KB  
Article
An Integrated Hybrid Deep Learning Framework for Intrusion Detection in IoT and IIoT Networks Using CNN-LSTM-GRU Architecture
by Doaa Mohsin Abd Ali Afraji, Jaime Lloret and Lourdes Peñalver
Computation 2025, 13(9), 222; https://doi.org/10.3390/computation13090222 - 14 Sep 2025
Abstract
Intrusion detection systems (IDSs) are critical for securing modern networks, particularly in IoT and IIoT environments where traditional defenses such as firewalls and encryption are insufficient against evolving cyber threats. This paper proposes an enhanced hybrid deep learning model that integrates convolutional neural networks (CNNs), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU) in a multi-branch architecture designed to capture spatial and temporal dependencies while minimizing redundant computations. Unlike conventional hybrid approaches, the proposed parallel–sequential fusion framework leverages the strengths of each component independently before merging features, thereby improving detection granularity and learning efficiency. A rigorous preprocessing pipeline is employed to handle real-world data challenges: missing values are imputed using median filling, class imbalance is mitigated through SMOTE (Synthetic Minority Oversampling Technique), and feature scaling is performed with Min–Max normalization to ensure convergence consistency. The methodology is validated on the TON_IoT and CICIDS2017 datasets, chosen for their diversity and realism in IoT/IIoT attack scenarios. Three hybrid models—CNN-LSTM, CNN-GRU, and the proposed CNN-LSTM-GRU—are assessed for binary and multiclass intrusion detection. Experimental results demonstrate that the CNN-LSTM-GRU architecture achieves superior performance, attaining 100% accuracy in binary classification and 97% in multiclass detection, with balanced precision, recall, and F1-scores across all classes. Furthermore, evaluation on the CICIDS2017 dataset confirms the model’s generalization ability, achieving 99.49% accuracy with precision, recall, and F1-scores of 0.9954, 0.9943, and 0.9949, respectively, outperforming the CNN-LSTM and CNN-GRU baselines. Compared to existing IDS models, our approach delivers higher robustness, scalability, and adaptability, making it a promising candidate for next-generation IoT/IIoT security.
(This article belongs to the Section Computational Engineering)
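A parallel-fusion sketch of the CNN-LSTM-GRU idea is below: three branches encode the same window independently and their features are concatenated before the classifier head. All sizes, and the two-class head, are assumptions.

```python
import torch
import torch.nn as nn

class MultiBranchIDS(nn.Module):
    def __init__(self, n_feat=20, hidden=32, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv1d(n_feat, hidden, 3, padding=1),
                                 nn.ReLU(), nn.AdaptiveAvgPool1d(1))
        self.lstm = nn.LSTM(n_feat, hidden, batch_first=True)
        self.gru = nn.GRU(n_feat, hidden, batch_first=True)
        self.head = nn.Linear(3 * hidden, n_classes)

    def forward(self, x):                                  # x: (batch, time, n_feat)
        c = self.cnn(x.transpose(1, 2)).squeeze(-1)        # spatial branch
        l, _ = self.lstm(x)                                # temporal branch 1
        g, _ = self.gru(x)                                 # temporal branch 2
        fused = torch.cat([c, l[:, -1], g[:, -1]], dim=1)  # feature fusion
        return self.head(fused)

print(MultiBranchIDS()(torch.randn(4, 50, 20)).shape)      # torch.Size([4, 2])
```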

17 pages, 3195 KB  
Article
Intelligent Method for PDC Bit Selection Based on Graph Neural Network
by Ning Li, Chengkai Zhang, Tianguo Xia, Mengna Hao, Long Chen, Zhaopeng Zhu, Chaochen Wang, Shanlin Ye and Xiran Liu
Appl. Sci. 2025, 15(18), 9985; https://doi.org/10.3390/app15189985 - 12 Sep 2025
Abstract
As oil and gas exploration extends to deep, ultra-deep, and unconventional reservoirs, high drilling costs persist. Drill bit performance, as the critical rock-breaking component, directly governs efficiency and economics. While optimal bit selection boosts rate of penetration (ROP) and cuts costs, traditional expert-dependent methods struggle to address complex formation–bit parameter interactions, suffering from low accuracy and poor adaptability. With artificial intelligence gaining traction in petroleum engineering, machine learning-based bit selection has emerged as a key solution. This study focuses on polycrystalline diamond compact (PDC) bits and proposes an intelligent bit selection method based on graph neural networks (GNNs), utilizing drilling records from over 100 wells encompassing 40 multidimensional features. A comparative analysis of four intelligent models (random forest, gradient boosting (XGBoost), gated recurrent unit (GRU), and the GNN) demonstrates that the GNN achieves superior performance, with an R² (coefficient of determination) of 0.932 and a MAPE (mean absolute percentage error) of 6.88%. The GNN significantly outperforms conventional models in rock-breaking performance prediction. By establishing this GNN model for ROP and footage-per-run prediction, this study achieves intelligent bit selection that substantially enhances drilling efficiency, reduces operational costs, and provides scientifically reliable technical support for drilling operations in complex formation conditions.
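A GNN regression sketch with PyTorch Geometric follows. How the paper constructs its well/formation graph is not described in the abstract, so the toy chain graph and feature count below are entirely assumptions; only the GNN-for-ROP-regression idea comes from the source.

```python
import torch
from torch_geometric.nn import GCNConv
from torch_geometric.data import Data

x = torch.randn(6, 40)  # 6 nodes (e.g., well intervals) with 40 drilling features
edge_index = torch.tensor([[0, 1, 2, 3, 4],
                           [1, 2, 3, 4, 5]])  # toy chain graph

class ROPRegressor(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.g1 = GCNConv(40, 32)
        self.g2 = GCNConv(32, 1)  # per-node ROP estimate

    def forward(self, data):
        h = torch.relu(self.g1(data.x, data.edge_index))
        return self.g2(h, data.edge_index)

print(ROPRegressor()(Data(x=x, edge_index=edge_index)).shape)  # torch.Size([6, 1])
```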

18 pages, 4873 KB  
Article
Optimized GRU with Self-Attention for Bearing Fault Diagnosis Using Bayesian Hyperparameter Tuning
by Zongchao Liu, Shuai Teng and Shaodi Wang
Algorithms 2025, 18(9), 576; https://doi.org/10.3390/a18090576 - 12 Sep 2025
Abstract
Rolling bearing failures cause significant production downtime and economic losses. Traditional diagnostic methods suffer from low efficiency, suboptimal accuracy, and susceptibility to human subjectivity. To address these limitations, this paper proposes a novel bearing fault diagnosis (BFD) approach leveraging a Gated Recurrent Unit (GRU) network. Key contributions include: (1) Employing Bayesian optimization to automate the search for the optimal GRU architecture (layers, hidden units) and hyperparameters (learning rate, batch size, epochs), significantly enhancing diagnostic performance (achieving 97.9% accuracy). (2) Integrating a self-attention mechanism to further improve the GRU’s feature extraction capability from vibration signals, boosting accuracy to 99.6%. (3) Demonstrating the robustness of the optimized GRU with self-attention across varying motor speeds (1772 rpm, 1750 rpm, 1730 rpm), consistently maintaining diagnostic accuracy above 97%. Comparative studies with Bayesian-optimized Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) models confirm the superior accuracy (97.9% vs. 95.1% and 90.0%) and faster inference speed (0.27 s) of the proposed GRU-based method. The results validate that the combination of Bayesian optimization, GRU, and self-attention provides an efficient, accurate, and robust intelligent solution for automated BFD.
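The Bayesian search over architecture and training hyperparameters can be sketched with Optuna (a TPE-based Bayesian optimizer); the search space mirrors the one listed in the abstract, while train_and_eval() is a hypothetical stand-in for the GRU training loop.

```python
import optuna

def train_and_eval(layers, hidden, lr, batch, epochs):
    # Hypothetical: train the GRU with these settings, return validation accuracy.
    return 0.9 + 0.01 * layers - abs(lr - 1e-3)  # dummy score for the sketch

def objective(trial):
    return train_and_eval(
        layers=trial.suggest_int("layers", 1, 3),
        hidden=trial.suggest_int("hidden", 32, 256, log=True),
        lr=trial.suggest_float("lr", 1e-4, 1e-2, log=True),
        batch=trial.suggest_categorical("batch", [32, 64, 128]),
        epochs=trial.suggest_int("epochs", 20, 100),
    )

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print(study.best_params)
```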

30 pages, 3118 KB  
Article
Prediction of Combustion Parameters and Pollutant Emissions of a Dual-Fuel Engine Based on Recurrent Neural Networks
by Joel Freidy Ebolembang, Fabrice Parfait Nang Nkol, Lionel Merveil Anague Tabejieu, Fernand Toukap Nono and Claude Valery Ngayihi Abbe
Appl. Sci. 2025, 15(18), 9868; https://doi.org/10.3390/app15189868 - 9 Sep 2025
Abstract
A critical challenge in engine research lies in minimizing harmful emissions while optimizing the efficiency of internal combustion engines. Dual-fuel engines, operating with methanol and diesel, offer a promising alternative, but their combustion modeling remains complex due to the intricate thermochemical interactions involved. This study proposes a predictive framework that combines validated CFD simulations with deep learning techniques to estimate key combustion and emission parameters in a methanol–diesel dual-fuel engine. A three-dimensional CFD model was developed to simulate turbulent combustion, methanol injection, and pollutant formation, using the RNG k-ε turbulence model. A temporal dataset consisting of 1370 samples was generated, covering the compression, combustion, and early expansion phases—critical regions influencing both emissions and in-cylinder pressure dynamics. The optimal configuration identified involved a 63° spray injection angle and a 25% methanol proportion. A Gated Recurrent Unit (GRU) neural network, consisting of 256 neurons, a Tanh activation function, and a dropout rate of 0.2, was trained on this dataset. The model accurately predicted in-cylinder pressure, temperature, NOx emissions, and impact-related parameters, achieving a Pearson correlation coefficient of ρ = 0.997. This approach highlights the potential of combining CFD and deep learning for rapid and reliable prediction of engine behavior. It contributes to the development of more efficient, cleaner, and robust design strategies for future dual-fuel combustion systems.
(This article belongs to the Special Issue Diesel Engine Combustion and Emissions Control)
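A surrogate-model sketch matching the quoted configuration (a GRU with 256 units, tanh being the default cell activation in PyTorch, and dropout 0.2) is below; the input feature set and sequence length are assumptions.

```python
import torch
import torch.nn as nn

class DualFuelSurrogate(nn.Module):
    def __init__(self, n_in=5, n_out=3):
        super().__init__()
        self.gru = nn.GRU(n_in, 256, batch_first=True)  # tanh cell activation by default
        self.drop = nn.Dropout(0.2)
        self.out = nn.Linear(256, n_out)  # e.g., pressure, temperature, NOx

    def forward(self, x):                 # x: (batch, crank_angle_steps, n_in)
        h, _ = self.gru(x)
        return self.out(self.drop(h[:, -1]))

print(DualFuelSurrogate()(torch.randn(4, 137, 5)).shape)  # torch.Size([4, 3])
```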
