Search Results (588)

Search Parameters:
Keywords = hybrid CNN-LSTM

14 pages, 1081 KB  
Article
Hybrid Deep Learning Approach for Secure Electric Vehicle Communications in Smart Urban Mobility
by Abdullah Alsaleh
Vehicles 2025, 7(4), 112; https://doi.org/10.3390/vehicles7040112 - 2 Oct 2025
Abstract
The increasing adoption of electric vehicles (EVs) within intelligent transportation systems (ITSs) has elevated the importance of cybersecurity, especially with the rise in Vehicle-to-Everything (V2X) communications. Traditional intrusion detection systems (IDSs) struggle to address the evolving and complex nature of cyberattacks in such dynamic environments. To address these challenges, this study introduces a novel deep learning-based IDS designed specifically for EV communication networks. We present a hybrid model that integrates convolutional neural networks (CNNs), long short-term memory (LSTM) layers, and adaptive learning strategies. The model was trained and validated using the VeReMi dataset, which simulates a wide range of attack scenarios in V2X networks. Additionally, an ablation study was conducted to isolate the contribution of each of its modules. The model demonstrated strong performance with 98.73% accuracy, 97.88% precision, 98.91% sensitivity, and 98.55% specificity, as well as an F1-score of 98.39%, an MCC of 0.964, a false-positive rate of 1.45%, and a false-negative rate of 1.09%, with a detection latency of 28 ms and an AUC-ROC of 0.994. Specifically, this work fills a clear gap in the existing V2X intrusion detection literature—namely, the lack of scalable, adaptive, and low-latency IDS solutions for hardware-constrained EV platforms—by proposing a hybrid CNN–LSTM architecture coupled with an elastic weight consolidation (EWC)-based adaptive learning module that enables online updates without full retraining. The proposed model provides a real-time, adaptive, and high-precision IDS for EV networks, supporting safer and more resilient ITS infrastructures. Full article
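
As a rough illustration of the hybrid CNN-LSTM structure this abstract describes, the sketch below stacks 1D convolutions for local message-level patterns on an LSTM for temporal context. The window length, feature count, and layer sizes are assumptions rather than the authors' configuration, and the EWC-based adaptive-learning module is omitted.

```python
# Minimal hybrid 1D-CNN -> LSTM intrusion-detection classifier (illustrative only;
# window length, feature count, and layer sizes are NOT taken from the paper).
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW, N_FEATURES = 50, 20   # hypothetical message window and per-message features

model = models.Sequential([
    layers.Input(shape=(WINDOW, N_FEATURES)),
    layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),  # local patterns
    layers.MaxPooling1D(2),
    layers.Conv1D(128, kernel_size=3, padding="same", activation="relu"),
    layers.LSTM(64),                          # temporal dependencies across the window
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # benign vs. attack
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=20)
```

The EWC module described in the abstract would add a penalty on drifting away from previously important weights during online updates; it is not shown here.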

52 pages, 3501 KB  
Review
The Role of Artificial Intelligence and Machine Learning in Advancing Civil Engineering: A Comprehensive Review
by Ali Bahadori-Jahromi, Shah Room, Chia Paknahad, Marwah Altekreeti, Zeeshan Tariq and Hooman Tahayori
Appl. Sci. 2025, 15(19), 10499; https://doi.org/10.3390/app151910499 - 28 Sep 2025
Abstract
The integration of artificial intelligence (AI) and machine learning (ML) has revolutionised civil engineering, enhancing predictive accuracy, decision-making, and sustainability across domains such as structural health monitoring, geotechnical analysis, transportation systems, water management, and sustainable construction. This paper presents a detailed review of peer-reviewed publications from the past decade, employing bibliometric mapping and critical evaluation to analyse methodological advances, practical applications, and limitations. A novel taxonomy is introduced, classifying AI/ML approaches by civil engineering domain, learning paradigm, and adoption maturity to guide future development. Key applications include pavement condition assessment, slope stability prediction, traffic flow forecasting, smart water management, and flood forecasting, leveraging techniques such as Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM), Support Vector Machines (SVMs), and hybrid physics-informed neural networks (PINNs). The review highlights challenges, including limited high-quality datasets, absence of AI provisions in design codes, integration barriers with IoT-based infrastructure, and computational complexity. While explainable AI tools like SHAP and LIME improve interpretability, their practical feasibility in safety-critical contexts remains constrained. Ethical considerations, including bias in training datasets and regulatory compliance, are also addressed. Promising directions include federated learning for data privacy, transfer learning for data-scarce regions, digital twins, and adherence to FAIR data principles. This study underscores AI as a complementary tool, not a replacement, for traditional methods, fostering a data-driven, resilient, and sustainable built environment through interdisciplinary collaboration and transparent, explainable systems. Full article
(This article belongs to the Section Civil Engineering)

20 pages, 8184 KB  
Article
Enhanced Short-Term Photovoltaic Power Prediction Through Multi-Method Data Processing and SFOA-Optimized CNN-BiLSTM
by Xiaojun Hua, Zhiming Zhang, Tao Ye, Zida Song, Yun Shao and Yixin Su
Energies 2025, 18(19), 5124; https://doi.org/10.3390/en18195124 - 26 Sep 2025
Abstract
The increasing global demand for renewable energy poses significant challenges to grid stability due to the fluctuation and unpredictability of photovoltaic (PV) power generation. To enhance the accuracy of short-term PV power prediction, this study proposes an innovative integrated model that combines Convolutional Neural Networks (CNN) and Bidirectional Long Short-Term Memory (BiLSTM), optimized using the Starfish Optimization Algorithm (SFOA) and integrated with a multi-method data processing framework. To reduce input feature redundancy and improve prediction accuracy under different conditions, the K-means clustering algorithm is employed to classify past data into three typical weather scenarios. Empirical Mode Decomposition is utilized for multi-scale feature extraction, while Kernel Principal Component Analysis is applied to reduce data redundancy by extracting nonlinear principal components. A hybrid CNN-BiLSTM neural network is then constructed, with its hyperparameters optimized using SFOA to enhance feature extraction and sequence modeling capabilities. The experiments were carried out with historical data from a Chinese PV power station, and the results were compared with other existing prediction models. The results demonstrate that the Root Mean Square Error of PV power generation prediction for three scenarios are 9.8212, 12.4448, and 6.2017, respectively, outperforming all other comparative models. Full article
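
A minimal sketch of the scenario-then-forecast idea: K-means splits historical days into weather scenarios, and a CNN-BiLSTM regressor is trained per scenario. Shapes and layer sizes are hypothetical, and the EMD, KPCA, and SFOA tuning steps of the paper are not reproduced.

```python
# Illustrative pipeline fragment: K-means weather-scenario split + a CNN-BiLSTM
# regressor per scenario (hypothetical shapes; SFOA/EMD/KPCA steps omitted).
import numpy as np
from sklearn.cluster import KMeans
from tensorflow.keras import layers, models

def build_cnn_bilstm(window=24, n_features=6):
    return models.Sequential([
        layers.Input(shape=(window, n_features)),
        layers.Conv1D(32, 3, padding="same", activation="relu"),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dense(1),                      # next-step PV power
    ])

# daily_weather: one row of aggregate weather statistics per day (stand-in data)
daily_weather = np.random.rand(365, 4)
scenario = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(daily_weather)

models_by_scenario = {k: build_cnn_bilstm() for k in range(3)}
for k, m in models_by_scenario.items():
    m.compile(optimizer="adam", loss="mse")
    # m.fit(x_train[scenario_train == k], y_train[scenario_train == k], epochs=50)
```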

28 pages, 6039 KB  
Article
Detection and Classification of Unhealthy Heartbeats Using Deep Learning Techniques
by Abdullah M. Albarrak, Raneem Alharbi and Ibrahim A. Ibrahim
Sensors 2025, 25(19), 5976; https://doi.org/10.3390/s25195976 - 26 Sep 2025
Abstract
Arrhythmias are a common and potentially life-threatening category of cardiac disorders, making accurate and early detection crucial for improving clinical outcomes. Electrocardiograms are widely used to monitor heart rhythms, yet their manual interpretation remains prone to inconsistencies due to the complexity of the signals. This research investigates the effectiveness of machine learning and deep learning techniques for automated arrhythmia classification using ECG signals from the MIT-BIH dataset. We compared Gradient Boosting Machine (GBM) and Multilayer Perceptron (MLP) as traditional machine learning models with a hybrid deep learning model combining one-dimensional convolutional neural networks (1D-CNNs) and long short-term memory (LSTM) networks. Furthermore, the Grey Wolf Optimizer (GWO) was utilized to automatically optimize the hyperparameters of the 1D-CNN-LSTM model, enhancing its performance. Experimental results show that the proposed 1D-CNN-LSTM model achieved the highest accuracy of 97%, outperforming both classical machine learning and other deep learning baselines. The classification report and confusion matrix confirm the model’s robustness in identifying various arrhythmia types. These findings emphasize the possible benefits of integrating metaheuristic optimization with hybrid deep learning. Full article
(This article belongs to the Special Issue Sensors Technology and Application in ECG Signal Processing)
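
The Grey Wolf Optimizer used here for hyperparameter tuning follows a simple position-update scheme toward the three best wolves. A compact sketch is shown below with a cheap placeholder objective standing in for the validation loss of a candidate 1D-CNN-LSTM; bounds, population size, and iteration count are illustrative.

```python
# Compact Grey Wolf Optimizer sketch for hyperparameter search (illustrative only;
# the objective is a stand-in for the validation loss of a trained 1D-CNN-LSTM).
import numpy as np

def objective(x):                      # x = [learning-rate exponent, LSTM units]
    return (x[0] + 3.0) ** 2 + (x[1] - 64.0) ** 2 / 100.0   # placeholder surface

def gwo(obj, bounds, n_wolves=10, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    wolves = rng.uniform(lo, hi, size=(n_wolves, len(bounds)))
    for t in range(n_iter):
        fitness = np.apply_along_axis(obj, 1, wolves)
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2.0 - 2.0 * t / n_iter                      # linearly decreasing coefficient
        for i in range(n_wolves):
            x = np.zeros(len(bounds))
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(len(bounds)), rng.random(len(bounds))
                A, C = 2 * a * r1 - a, 2 * r2
                x += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(x / 3.0, lo, hi)        # average pull of the three leaders
    return wolves[np.argmin(np.apply_along_axis(obj, 1, wolves))]

print(gwo(objective, bounds=[(-5.0, -1.0), (16, 128)]))
```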

24 pages, 3231 KB  
Article
A Deep Learning-Based Ensemble Method for Parameter Estimation of Solar Cells Using a Three-Diode Model
by Sung-Pei Yang, Fong-Ruei Shih, Chao-Ming Huang, Shin-Ju Chen and Cheng-Hsuan Chiua
Electronics 2025, 14(19), 3790; https://doi.org/10.3390/electronics14193790 - 24 Sep 2025
Abstract
Accurate parameter estimation of solar cells is critical for early-stage fault diagnosis in photovoltaic (PV) power systems. A physical model based on a three-diode configuration has recently been introduced to improve model accuracy. However, nonlinear and recursive relationships between internal parameters and PV output, along with parameter drift and PV degradation due to long-term operation, pose significant challenges. To address these issues, this study proposes a deep learning-based ensemble framework that integrates outputs from multiple optimization algorithms to improve estimation precision and robustness. The proposed method consists of four stages. First, the collected data are preprocessed. Second, a PV power generation system is modeled using the three-diode structure. Third, several optimization algorithms with distinct search behaviors are employed to produce diverse estimations. Finally, a hybrid deep learning model combining convolutional neural networks (CNNs) and long short-term memory (LSTM) networks is used to learn from these results. Experimental validation on a 733 kW PV power generation system demonstrates that the proposed method outperforms individual optimization approaches in terms of prediction accuracy and stability. Full article
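
For reference, the three-diode model itself is a single implicit equation in the terminal current. The sketch below writes it as a residual and solves one operating point numerically, with illustrative parameter values rather than the ones estimated in the paper.

```python
# Three-diode PV cell model in its usual implicit form (illustrative parameters only).
import numpy as np
from scipy.optimize import brentq

K_B, Q = 1.380649e-23, 1.602176634e-19     # Boltzmann constant, elementary charge

def three_diode_residual(I, V, Iph, Isat, ideality, Rs, Rsh, T):
    """f(I) = 0 form of the three-diode model at one operating point (V, I)."""
    Vt = K_B * T / Q                        # thermal voltage
    Vd = V + I * Rs
    diode_sum = sum(I0 * (np.exp(Vd / (n * Vt)) - 1.0)
                    for I0, n in zip(Isat, ideality))
    return Iph - diode_sum - Vd / Rsh - I

def output_current(V, Iph, Isat, ideality, Rs, Rsh, T=298.15):
    # Bracket assumes a normal operating point with current between -1 A and 2*Iph.
    return brentq(three_diode_residual, -1.0, 2.0 * Iph,
                  args=(V, Iph, Isat, ideality, Rs, Rsh, T))

# Example values; an optimizer would fit these parameters to measured I-V data.
print(output_current(V=0.5, Iph=8.2, Isat=(1e-9, 5e-9, 1e-8),
                     ideality=(1.0, 1.5, 2.0), Rs=0.005, Rsh=100.0))
```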

17 pages, 1548 KB  
Article
Hybrid Deep-Ensemble Network with VAE-Based Augmentation for Imbalanced Tabular Data Classification
by Sang-Jeong Lee and You-Suk Bae
Appl. Sci. 2025, 15(19), 10360; https://doi.org/10.3390/app151910360 - 24 Sep 2025
Abstract
Background: Severe class imbalance limits reliable tabular AI in manufacturing, finance, and healthcare. Methods: We built a modular pipeline comprising correlation-aware seriation; a hybrid convolutional neural network (CNN)–transformer–Bidirectional Long Short-Term Memory (BiLSTM) encoder; variational autoencoder (VAE)-based minority augmentation; and deep/tree ensemble heads (XGBoost and Support Vector Machine, SVM). We benchmarked the Synthetic Minority Oversampling Technique (SMOTE) and ADASYN under identical protocols. Focal loss and ensemble weights were tuned per dataset. The primary metric was the Area Under the Precision–Recall Curve (AUPRC), with receiver operating characteristic area under the curve (ROC AUC) as complementary. Synthetic-data fidelity was quantified by train-on-synthetic/test-on-real (TSTR) utility, two-sample discriminability (ROC AUC of a real-vs-synthetic classifier), and Maximum Mean Discrepancy (MMD2). Results: Across five datasets (SECOM, CREDIT, THYROID, APS, and UCI), augmentation was data-dependent: VAE led on APS (+3.66 pp AUPRC vs. SMOTE) and was competitive on CREDIT (+0.10 pp vs. None); the SMOTE dominated SECOM; no augmentation performed best for THYROID and UCI. Positional embedding (PE) with seriation helped when strong local correlations were present. Ensembles typically favored XGBoost while benefiting from the hybrid encoder. Efficiency profiling and a slim variant supported latency-sensitive use. Conclusions: A data-aware recipe emerged: prefer VAE when fidelity is high, the SMOTE on smoother minority manifolds, and no augmentation when baselines suffice; apply PE/seriation selectively and tune per dataset for robust, reproducible deployment. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
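
A minimal sketch of the VAE-based minority augmentation idea, assuming a plain dense encoder/decoder on standardized tabular features and an arbitrary latent width; the paper's hybrid CNN-transformer-BiLSTM encoder and per-dataset tuning are not shown.

```python
# Minimal VAE oversampler for a minority class in tabular data (simplified sketch).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

N_FEATURES, LATENT = 30, 8    # hypothetical tabular width and latent size

class Sampling(layers.Layer):
    def call(self, inputs):
        mean, log_var = inputs
        # Reparameterization trick; KL penalty registered as a layer loss.
        self.add_loss(-0.5 * tf.reduce_mean(
            1.0 + log_var - tf.square(mean) - tf.exp(log_var)))
        eps = tf.random.normal(tf.shape(mean))
        return mean + tf.exp(0.5 * log_var) * eps

x_in = layers.Input((N_FEATURES,))
h = layers.Dense(64, activation="relu")(x_in)
z = Sampling()([layers.Dense(LATENT)(h), layers.Dense(LATENT)(h)])

dec_in = layers.Input((LATENT,))
decoder = Model(dec_in, layers.Dense(N_FEATURES)(layers.Dense(64, activation="relu")(dec_in)))

vae = Model(x_in, decoder(z))
vae.compile(optimizer="adam", loss="mse")   # reconstruction loss + KL from add_loss

# Fit on minority-class rows only, then decode Gaussian noise into synthetic rows:
# vae.fit(x_minority, x_minority, epochs=200, batch_size=64)
# synthetic_minority = decoder.predict(np.random.normal(size=(1000, LATENT)))
```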

25 pages, 6670 KB  
Article
WT-CNN-BiLSTM: A Precise Rice Yield Prediction Method for Small-Scale Greenhouse Planting on the Yunnan Plateau
by Jihong Sun, Peng Tian, Xinrui Wang, Jiawei Zhao, Xianwei Niu, Haokai Zhang and Ye Qian
Agronomy 2025, 15(10), 2256; https://doi.org/10.3390/agronomy15102256 - 23 Sep 2025
Abstract
Multispectral technology and deep learning are widely used in field crop yield prediction. Existing studies mainly focus on large-scale estimation in plain regions, while integrated applications for small-scale plateau plots are rarely reported. To solve this problem, this study proposes a WT-CNN-BiLSTM hybrid model that integrates UAV-borne multispectral imagery and deep learning for rice yield prediction in small-scale greenhouses on the Yunnan Plateau. Initially, a rice dataset covering five drip irrigation levels was constructed, including vegetation index images of rice throughout its entire growth cycle and yield data from 500 sub-plots. After data augmentation (image rotation, flipping, and yield augmentation with Gaussian noise), the dataset was expanded to 2000 sub-plots. Then, with CNN-LSTM as the baseline, four vegetation indices (NDVI, NDRE, OSAVI, and RECI) were compared, and RECI-Yield was determined as the optimal input dataset. Finally, the convolutional layers in the first residual block of ResNet50 were replaced with WTConv to enhance multi-frequency feature extraction; the extracted features were then input into BiLSTM to capture the long-term growth trends of rice, resulting in the development of the WT-CNN-BiLSTM model. Experimental results showed that in small-scale greenhouses on the Yunnan Plateau, the model achieved the best prediction performance under the 50% drip irrigation level (R2 = 0.91). Moreover, the prediction performance based on the merged dataset of all irrigation levels was even better (RMSE = 9.68 g, MAPE = 11.41%, R2 = 0.92), which was significantly superior to comparative models such as CNN-LSTM, CNN-BiLSTM, and CNN-GRU, as well as the prediction results under single irrigation levels. Cross-validation based on the RECI-Yield-VT dataset (RMSE = 8.07 g, MAPE = 9.22%, R2 = 0.94) further confirmed its generalization ability, enabling its effective application to rice yield prediction in small-scale greenhouse scenarios on the Yunnan Plateau. Full article
(This article belongs to the Section Precision and Digital Agriculture)
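
The image-sequence-to-yield framing can be sketched with a shared CNN applied to each growth-stage vegetation-index image and a BiLSTM over the resulting sequence. This is a generic stand-in; the WTConv-modified ResNet50 backbone and dataset specifics described above are not reproduced.

```python
# Generic image-sequence-to-yield regressor (illustrative sequence length and image size).
from tensorflow.keras import layers, models

N_STAGES, H, W, C = 6, 64, 64, 1     # hypothetical growth stages and index-image shape

cnn = models.Sequential([
    layers.Input(shape=(H, W, C)),
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.GlobalAveragePooling2D(),
])

model = models.Sequential([
    layers.Input(shape=(N_STAGES, H, W, C)),
    layers.TimeDistributed(cnn),                     # per-stage image features
    layers.Bidirectional(layers.LSTM(64)),           # growth-trend context
    layers.Dense(1),                                 # plot yield (g)
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
```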

27 pages, 44538 KB  
Article
Short-Term Load Forecasting in the Greek Power Distribution System: A Comparative Study of Gradient Boosting and Deep Learning Models
by Md Fazle Hasan Shiblee and Paraskevas Koukaras
Energies 2025, 18(19), 5060; https://doi.org/10.3390/en18195060 - 23 Sep 2025
Abstract
Accurate short-term electricity load forecasting is essential for efficient energy management, grid reliability, and cost optimization. This study presents a comprehensive comparison of five supervised learning models—Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), a hybrid (CNN-LSTM) architecture, and Light Gradient Boosting Machine (LightGBM)—using multivariate data from the Greek electricity market between 2015 and 2024. The dataset incorporates hourly load, temperature, humidity, and holiday indicators. Extensive preprocessing was applied, including K-Nearest Neighbor (KNN) imputation, time-based feature extraction, and normalization. Models were trained using a 70:20:10 train–validation–test split and evaluated with standard performance metrics: MAE, MSE, RMSE, NRMSE, MAPE, and R2. The experimental findings show that LightGBM beat deep learning (DL) models on all evaluation metrics and had the best MAE (69.12 MW), RMSE (101.67 MW), and MAPE (1.20%) and the highest R2 (0.9942) for the test set. It also outperformed models in the literature and operational forecasts conducted in the real world by ENTSO-E. Though LSTM performed well, particularly in long-term dependency capturing, it performed a bit worse in high-variance periods. CNN, GRU, and hybrid models demonstrated moderate results, but they tended to underfit or overfit in some circumstances. These findings highlight the efficacy of LightGBM in structured time-series forecasting tasks, offering a scalable and interpretable alternative to DL models. This study supports its potential for real-world deployment in smart/distribution grid applications and provides valuable insights into the trade-offs between accuracy, complexity, and generalization in load forecasting models. Full article
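
A minimal sketch of the gradient-boosting baseline under a chronological 70:20:10 split, assuming a datetime-indexed hourly DataFrame with hypothetical column names; the study's exact feature engineering and tuning are not reproduced.

```python
# Chronological split and a LightGBM regressor on lagged/calendar features (sketch only).
import lightgbm as lgb

# df: datetime-indexed hourly DataFrame with columns
# ["load", "temperature", "humidity", "is_holiday"] (assumed names).
def make_features(df):
    out = df.copy()
    out["hour"] = out.index.hour
    out["dayofweek"] = out.index.dayofweek
    for lag in (1, 24, 168):                 # previous hour, day, week
        out[f"load_lag_{lag}"] = out["load"].shift(lag)
    return out.dropna()

def chronological_split(df, train=0.7, val=0.2):
    n = len(df)
    i, j = int(n * train), int(n * (train + val))
    return df.iloc[:i], df.iloc[i:j], df.iloc[j:]

def fit_lightgbm(train_df, val_df, target="load"):
    feats = [c for c in train_df.columns if c != target]
    model = lgb.LGBMRegressor(n_estimators=2000, learning_rate=0.05)
    model.fit(train_df[feats], train_df[target],
              eval_set=[(val_df[feats], val_df[target])],
              callbacks=[lgb.early_stopping(100)])
    return model, feats
```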

20 pages, 1689 KB  
Article
Prediction of Motor Rotor Temperature Using TCN-BiLSTM-MHA Model Based on Hybrid Grey Wolf Optimization Algorithm
by Changzhi Lv, Guangbo Lin, Dongxin Xu, Zhongxin Song and Di Fan
World Electr. Veh. J. 2025, 16(9), 541; https://doi.org/10.3390/wevj16090541 - 22 Sep 2025
Abstract
The permanent magnet synchronous motor (PMSM) is the core of new energy vehicle drive systems, and its temperature status is directly related to the safety of the entire vehicle. However, the temperature of rotor permanent magnets is difficult to measure directly, and traditional sensor schemes are costly and complex to deploy. With the development of Artificial Intelligence (AI) technology, deep learning (DL) provides a feasible path for sensorless modeling. This paper proposes a prediction model that integrates a Temporal Convolutional Network (TCN), Bidirectional Long Short-Term Memory Network (BiLSTM), and multi-head attention mechanism (MHA) and introduces a Hybrid Grey Wolf Optimizer (H-GWO) for hyperparameter optimization, which is applied to PMSM temperature prediction. A public dataset from Paderborn University is used for training and testing. The test set verification results show that the H-GWO-optimized TCN-BiLSTM-MHA model has a mean absolute error (MAE) of 0.3821 °C, a root mean square error (RMSE) of 0.4857 °C, and an R2 of 0.9985. Compared with the CNN-BiLSTM-Attention model, the MAE and RMSE are reduced by approximately 11.8% and 19.3%, respectively. Full article
(This article belongs to the Section Propulsion Systems and Components)
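
Structurally, the TCN-BiLSTM-MHA idea can be sketched as dilated causal convolutions feeding a bidirectional LSTM and a self-attention block. Filter counts, dilations, and head counts below are illustrative, and the H-GWO tuning is omitted.

```python
# TCN-style dilated causal convolutions -> BiLSTM -> multi-head self-attention -> regression.
from tensorflow.keras import layers, Model

WINDOW, N_SIGNALS = 120, 8       # hypothetical sample window and sensor channels

x_in = layers.Input(shape=(WINDOW, N_SIGNALS))
x = x_in
for d in (1, 2, 4, 8):           # exponentially increasing dilation (TCN-style)
    x = layers.Conv1D(32, kernel_size=3, dilation_rate=d,
                      padding="causal", activation="relu")(x)
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
att = layers.MultiHeadAttention(num_heads=4, key_dim=16)(x, x)   # self-attention
x = layers.GlobalAveragePooling1D()(layers.Add()([x, att]))
out = layers.Dense(1)(x)         # predicted rotor magnet temperature (degrees C)

model = Model(x_in, out)
model.compile(optimizer="adam", loss="mae")
```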

17 pages, 3119 KB  
Article
Fault Diagnosis Method Using CNN-Attention-LSTM for AC/DC Microgrid
by Qiangsheng Bu, Pengpeng Lyu, Ruihai Sun, Jiangping Jing, Zhan Lyu and Shixi Hou
Modelling 2025, 6(3), 107; https://doi.org/10.3390/modelling6030107 - 18 Sep 2025
Abstract
From the perspectives of both theoretical design and practical application, existing fault diagnosis methods are poorly suited to AC/DC microgrids: their identification processes are complicated by manual feature extraction, and they extract features insufficiently from time-series data and weak fault signals. Thus, this paper proposes a fault diagnosis method that integrates a convolutional neural network (CNN) with a long short-term memory (LSTM) network and attention mechanisms. The method employs a multi-scale convolution-based weight layer (Weight Layer 1) to extract features of faults from different dimensions, performing feature fusion to enrich the fault characteristics of the AC/DC microgrid. Additionally, a hybrid attention block-based weight layer (Weight Layer 2) is designed to enable the model to adaptively focus on the most significant features, thereby improving the extraction and utilization of critical information, which enhances both classification accuracy and model generalization. By cascading LSTM layers, the model effectively captures temporal dependencies within the features, allowing the model to extract critical information from the temporal evolution of electrical signals, thus enhancing both classification accuracy and robustness. Simulation results indicate that the proposed method achieves a classification accuracy of up to 99.5%, with fault identification accuracy for noisy signals under 10 dB noise interference reaching 92.5%, demonstrating strong noise immunity. Full article
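
A structural sketch of the multi-scale convolution, attention-style re-weighting, and cascaded LSTM described above; branch widths, kernel sizes, and the number of fault classes are assumptions rather than the paper's settings.

```python
# Multi-scale Conv1D branches + attention-style gate + LSTM + softmax fault classes.
from tensorflow.keras import layers, Model

WINDOW, N_CHANNELS, N_CLASSES = 256, 6, 8    # hypothetical window / measurements / fault classes

x_in = layers.Input(shape=(WINDOW, N_CHANNELS))
branches = [layers.Conv1D(32, k, padding="same", activation="relu")(x_in)
            for k in (3, 5, 7)]                      # multi-scale feature extraction
x = layers.Concatenate()(branches)                   # feature fusion
gate = layers.Dense(x.shape[-1], activation="sigmoid")(x)   # simple attention-style gate
x = layers.Multiply()([x, gate])                     # re-weight salient features
x = layers.LSTM(64)(x)                               # temporal evolution of the signals
out = layers.Dense(N_CLASSES, activation="softmax")(x)

model = Model(x_in, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```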

25 pages, 2551 KB  
Article
Optimal Low-Carbon Economic Dispatch Strategy for Active Distribution Networks with Participation of Multi-Flexible Loads
by Xu Yao, Kun Zhang, Chenghui Liu, Taipeng Zhu, Fangfang Zhou, Jiezhang Li and Chong Liu
Processes 2025, 13(9), 2972; https://doi.org/10.3390/pr13092972 - 18 Sep 2025
Abstract
Optimization dispatch with flexible load participation in new power systems significantly enhances renewable energy accommodation, though the potential of flexible loads remains underexploited. To improve renewable utilization efficiency, promote wind/PV consumption and reduce carbon emissions, this paper establishes a low-carbon economic optimization dispatch model for active distribution networks incorporating flexible loads and tiered carbon trading. First, a hybrid SSA (Sparrow Search Algorithm)–CNN-LSTM model is adopted for accurate renewable generation forecasting. Meanwhile, multi-type flexible loads are categorized into shiftable, transferable and reducible loads based on response characteristics, with tiered carbon trading mechanism introduced to achieve low-carbon operation through price incentives that guide load-side participation while avoiding privacy leakage from direct control. Considering the non-convex nonlinear characteristics of the dispatch model, an improved Beluga Whale Optimization (BWO) algorithm is developed. To address the diminished solution diversity and precision in conventional BWO evolution, Tent chaotic mapping is introduced to resolve initial parameter sensitivity. Finally, modified IEEE-33 bus system simulations demonstrate the method’s validity and feasibility. Full article
(This article belongs to the Special Issue Applications of Smart Microgrids in Renewable Energy Development)
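
The Tent chaotic map mentioned for initialization can be sketched in a few lines: each dimension follows its own chaotic trajectory, which is scaled into the variable bounds to seed the population. The improved BWO update rules and the dispatch model itself are not reproduced here.

```python
# Tent-map initialization to spread a metaheuristic's starting population (sketch only).
import numpy as np

def tent_map_population(n_individuals, n_dims, lower, upper, mu=0.7, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.random(n_dims)                        # one chaotic trajectory per dimension
    pop = np.empty((n_individuals, n_dims))
    for i in range(n_individuals):
        x = np.where(x < mu, x / mu, (1.0 - x) / (1.0 - mu))   # tent-map iteration
        pop[i] = lower + x * (upper - lower)      # scale chaos values into the bounds
    return pop

# Example: 30 candidate dispatch solutions over 5 decision variables in [0, 1]
population = tent_map_population(30, 5, lower=np.zeros(5), upper=np.ones(5))
print(population.shape)
```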

28 pages, 13270 KB  
Article
Deep Learning Applications for Crop Mapping Using Multi-Temporal Sentinel-2 Data and Red-Edge Vegetation Indices: Integrating Convolutional and Recurrent Neural Networks
by Rahat Tufail, Patrizia Tassinari and Daniele Torreggiani
Remote Sens. 2025, 17(18), 3207; https://doi.org/10.3390/rs17183207 - 17 Sep 2025
Abstract
Accurate crop classification using satellite imagery is critical for agricultural monitoring, yield estimation, and land-use planning. However, this task remains challenging due to the spectral similarity among crops. Although crops differ in physiological characteristics, including chlorophyll content, they often exhibit only subtle differences in their spectral reflectance, which make their precise discrimination challenging. To address this, this study uses the high temporal and spectral resolution of Sentinel-2 imagery, including its red-edge bands and derived vegetation indices, which are particularly sensitive to vegetation health and structural differences. This study presents a hybrid deep learning framework for crop classification, conducted through a case study in a complex agricultural region of Northern Italy. We investigated the combined use of spectral bands and NDVI & red-edge-based vegetation indices as inputs to hybrid deep learning models. Previous studies have applied 1D CNN, 2D CNN, LSTM, and GRU, often standalone, but their capacity to jointly process spectral and vegetative features through integrated CNN-RNN structures remains underexplored in mixed agricultural regions. To fill this gap, we developed and assessed four hybrid architectures: (1) 1D CNN-LSTM, (2) 1D CNN-GRU, (3) 2D CNN-LSTM, and (4) 2D CNN-GRU. These models were trained using optimized hyperparameters on combined spectral and vegetative input features. The 2D CNN-GRU model achieved the highest overall accuracy (99.12%) and F1-macro (99.14%), followed by 2D CNN-LSTM (98.51%), while 1D CNN-GRU and 1D CNN-LSTM performed slightly lower (93.46% and 92.54%), respectively. Full article
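
For context, the red-edge indices referenced above follow standard normalized-difference formulas over Sentinel-2 bands; the snippet below assumes pre-processed surface-reflectance arrays.

```python
# Standard NDVI/NDRE formulas from Sentinel-2 reflectance bands (arrays assumed pre-scaled).
def ndvi(nir_b8, red_b4, eps=1e-6):
    return (nir_b8 - red_b4) / (nir_b8 + red_b4 + eps)

def ndre(nir_b8, red_edge_b5, eps=1e-6):
    return (nir_b8 - red_edge_b5) / (nir_b8 + red_edge_b5 + eps)

# Stacking per-date index maps yields the (time, height, width, features) tensor that a
# hybrid 2D CNN-GRU model would consume: the CNN applied per date, the GRU across dates.
```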

19 pages, 1961 KB  
Article
Malicious URL Detection with Advanced Machine Learning and Optimization-Supported Deep Learning Models
by Fuat Türk and Mahmut Kılıçaslan
Appl. Sci. 2025, 15(18), 10090; https://doi.org/10.3390/app151810090 - 15 Sep 2025
Abstract
This study presents a comprehensive comparative analysis of machine learning, deep learning, and optimization-based hybrid methods for malicious URL detection on the Malicious Phish dataset. For feature selection and model hyperparameter tuning, the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Harris Hawk Optimizer (HHO) were employed. Both multiclass and binary classification tasks were addressed using classic machine learning algorithms such as LightGBM, XGBoost, and Random Forest, as well as deep learning models including LSTM, CNN, and hybrid CNN+LSTM architectures, with optimization support also integrated into these models. The experimental results reveal that the ELECTRA-based deep learning model achieved outstanding accuracy and F1-scores of up to 99% in both multiclass and binary scenarios. Although optimization-supported hybrid models also improved performance, the language-model-based ELECTRA architecture demonstrated a significant superiority over classical and optimized approaches. The findings indicate that optimization algorithms are effective in feature selection and enhancing model performance, yet next-generation language models clearly set a new benchmark in malicious URL detection. Full article
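
A minimal character-level sketch of the hybrid CNN+LSTM branch of this comparison, with an assumed vocabulary, maximum URL length, and layer sizes; the ELECTRA model and the GA/PSO/HHO optimization components are not shown.

```python
# Character-level hybrid CNN+LSTM URL classifier (illustrative encoding and layer sizes).
import numpy as np
from tensorflow.keras import layers, models

MAX_LEN, VOCAB = 200, 128            # truncate/pad URLs; raw ASCII codes as token ids

def encode(urls):
    ids = np.zeros((len(urls), MAX_LEN), dtype="int32")
    for i, u in enumerate(urls):
        codes = [min(ord(c), VOCAB - 1) for c in u[:MAX_LEN]]
        ids[i, :len(codes)] = codes
    return ids

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,), dtype="int32"),
    layers.Embedding(VOCAB, 32),
    layers.Conv1D(64, 5, padding="same", activation="relu"),   # local character n-grams
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                           # longer-range URL structure
    layers.Dense(1, activation="sigmoid"),                     # benign vs. malicious
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# x = encode(["http://example.com/login", "http://198.51.100.7/free-gift.exe"])
```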

29 pages, 4506 KB  
Article
Adaptive Deep Belief Networks and LightGBM-Based Hybrid Fault Diagnostics for SCADA-Managed PV Systems: A Real-World Case Study
by Karl Kull, Muhammad Amir Khan, Bilal Asad, Muhammad Usman Naseer, Ants Kallaste and Toomas Vaimann
Electronics 2025, 14(18), 3649; https://doi.org/10.3390/electronics14183649 - 15 Sep 2025
Abstract
Photovoltaic (PV) systems are increasingly integral to global energy solutions, but their long-term reliability is challenged by various operational faults. In this article, we propose an advanced hybrid diagnostic framework combining a Deep Belief Network (DBN) for feature pattern extraction and a Light Gradient Boosting Machine (LightGBM) for classification to detect and diagnose PV panel faults. The proposed model is trained and validated on the QASP PV Fault Detection Dataset, a real-time SCADA-based dataset collected from 255 W panels at the Quaid-e-Azam Solar 100 MW Power Plant (QASP), Pakistan’s largest solar facility. The dataset encompasses seven classes: Healthy, Open Circuit, Photovoltaic Ground (PVG), Partial Shading, Busbar, Soiling, and Hotspot Faults. The DBN captures complex non-linear relationships in SCADA parameters such as DC voltage, DC current, irradiance, inverter power, module temperature, and performance ratio, while LightGBM ensures high accuracy in classifying fault types. The proposed model is trained and evaluated on a real-world SCADA-based dataset comprising 139,295 samples, with a 70:30 split for training and testing, ensuring robust generalization across diverse PV fault conditions. Experimental results demonstrate the robustness and generalization capabilities of the proposed hybrid (DBN–LightGBM) model, outperforming conventional machine learning methods and achieving 98.21% classification accuracy, a 98.0% macro-F1 score, and significantly reduced training time compared to Transformer and CNN-LSTM baselines. This study contributes to a reliable and scalable AI-driven solution for real-time PV fault monitoring, offering practical implications for large-scale solar plant maintenance and operational efficiency. Full article
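
One way to approximate the DBN-plus-LightGBM pipeline with off-the-shelf components is stacked RBM feature extraction feeding a gradient-boosted classifier. The sketch below uses scikit-learn's BernoulliRBM and illustrative layer sizes, not the authors' network.

```python
# Stacked-RBM (DBN-style) feature extraction feeding a LightGBM classifier (approximation).
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
import lightgbm as lgb

pipeline = Pipeline([
    ("scale", MinMaxScaler()),                              # SCADA signals scaled to [0, 1]
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", lgb.LGBMClassifier(n_estimators=500, learning_rate=0.05)),   # 7 fault classes
])
# pipeline.fit(X_train, y_train)     # X: DC voltage/current, irradiance, temperature, ...
# print(pipeline.score(X_test, y_test))
```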

18 pages, 4010 KB  
Article
Traffic Flow Prediction via a Hybrid CPO-CNN-LSTM-Attention Architecture
by Ivan Topilin, Jixiao Jiang, Anastasia Feofilova and Nikita Beskopylny
Smart Cities 2025, 8(5), 148; https://doi.org/10.3390/smartcities8050148 - 15 Sep 2025
Abstract
Spatiotemporal modeling and prediction of road network traffic flow are essential components of intelligent transport systems (ITS), aimed at effectively enhancing road service levels. Sustainable and reliable traffic management in smart cities requires the use of modern algorithms based on a comprehensive analysis of a significant number of dynamically changing factors. This paper designs a Crested Porcupine Optimizer (CPO)-CNN-LSTM-Attention time series prediction model, which integrates machine learning and deep learning to improve the efficiency of traffic flow forecasting in the condition of urban roads. Based on historical traffic patterns observed on Paris’s roads, a traffic flow prediction model was formulated and subsequently verified for effectiveness. The CPO algorithm combined with multiple neural network models performed well in predicting traffic flow, surpassing other models with a root-mean-square error (RMSE) of 17.35–19.83, a mean absolute error (MAE) of 13.98–14.04, and a mean absolute percentage error (MAPE) of 5.97–6.62%. Therefore, the model proposed in this paper can predict traffic flow more accurately, providing a solution for enhancing urban traffic management in intelligent transportation systems, and thus offering a research direction for the future development of smart city construction. Full article
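
Before any CNN-LSTM-Attention forecaster can be trained, the traffic series has to be framed as supervised (history window, next value) pairs. A small sketch of that framing follows, with an illustrative window length rather than the paper's settings.

```python
# Sliding-window construction of (history, next-step) pairs from a univariate traffic series.
import numpy as np

def make_windows(series, window=12, horizon=1):
    series = np.asarray(series, dtype="float32")
    X, y = [], []
    for t in range(len(series) - window - horizon + 1):
        X.append(series[t:t + window])
        y.append(series[t + window + horizon - 1])
    return np.stack(X)[..., None], np.array(y)   # shapes: (samples, window, 1), (samples,)

counts = np.sin(np.linspace(0, 20, 500)) * 300 + 600     # stand-in for hourly flow counts
X, y = make_windows(counts)
print(X.shape, y.shape)                                   # (488, 12, 1) (488,)
```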
