Search Results (20)

Search Parameters:
Keywords = hybrid BiLSTM-XGBoost model

40 pages, 5487 KB  
Communication
Physics-Informed Temperature Prediction of Lithium-Ion Batteries Using Decomposition-Enhanced LSTM and BiLSTM Models
by Seyed Saeed Madani, Yasmin Shabeer, Michael Fowler, Satyam Panchal, Carlos Ziebert, Hicham Chaoui and François Allard
World Electr. Veh. J. 2026, 17(1), 2; https://doi.org/10.3390/wevj17010002 - 19 Dec 2025
Viewed by 640
Abstract
Accurately forecasting the operating temperature of lithium-ion batteries (LIBs) is essential for preventing thermal runaway, extending service life, and ensuring the safe operation of electric vehicles and stationary energy-storage systems. This work introduces a unified, physics-informed, and data-driven temperature-prediction framework that integrates mathematically governed preprocessing, electrothermal decomposition, and sequential deep learning architectures. The methodology systematically applies the governing relations to convert raw temperature measurements into trend, seasonal, and residual components, thereby isolating long-term thermal accumulation, reversible entropy-driven oscillations, and irreversible resistive heating. These physically interpretable signatures serve as structured inputs to machine learning and deep learning models trained on temporally segmented temperature sequences. Among all evaluated predictors, the Bidirectional Long Short-Term Memory (BiLSTM) network achieved the highest prediction fidelity, yielding an RMSE of 0.018 °C, a 35.7% improvement over the conventional Long Short-Term Memory (LSTM) (RMSE = 0.028 °C) due to its ability to simultaneously encode forward and backward temporal dependencies inherent in cyclic electrochemical operation. While CatBoost exhibited the strongest performance among classical regressors (RMSE = 0.022 °C), outperforming Random Forest, Gradient Boosting, Support Vector Regression, XGBoost, and LightGBM, it remained inferior to BiLSTM because it lacks the capacity to represent bidirectional electrothermal dynamics. This performance hierarchy confirms that LIB thermal evolution is not dictated solely by historical load sequences; it also depends on forthcoming cycling patterns and entropic interactions, which unidirectional and memoryless models cannot capture. The resulting hybrid physics-data-driven framework provides a reliable surrogate for real-time LIB thermal estimation and can be directly embedded within BMS to enable proactive intervention strategies such as predictive cooling activation, current derating, and early detection of hazardous thermal conditions. By coupling physics-based decomposition with deep sequential learning, this study establishes a validated foundation for next-generation LIB thermal-management platforms and identifies a clear trajectory for future work extending the methodology to module- and pack-level systems suitable for industrial deployment. Full article
(This article belongs to the Section Vehicle Management)
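
The decomposition-then-sequence-learning idea above can be illustrated compactly. The sketch below is a minimal, hedged example rather than the authors' pipeline: it assumes a synthetic cell-temperature series, uses statsmodels' seasonal_decompose for the trend/seasonal/residual split, and trains a small Keras BiLSTM on sliding windows of the three components; the window length, cycle period, and layer sizes are illustrative guesses.

```python
import numpy as np
import pandas as pd
import tensorflow as tf
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(0)
t = np.arange(2000)
# Synthetic surrogate: slow heat accumulation + cyclic (entropic) swing + noise.
temp = 25 + 0.002 * t + 1.5 * np.sin(2 * np.pi * t / 100) + rng.normal(0, 0.1, t.size)

dec = seasonal_decompose(pd.Series(temp), model="additive", period=100)
feats = np.column_stack([dec.trend, dec.seasonal, dec.resid])
feats = feats[~np.isnan(feats).any(axis=1)]            # drop edge NaNs from the filter

def make_windows(x, lookback=32):
    """Sliding windows of the three components; target is the next-step temperature."""
    X, y = [], []
    for i in range(len(x) - lookback):
        X.append(x[i:i + lookback])
        y.append(x[i + lookback].sum())                 # trend + seasonal + residual
    return np.array(X), np.array(y)

X, y = make_windows(feats)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=X.shape[1:]),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),  # forward + backward context
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=64, verbose=0)
print("train RMSE (°C):", float(np.sqrt(model.evaluate(X, y, verbose=0))))
```

Swapping the Bidirectional wrapper for a plain LSTM layer reproduces the unidirectional baseline the abstract compares against.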

41 pages, 6103 KB  
Article
H-RT-IDPS: A Hierarchical Real-Time Intrusion Detection and Prevention System for the Smart Internet of Vehicles via TinyML-Distilled CNN and Hybrid BiLSTM-XGBoost Models
by Ikram Hamdaoui, Chaymae Rami, Zakaria El Allali and Khalid El Makkaoui
Technologies 2025, 13(12), 572; https://doi.org/10.3390/technologies13120572 - 5 Dec 2025
Viewed by 671
Abstract
The integration of connected vehicles into smart city infrastructure introduces critical cybersecurity challenges for the Internet of Vehicles (IoV), where resource-constrained vehicles and powerful roadside units (RSUs) must collaborate for secure communication. We propose H-RT-IDPS, a hierarchical real-time intrusion detection and prevention system targeting two high-priority IoV security pillars: availability (traffic overload) and integrity/authenticity (spoofing), with spoofing evaluated across multiple subclasses (GAS, RPM, SPEED, and steering wheel). In the offline phase, deep learning and hybrid models were benchmarked on the vehicular CAN bus dataset CICIoV2024, with the BiLSTM-XGBoost hybrid chosen for its balance between accuracy and inference speed. Real-time deployment uses a TinyML-distilled CNN on vehicles for ultra-lightweight, low-latency detection, while RSU-level BiLSTM-XGBoost performs a deeper temporal analysis. A Kafka–Spark Streaming pipeline supports localized classification, prevention, and dashboard-based monitoring. In baseline, stealth, and coordinated modes, the evaluation achieved accuracy, precision, recall, and F1-scores all above 97%. The mean end-to-end inference latency was 148.67 ms, and the resource usage was stable. The framework remains robust in both high-traffic and low-frequency attack scenarios, enhancing operator situational awareness through real-time visualizations. These results demonstrate a scalable, explainable, and operator-focused IDPS well suited for securing SC-IoV deployments against evolving threats. Full article
(This article belongs to the Special Issue Research on Security and Privacy of Data and Networks)
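
A hedged sketch of the BiLSTM-XGBoost pairing described above, on synthetic CAN-style windows rather than CICIoV2024: a small Keras BiLSTM is trained as a detector, its penultimate embedding is reused as features, and an XGBoost classifier makes the final call. The window length, feature count, and layer sizes are assumptions, and the TinyML distillation and Kafka-Spark pipeline are out of scope here.

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(4000, 20, 9)).astype("float32")   # (windows, timesteps, CAN features)
y = rng.integers(0, 2, size=4000)                      # 0 = benign, 1 = spoofing/DoS
X[y == 1] += 0.5                                       # separable toy signal

inputs = tf.keras.Input(shape=(20, 9))
h = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(16))(inputs)
out = tf.keras.layers.Dense(1, activation="sigmoid")(h)
bilstm = tf.keras.Model(inputs, out)
bilstm.compile(optimizer="adam", loss="binary_crossentropy")
bilstm.fit(X, y, epochs=2, batch_size=128, verbose=0)  # for brevity, trained on all windows

# Reuse the trained BiLSTM body as an embedding model for the boosted-tree stage.
encoder = tf.keras.Model(inputs, h)
Z = encoder.predict(X, verbose=0)
Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(Z_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(Z_te)))
```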

36 pages, 1860 KB  
Article
Carbon Trading Price Forecasting Based on Multidimensional News Text and Decomposition–Ensemble Model: The Case Study of China’s Pilot Regions
by Xu Wang, Yingjie Liu, Zhenao Guo, Tengfei Yang, Xu Gong and Zhichong Lyu
Forecasting 2025, 7(4), 72; https://doi.org/10.3390/forecast7040072 - 28 Nov 2025
Viewed by 740
Abstract
Accurately predicting carbon trading price is challenging due to pronounced nonlinearity, non-stationarity, and sensitivity to diverse factors, including macroeconomic conditions, market sentiment, and climate policy. This study proposes a novel hybrid forecasting framework that integrates multidimensional news text analysis, ICEEMDAN (Improved Complete Ensemble Empirical Mode Decomposition with Adaptive Noise) decomposition, and machine learning to predict carbon prices in China’s pilot trading regions. We first extract a market sentiment index from news texts in the WiseSearch News Database using a customized Chinese carbon-market dictionary. In addition, a price trend index and topic intensity index are derived using Latent Dirichlet Allocation (LDA) and Convolutional Neural Networks (CNN), respectively. All feature sequences are subsequently decomposed and reconstructed using a sample-entropy-based ICEEMDAN approach. The resulting multi-frequency components are then used as inputs for a range of machine-learning models to evaluate predictive performance. The empirical results demonstrate that the incorporation of multidimensional text information on China’s carbon market, combined with financial features, yields a substantial gain in prediction accuracy. Our integrated decomposition-ensemble framework achieves optimal performance by employing dedicated models (BiGRU, XGBoost, and BiLSTM for the high-frequency, low-frequency, and trend components, respectively). This approach provides policymakers, regulators, and investors with a more reliable tool for forecasting carbon prices and supports more informed decision-making, offering a promising pathway for effective carbon-price prediction. Full article
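
As a rough illustration of the decomposition-ensemble step, the sketch below uses CEEMDAN from the PyEMD package as a stand-in for ICEEMDAN, a zero-crossing count as a crude proxy for the sample-entropy grouping, and one XGBoost regressor per reconstructed component in place of the paper's BiGRU/XGBoost/BiLSTM assignment; the price series is synthetic, not pilot-market data.

```python
import numpy as np
from PyEMD import CEEMDAN
from xgboost import XGBRegressor

rng = np.random.default_rng(2)
t = np.arange(600)
price = 50 + 0.02 * t + 3 * np.sin(2 * np.pi * t / 60) + rng.normal(0, 0.8, t.size)

imfs = CEEMDAN(trials=20)(price)                       # fewer noise realizations, faster sketch
residue = price - imfs.sum(axis=0)                     # whatever the IMFs leave behind
zc = [(np.diff(np.sign(m)) != 0).sum() for m in imfs]  # zero crossings as a frequency proxy
groups = {
    "high": imfs[[i for i, c in enumerate(zc) if c > 80]].sum(axis=0),
    "low": imfs[[i for i, c in enumerate(zc) if c <= 80]].sum(axis=0),
    "trend": residue,
}

def lag_matrix(x, lags=10):
    """Lagged values as features, next value as target."""
    X = np.column_stack([x[i:len(x) - lags + i] for i in range(lags)])
    return X, x[lags:]

forecast = np.zeros(100)                               # last 100 points held out
for name, comp in groups.items():
    X, y = lag_matrix(comp)
    model = XGBRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
    model.fit(X[:-100], y[:-100])
    forecast += model.predict(X[-100:])                # recombine component forecasts

rmse = np.sqrt(np.mean((forecast - price[-100:]) ** 2))
print("held-out RMSE:", round(float(rmse), 3))
```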

21 pages, 2939 KB  
Article
Integrating Structural Causal Models with Enhanced LSTM for Predicting Single-Tree Carbon Sequestration
by Xuemei Guan and Kai Ma
Forests 2025, 16(11), 1726; https://doi.org/10.3390/f16111726 - 14 Nov 2025
Viewed by 516
Abstract
Accurate estimation of carbon sequestration at the single-tree scale is essential for understanding forest carbon dynamics and supporting precision forestry under global carbon-neutral goals. Traditional allometric models often neglect environmental variability, while data-driven machine learning approaches suffer from limited interpretability. To bridge this gap, we developed a hybrid prediction framework that integrates a Structural Causal Model (SCM) with an Enhanced Long Short-Term Memory (LSTM) network. Using 47 years of observation data (1975–2022) of Mongolian oak (Quercus mongolica Fisch. ex Ledeb.) from the Laoyeling Ecological Station, the SCM was applied to infer causal relationships among growth and environmental factors, while the Enhanced-LSTM combined multiscale convolution and self-attention modules to capture nonlinear temporal dependencies. Results showed that the proposed SCM-Enhanced-LSTM model achieved the highest predictive performance (R2 = 0.944, RMSE = 0.079 kg, MAE = 0.064 kg), outperforming Bi-LSTM and XGBoost models by over 20% in accuracy and maintaining robustness under noise perturbations. Causal analysis identified soil moisture and stem diameter as the dominant drivers of carbon increment. This study provides a transparent, interpretable, and high-precision framework for single-tree carbon sequestration prediction, offering methodological support for fine-scale forest carbon accounting and sustainable management strategies. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)

21 pages, 2639 KB  
Article
A Hybrid Model of Multi-Head Attention Enhanced BiLSTM, ARIMA, and XGBoost for Stock Price Forecasting Based on Wavelet Denoising
by Qingliang Zhao, Hongding Li, Xiao Liu and Yiduo Wang
Mathematics 2025, 13(16), 2622; https://doi.org/10.3390/math13162622 - 15 Aug 2025
Cited by 1 | Viewed by 1516
Abstract
The stock market plays a crucial role in the financial system, with its price movements reflecting macroeconomic trends. Due to the influence of multifaceted factors such as policy shifts and corporate performance, stock prices exhibit nonlinearity, high noise, and non-stationarity, making them difficult to model accurately using a single approach. To enhance forecasting accuracy, this study proposes a hybrid forecasting framework that integrates wavelet denoising, multi-head attention-based BiLSTM, ARIMA, and XGBoost. Wavelet transform is first employed to enhance data quality. The multi-head attention BiLSTM captures nonlinear temporal dependencies, ARIMA models linear trends in residuals, and XGBoost improves the recognition of complex patterns. The final prediction is obtained by combining the outputs of all models through an inverse-error weighted ensemble strategy. Using the CSI 300 Index as an empirical case, we construct a multidimensional feature set including both market and technical indicators. Experimental results show that the proposed model clearly outperforms individual models in terms of RMSE, MAE, MAPE, and R2. Ablation studies confirm the importance of each module in performance enhancement. The model also performs well on individual stock data (e.g., Fuyao Glass), demonstrating promising generalization ability. This research provides an effective solution for improving stock price forecasting accuracy and offers valuable insights for investment decision-making and market regulation. Full article
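
Two steps of the framework above lend themselves to a short sketch: wavelet denoising with PyWavelets and the inverse-error weighted combination. The three "model" forecasts below are random stand-ins for the attention-BiLSTM, ARIMA, and XGBoost outputs, and the universal soft-threshold rule is a generic choice rather than the authors' setting.

```python
import numpy as np
import pywt

rng = np.random.default_rng(3)
t = np.arange(512)
price = np.cumsum(rng.normal(0, 1, t.size)) + 0.05 * t
noisy = price + rng.normal(0, 0.8, t.size)

# Soft-threshold the detail coefficients, keep the approximation, and reconstruct.
coeffs = pywt.wavedec(noisy, "db4", level=3)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # robust noise estimate
thr = sigma * np.sqrt(2 * np.log(len(noisy)))
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: len(noisy)]

# Inverse-error weighting: each model's weight is proportional to 1 / validation error.
val_truth = denoised[-100:]
preds = {
    "bilstm_like": val_truth + rng.normal(0, 0.5, 100),
    "arima_like": val_truth + rng.normal(0, 0.9, 100),
    "xgb_like": val_truth + rng.normal(0, 0.7, 100),
}
errors = {k: np.sqrt(np.mean((p - val_truth) ** 2)) for k, p in preds.items()}
weights = {k: (1 / e) / sum(1 / v for v in errors.values()) for k, e in errors.items()}
ensemble = sum(w * preds[k] for k, w in weights.items())
print({k: round(w, 3) for k, w in weights.items()})
print("ensemble RMSE:", round(float(np.sqrt(np.mean((ensemble - val_truth) ** 2))), 3))
```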

24 pages, 5555 KB  
Article
A Signal Processing-Guided Deep Learning Framework for Wind Shear Prediction on Airport Runways
by Afaq Khattak, Pak-wai Chan, Feng Chen, Hashem Alyami and Masoud Alajmi
Atmosphere 2025, 16(7), 802; https://doi.org/10.3390/atmos16070802 - 1 Jul 2025
Viewed by 1584
Abstract
Wind shear at the Hong Kong International Airport (HKIA) poses a significant safety risk due to terrain-induced airflow disruptions near the runways. Accurate assessment is essential for safeguarding aircraft during take-off and landing, as abrupt changes in wind speed or direction can compromise flight stability. This study introduces a hybrid framework for short-term wind shear prediction based on data collected from Doppler LiDAR systems positioned near the central and south runways of the HKIA. These systems provide high-resolution measurements of wind shear magnitude along critical flight paths. To predict wind shear more effectively, the proposed framework integrates a signal processing technique with a deep learning strategy. It begins with optimized variational mode decomposition (OVMD), which decomposes the wind shear time series into intrinsic mode functions (IMFs), each capturing distinct temporal characteristics. These IMFs are then modeled using bidirectional gated recurrent units (BiGRU), with hyperparameters optimized via the Tree-structured Parzen Estimator (TPE). To further enhance prediction accuracy, residual errors are corrected using Extreme Gradient Boosting (XGBoost), which captures discrepancies between the reconstructed signal and actual observations. The resulting OVMD–BiGRU–XGBoost framework exhibits strong predictive performance on testing data, achieving R2 values of 0.729 and 0.926, RMSE values of 0.931 and 0.709, and MAE values of 0.624 and 0.521 for the central and south runways, respectively. Compared with GRUs, LSTM, BiLSTM, and ResNet-based baselines, the proposed framework achieves higher accuracy and a more effective representation of multi-scale temporal dynamics. It contributes to improving short-term wind shear prediction and supports operational planning and safety management in airport environments. Full article
(This article belongs to the Special Issue Aviation Meteorology: Developments and Latest Achievements)
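
The residual-correction idea, decoupled from the decomposition and tuning stages, can be sketched as follows: a stand-in base forecaster (ridge regression here, not the paper's TPE-tuned BiGRU on OVMD modes) predicts wind-shear magnitude from lagged values, and XGBoost then learns the base model's residuals from the same lags; the series is synthetic.

```python
import numpy as np
from sklearn.linear_model import Ridge
from xgboost import XGBRegressor

rng = np.random.default_rng(4)
t = np.arange(1500)
shear = 3 + np.sin(2 * np.pi * t / 48) + 0.3 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 0.2, t.size)

lags = 12
X = np.column_stack([shear[i:len(shear) - lags + i] for i in range(lags)])
y = shear[lags:]
split = len(y) - 300
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

base = Ridge().fit(X_tr, y_tr)
resid = y_tr - base.predict(X_tr)                       # what the base model misses
corrector = XGBRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
corrector.fit(X_tr, resid)

pred_base = base.predict(X_te)
pred_corr = pred_base + corrector.predict(X_te)         # base forecast + learned residual
for name, p in (("base", pred_base), ("base + XGBoost residual", pred_corr)):
    print(name, "RMSE:", round(float(np.sqrt(np.mean((p - y_te) ** 2))), 4))
```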

28 pages, 4113 KB  
Article
Building Electricity Prediction Using BILSTM-RF-XGBOOST Hybrid Model with Improved Hyperparameters Based on Bayesian Algorithm
by Yuqing Liu, Binbin Li and Hejun Liang
Electronics 2025, 14(11), 2287; https://doi.org/10.3390/electronics14112287 - 4 Jun 2025
Cited by 4 | Viewed by 2029
Abstract
Accurate building energy consumption prediction is essential for efficient energy management and energy optimization. This study utilizes bidirectional long short-term memory (BiLSTM) to automatically extract deep time series features. The nonlinear fitting and high-precision prediction capabilities of Random Forest (RF) and XGBoost models are then utilized to develop a BiLSTM-RF-XGBoost stacked hybrid model. To enhance model generalization and reduce overfitting, a Bayesian algorithm with an early stopping mechanism is utilized to fine-tune hyperparameters, and strict K-fold time series cross-validation (TSCV) is implemented for performance evaluation. The hybrid model achieves a high TSCV average R2 value of 0.989 during cross-validation. When evaluated on an independent test set, it yields a mean square error (MSE) of 0.00003, a root mean square error (RMSE) of 0.00548, a mean absolute error (MAE) of 0.00130, and a mean absolute percentage error (MAPE) of 0.26%. These values are significantly lower than those of comparison models, indicating a significant improvement in predictive performance. The study offers insights into the internal decision-making of the model through SHAP (SHapley Additive exPlanations) feature significance analysis, revealing the key roles of temperature and power lag features, and validating that the stacked model effectively utilizes the outputs of base models as meta-features. This study makes contributions by proposing a novel hybrid model trained with Bayesian optimization, analyzing the influence of various feature factors, and providing innovative technological solutions for building energy consumption prediction. It also provides theoretical value and guidance for low-carbon building energy management and application. Full article
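
A reduced sketch of the stacking-plus-time-series-cross-validation idea, under stated assumptions: the BiLSTM base learner, Bayesian tuning, and SHAP analysis are omitted, Random Forest and XGBoost stand in as base models, their out-of-fold predictions become meta-features for a ridge combiner, and the hourly load data are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import TimeSeriesSplit
from xgboost import XGBRegressor

rng = np.random.default_rng(5)
n = 2000
hour = np.arange(n) % 24
temp = 20 + 8 * np.sin(2 * np.pi * np.arange(n) / (24 * 30)) + rng.normal(0, 1, n)
load = 0.4 * temp + 2 * np.sin(2 * np.pi * hour / 24) + rng.normal(0, 0.3, n)
X = np.column_stack([temp, hour, np.roll(load, 1), np.roll(load, 24)])[24:]  # lag features
y = load[24:]

base_models = [
    RandomForestRegressor(n_estimators=200, random_state=0),
    XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05),
]
tscv = TimeSeriesSplit(n_splits=5)

# Out-of-fold base predictions become meta-features for the final (ridge) combiner.
meta = np.full((len(y), len(base_models)), np.nan)
for train_idx, test_idx in tscv.split(X):
    for j, model in enumerate(base_models):
        model.fit(X[train_idx], y[train_idx])
        meta[test_idx, j] = model.predict(X[test_idx])

mask = ~np.isnan(meta).any(axis=1)     # the earliest block never appears in a test fold
combiner = Ridge().fit(meta[mask], y[mask])
print("stacked R^2 on out-of-fold meta-features:", round(combiner.score(meta[mask], y[mask]), 4))
```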

29 pages, 4281 KB  
Article
A BiLSTM-Based Hybrid Ensemble Approach for Forecasting Suspended Sediment Concentrations: Application to the Upper Yellow River
by Jinsheng Fan, Renzhi Li, Mingmeng Zhao and Xishan Pan
Land 2025, 14(6), 1199; https://doi.org/10.3390/land14061199 - 3 Jun 2025
Cited by 1 | Viewed by 1553
Abstract
Accurately predicting suspended sediment concentrations (SSC) is vital for effective reservoir planning, water resource optimization, and ecological restoration. This study proposes a hybrid ensemble model, VMD-MGGP-NGO-BiLSTM-NGO, which integrates Variational Mode Decomposition (VMD) for signal decomposition, Multi-Gene Genetic Programming (MGGP) for feature filtering, and a double-optimized NGO-BiLSTM-NGO (Northern Goshawk Optimization) structure for enhanced predictive learning. The model was trained and validated using daily discharge and SSC data from the Tangnaihai Hydrological Station on the upper Yellow River. The main findings are as follows: (1) The proposed model achieved an NSC improvement of 19.93% over Extreme Gradient Boosting (XGBoost) and 15.26% over the Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) network. (2) Compared to GWO- and PSO-based BiLSTM ensembles, the NGO-optimized VMD-MGGP-NGO-BiLSTM-NGO model achieved superior accuracy and robustness, with an average testing-phase NSC of 0.964, outperforming the Grey Wolf Optimization (GWO) and Particle Swarm Optimization (PSO) counterparts. (3) On testing data, the model attained an NSC of 0.9708, indicating strong generalization across time. Overall, the VMD-MGGP-NGO-BiLSTM-NGO model demonstrates outstanding predictive capacity and structural synergy, serving as a reliable reference for future research on SSC forecasting and environmental modeling. Full article
(This article belongs to the Special Issue Artificial Intelligence for Soil Erosion Prediction and Modeling)

20 pages, 3197 KB  
Article
Research on Intrusion Detection Method Based on Transformer and CNN-BiLSTM in Internet of Things
by Chunhui Zhang, Jian Li, Naile Wang and Dejun Zhang
Sensors 2025, 25(9), 2725; https://doi.org/10.3390/s25092725 - 25 Apr 2025
Cited by 14 | Viewed by 6226
Abstract
With the widespread deployment of Internet of Things (IoT) devices, their complex network environments and open communication modes have made them prime targets for cyberattacks. Traditional Intrusion Detection Systems (IDS) face challenges in handling complex attack types, data imbalance, and feature extraction difficulties in IoT environments. Accurately detecting abnormal traffic in IoT has become increasingly critical. To address the limitation of single models in comprehensively capturing the diverse features of IoT traffic, this paper proposes a hybrid model based on CNN-BiLSTM-Transformer, which better handles complex features and long-sequence dependencies in intrusion detection. To address the issue of data class imbalance, the Borderline-SMOTE method is introduced to enhance the model’s ability to recognize minority class attack samples. To tackle the problem of redundant features in the original dataset, a comprehensive feature selection strategy combining XGBoost, Chi-square (Chi2), and Mutual Information is adopted to ensure the model focuses on the most discriminative features. Experimental validation demonstrates that the proposed method achieves 99.80% accuracy on the CIC-IDS 2017 dataset and 97.95% accuracy on the BoT-IoT dataset, significantly outperforming traditional intrusion detection methods, proving its efficiency and accuracy in detecting abnormal traffic in IoT environments. Full article
(This article belongs to the Section Internet of Things)
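
Two of the preprocessing steps named above map directly to common library calls. The sketch below assumes synthetic flow features rather than CIC-IDS 2017 or BoT-IoT, and the CNN-BiLSTM-Transformer itself is not reproduced: Borderline-SMOTE rebalances the minority attack class, and XGBoost importance, chi-squared, and mutual information scores are averaged into a single feature ranking.

```python
import numpy as np
from imblearn.over_sampling import BorderlineSMOTE
from sklearn.feature_selection import chi2, mutual_info_classif
from sklearn.preprocessing import MinMaxScaler
from xgboost import XGBClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(3000, 20))
y = (X[:, 3] + X[:, 7] + rng.normal(0, 1, 3000) > 2.2).astype(int)  # rare attack class
print("class counts before:", np.bincount(y))

X_res, y_res = BorderlineSMOTE(random_state=0).fit_resample(X, y)
print("class counts after: ", np.bincount(y_res))

# Rank features three ways, scale each criterion to [0, 1], and average the scores.
X_pos = MinMaxScaler().fit_transform(X_res)             # chi2 needs non-negative input
scores = np.vstack([
    XGBClassifier(n_estimators=100, max_depth=3).fit(X_res, y_res).feature_importances_,
    chi2(X_pos, y_res)[0],
    mutual_info_classif(X_res, y_res, random_state=0),
])
combined = (scores / scores.max(axis=1, keepdims=True)).mean(axis=0)
print("top features:", np.argsort(combined)[::-1][:5])
```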

16 pages, 511 KB  
Article
Hybrid Machine Learning and Deep Learning Approaches for Insult Detection in Roman Urdu Text
by Nisar Hussain, Amna Qasim, Gull Mehak, Olga Kolesnikova, Alexander Gelbukh and Grigori Sidorov
AI 2025, 6(2), 33; https://doi.org/10.3390/ai6020033 - 8 Feb 2025
Cited by 9 | Viewed by 2689
Abstract
This study introduces a new model for detecting insults in Roman Urdu, filling an important gap in natural language processing (NLP) for low-resource languages. The transliterated nature of Roman Urdu also poses specific challenges from a computational linguistics perspective, including non-standardized grammar, variation in spellings for the same word, and high levels of code-mixing with English, which together make automated insult detection for Roman Urdu a highly complex problem. To address these problems, we created a large-scale dataset with 46,045 labeled comments from social media websites such as Twitter, Facebook, and YouTube. This is the first Roman Urdu dataset created and annotated for insult detection, covering insulting and non-insulting content. Advanced preprocessing methods such as text cleaning, text normalization, and tokenization are used in the study, as well as feature extraction using TF–IDF through unigram (Uni), bigram (Bi), trigram (Tri), and their unions: Uni+Bi+Trigram. We compared ten machine learning algorithms (including logistic regression, support vector machines, random forest, gradient boosting, AdaBoost, and XGBoost) and three deep learning topologies (CNN, LSTM, and Bi-LSTM). Ensemble models proved to give the highest F1-scores, reaching 97.79%, 97.78%, and 95.25% for the AdaBoost, decision tree, TF–IDF, and Uni+Bi+Trigram configurations. Deep learning models performed on par, with CNN achieving an F1-score of 97.01%. Overall, the results highlight the utility of n-gram features and the combination of robust classifiers in detecting insults. This study makes strides in improving NLP for Roman Urdu, and further research into pre-trained transformers and hybrid approaches could overcome the limitations of existing systems and platforms. This study has practical implications, mainly for the construction of automated moderation tools that foster safer online spaces, especially for South Asian social media websites. Full article
(This article belongs to the Topic Applications of NLP, AI, and ML in Software Engineering)
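
The TF-IDF n-gram pipeline is easy to sketch; the handful of comments below are invented placeholders (not drawn from the 46,045-comment dataset), and only two of the compared classifiers are shown.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [                      # hypothetical, illustrative comments only
    "tum bohat acha kaam karte ho",
    "ye banda bilkul bekar hai",
    "shukriya dost bohat madad ki",
    "kitna ghatiya insan hai ye",
]
labels = [0, 1, 0, 1]          # 0 = non-insulting, 1 = insulting

for clf in (AdaBoostClassifier(n_estimators=100), LogisticRegression(max_iter=1000)):
    # Uni+Bi+Trigram features: ngram_range=(1, 3)
    pipe = make_pipeline(TfidfVectorizer(ngram_range=(1, 3), lowercase=True), clf)
    pipe.fit(texts, labels)
    print(type(clf).__name__, pipe.predict(["bekar insan"]))
```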

21 pages, 6013 KB  
Article
Research on Physically Constrained VMD-CNN-BiLSTM Wind Power Prediction
by Yongkang Liu, Yi Gu, Yuwei Long, Qinyu Zhang, Yonggang Zhang and Xu Zhou
Sustainability 2025, 17(3), 1058; https://doi.org/10.3390/su17031058 - 27 Jan 2025
Cited by 6 | Viewed by 1788
Abstract
Accurate forecasting of wind power is crucial for addressing energy demands, promoting sustainable energy practices, and mitigating environmental challenges. In order to improve the prediction accuracy of wind power, a VMD-CNN-BiLSTM hybrid model with physical constraints is proposed in this paper. Initially, the isolation forest algorithm identifies samples that deviate from actual power outputs, and the LightGBM algorithm is used to reconstruct the abnormal samples. Then, leveraging the variational mode decomposition (VMD) approach, the reconstructed data are decomposed into 13 sub-signals. A CNN-BiLSTM model is trained on each sub-signal, yielding individual prediction results. Finally, the XGBoost algorithm, with a physical penalty term added to its loss function, takes the predicted values of the sub-signals as inputs and produces the final wind power forecast. The hybrid model is applied to the 12 h forecast of a wind farm in Zhangjiakou City, Hebei Province. Compared with other hybrid forecasting models, this model has the highest score on five performance indicators and can serve as a reference for wind farm generation planning, safe grid connection, real-time power dispatching, and practical application of sustainable energy. Full article
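
One way to read "a physical penalty term added to its loss function" is a custom XGBoost objective. The sketch below is a guess at that idea rather than the authors' formulation: it adds squared penalties whenever predicted power goes negative or exceeds an assumed installed capacity, on top of squared error. The VMD/CNN-BiLSTM stages and the isolation-forest/LightGBM cleaning are assumed to have produced the inputs, and the data and penalty weight are synthetic.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(6)
CAP = 100.0                                    # assumed installed capacity (MW)
wind = rng.uniform(0, 14, 3000)
power = np.clip(0.05 * wind ** 3, 0, CAP) + rng.normal(0, 2, wind.size)
X = np.column_stack([wind, np.roll(wind, 1), np.roll(wind, 2)])[2:]
y = np.clip(power, 0, CAP)[2:]
dtrain = xgb.DMatrix(X, label=y)

LAM = 5.0                                      # weight of the physics penalty

def physics_mse(preds, dtrain):
    """Squared error plus penalties for predictions outside the physical range [0, CAP]."""
    labels = dtrain.get_label()
    grad = 2 * (preds - labels)
    hess = np.full_like(preds, 2.0)
    over = np.maximum(preds - CAP, 0.0)        # above capacity
    under = np.maximum(-preds, 0.0)            # negative power
    grad += 2 * LAM * over - 2 * LAM * under
    hess += 2 * LAM * ((over > 0) | (under > 0)).astype(float)
    return grad, hess

booster = xgb.train({"max_depth": 4, "eta": 0.1}, dtrain, num_boost_round=200, obj=physics_mse)
pred = booster.predict(dtrain)
print("violations above capacity:", int((pred > CAP).sum()), "| negative:", int((pred < 0).sum()))
```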

24 pages, 2674 KB  
Article
Achieving Excellence in Cyber Fraud Detection: A Hybrid ML+DL Ensemble Approach for Credit Cards
by Eyad Btoush, Xujuan Zhou, Raj Gururajan, Ka Ching Chan and Omar Alsodi
Appl. Sci. 2025, 15(3), 1081; https://doi.org/10.3390/app15031081 - 22 Jan 2025
Cited by 18 | Viewed by 6427
Abstract
The rapid advancement of technology has increased the complexity of cyber fraud, presenting a growing challenge for the banking sector to efficiently detect fraudulent credit card transactions. Conventional detection approaches face challenges in adapting to the continuously evolving tactics of fraudsters. This study addresses these limitations by proposing an innovative hybrid model that integrates Machine Learning (ML) and Deep Learning (DL) techniques through a stacking ensemble and resampling strategies. The hybrid model leverages ML techniques including Decision Tree (DT), Random Forest (RF), Support Vector Machine (SVM), eXtreme Gradient Boosting (XGBoost), Categorical Boosting (CatBoost), and Logistic Regression (LR) alongside DL techniques such as Convolutional Neural Network (CNN) and Bidirectional Long Short-Term Memory Network (BiLSTM) with attention mechanisms. By utilising the stacking ensemble method, the model consolidates predictions from multiple base models, resulting in improved predictive accuracy compared to individual models. The methodology incorporates robust data pre-processing techniques. Experimental evaluations demonstrate the superior performance of the hybrid ML+DL model, particularly in handling class imbalances, achieving an F1 score of 94.63%. This result underscores the effectiveness of the proposed model in delivering reliable cyber fraud detection, highlighting its potential to enhance financial transaction security. Full article

22 pages, 18310 KB  
Article
Enhanced Short-Term Load Forecasting: Error-Weighted and Hybrid Model Approach
by Huiqun Yu, Haoyi Sun, Yueze Li, Chunmei Xu and Chenkun Du
Energies 2024, 17(21), 5304; https://doi.org/10.3390/en17215304 - 25 Oct 2024
Cited by 4 | Viewed by 2001
Abstract
To tackle the challenges of high variability and low accuracy in short-term electricity load forecasting, this study introduces an enhanced prediction model that addresses overfitting issues by integrating an error-optimal weighting approach with an improved ensemble forecasting framework. The model employs a hybrid algorithm combining grey relational analysis and radial kernel principal component analysis to preprocess the multi-dimensional input data. It then leverages an ensemble of an optimized deep bidirectional gated recurrent unit (BiGRU), an enhanced long short-term memory (LSTM) network, and an advanced temporal convolutional neural network (TCN) to generate predictions. These predictions are refined using an error-optimal weighting scheme to yield the final forecasts. Furthermore, a Bayesian-optimized Bagging and Extreme Gradient Boosting (XGBoost) ensemble model is applied to minimize prediction errors. Comparative analysis with existing forecasting models demonstrates superior performance, with a mean absolute percentage error (MAPE) of 1.05% and a coefficient of determination (R2) of 0.9878. These results not only validate the efficacy of our proposed strategy, but also highlight its potential to enhance the precision of short-term load forecasting, thereby contributing to the stability of power systems and supporting societal production needs. Full article
(This article belongs to the Section F1: Electrical Power System)

18 pages, 3533 KB  
Article
Rice Yield Forecasting Using Hybrid Quantum Deep Learning Model
by De Rosal Ignatius Moses Setiadi, Ajib Susanto, Kristiawan Nugroho, Ahmad Rofiqul Muslikh, Arnold Adimabua Ojugo and Hong-Seng Gan
Computers 2024, 13(8), 191; https://doi.org/10.3390/computers13080191 - 7 Aug 2024
Cited by 17 | Viewed by 5607
Abstract
Amid recent advancements in agricultural technology, the integration of quantum computing and deep learning has shown promising potential to revolutionize rice yield forecasting methods. This research introduces a novel Hybrid Quantum Deep Learning model that leverages the intricate processing capabilities of quantum computing combined with the robust pattern recognition prowess of learning algorithms such as Extreme Gradient Boosting (XGBoost) and Bidirectional Long Short-Term Memory (Bi-LSTM). Bi-LSTM networks are used for temporal feature extraction and quantum circuits for quantum feature processing. Quantum circuits leverage quantum superposition and entanglement to enhance data representation by capturing intricate feature interactions. These enriched quantum features are combined with the temporal features extracted by Bi-LSTM and fed into an XGBoost regressor. By synthesizing quantum feature processing and classical machine learning techniques, our model aims to improve prediction accuracy significantly. Based on measurements of mean square error (MSE), the coefficient of determination (R2), and mean absolute error (MAE), the results are 1.191621 × 10⁻⁵, 0.999929482, and 0.001392724, respectively. These values are close to ideal and can support essential decisions in global agricultural planning and management. Full article
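
A heavily simplified sketch of the quantum-feature step, assuming PennyLane for simulation: a small, fixed (untrained) entangling circuit maps four classical features to Pauli-Z expectation values, which are concatenated with the original features and passed to an XGBoost regressor. The circuit layout, wire count, and synthetic yield data are illustrative assumptions, and random features stand in for the Bi-LSTM temporal extractor.

```python
import numpy as np
import pennylane as qml
from xgboost import XGBRegressor

rng = np.random.default_rng(9)
n_wires = 4
dev = qml.device("default.qubit", wires=n_wires)
weights = rng.uniform(0, 2 * np.pi, size=(2, n_wires))    # fixed, untrained entangling layers

@qml.qnode(dev)
def quantum_features(x):
    qml.AngleEmbedding(x, wires=range(n_wires))            # encode 4 classical features
    qml.BasicEntanglerLayers(weights, wires=range(n_wires))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_wires)]

X = rng.normal(size=(300, 4))                              # stand-in temporal features
y = X @ np.array([0.5, -0.2, 0.8, 0.1]) + 0.3 * np.sin(X[:, 0] * X[:, 1]) + rng.normal(0, 0.05, 300)

Q = np.array([np.asarray(quantum_features(x), dtype=float) for x in X])
X_aug = np.hstack([X, Q])                                  # classical + quantum features
model = XGBRegressor(n_estimators=200, max_depth=3).fit(X_aug, y)
print("R^2 on training data:", round(model.score(X_aug, y), 4))
```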

16 pages, 1550 KB  
Article
Prediction of Sunspot Number with Hybrid Model Based on 1D-CNN, BiLSTM and Multi-Head Attention Mechanism
by Huirong Chen, Song Liu, Ximing Yang, Xinggang Zhang, Jianzhong Yang and Shaofen Fan
Electronics 2024, 13(14), 2804; https://doi.org/10.3390/electronics13142804 - 16 Jul 2024
Cited by 5 | Viewed by 2375
Abstract
Sunspots have a significant impact on human activities. In this study, we aimed to improve solar activity prediction accuracy. To predict the sunspot number based on different aspects, such as extracted features and relationships among data, we developed a hybrid model that includes a one-dimensional convolutional neural network (1D-CNN) for extracting the features of sunspots and bidirectional long short-term memory (BiLSTM) embedded with a multi-head attention mechanism (MHAM) to learn the inner relationships among data and finally predict the sunspot number. We evaluated our model and several existing models according to different evaluation indicators, such as mean absolute error (MAE) and root mean square error (RMSE). Compared with the informer, stacked LSTM, XGBoost-DL, and EMD-LSTM-AM models, the RMSE and MAE of our results were more than 42.5% and 65.1% lower, respectively. The experimental results demonstrate that our model has higher accuracy than other methods. Full article
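
The 1D-CNN, BiLSTM, and multi-head-attention stack described above translates naturally to Keras; the sketch below uses a synthetic sunspot-like cycle and guessed window length, filter counts, and head sizes rather than the authors' configuration.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(8)
t = np.arange(3000)
ssn = 80 + 70 * np.sin(2 * np.pi * t / 132) + rng.normal(0, 8, t.size)  # ~11-year monthly cycle

lookback = 36
X = np.array([ssn[i:i + lookback] for i in range(len(ssn) - lookback)])[..., None]
y = ssn[lookback:]

inp = tf.keras.Input(shape=(lookback, 1))
x = tf.keras.layers.Conv1D(32, 3, padding="same", activation="relu")(inp)   # 1D-CNN features
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True))(x)
x = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=16)(x, x)        # MHAM over BiLSTM states
x = tf.keras.layers.GlobalAveragePooling1D()(x)
out = tf.keras.layers.Dense(1)(x)

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse", metrics=[tf.keras.metrics.RootMeanSquaredError()])
model.fit(X, y, epochs=3, batch_size=64, validation_split=0.1, verbose=0)
print(model.evaluate(X, y, verbose=0))                    # [MSE, RMSE]
```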
