Search Results (883)

Search Parameters:
Keywords = LSTM-RNN

19 pages, 2183 KB  
Article
A Hierarchical RNN-LSTM Model for Multi-Class Outage Prediction and Operational Optimization in Microgrids
by Nouman Liaqat, Muhammad Zubair, Aashir Waleed, Muhammad Irfan Abid and Muhammad Shahid
Electricity 2025, 6(4), 55; https://doi.org/10.3390/electricity6040055 - 1 Oct 2025
Abstract
Microgrids are becoming an integral part of modern energy systems, as they provide locally sourced, resilient, and efficient energy. However, microgrid operations can be greatly affected by sudden environmental changes, fluctuating demand, and unexpected outages. In particular, extreme climatic events expose the vulnerability of microgrid infrastructure, often increasing the risk of system-wide outages. Successful microgrid operation therefore relies on timely and accurate outage prediction. This research proposes a data-driven machine learning framework for optimized microgrid operation and predictive outage detection using a Recurrent Neural Network–Long Short-Term Memory (RNN-LSTM) architecture suited to inherently temporal data. A time-aware embedding and masking strategy handles categorical and sparse temporal features, while mutual-information-based feature selection ensures that only the most relevant and interpretable inputs are retained for prediction. The model also addresses rapid power fluctuations by learning long-term dependencies within historical and real-time observation streams. Two datasets are utilized: a locally developed real-time dataset collected from the 5 MW microgrid of the Maple Cement Factory in Mianwali and a 15-year national power outage dataset obtained from Kaggle. Both datasets underwent intensive preprocessing, normalization, and tokenization to transform raw readings into machine-readable sequences. The proposed approach attained an accuracy of 86.52% on the real-time dataset and 84.19% on the Kaggle dataset, outperforming conventional models in detecting sequential outage patterns. It also achieved a precision of 86%, a recall of 86.20%, and an F1-score of 86.12%, surpassing models such as CNN, XGBoost, SVM, and various static classifiers. In contrast to these traditional approaches, the RNN-LSTM's ability to leverage temporal context makes it a more effective and intelligent choice for real-time outage prediction and microgrid optimization.
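As a hedged illustration of the architecture class this abstract describes (not the authors' code), the following Keras sketch stacks two LSTM layers behind a masking layer for sparse, zero-padded temporal features; all layer sizes, timestep counts, and class counts are assumptions.

```python
# Hedged sketch: stacked LSTM classifier with masking for sparse/padded
# temporal inputs. Sizes and class count are assumptions, not the paper's.
import numpy as np
from tensorflow.keras import layers, models

N_STEPS, N_FEATURES, N_CLASSES = 48, 12, 4   # assumed dimensions

model = models.Sequential([
    layers.Input(shape=(N_STEPS, N_FEATURES)),
    layers.Masking(mask_value=0.0),           # skip zero-padded timesteps
    layers.LSTM(64, return_sequences=True),   # two stacked ("hierarchical") LSTMs
    layers.LSTM(32),
    layers.Dense(N_CLASSES, activation="softmax"),  # multi-class outage label
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# toy usage on random stand-in data
X = np.random.rand(256, N_STEPS, N_FEATURES).astype("float32")
y = np.random.randint(0, N_CLASSES, size=256)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```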

21 pages, 9610 KB  
Article
Global Ionosphere Total Electron Content Prediction Based on Bidirectional Denoising Wavelet Transform Convolution
by Liwei Sun, Guoming Yuan, Huijun Le, Xingyue Yao, Shijia Li and Haijun Liu
Atmosphere 2025, 16(10), 1139; https://doi.org/10.3390/atmos16101139 - 28 Sep 2025
Abstract
The Denoising Wavelet Transform Convolutional Long Short-Term Memory Network (DWTConvLSTM) is a novel ionospheric total electron content (TEC) spatiotemporal prediction model, proposed in 2025, that considers high-frequency and low-frequency features simultaneously while suppressing noise. However, it has a limitation: it considers only unidirectional temporal features in spatiotemporal prediction. To address this issue, this paper adopts a bidirectional structure and designs a bidirectional DWTConvLSTM model that extracts bidirectional spatiotemporal features from TEC maps. Furthermore, we integrate a lightweight attention mechanism called Convolutional Additive Self-Attention (CASA) to enhance important features and attenuate unimportant ones. The final model is named CASA-BiDWTConvLSTM. We validated the effectiveness of each improvement through ablation experiments. A comprehensive comparison was then performed on the 11-year Global Ionospheric Maps (GIMs) dataset against several state-of-the-art models, including C1PG, ConvGRU, ConvLSTM, and PredRNN, with the dataset partitioned into 7 years for training, 2 years for validation, and the final 2 years for testing. The experimental results indicate that the RMSE of CASA-BiDWTConvLSTM is lower than those of C1PG, ConvGRU, ConvLSTM, and PredRNN: the reductions in RMSE during high solar activity years are 24.84%, 16.57%, 13.50%, and 10.29%, respectively, while the reductions during low solar activity years are 26.11%, 16.83%, 11.68%, and 7.04%. The effectiveness of CASA-BiDWTConvLSTM was also verified from spatial and temporal perspectives, as well as during four geomagnetic storms.
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
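A minimal sketch of the bidirectional ConvLSTM idea at the heart of this model, omitting the wavelet-denoising convolutions and CASA attention; the map dimensions follow the common 71×73 IGS GIM grid, and all other values are assumptions.

```python
# Minimal sketch: bidirectional ConvLSTM for spatiotemporal TEC-map
# prediction. Denoising wavelet convolutions and CASA attention are omitted;
# shapes are assumptions (71x73 is the usual IGS GIM grid).
from tensorflow.keras import layers, models

T, H, W = 12, 71, 73  # assumed: 12 input epochs of 71x73 TEC maps

model = models.Sequential([
    layers.Input(shape=(T, H, W, 1)),
    layers.Bidirectional(layers.ConvLSTM2D(16, kernel_size=3, padding="same",
                                           return_sequences=False)),
    layers.Conv2D(1, kernel_size=1, activation="linear"),  # next TEC map
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```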

23 pages, 924 KB  
Article
Energy and Water Management in Smart Buildings Using Spiking Neural Networks: A Low-Power, Event-Driven Approach for Adaptive Control and Anomaly Detection
by Malek Alrashidi, Sami Mnasri, Maha Alqabli, Mansoor Alghamdi, Michael Short, Sean Williams, Nashwan Dawood, Ibrahim S. Alkhazi and Majed Abdullah Alrowaily
Energies 2025, 18(19), 5089; https://doi.org/10.3390/en18195089 - 24 Sep 2025
Abstract
The growing demand for energy efficiency and sustainability in smart buildings necessitates advanced AI-driven methods for adaptive control and predictive maintenance. This study explores the application of Spiking Neural Networks (SNNs) to event-driven processing, real-time anomaly detection, and edge computing-based optimization in building automation. In contrast to conventional deep learning models, SNNs provide low-power, high-efficiency computation by mimicking biological neural processes, making them particularly suitable for real-time, edge-deployed decision-making. The proposed SNN, based on Reward-Modulated Spike-Timing-Dependent Plasticity (STDP) and Bayesian Optimization (BO), integrates occupancy and ambient condition monitoring to dynamically manage assets such as appliances while simultaneously identifying anomalies for predictive maintenance. Experimental evaluations show that our BO-STDP-SNN framework reduces energy consumption by 27.8% and power requirements by 70%, while delivering superior anomaly-detection accuracy compared with CNN-, RNN-, and LSTM-based baselines. These results demonstrate the potential of SNNs to enhance the efficiency and resilience of smart building systems, reduce operational costs, and support long-term sustainability through low-latency, event-driven intelligence.
(This article belongs to the Special Issue Digital Engineering for Future Smart Cities)
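For readers unfamiliar with the event-driven computation this abstract relies on, here is a toy leaky integrate-and-fire neuron with a simplified STDP weight update in NumPy; it omits the paper's reward modulation and Bayesian optimization, and every constant is an assumption.

```python
# Toy leaky integrate-and-fire (LIF) neuron with a simplified STDP update,
# illustrating event-driven SNN computation. Reward modulation and BO from
# the paper are not reproduced; all constants are assumptions.
import numpy as np

rng = np.random.default_rng(0)
T, n_in = 200, 8                 # timesteps, input channels (e.g., sensors)
w = rng.uniform(0.2, 0.6, n_in)  # synaptic weights
v, v_th, tau = 0.0, 1.0, 20.0    # membrane potential, threshold, decay constant
pre_trace = np.zeros(n_in)       # presynaptic traces for STDP
a_plus, a_minus, tau_tr = 0.01, 0.012, 20.0

spikes_in = rng.random((T, n_in)) < 0.05  # Poisson-like event input
for t in range(T):
    pre_trace = pre_trace * np.exp(-1.0 / tau_tr) + spikes_in[t]
    v = v * np.exp(-1.0 / tau) + w @ spikes_in[t]
    if v >= v_th:                # postsynaptic spike: potentiate recently active inputs
        w += a_plus * pre_trace
        v = 0.0                  # reset after firing
    else:                        # mild decay keeps weights bounded (simplification)
        w -= a_minus * 1e-3 * w
w = np.clip(w, 0.0, 1.0)
print("adapted weights:", np.round(w, 3))
```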

30 pages, 2308 KB  
Article
Forecasting Installation Demand Using Machine Learning: Evidence from a Large PV Installer in Poland
by Anna Zielińska and Rafał Jankowski
Energies 2025, 18(18), 4998; https://doi.org/10.3390/en18184998 - 19 Sep 2025
Abstract
The dynamic growth of the photovoltaic (PV) market in Poland, driven by declining technology costs, government support programs, and the decentralization of energy generation, has created a strong demand for accurate short-term forecasts to support sales planning, logistics, and resource management. This study investigates the application of long short-term memory (LSTM) recurrent neural networks to forecast two key market indicators: the monthly number of completed PV installations and their average unit capacity. The analysis is based on proprietary two-year data from one of the largest PV companies in Poland, covering both sales and completed installations. The dataset was preprocessed through cleaning, filtering, and aggregation into a consistent monthly time series. Results demonstrate that the LSTM model effectively captured seasonality and temporal dependencies in the PV market, outperforming multilayer perceptron (MLP) models in forecasting installation counts and providing robust predictions for average capacity. These findings confirm the potential of LSTM-based forecasting as a valuable decision-support tool for enterprises and policymakers, enabling improved market strategy, optimized resource allocation, and more effective design of support mechanisms in the renewable energy sector. The originality of this study lies in the use of a unique, proprietary dataset of over 12,000 completed PV micro-installations, rarely available in the literature, and in its direct focus on market demand forecasting rather than energy production. This perspective highlights the practical value of the model for companies in sales planning, logistics, and resource allocation.
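A minimal sketch of the generic windowed-LSTM setup such a monthly forecast typically uses, with synthetic data standing in for the proprietary installation counts; the look-back window and layer size are assumptions.

```python
# Hedged sketch: window a monthly series and fit a small LSTM forecaster.
# The series is synthetic; window length and layer size are assumptions.
import numpy as np
from tensorflow.keras import layers, models

series = np.sin(np.linspace(0, 8, 24)) + 5.0        # stand-in for 24 monthly values
WIN = 6                                             # assumed 6-month look-back

X = np.stack([series[i:i + WIN] for i in range(len(series) - WIN)])[..., None]
y = series[WIN:]

model = models.Sequential([
    layers.Input(shape=(WIN, 1)),
    layers.LSTM(32),
    layers.Dense(1),                                # next-month value
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, verbose=0)
print("next-month forecast:", model.predict(X[-1:], verbose=0)[0, 0])
```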

28 pages, 13270 KB  
Article
Deep Learning Applications for Crop Mapping Using Multi-Temporal Sentinel-2 Data and Red-Edge Vegetation Indices: Integrating Convolutional and Recurrent Neural Networks
by Rahat Tufail, Patrizia Tassinari and Daniele Torreggiani
Remote Sens. 2025, 17(18), 3207; https://doi.org/10.3390/rs17183207 - 17 Sep 2025
Abstract
Accurate crop classification using satellite imagery is critical for agricultural monitoring, yield estimation, and land-use planning. However, the task remains challenging due to spectral similarity among crops: although crops differ in physiological characteristics such as chlorophyll content, they often exhibit only subtle differences in spectral reflectance, which makes precise discrimination difficult. To address this, the study uses the high temporal and spectral resolution of Sentinel-2 imagery, including its red-edge bands and derived vegetation indices, which are particularly sensitive to vegetation health and structural differences. We present a hybrid deep learning framework for crop classification, developed through a case study in a complex agricultural region of Northern Italy, and investigate the combined use of spectral bands and NDVI and red-edge-based vegetation indices as inputs to hybrid deep learning models. Previous studies have applied 1D CNN, 2D CNN, LSTM, and GRU models, often standalone, but their capacity to jointly process spectral and vegetative features through integrated CNN-RNN structures remains underexplored in mixed agricultural regions. To fill this gap, we developed and assessed four hybrid architectures: (1) 1D CNN-LSTM, (2) 1D CNN-GRU, (3) 2D CNN-LSTM, and (4) 2D CNN-GRU. These models were trained with optimized hyperparameters on combined spectral and vegetative input features. The 2D CNN-GRU model achieved the highest overall accuracy (99.12%) and F1-macro (99.14%), followed by 2D CNN-LSTM (98.51%), while 1D CNN-GRU and 1D CNN-LSTM performed slightly lower (93.46% and 92.54%, respectively).
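The following Keras sketch conveys the 2D CNN-GRU hybrid pattern (spatial convolutions applied per acquisition date, then temporal aggregation with a GRU); patch size, band count, number of dates, and class count are illustrative assumptions, not the study's configuration.

```python
# Sketch of a 2D CNN-GRU hybrid for multi-temporal patch classification.
# Patch size, band count, dates, and class count are assumptions.
from tensorflow.keras import layers, models

T, P, B, N_CLASSES = 10, 5, 12, 8  # assumed: 10 dates, 5x5 patches, 12 bands+indices

model = models.Sequential([
    layers.Input(shape=(T, P, P, B)),
    layers.TimeDistributed(layers.Conv2D(32, 3, padding="same", activation="relu")),
    layers.TimeDistributed(layers.GlobalAveragePooling2D()),  # per-date spatial features
    layers.GRU(64),                                           # temporal aggregation
    layers.Dense(N_CLASSES, activation="softmax"),            # crop class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```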

35 pages, 6406 KB  
Article
Comparative Study of RNN-Based Deep Learning Models for Practical 6-DOF Ship Motion Prediction
by HaEun Lee and Yangjun Ahn
J. Mar. Sci. Eng. 2025, 13(9), 1792; https://doi.org/10.3390/jmse13091792 - 17 Sep 2025
Abstract
Accurate prediction of ship motion is essential for ensuring the safety and efficiency of maritime operations. However, the nonlinear, non-stationary, and environment-dependent nature of ship dynamics presents significant challenges for reliable short-term forecasting. This study uses a simulated dataset designed to reflect realistic maritime variability to evaluate the performance of recurrent neural network (RNN)-based models—including RNN, LSTM, GRU, and Bi-LSTM—under both single- and multi-environment conditions. The analysis examines the effects of input sequence length, downsampling interval, model complexity, and input dimensionality. Results show that Bi-LSTM consistently outperforms unidirectional architectures, particularly in complex multi-environment scenarios. In single-environment settings the prediction horizon exceeded 40 s, while it decreased to around 20 s under more variable conditions, reflecting generalization challenges. Multi-degree-of-freedom (DOF) inputs enhanced performance by capturing the coupled nature of ship dynamics, whereas incorporating wave height data yielded inconsistent results. A sequence length of 200 timesteps and a downsampling interval of 5 effectively balanced motion-feature preservation with high-frequency noise reduction. Increasing model size improved accuracy up to 256 hidden units and 10 layers, beyond which performance gains diminished. Additionally, Peak Matching was introduced as a complementary metric to MSE, emphasizing the importance of accurately predicting motion extrema for practical maritime applications.
(This article belongs to the Special Issue Machine Learning for Prediction of Ship Motion)
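A toy sketch of the reported best preprocessing settings (downsampling interval of 5, 200-timestep windows) feeding a Bi-LSTM that predicts the next 6-DOF state; the data are synthetic and the hidden size is an assumption.

```python
# Toy sketch: downsample by 5, 200-step windows, Bi-LSTM predicting the next
# 6-DOF state. Data are synthetic; the hidden size is an assumption (the
# study scans up to 256 units and 10 layers).
import numpy as np
from tensorflow.keras import layers, models

raw = np.random.randn(10_000, 6).astype("float32")  # stand-in 6-DOF record
sig = raw[::5]                                      # downsampling interval of 5
WIN = 200                                           # sequence length from the study

X = np.stack([sig[i:i + WIN] for i in range(len(sig) - WIN)])
y = sig[WIN:]                                       # next-step 6-DOF target

model = models.Sequential([
    layers.Input(shape=(WIN, 6)),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(6),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=1, batch_size=64, verbose=0)
```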

21 pages, 471 KB  
Review
Long Short-Term Memory Networks: A Comprehensive Survey
by Moez Krichen and Alaeddine Mihoub
AI 2025, 6(9), 215; https://doi.org/10.3390/ai6090215 - 5 Sep 2025
Abstract
Long Short-Term Memory (LSTM) networks have revolutionized the field of deep learning, particularly in applications that require the modeling of sequential data. Originally designed to overcome the limitations of traditional recurrent neural networks (RNNs), LSTMs effectively capture long-range dependencies in sequences, making them suitable for a wide array of tasks. This survey provides a comprehensive overview of LSTM architectures, detailing their unique components, such as cell states and gating mechanisms, which facilitate the retention and modulation of information over time. We delve into the various applications of LSTMs across multiple domains, including natural language processing (NLP), where they are employed for language modeling, machine translation, and sentiment analysis; time series analysis, where they play a critical role in forecasting tasks; and speech recognition, where they have significantly enhanced the accuracy of automated systems. By examining these applications, we illustrate the versatility and robustness of LSTMs in handling complex data types. Additionally, we explore several notable variants and improvements of the standard LSTM architecture, such as Bidirectional LSTMs, which enhance context understanding, and Stacked LSTMs, which increase model capacity. We also discuss the integration of attention mechanisms with LSTMs, which has further advanced their performance in various tasks. Despite their strengths, LSTMs face several challenges, including high computational complexity, extensive data requirements, and difficulties in training, which can hinder their practical implementation. This survey addresses these limitations and provides insights into ongoing research aimed at mitigating these issues. In conclusion, we highlight recent advances in LSTM research and propose potential future directions that could lead to enhanced performance and broader applicability of LSTM networks. This survey serves as a foundational resource for researchers and practitioners seeking to understand the current landscape of LSTM technology and its future trajectory.
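For reference, the cell state and gating mechanisms the survey describes are governed by the standard LSTM update equations:

```latex
% Standard LSTM cell: gates modulate what enters, stays in, and leaves the cell state.
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate)}\\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)}\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) && \text{(candidate state)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \qquad h_t = o_t \odot \tanh(c_t)
\end{aligned}
```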

30 pages, 6568 KB  
Article
Hybrid Hourly Solar Energy Forecasting Using BiLSTM Networks with Attention Mechanism, General Type-2 Fuzzy Logic Approach: A Comparative Study of Seasonal Variability in Lithuania
by Naiyer Mohammadi Lanbaran, Darius Naujokaitis, Gediminas Kairaitis and Virginijus Radziukynas
Appl. Sci. 2025, 15(17), 9672; https://doi.org/10.3390/app15179672 - 2 Sep 2025
Abstract
This research introduces a novel hybrid forecasting framework for solar energy prediction in high-latitude regions with extreme seasonal variations. The approach uniquely employs General Type-2 Fuzzy Logic (GT2-FL) for data preprocessing and uncertainty handling, followed by two advanced neural architectures, BiLSTM and SCINet, with Time2Vec encoding and Variational Mode Decomposition (VMD) signal processing. Four configurations are systematically evaluated: BiLSTM-Time2Vec, BiLSTM-VMD, SCINet-Time2Vec, and SCINet-VMD, each tested with both GT2-FL-preprocessed data and raw input data. Using meteorological data from Lithuania (2023–2024), where extreme seasonal variation means daylight ranges from 17 h in summer to 7 h in winter, F-BiLSTM-Time2Vec achieved exceptional performance, with nRMSE = 1.188%, NMAE = 0.813%, and WMAE = 3.013%, significantly outperforming both the VMD-based variants and the SCINet architectures. Comparative analysis revealed that Time2Vec encoding proved more beneficial than VMD preprocessing, especially when enhanced with fuzzification. The results confirm that fuzzification, the BiLSTM architecture, and Time2Vec encoding provide the most robust forecasting capability under varied seasonal conditions.
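A small sketch of the Time2Vec encoding used alongside the BiLSTM here: one linear component plus periodic sine components. The embedding width and parameters below are assumptions; in practice the frequencies and phases are learned.

```python
# Hedged sketch of Time2Vec: component 0 is linear in time, the remaining
# components are sin(omega * t + phi). Width and parameters are assumptions;
# in the actual model these would be trainable.
import numpy as np

def time2vec(tau, omega, phi):
    """Encode scalar timestamp tau: first entry linear, rest periodic."""
    out = omega * tau + phi
    out[1:] = np.sin(out[1:])
    return out

k = 8                                  # assumed embedding width
rng = np.random.default_rng(0)
omega, phi = rng.normal(size=k), rng.normal(size=k)
print(time2vec(3.5, omega, phi))       # encoding of timestamp tau = 3.5
```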

23 pages, 4363 KB  
Article
Hybrid SDE-Neural Networks for Interpretable Wind Power Prediction Using SCADA Data
by Mehrdad Ghadiri and Luca Di Persio
Electricity 2025, 6(3), 48; https://doi.org/10.3390/electricity6030048 - 1 Sep 2025
Abstract
Wind turbine power forecasting is crucial for optimising energy production, planning maintenance, and enhancing grid stability. This research focuses on predicting the output of a Senvion MM92 wind turbine at the Kelmarsh wind farm in the UK using SCADA data from 2020. Two approaches are explored: a hybrid model combining Stochastic Differential Equations (SDEs) with Neural Networks (NNs), and Deep Learning models, in particular Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM), and a combination of Convolutional Neural Networks (CNNs) and LSTM. Notably, while SDE-NN models are well suited to predictions where data patterns are chaotic and lack consistent trends, incorporating stochastic processes increases the complexity of learning within SDE models. Moreover, while SDE-NNs cannot be classified as purely “white box” models, they are also not entirely “black box” like traditional Neural Networks; they occupy a middle ground, offering improved interpretability over pure NNs while still leveraging the power of Deep Learning. This balance is valuable in fields such as wind power prediction, where both accuracy and understanding of the underlying physical processes are essential. The evaluation demonstrates the effectiveness of the SDE-NNs compared to traditional Deep Learning models for wind power prediction: the SDE-NNs achieve slightly better accuracy, highlighting their potential as a powerful alternative.
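To convey the hybrid SDE-NN idea, here is a toy Euler–Maruyama integration of an SDE whose drift is a small hand-rolled network with random weights; in the paper's setting the drift network would be trained on SCADA data, and every constant here is an assumption.

```python
# Toy Euler-Maruyama step for a hybrid SDE with a neural drift term,
# dx = f_theta(x) dt + sigma dW. The two-layer "network" has random weights
# purely to show shapes; all constants are assumptions.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(16, 1)), np.zeros(16)
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)

def drift(x):                  # stand-in neural drift f_theta(x)
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

sigma, dt, n_steps = 0.1, 0.01, 500  # assumed diffusion, step size, horizon
x = np.array([0.5])
path = [x.item()]
for _ in range(n_steps):
    x = x + drift(x) * dt + sigma * np.sqrt(dt) * rng.normal(size=1)
    path.append(x.item())
print("final value:", round(path[-1], 4))
```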

17 pages, 1149 KB  
Article
IP Spoofing Detection Using Deep Learning
by İsmet Kaan Çekiş, Buğra Ayrancı, Fezayim Numan Salman and İlker Özçelik
Appl. Sci. 2025, 15(17), 9508; https://doi.org/10.3390/app15179508 - 29 Aug 2025
Abstract
IP spoofing is a critical component of many cyberattacks, enabling attackers to evade detection and conceal their identities. This study rigorously compares eight deep learning models—LSTM, GRU, CNN, MLP, DNN, RNN, ResNet1D, and xLSTM—for their efficacy in detecting IP spoofing attacks. Overfitting was mitigated through techniques such as dropout, early stopping, and normalization. Models were trained using binary cross-entropy loss and the Adam optimizer. Performance was assessed via accuracy, precision, recall, F1 score, and inference time, with each model executed 15 times to account for stochastic variability. Results indicate strong performance across all models, with LSTM and GRU demonstrating superior detection efficacy. After ONNX conversion, the MLP and DNN models retained their performance while achieving significant reductions in inference time, smaller model sizes, and platform independence. These advancements facilitate the effective use of the developed systems in real-time network security applications. The comprehensive performance metrics presented are crucial for selecting optimal IP spoofing detection strategies tailored to diverse application requirements, serving as a valuable reference for network anomaly monitoring and targeted attack detection.
(This article belongs to the Section Computing and Artificial Intelligence)
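A minimal sketch matching the training recipe named in the abstract (binary cross-entropy, Adam, dropout, early stopping) on synthetic stand-in features; the feature count and layer sizes are assumptions.

```python
# Sketch of the stated training recipe: BCE loss, Adam, dropout, early
# stopping. Feature count and sizes are assumptions; real flow features
# from the dataset are not reproduced here.
import numpy as np
from tensorflow.keras import layers, models, callbacks

N_FEATURES = 20                          # assumed number of flow features
X = np.random.rand(2000, N_FEATURES).astype("float32")
y = np.random.randint(0, 2, size=2000)   # toy labels: 1 = spoofed, 0 = benign

model = models.Sequential([
    layers.Input(shape=(N_FEATURES,)),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.3),                 # dropout against overfitting
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
early = callbacks.EarlyStopping(patience=3, restore_best_weights=True)
model.fit(X, y, validation_split=0.2, epochs=50, callbacks=[early], verbose=0)
```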

12 pages, 1667 KB  
Proceeding Paper
Multivariate Forecasting Evaluation: Nixtla-TimeGPT
by S M Ahasanul Karim, Bahram Zarrin and Niels Buus Lassen
Comput. Sci. Math. Forum 2025, 11(1), 29; https://doi.org/10.3390/cmsf2025011029 - 26 Aug 2025
Abstract
Generative models are being used in all domains. While primarily built for processing text and images, their reach has been extended to data-driven forecasting. Whereas there are many statistical, machine learning, and deep learning models for predictive forecasting, generative models are distinctive because they do not need to be trained on the target data beforehand, saving time and computational power. Multivariate forecasting with existing models is also difficult when the future horizons of the regressors are unknown, because the regressors add more uncertainty to the forecasting process. This study therefore experiments with TimeGPT (zero-shot) by Nixtla to identify whether the generative model can outperform models such as ARIMA, Prophet, NeuralProphet, Linear Regression, XGBoost, Random Forest, LSTM, and RNN. The research created synthetic datasets and synthetic signals to assess individual model and regressor performances across 12 models, then used the findings to compare TimeGPT against the best-fitting models in different real-world scenarios. The results showed that TimeGPT outperforms in multivariate forecasting at weekly granularity by automatically selecting important regressors, whereas its performance at daily and monthly granularities is still weak.
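A hedged sketch of zero-shot forecasting with Nixtla's TimeGPT client: this assumes the `nixtla` Python package and its documented `forecast` call, with a placeholder API key and a toy weekly series, so verify against current Nixtla docs before relying on it.

```python
# Hedged sketch: zero-shot TimeGPT forecast via the nixtla client.
# Assumes the `nixtla` package and a valid API key (placeholder below);
# ds/y column names follow the usual Nixtla convention.
import pandas as pd
from nixtla import NixtlaClient

client = NixtlaClient(api_key="YOUR_API_KEY")  # placeholder key
df = pd.DataFrame({
    "ds": pd.date_range("2024-01-07", periods=52, freq="W"),
    "y": range(52),                            # toy weekly series
})
fcst = client.forecast(df=df, h=8, time_col="ds", target_col="y")
print(fcst.head())
```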

31 pages, 20980 KB  
Article
A Novel Method for Virtual Real-Time Cumuliform Fluid Dynamics Simulation Using Deep Recurrent Neural Networks
by Carlos Jiménez de Parga, Sergio Calo, José Manuel Cuadra, Ángel M. García-Vico and Rafael Pastor Vargas
Mathematics 2025, 13(17), 2746; https://doi.org/10.3390/math13172746 - 26 Aug 2025
Abstract
The real-time simulation of atmospheric clouds for the visualisation of outdoor scenarios has been a computer graphics research challenge since the emergence of the natural phenomena rendering field in the 1980s. In this work, we present an innovative method for real-time cumuli movement and transition based on a Recurrent Neural Network (RNN). Specifically, an LSTM, a GRU, and an Elman RNN are trained on time-series data generated by a parallel Navier–Stokes fluid solver. The training process optimizes the network to predict the velocity of cloud particles at the subsequent time step, allowing the model to act as a computationally efficient surrogate for the full physics simulation. In our experiments, the RNN fluid algorithm produced natural-looking cumuli evolution and dissipation with excellent performance compared with classical finite-element computational solvers. These experiments demonstrate the suitability of our ontogenetic computational model for achieving an optimal balance between natural-looking realism and performance, in contrast to computationally expensive hyper-realistic fluid dynamics simulations that usually do not run in real time. The core contributions of our research to the state of the art in cloud dynamics are the following: a real-time step of the RNN-LSTM fluid algorithm that improves on the literature to date, outperforming previous inference times during runtime cumuli animation on the analysed hardware; the absence of spatial grid bounds; and the replacement of fluid dynamics equation solving with the RNN. As a consequence, the method is applicable to flight simulation systems, climate-awareness educational tools, atmospheric simulations, nature-based video games, and architectural software.
(This article belongs to the Special Issue Mathematical Applications in Computer Graphics)
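A toy sketch of the surrogate training idea: fit an LSTM on solver-generated velocity sequences so it can predict next-step particle velocities in place of the fluid solver at runtime; the history length, dimensionality, and synthetic data are assumptions.

```python
# Toy sketch of the surrogate idea: an LSTM learns next-step particle
# velocities from solver-generated sequences. Shapes and data are assumptions.
import numpy as np
from tensorflow.keras import layers, models

T, D = 16, 3                                       # assumed: 16-step history, 3D velocity
X = np.random.randn(1024, T, D).astype("float32")  # stand-in solver trajectories
y = X[:, -1, :] + 0.01 * np.random.randn(1024, D).astype("float32")  # next step

model = models.Sequential([
    layers.Input(shape=(T, D)),
    layers.LSTM(64),
    layers.Dense(D),                               # predicted velocity at t+1
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
```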

24 pages, 3133 KB  
Article
A Feature Selection-Based Multi-Stage Methodology for Improving Driver Injury Severity Prediction on Imbalanced Crash Data
by Çiğdem İnan Acı, Gizen Mutlu, Murat Ozen, Esra Sarac and Vahide Nida Kılıç Uzel
Electronics 2025, 14(17), 3377; https://doi.org/10.3390/electronics14173377 - 25 Aug 2025
Abstract
Predicting driver injury severity is critical for enhancing road safety, but it is complicated by the class imbalance that fatal accidents inherently create within datasets. This study conducts a comparative analysis of machine-learning (ML) and deep-learning (DL) models for multi-class driver injury severity prediction using a comprehensive dataset of 107,195 traffic accidents from the Adana, Mersin, and Antalya provinces in Turkey (2018–2023). To address the significant imbalance between fatal, injury, and non-injury classes, the hybrid SMOTE-ENN algorithm was employed for data balancing. Subsequently, feature selection techniques, including Relief-F, Extra Trees, and Recursive Feature Elimination (RFE), were utilized to identify the most influential predictors. Various ML models (K-Nearest Neighbors (KNN), XGBoost, Random Forest) and DL architectures (Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Recurrent Neural Network (RNN)) were developed and rigorously evaluated. The findings demonstrate that traditional ML models, particularly KNN (0.95 accuracy, 0.95 F1-macro) and XGBoost (0.92 accuracy, 0.92 F1-macro), significantly outperformed the DL models. The SMOTE-ENN technique proved effective in managing class imbalance, and RFE identified a critical 25-feature subset including driver fault, speed limit, and road conditions. This research highlights the efficacy of well-preprocessed ML approaches for tabular crash data, offering valuable insights for developing robust predictive tools to improve traffic safety outcomes.
(This article belongs to the Special Issue Machine Learning Approach for Prediction: Cross-Domain Applications)
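The pipeline stages named in the abstract can be sketched with scikit-learn and imbalanced-learn as below; synthetic data stands in for the crash records, RFE is driven by a Random Forest (an assumption, since RFE needs feature importances that KNN lacks), and all parameter values are illustrative.

```python
# Sketch of the named stages: SMOTE-ENN balancing, RFE feature selection,
# then a KNN classifier. Synthetic data and all parameters are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.neighbors import KNeighborsClassifier
from imblearn.combine import SMOTEENN

X, y = make_classification(n_samples=3000, n_features=40, n_informative=10,
                           weights=[0.85, 0.12, 0.03], n_classes=3,
                           random_state=0)                    # imbalanced toy data

X_bal, y_bal = SMOTEENN(random_state=0).fit_resample(X, y)    # balance classes
rfe = RFE(RandomForestClassifier(random_state=0), n_features_to_select=25)
X_sel = rfe.fit_transform(X_bal, y_bal)                       # 25-feature subset
knn = KNeighborsClassifier(n_neighbors=5).fit(X_sel, y_bal)
print("train accuracy:", round(knn.score(X_sel, y_bal), 3))
```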

28 pages, 2147 KB  
Article
Generalized Methodology for Two-Dimensional Flood Depth Prediction Using ML-Based Models
by Mohamed Soliman, Mohamed M. Morsy and Hany G. Radwan
Hydrology 2025, 12(9), 223; https://doi.org/10.3390/hydrology12090223 - 24 Aug 2025
Abstract
Floods are among the most devastating natural disasters; predicting their depth and extent remains a global challenge. Machine Learning (ML) models have demonstrated improved accuracy over traditional probabilistic flood mapping approaches. While previous studies have developed ML-based models for specific local regions, this study aims to establish a methodology for estimating flood depth on a global scale using ML algorithms and freely available datasets—a challenging yet critical task. To support model generalization, 45 catchments from diverse geographic regions were selected based on variations in elevation, land use, land cover, and soil type. The datasets were meticulously preprocessed, ensuring normality, eliminating outliers, and scaling. The preprocessed data were then split into 75% for training and 25% for testing, with six additional unseen catchments from the USA reserved for validation. A sensitivity analysis was performed across several ML models (ANN, CNN, RNN, LSTM, Random Forest, XGBoost), leading to the selection of the Random Forest (RF) algorithm for both flood inundation classification and flood depth regression. Three regression models were assessed for flood depth prediction. The pixel-based regression model achieved an R2 of 91% for training and 69% for testing. Introducing a pixel-clustering regression model improved the testing R2 to 75%, with an overall validation R2 of 64% on the unseen catchments. The catchment-based clustering regression model yielded the most robust performance, with an R2 of 83% for testing and 82% for validation. The developed ML model demonstrates breakthrough computational efficiency, generating complete flood depth predictions in just 6 min—a 225× speed improvement (90–95% time reduction) over conventional HEC-RAS 6.3 simulations. This rapid processing enables the practical implementation of flood early warning systems. Despite the dramatic speed gains, the solution maintains high predictive accuracy, evidenced by statistically robust 95% confidence intervals and strong spatial agreement with HEC-RAS benchmark maps. These findings highlight the critical role of the spatial variability of dependencies in enhancing model accuracy, representing a meaningful step toward scalable modeling frameworks with potential for global generalization of flood depth prediction.
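A minimal sketch of the pixel-based stage: a Random Forest regressor mapping per-pixel features to flood depth; the feature set and toy depth signal are assumptions, not the study's predictors.

```python
# Minimal sketch: RF regression from per-pixel features to flood depth.
# Features and the toy depth signal are assumptions, not the study's set.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((5000, 6))                          # e.g., elevation, slope, rainfall, ...
depth = 2.0 * X[:, 0] + rng.normal(0, 0.1, 5000)   # toy depth signal

X_tr, X_te, y_tr, y_te = train_test_split(X, depth, test_size=0.25, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test R2:", round(rf.score(X_te, y_te), 3))
```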

10 pages, 4353 KB  
Proceeding Paper
Should You Sleep or Trade Bitcoin?
by Paridhi Talwar, Aman Jain and Eugene Pinsky
Comput. Sci. Math. Forum 2025, 11(1), 20; https://doi.org/10.3390/cmsf2025011020 - 22 Aug 2025
Abstract
Dramatic price swings and the possibility of extreme returns have made Bitcoin a topic of intense interest for investors and researchers alike. Using advanced neural network models, including CNN, RCNN, and LSTM networks, this paper examines the intricacies of Bitcoin price behavior. We study different time intervals—close-to-close, close-to-open, open-to-close, and day-to-day—to find patterns that can inform an investment strategy. The average volatility over a year, six months, and three months is compared, and the predictive power of volatility-based strategies is weighed against a traditional buy-and-hold strategy. Our findings point out the strengths and weaknesses of each neural network model and provide useful insights into optimizing cryptocurrency portfolios. This study contributes to the literature on price prediction and volatility analysis of cryptocurrencies, providing useful information for researchers and investors taking strategic steps within the volatile cryptocurrency market.
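The interval returns the paper compares can be computed directly with pandas, as in this sketch on toy open/close data; the column names and synthetic price path are assumptions.

```python
# Sketch of the compared interval returns on toy OHLC-style data.
# Column names and the synthetic price path are assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
close = 30_000 * np.exp(np.cumsum(rng.normal(0, 0.02, 100)))  # toy price path
open_ = close * (1 + rng.normal(0, 0.005, 100))
df = pd.DataFrame({"open": open_, "close": close})

df["close_to_close"] = df["close"].pct_change()
df["close_to_open"] = df["open"] / df["close"].shift(1) - 1   # overnight move
df["open_to_close"] = df["close"] / df["open"] - 1            # intraday move
print(df[["close_to_close", "close_to_open", "open_to_close"]].describe())
```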
