Article

Flood Risk Forecasting: An Innovative Approach with Machine Learning and Markov Chains Using LIDAR Data

Department of Civil Engineering, Energy, Environment and Materials (DICEAM), Mediterranea University of Reggio Calabria, 89124 Reggio Calabria, Italy
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(13), 7563; https://doi.org/10.3390/app15137563
Submission received: 7 May 2025 / Revised: 24 June 2025 / Accepted: 1 July 2025 / Published: 5 July 2025
(This article belongs to the Section Green Sustainable Science and Technology)

Abstract

In recent years, the world has seen a significant increase in extreme weather events, such as floods, hurricanes, and storms, which have caused extensive damage to infrastructure and communities. These events result from natural phenomena and human-induced factors, including climate change and natural climate variations. For instance, the floods in Europe in 2024 and the hurricanes in the United States have highlighted the vulnerability of urban and rural areas. These extreme events are often unpredictable and pose considerable challenges for spatial planning and risk management. This study explores an innovative approach that employs machine learning and Markov chains to enhance spatial planning and predict flood risk areas. By utilizing data such as weather records, land use and land cover (LULC) information, topographic LIDAR data, and advanced predictive models, the study aims to identify the most vulnerable areas and provide recommendations for risk mitigation. The results indicate that integrating these technologies can improve forecasting accuracy, thereby supporting more informed decisions in land management. Given the effects of climate change and the increasing frequency of extreme events, adopting advanced forecasting and planning tools is crucial for protecting communities and reducing economic and social damage. This method was applied to the Calopinace area, also known as the Calopinace River or Fiumara della Cartiera, which crosses Reggio Calabria and is notorious for its historical floods. It can serve as part of an early warning system, enabling alerts to be issued throughout the monitored area. Furthermore, it can be integrated into existing emergency protocols, thereby enhancing the effectiveness of disaster response. Future research could investigate incorporating additional data and AI techniques to improve accuracy.

1. Introduction

In recent decades, the urgency of the climate change-induced increase in the frequency and severity of extreme weather events, particularly floods, has become increasingly evident. The IPCC Sixth Assessment Report (AR6) confirms with high confidence that anthropogenic climate change has already intensified heavy precipitation events in many regions, thereby elevating the risk of both pluvial and fluvial flooding [1]. This is further corroborated by recent studies such as that by Lakshmi [2], which analyzed the Eastern Shore in Virginia using CMIP6-based projections and found a significant increase in flood frequency and severity, with peak flood intensity up to 8.9% higher than the 2003–2020 baseline. Their findings also underscore the growing vulnerability of low-lying coastal areas to sea-level rise and hydroclimatic extremes, reinforcing the need for adaptive planning and risk mitigation strategies.
These events pose significant threats to community safety and environmental sustainability, causing substantial damage to infrastructure, ecosystems, and human lives [3]. This urgency underscores the need to develop practical predictive tools for risk management. Numerous studies, including those from the IPCC, have demonstrated that the intensification of natural disasters, especially floods, is closely linked to human activities [1]. Uncontrolled urbanization, deforestation, soil sealing, and greenhouse gas emissions all contribute to alterations in the natural hydrological cycle, increasing the frequency and severity of extreme events. The Climate Change 2021 report investigates this relationship’s scientific and physical basis, thoroughly addressing short-term climate forcings in the atmosphere [4]. This interaction between natural factors and anthropogenic pressures makes flood risk forecasting a complex but essential challenge for enhancing territorial resilience.
In Italy, recent events, including the floods in Emilia-Romagna and an increase in intense weather phenomena in 2024, have underscored the country's vulnerability. Floods, one of the most common natural disasters, are driven by various factors: heavy rainfall, rising sea levels, inadequate drainage systems, and increasing urban development. The consequences can be devastating, impacting human lives, infrastructure, the economy, and the environment.
To address these issues, both the European Union and Italy have implemented regulatory measures such as Flood Risk Management Plans (FRMPs) [5] and the Hydrogeological Asset Plan (PAI). These tools aim to prevent and mitigate hydrogeological risks through integrated and up-to-date planning.
Scientific research has produced numerous forecasting models for risk areas. Traditional hydrological models, such as deterministic models based on physical equations, have been extensively used to simulate how watersheds respond to meteorological events [6,7]. However, these models require a detailed understanding of the territorial characteristics and often struggle to adapt to complex and dynamic scenarios.
To address these limitations, numerical and probabilistic models have been increasingly adopted. Numerical models, like HEC-RAS, TUFLOW, and OpenFOAM, enable one-dimensional, two-dimensional, or three-dimensional simulations of water flow, offering a more realistic depiction of water propagation [8,9,10]. On the other hand, probabilistic models, such as Bayesian networks and the Analytic Hierarchy Process (AHP), facilitate the quantification of uncertainty while integrating multiple criteria into risk assessments [11,12,13].
Concurrently, artificial intelligence (AI) has introduced transformative methodologies in environmental modeling, particularly in the domain of flood risk assessment. Machine learning (ML) algorithms such as Random Forest (RF), Support Vector Machines (SVMs), and deep neural networks have demonstrated strong capabilities in processing large-scale, heterogeneous environmental datasets, including meteorological time series, topographic features, land use classifications, and hydrological indicators. These models are particularly effective in capturing complex, nonlinear interactions among variables, enabling the classification of flood-prone areas and the forecasting of extreme events with enhanced accuracy. For instance, RF is widely adopted for its robustness, interpretability, and suitability for spatial risk mapping, while deep learning architectures such as Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs) are increasingly employed to model temporal dependencies and spatial patterns in flood dynamics. These AI-driven approaches are reshaping predictive modeling in environmental sciences by offering scalable, data-driven solutions that complement or surpass traditional hydrological models [14,15,16,17,18].
Additionally, the integration of sequential models, such as Long Short-Term Memory (LSTM) [19], has enhanced temporal event prediction by capturing long-term dependencies in historical data. However, these models require significant computational resources and a complex training phase.
In this context, flood risk forecasting is crucial in spatial planning and civil protection. However, traditional forecasting models that rely on hydrological or statistical approaches are not without limitations: they often struggle with accuracy and adaptability in complex and dynamic scenarios, highlighting the need for a new approach.
This study proposes an innovative approach to flood risk forecasting that integrates a machine learning algorithm, Random Forest, with Markov chains. It utilizes high-resolution data derived from LIDAR, lithological maps, land use, and precipitation to improve the accuracy of flood risk assessments and support informed decision-making in land management.
The main contributions of this work include the following:
  • The development of a hybrid RF–Markov predictive model for the dynamic simulation of flood risk.
  • The use of LIDAR data to generate high-resolution digital terrain models.
  • A quantitative comparison with LSTM models to validate the effectiveness of the proposed method.
  • The creation of flood susceptibility maps to aid urban planning and mitigation strategies.
The recent literature has explored employing deep learning techniques and probabilistic models for flood prediction. However, the effective integration of spatial accuracy and temporal modeling is often lacking. Our approach bridges this gap by combining the robustness of tree-based models with the capability of Markov chains to model the temporal evolution of risk.
Our challenge is the difficulty of accurately predicting flood-prone areas in complex urban environments. Our model addresses this issue by integrating geospatial and temporal data into a predictive framework that is interpretable, efficient, and adaptable to various territorial contexts.

2. Related Work

Recent studies have increasingly focused on predicting flood risk due to the rising frequency and severity of extreme weather events, which cause significant economic damage and pose environmental concerns [20,21]. Researchers have explored various methodologies to enhance prediction accuracy and support effective land management strategies.
Traditionally, hydrological and statistical models have been widely used for flood prediction. However, these models often rely on simplified assumptions, such as linear relationships between variables, stationarity of processes, and limited spatial resolution. As a result, they struggle to capture the complex, nonlinear, and dynamic nature of flood phenomena, especially in heterogeneous environments. These limitations have prompted the adoption of more flexible and data-driven approaches, such as machine learning.
Among these, Random Forest (RF) has emerged as a robust algorithm for flood risk assessment, particularly due to its ability to handle high-dimensional data, perform feature selection, and model nonlinear interactions. Zhu et al. [19] developed a deep learning approach integrating RF with transfer learning in flood prediction models. Their methodology utilizes RF to identify relevant flood characteristics and employs Long Short-Term Memory (LSTM) neural networks to enhance temporal data processing.
Roohi et al. [22] implemented RF alongside Sentinel-1 radar imagery to classify flood-affected areas, achieving high predictive accuracy. Their research demonstrates the advantages of integrating RF with geospatial data for flood susceptibility analysis.
Kubendiran and Ramaiah [23] emphasized the role of remote sensing data (LIDAR, SAR) in improving flood mapping and prediction.
Islam et al. [24] applied Markov chains to model temporal transitions in flood-prone areas, achieving high accuracy in short-term forecasting. However, Markov models alone are limited in capturing spatial heterogeneity and complex environmental interactions.
Moreover, integrating LIDAR-derived Digital Elevation Models (DEMs) has significantly improved the accuracy of spatial analyses for flood risk mapping. Studies like that of Kader et al. [25] highlight the role of DEMs in refining land use analysis and flood susceptibility modeling.
In the present work, the incorporation of high-resolution LIDAR-derived Digital Elevation Models (DEMs), lithological maps, land use data, and rainfall patterns further enhances the model's precision and contextual relevance. This multi-source integration enables us to overcome the limitations of traditional models that often rely on incomplete datasets.
Recent works have explored hybrid approaches. Liu et al. [26] analyzed over 100 studies on the use of ML models (including RF, LSTM, and XGBoost) for flood depth prediction, highlighting the potential of hybrid models. Ezziyyani et al. [27] presented a study exploring the use of CNN, SVM, and KNN for flood forecasting and management, with Sentinel-2 imagery and weather data. Xu et al. [28] focused on the use of LSTM for urban runoff prediction, which is useful for integrating the temporal component into predictive models.
Our hybrid approach builds on these insights by combining the spatial modeling strengths of RF with the temporal forecasting capabilities of Markov chains. While RF has been effectively used in temporal prediction tasks, it does not natively model sequential state transitions. To address this, we integrated RF with Markov chains to simulate the temporal evolution of flood risk categories in a probabilistic framework.

3. Materials and Methods

In recent years, the increasing frequency and intensity of floods have highlighted the urgent need for more effective forecasting models. Research on flood monitoring and mapping has expanded, not to directly mitigate the impacts of land and urban changes, but rather to provide essential support for informed decision-making and targeted actions to address these challenges. Models have evolved over the past three decades: time series models (TSMs) dominated flood prediction until 2010, while machine learning (ML) models, especially artificial neural networks (ANNs), have dominated since 2011. Despite significant advancements, there remains potential for further integration of machine learning algorithms with other techniques to enhance predictive accuracy.
This study contributes to this effort by introducing an innovative approach that combines Random Forest with Markov chains, leveraging LIDAR data and geospatial information to enhance flood risk prediction. Developed in Python 3.11.2, this model delivers high spatial accuracy, which is crucial for identifying vulnerable areas and guiding proactive interventions. Rather than acting as a direct solution to flood-related challenges, this system serves as a valuable resource for local authorities, empowering them to make informed land management and urban planning decisions to mitigate risks effectively.
Figure 1 presents a graphical representation of the phases and data involved in flood risk assessment. Each process step, from data collection to modeling and validation, is outlined to emphasize the integrated and multidisciplinary approach adopted. This methodology considers various topographical and environmental factors influencing flooding, ensuring a more accurate and reliable risk assessment. The outcome of this process is the flood susceptibility map (FSM), which serves as a crucial tool for spatial flood risk assessment. It aids in identifying areas at risk and facilitates the planning of mitigation measures.
The Random Forest model was chosen for its proven accuracy and robustness compared to other machine learning algorithms [29,30]. While more advanced models, such as deep neural networks, exist, Random Forest strikes an optimal balance between interpretability, training speed, and predictive performance. This makes it particularly suitable for applications requiring a clear understanding of model decisions. Additionally, its lower susceptibility to overfitting enhances its reliability, making it a preferred choice for predictive analytics in complex environments.
The integration of advanced technologies, such as machine learning and Markov chains, facilitates the identification of high-risk areas and supports more informed land management decisions. These tools enable the analysis of large volumes of historical and current data, helping to identify patterns and trends that contribute to more accurate flood forecasting. Ultimately, this approach aids in disaster mitigation and community protection.
The Random Forest model introduces an element of randomness in selecting features for each tree, which helps reduce the correlation between trees and enhances the overall robustness of the model (Figure 2).
The process can be broken down into the following phases:
  • Generating Subsets of Features: A random subset of features is selected from the original dataset for each node in a tree. This subset is much smaller than the available features, ensuring tree diversity.
  • Determining the Optimal Feature: Among the randomly selected features, the one that best optimizes data separation is identified using quality metrics such as entropy or the Gini index.
  • Iterating the Process: This process is repeated for every node in every tree, ensuring that each tree is constructed using a unique set of features. This ensemble approach enhances the model’s ability to generalize to unseen data.
Before starting the model training, it is essential to configure several key hyperparameters.
A hyperparameter search technique should be applied to determine the parameters that optimize the model’s predictive performance. The proposed study selected Grid Search for its comprehensiveness, simplicity, and reliability.
The specific hyperparameters considered in this study were as follows:
- n_estimators: the number of trees in the forest. A higher number of trees enhances the model's performance by reducing variance.
- max_features: the number of features to consider when splitting each node. Setting it to 'sqrt' ensures that each tree is trained on a distinct subset of features, which enhances diversity and minimizes the risk of overfitting.
- max_depth: the maximum depth of each tree. Greater depth allows the capture of more complex patterns, but excessively deep trees can lead to overfitting, so a balance must be found.
- min_samples_leaf: the minimum number of samples required to form a leaf. It helps prevent the creation of overly specific subdivisions that may contribute to overfitting.
- min_samples_split: the minimum number of samples required to split a node. Like min_samples_leaf, this parameter aims to reduce excessive subdivisions.
The rationale for choosing these hyperparameters is based on achieving a balance between model accuracy and computational efficiency. Increasing the number of trees (n_estimators) improves model performance by reducing variance, but too many trees can increase computation time without significant performance gains.
A value of 300 was chosen as a good compromise between accuracy and computational efficiency. The limitation in its choice is dictated by computational constraints and the need to contain training times.
Setting max_features to ‘sqrt’ ensures that each tree is trained on a different subset of features, increasing diversity among the trees and reducing their correlation. This enhances the model’s robustness.
Selecting a max_depth of 30 enables the model to capture complex patterns while preventing overfitting. Min_samples_leaf and min_samples_split help prevent overly specific subdivisions, improving the model’s generalization.
The hyperparameter search was conducted using Grid Search, which explores all possible combinations of a predefined set of hyperparameters (Table 1). This technique ensures that no potentially optimal combination is overlooked. The study used the fit method to train the model on different combinations of hyperparameters as part of the Grid Search process. This method is crucial because it enables the model to learn from the data and adjust its internal parameters to minimize prediction error.
Cross-validation was used to evaluate the model’s performance on different sections of the dataset. A 3-fold cross-validation was employed, dividing the dataset into three parts and training and validating the model three times for each combination of hyperparameters. The dataset consisted of spatial units (grid cells) characterized by environmental and topographic features: elevation, land use, lithology, and average rainfall. These features were extracted from LIDAR, CORINE Land Cover, and ARPACAL rainfall data (2014–2024). The target variable (predictand) was the flood risk class assigned to each unit, categorized into four ordinal levels (low, moderate, high, very high), derived from PCA-based classification. The expression “different sections” refers to the three subsets created during 3-fold cross-validation. The dataset was randomly partitioned into three equal parts. In each iteration, two-thirds were used for training and one-third for validation, rotating the folds across three runs.
After completing the Grid Search, the best-performing model is selected based on the chosen evaluation metric, which in this case was accuracy, equal to 0.87 during the hyperparameter optimization phase. This value represents the proportion of correctly predicted flood risk classes across the validation folds and was used exclusively to guide the selection of the optimal parameter combination during Grid Search. The optimal parameters identified were n_estimators = 300, max_depth = 30, min_samples_split = 2, min_samples_leaf = 1 and max_features = ‘sqrt’ (Table 2). The total number of hyperparameter combinations is calculated as the product of the number of candidate values for each parameter:
  • n_estimators: 3 values ([100, 200, 300])
  • max_depth: 4 values ([None, 10, 20, 30])
  • min_samples_split: 3 values ([2, 5, 10])
  • min_samples_leaf: 3 values ([1, 2, 4])
  • max_features: 2 values (['log2', 'sqrt'])
The total number of combinations is 3 × 4 × 3 × 3 × 2 = 216. The parameter 'sqrt' in the context of Random Forest refers to the number of features to consider when looking for the best split; 'sqrt' indicates that the number of features considered at each split is the square root of the total number of features. This approach balances the diversity of individual trees and the reduction of overall model variance. The 'log2' parameter indicates that, for each split of a tree, the maximum number of features to consider is the base-2 logarithm of the total number of available features. This value reduces the number of features evaluated at each node, increasing diversity between trees and improving model generalization. These are standard options in the Random Forest implementation in Python and are used to control the number of features considered at each split, balancing model diversity and variance reduction.
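For illustration, the search described above can be expressed with scikit-learn as follows. This is a minimal sketch, assuming a prepared feature matrix X_train and risk-class label vector y_train (placeholder names, not the study's actual variables).

```python
# Minimal sketch of the Grid Search described above (scikit-learn).
# X_train and y_train are assumed to be the prepared feature matrix and
# flood-risk class labels; the names are placeholders.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {
    "n_estimators": [100, 200, 300],
    "max_depth": [None, 10, 20, 30],
    "min_samples_split": [2, 5, 10],
    "min_samples_leaf": [1, 2, 4],
    "max_features": ["log2", "sqrt"],
}  # 3 x 4 x 3 x 3 x 2 = 216 combinations

search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    cv=3,                # 3-fold cross-validation, as described above
    scoring="accuracy",  # metric used to select the best combination
    n_jobs=-1,           # evaluate candidate models in parallel
)
search.fit(X_train, y_train)  # the fit method trains each combination
print(search.best_params_)    # best combination found by the search
print(search.best_score_)     # its mean cross-validated accuracy
```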
To ensure reproducibility, we have made the full dataset structure, preprocessing steps, and model code publicly available at: http://github.com/Luigi2020357/Flood-Forecast. This includes the scripts used for feature extraction, model training, and evaluation. For further details, please refer to Appendix A.
Markov chains are widely used for modeling probabilistic systems that evolve over time, providing insights into environmental and anthropogenic interactions [31]. In flood forecasting, each state represents a different level of risk, transitioning probabilistically based on current conditions rather than past history, an attribute known as the Markov property or “short memory” [32,33].
A transition matrix P is used to describe these transitions mathematically. Each element of the matrix, Pij, indicates the probability of moving from state i to state j in a single step. The transition matrix is defined as follows:
P = \begin{pmatrix} P_{11} & \cdots & P_{1n} \\ \vdots & \ddots & \vdots \\ P_{n1} & \cdots & P_{nn} \end{pmatrix}
where Pij is the probability of transition from state i to state j. The sum of the probabilities of transition from a state must be equal to 1, i.e.,
\sum_{j=1}^{n} P_{ij} = 1 \quad \text{for each } i = 1, \dots, n
In homogeneous Markov chains, P remains constant, and the stationary probability distribution π satisfies the equation πP = π, ensuring equilibrium over time.
Conversely, non-homogeneous Markov chains allow transition probabilities to vary over time or due to external influences [34]. This adaptability enables the definition of multiple transition matrices P_t for different time intervals, reflecting environmental changes.
An important concept in Markov models is that of absorbing states, which represent situations in which, once reached, the system remains indefinitely, such as extreme flood risk scenarios.
Markov chains can be classified as regular or ergodic:
- Regular chains ensure that any state can be reached within a finite number of steps.
- Ergodic chains exhibit stable long-term behavior, converging toward a unique stationary distribution, irrespective of the initial state. For further details, please refer to Appendix B.
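To make these definitions concrete, the sketch below builds a small row-stochastic transition matrix over the four risk states and computes its stationary distribution as the left eigenvector associated with eigenvalue 1. The numerical values are placeholders for illustration, not the matrix estimated in this study.

```python
# Illustrative 4-state transition matrix (placeholder values, not the
# matrix estimated in this study). States: low, moderate, high, very high.
import numpy as np

P = np.array([
    [0.5, 0.3, 0.1, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.3, 0.4, 0.2],
    [0.1, 0.2, 0.3, 0.4],
])
assert np.allclose(P.sum(axis=1), 1.0)  # each row must sum to 1

# Stationary distribution pi satisfying pi P = pi: the left eigenvector of P
# associated with eigenvalue 1, normalized so its entries sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi /= pi.sum()
print(pi)  # long-run probability of finding the system in each risk state
```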
The Long Short-Term Memory (LSTM) model was used as a benchmark comparator to evaluate the predictive performance of RF against a more complex, sequence-oriented deep learning model. Our goal was to compare the following two fundamentally different approaches:
- Random Forest: a robust, interpretable, and computationally efficient model, well-suited for heterogeneous environmental data.
- LSTM: a deep learning model capable of capturing temporal dependencies, but more computationally intensive and less interpretable.
This comparison enabled us to highlight the strengths and limitations of each model in the context of flood risk prediction. The results showed that RF achieved higher accuracy (0.89 vs. 0.85), better precision and recall, and significantly faster training times, confirming its suitability for the task, especially when interpretability and efficiency are important.
LSTM is a specialized recurrent neural network (RNN) designed to process sequential data while overcoming the limitations of traditional RNNs in capturing long-term dependencies. This capability makes LSTM particularly effective for analyzing time series, such as meteorological and hydrological forecasts.
Its architecture incorporates memory cells that regulate data retention through the following three fundamental components:
- Input gate: updates the cell state with relevant new information.
- Forget gate: discards unnecessary data.
- Output gate: selects the most relevant information for prediction.
By integrating multimodal data sources (e.g., satellite imagery, meteorological records, and hydrological datasets), LSTM enhances predictive accuracy by reducing biases and errors.
Training the model involves Backpropagation Through Time (BPTT), an advanced optimization method that unrolls the network over time, mitigating the vanishing and exploding gradient problems common in traditional RNNs.
Despite its effectiveness, LSTM is computationally demanding and requires careful hyperparameter tuning to achieve optimal performance. In this study, the LSTM model was not used with default parameters. Instead, it was systematically optimized using GridSearchCV in combination with TimeSeriesSplit cross-validation. The following hyperparameters were explored:
  • Batch size: [16, 32, 64].
  • Epochs: [50, 100].
  • Optimizers: [‘adam’, ‘rmsprop’].
  • Dropout rates: [0.2, 0.3, 0.4].
This resulted in a total of 36 combinations, each evaluated using 5-fold time series cross-validation, for a total of 180 training runs. The model was wrapped using KerasRegressor to ensure compatibility with GridSearchCV. The best-performing configuration was selected based on validation loss, and the final model was retrained using the optimal parameters. The training and performance metrics (MSE, RMSE, R2) were computed and visualized accordingly.
The architecture of the optimized network, illustrated in Figure 3, includes several layers.
  • Input layer: input shape (60, 1), meaning the model takes sequences of 60 timesteps as input, each with a single feature.
  • First LSTM layer: 50 units, with return_sequences=True, so the layer returns the entire output sequence for each timestep; this sequence is used as input for the next LSTM layer.
  • First dropout layer: dropout rate 0.2, which helps prevent overfitting by randomly deactivating 20% of neurons during training.
  • Second LSTM layer: 50 units, with return_sequences=False, so the layer returns only the last output of the sequence, which is used as input for the final dense layer.
  • Second dropout layer: dropout rate 0.2, which, like the first dropout layer, helps prevent overfitting.
  • Dense layer: 1 unit; this is the output layer that produces a single prediction.
Compiling the model:
  • Optimizer: Adam.
  • Loss Function: Mean Squared Error (MSE).
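The optimization and architecture described above can be sketched as follows. This is a hedged example assuming the scikeras wrapper around TensorFlow/Keras; X_seq and y_seq (input sequences of shape (n_samples, 60, 1) and their targets) are placeholder names for data assumed to be prepared, not the study's exact implementation.

```python
# Hedged sketch of the LSTM architecture and hyperparameter search described
# above; assumes the scikeras wrapper and TensorFlow/Keras.
from scikeras.wrappers import KerasRegressor
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from tensorflow import keras

def build_lstm(dropout_rate=0.2):
    """The architecture of Figure 3: two LSTM layers with dropout."""
    return keras.Sequential([
        keras.layers.Input(shape=(60, 1)),           # 60 timesteps, 1 feature
        keras.layers.LSTM(50, return_sequences=True),
        keras.layers.Dropout(dropout_rate),
        keras.layers.LSTM(50, return_sequences=False),
        keras.layers.Dropout(dropout_rate),
        keras.layers.Dense(1),                       # single-value prediction
    ])

param_grid = {
    "batch_size": [16, 32, 64],
    "epochs": [50, 100],
    "optimizer": ["adam", "rmsprop"],
    "model__dropout_rate": [0.2, 0.3, 0.4],
}  # 3 x 2 x 2 x 3 = 36 combinations

search = GridSearchCV(
    KerasRegressor(model=build_lstm, loss="mse", verbose=0),
    param_grid,
    cv=TimeSeriesSplit(n_splits=5),    # 36 x 5 = 180 training runs
    scoring="neg_mean_squared_error",  # best configuration by validation loss
)
search.fit(X_seq, y_seq)  # X_seq, y_seq assumed prepared elsewhere
```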
In this study, we utilized LIDAR (Light Detection and Ranging) data.
LIDAR is a high-resolution remote sensing technology used to model flood-prone areas with remarkable precision [35]. Mounted on a platform equipped with an IMU (Inertial Measurement Unit), the sensor emits laser pulses to measure distances between itself and ground objects, enabling three-dimensional georeferencing.
By scanning terrain, LIDAR generates a point cloud, from which both digital terrain models (DTMs) and digital surface models (DSMs) can be derived. These data allow for precise elevation mapping, with a detection density exceeding 1.5 points/m2 and an altimetric accuracy within 15 cm.
LIDAR-derived digital elevation models (DEMs) play a critical role in hydraulic modeling, particularly in urban environments where high-resolution topography is necessary for assessing flood susceptibility. DEM resolution directly impacts flood propagation accuracy, with research confirming LIDAR-generated DEMs outperform other models [36,37].
This methodology provides a reliable framework for hydrological predictions, aiding in the identification of risk areas and supporting effective land management strategies.

3.1. Integration of RF and Markov Chain Models

The integration of the Random Forest (RF) model with the Markov chain presents a sophisticated and complementary approach to flood risk prediction. The RF model utilizes environmental and topographic data, including information derived from LIDAR, to accurately forecast flood-prone areas. Following this, the Markov chain models the temporal evolution of flood risk, simulating the transitions between different hazard levels (low, moderate, high, and very high).
This combination offers a dynamic perspective on risk. The transition matrix, constructed from the RF model’s predictions, quantifies the probabilities of transition between risk states, providing a probabilistic outlook on the future progression of flood risks. The unit of time used for observing transitions between flood risk states is monthly. The dataset spans 11 years, and each year is divided into 12 monthly intervals resulting in a total of 132 time steps. Each time step corresponds to a monthly observation of flood risk classification (low, moderate, high, and very high); an “observation” refers to the monthly flood risk classification assigned to each spatial unit (DEM cell) in the study area. Each unit is characterized by environmental features and receives a predicted risk class at each time step. With 132 time steps (11 years × 12 months), each unit contributes 132 observations to the temporal sequence used in the Markov chain analysis.
In this study, states are used to symbolize different flood risk classes rather than directly representing meteorological or hydrological conditions. These classes are derived from predictions made by the Random Forest model, which incorporates various environmental variables to categorize each spatial unit into one of four distinct risk levels. This method provides a clear understanding of transition processes over time by utilizing a Markov chain to model the evolution of risk. Statistical validation has confirmed the homogeneity of this matrix, indicating that transition probabilities remain stable over time, which reinforces the reliability of the integrated model.
To verify the temporal stability of the transitions between risk classes, we analyzed the homogeneity of the Markov chain by means of G2 statistics, as proposed by Anderson and Goodman [38] in their classic treatment of hypothesis testing for Markov chains. In particular, the null hypothesis tested is:
H0:
The transition matrix P describes a homogeneous Markov chain, i.e., the transition probabilities do not vary over time.
To this end, the observation period was divided into two distinct ranges: T1 (2014–2018) and T2 (2019–2024). For each interval a transition matrix (P1 and P2) was calculated, while the aggregate matrix P3, covering the entire period (T1 + T2), was used to estimate the expected frequencies e_ij under H0.
The G2 statistic was calculated according to the formula
G^2 = 2 \sum_{i=1}^{r} \sum_{j=1}^{c} n_{ij} \ln\left(\frac{n_{ij}}{e_{ij}}\right)
where:
  • n_ij is the observed frequency of transitions from state i to state j in periods T1 and T2;
  • e_ij is the expected frequency calculated on the basis of the aggregate matrix P3;
  • r = c = 4 is the number of states (risk levels: low, moderate, high, and very high).
The degrees of freedom are equal to (r − 1) × (c − 1) = 9. The G2 test was used to compare the P1 and P2 matrices against the P3 aggregate matrix. The critical value of the chi-square distribution at the significance level of 5% is 16.92. The calculated value of G2 was 2.16, well below the critical threshold, confirming the homogeneity of the Markov chain and the stability of the transition probabilities over time.
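For transparency, a minimal sketch of this computation is given below. It assumes 4×4 matrices of observed transition counts for T1 and T2 and the pooled row-stochastic matrix P3; all names are placeholders.

```python
# Sketch of the G2 homogeneity test described above. 'counts_periods' holds
# the 4x4 observed transition-count matrices for T1 and T2; 'p_pooled' is the
# row-stochastic matrix estimated over the whole period (placeholders).
import numpy as np

def g2_statistic(counts_periods, p_pooled):
    g2 = 0.0
    for n in counts_periods:
        # Expected counts under H0: each row total times the pooled probabilities.
        e = n.sum(axis=1, keepdims=True) * p_pooled
        mask = n > 0                     # convention: 0 * ln(0 / e) = 0
        g2 += 2.0 * np.sum(n[mask] * np.log(n[mask] / e[mask]))
    return g2

# Compare the result against the chi-square critical value with
# (r - 1) x (c - 1) = 9 degrees of freedom at the 5% level (16.92);
# values below this threshold support the homogeneity hypothesis.
```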
This verification reinforces the validity of the integrated RF–Markov predictive model, demonstrating that the observed risk dynamics are statistically consistent over the entire study period.
The flood risk forecasting process is structured into several phases, incorporating machine learning techniques, geospatial analysis, and stochastic modeling:
  • Data collection and preprocessing (see the code sketch after this list). CSV files containing daily rainfall data from the ARPACAL gauge network (Calabria, Italy) were collected for the period 2014–2024. The original data, recorded at hourly intervals, were aggregated to daily totals. Non-numeric values (e.g., sensor errors, missing transmissions) were converted to NaN. Approximately 7.3% of the data were missing, with spatial and temporal variability. Rows with simultaneous NaNs across multiple stations were removed, while isolated gaps were filled using linear temporal interpolation. Stations with excessive missing data were excluded. TIFF files containing elevation data (DEM), land use maps, and lithology maps were loaded and resampled to a common resolution. StandardScaler, a scikit-learn module, was used to standardize the dataset's features before model training.
  • Feature extraction. Feature extraction was performed by combining spatial data on elevation, land use, lithology, and long-term average rainfall. Rainfall data were averaged by first computing the annual mean daily precipitation for each station over the period 2014–2024. These values were then spatially interpolated using ordinary kriging to generate a continuous raster of average rainfall, and each pixel in the study area was assigned the corresponding interpolated rainfall value. All features were then transformed into a structured array for model input.
  • To reduce redundancy and improve computational efficiency, Principal Component Analysis (PCA) was applied to the input feature matrix, which included elevation (from LIDAR-derived DEM), land use (from Copernicus CORINE), lithology (from geological maps), and long-term average rainfall (from ARPACAL data). Each raster was spatially aligned and flattened, resulting in a dataset of 84,656 spatial units (pixels), each described by four standardized variables.
  • PCA was applied globally to the entire feature matrix, rather than separately by feature group, in order to capture the joint variance structure and potential interactions among topographic, geological, and meteorological variables. The transformation was performed after standardization using StandardScaler, ensuring equal contribution from all features regardless of scale.
  • The number of principal components was determined by setting n_components=0.95, retaining only those components necessary to explain at least 95% of the total variance. This resulted in the retention of seven principal components. The first three components alone accounted for approximately 95% of the variance, with PC1 dominated by lithological variables (especially clay and sandstone), PC2 by limestone, and PC3 by land use classes (notably forest and urban areas).
  • The Parallel function of joblib is applied to perform operations in parallel, leveraging multiple processors or threads to improve performance and reduce execution time.
  • Creating the machine learning model. A Random Forest model is trained on training data (80%) to classify areas into different risk classes.
  • Implementation of the Markov model. A Markov chain is created to predict transitions between risk classes.
  • Flood risk classification.
  • Creation of the flood susceptibility map. The model’s forecasts are used to generate a flood risk map, with different risk classes (Low, Moderate, High, Very High).
  • Maps view. The map is displayed using a color map to highlight the different risk classes.
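A condensed sketch of these phases is shown below. File names and the label vector y are placeholders for illustration; the full implementation is available in the repository cited above.

```python
# Condensed sketch of the pipeline phases above. File names and the label
# vector y are placeholders; see the cited repository for the full code.
import numpy as np
import pandas as pd
import rasterio
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# 1. Rainfall: hourly gauge records -> daily totals, with gap handling.
rain = pd.read_csv("arpacal_rainfall.csv", parse_dates=["timestamp"],
                   index_col="timestamp")
rain = rain.apply(pd.to_numeric, errors="coerce")  # sensor errors -> NaN
daily = rain.resample("D").sum(min_count=1)        # aggregate to daily totals
daily = daily.dropna(how="all")                    # drop rows that are all NaN
daily = daily.interpolate(method="time")           # fill isolated gaps

# 2. Static rasters, assumed co-registered on the same grid.
elevation = rasterio.open("dem_lidar.tif").read(1)
landuse = rasterio.open("corine_lulc.tif").read(1)
lithology = rasterio.open("lithology.tif").read(1)
avg_rain = rasterio.open("kriged_rainfall.tif").read(1)  # kriged mean rainfall

# 3. Flatten rasters into a feature matrix, standardize, reduce with PCA.
X = np.column_stack([a.ravel().astype(float)
                     for a in (elevation, landuse, lithology, avg_rain)])
X = StandardScaler().fit_transform(X)
X_pca = PCA(n_components=0.95).fit_transform(X)    # keep 95% of the variance

# 4. Train the Random Forest on 80% of the units (y: risk-class labels).
X_tr, X_te, y_tr, y_te = train_test_split(X_pca, y, train_size=0.8,
                                          random_state=42)
rf = RandomForestClassifier(n_estimators=300, max_depth=30,
                            max_features="sqrt").fit(X_tr, y_tr)
```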
Figure 4 illustrates the development phases of the RF–Markov model.
Predicting the evolution of flood risk over time is crucial for emergency management and resource allocation. The integrated model provides valuable insights for developing effective mitigation and response strategies, supporting decision-making to protect communities and minimize impacts. Understanding flood timing and progression enhances resource management during emergencies, improving early warning systems and information accuracy. Long-term forecasts aid in creating effective flood risk management plans, including preventive measures like flood barriers and advanced drainage systems.

3.2. Dataset

The flood risk forecasting model is based on four main categories of data: geospatial, environmental, meteorological, and LIDAR, offering an integrated view of the territory.
- Geospatial data: the geological information comes from the Geological Map of Italy, with a focus on the Calabria Region, to assess soil permeability.
- Land use: land use data come from the CORINE Land Cover project, with a geometric accuracy of 25 m, and are critical for estimating the impact of urbanization on surface runoff.
- LIDAR: elevation and slope data are obtained from digital elevation models (DEMs) derived from LIDAR surveys, provided by the MASE Extraordinary Remote Sensing Plan (PST). With an elevation accuracy better than 15 cm, these data make it possible to identify depressed areas, slopes, and water storage areas.
- Meteorological data: based on eleven years of observations, the rainfall data were obtained from ARPACAL (Agenzia Regionale per la Protezione dell'Ambiente della Calabria), which operates a network of ground-based rain gauges across the region. The spatial distribution of stations is denser in coastal and urban areas, with sparser coverage in mountainous zones. Data were accessed via formal request and processed in compliance with ARPACAL's data usage guidelines. Limitations include spatial gaps and occasional sensor outages, which were addressed through data cleaning and interpolation strategies as described above.
The integration of meteorological, geological, LIDAR, and land use data was carried out through a methodological approach that leverages the distinct characteristics of each dataset. Meteorological data, collected daily over 11 years, represent the dynamic component of the model, capturing rainfall variability, one of the main drivers of flood events. This variability is modeled using Markov chains to simulate transitions between different risk states over time. In contrast, geological and LIDAR data provide a high-resolution, static representation of the territory’s physical and morphological features, such as lithology, elevation, and slope. These datasets are fundamental for defining the intrinsic susceptibility of the territory to water accumulation and outflow. Land use data, while spatially detailed, are not strictly static. Changes in land cover due to urban expansion, deforestation, or agricultural transformation can significantly alter surface runoff and infiltration capacity over time. Therefore, land use is treated as a semi-dynamic variable, whose influence is modeled in combination with meteorological variability.
The integration of these datasets occurs within the Random Forest model, which combines spatial information (e.g., lithology, elevation, land use) with dynamic meteorological data (precipitation) to classify areas into different flood risk categories. While lithology and elevation remain constant over the 11-year simulation period, land use and rainfall introduce variability—rainfall through monthly time series, and land use through its spatial heterogeneity and potential for change.
This temporal component is further modeled using a Markov chain, which simulates the probabilistic evolution of flood risk over time based on the initial classifications provided by the Random Forest model.
In summary, static data define the spatial predisposition to risk, while rainfall and land use dynamics drive temporal and spatial variability. This synergy enables the model to capture both structural and evolving aspects of flood risk, enhancing strategic understanding and forecasting capabilities.
The results of the application of this methodology, presented in the following section, demonstrate the model’s effectiveness and accuracy in identifying high-risk areas.

4. Area of Study

The Calopinace, also known as the Calopinace River or Fiumara della Cartiera, is a watercourse that crosses Reggio Calabria. The name originates from the Greek “καλός πινακε” (calòs pinàke), which means “beautiful view”.
The Calopinace, which extends for about 44.15 km, has a catchment area of 52.91 km2. It rises in the Aspromonte mountain range at an altitude of 1525 m above sea level and flows into the Strait of Messina near the central station of Reggio Calabria. Historically, the river inherited its catchment area from the ancient Apsìas, where Greek colonists landed to found the city of Reggio in the eighth century BC. During the summer season, the river is often dry, while in winter its flow increases significantly due to seasonal rainfall. This behavior is typical of fiumare, torrential watercourses characterized by a highly variable hydrological regime. The river has suffered several historical floods, documented in chronological order by official sources such as the Polaris portal and the CNR Hydrogeological Catastrophe Information System (SICI). The most devastating floods were those of 1951 and 1953, which caused extensive material damage and hundreds of deaths, followed by those of 1972, 1976, 1980, 2000, and 2015. Concrete embankments have been built along the river to cope with frequent flooding and improve urban viability.
Geologically, the Calopinace crosses a complex region with various rock formations (Figure 5). Its source is the Aspromonte, a mountain range composed mainly of metamorphic and sedimentary rocks.
Land use in the areas surrounding the Calopinace is diverse and reflects the geographical and climatic characteristics of the region. Most of the territory is dedicated to agriculture, with olive groves, citrus groves, and vineyards omnipresent. The mountainous regions of Aspromonte, from which the Calopinace originates, are covered with forests and nature reserves, which play a crucial role in the conservation of biodiversity and soil protection. As the river flows through Reggio Calabria, land use shifts predominantly to urban areas, with infrastructure, housing, and commercial areas.
The Geological Map of Italy, at a scale of 1:500,000, was provided by the Ministry of the Environment; the portion relating to the Calabria Region was extracted from it for this study.
Figure 6 shows the lithological nature of the Fundamental Lithological Unit (ULF) area based on the physical characteristics of the rocks.
Lithological data at a scale of 1:100,000 are provided by the Regional Cartographic Center and derived from the Higher Institute for Environmental Protection and Research (ISPRA).
Figure 7 shows the Land Use/Land Cover (LULC) data on the area. The European project CORINE Land Cover, part of the Copernicus Programme, provides valuable information on land use and cover. With a history dating back to 1990, this data has been collected across Europe using advanced optical satellites. The geometric accuracy of this data is within 25 m.

5. Results

The data used in our study consisted of rainfall data collected by ARPACAL over the past eleven years, from 2014 to 2024, together with lithological and land use data. Additionally, the LIDAR data relevant to the study area were sourced from the PST (Extraordinary Remote Sensing Plan) database [39]. This database, provided by the Ministry of the Environment and Energy Security (MASE), contains geospatial and earth observation information that can be utilized to monitor and predict environmental and territorial phenomena.
Although there is no specific and universally recognized standard for selecting the causal factors that influence flood events [40], the interaction between various topographical and environmental factors plays a crucial role in flood risk assessment. The following factors were taken into account in the study:
Rainfall Data: This provides the amount, intensity, and duration of rainfall. Heavy, prolonged rainfall can saturate soils and increase surface runoff, thereby heightening flood risk.
Land Use: Urbanized areas with impermeable surfaces (e.g., roads, buildings) reduce water infiltration, while regions with natural vegetation absorb more water, lowering flood risk.
Lithology Map: The composition and structure of subsurface rocks and sediments affect water absorption. Porous materials like sand absorb more water than compact ones like clay, influencing runoff behavior.
LIDAR Data and DEMs: High-precision LIDAR data supplies detailed topographic information (elevation and slope) crucial for modeling water flow. Digital Elevation Models (DEMs) derived from LIDAR data help predict water accumulation and identify flood-prone areas. Two types of models are commonly used:
  • DSMFirst: captures the first return of the laser, including buildings, trees, and other surface objects.
  • DSMLast: captures the last return, focusing on the bare ground to produce a more accurate terrain representation. By removing surface features, a precise Digital Terrain Model (DTM) is obtained.
These data are integrated to simulate floods: LIDAR files are loaded to create the DEM, features are extracted, and all the information feeds into the Random Forest–Markov chain forecasting model. This model analyzes the combined topographical and environmental factors to assess flood risk accurately.
The feature importance values shown in Figure 8 are derived from the Random Forest model and are expressed as percentages. These values represent the relative contribution of each feature to the model’s predictive performance, typically calculated using the mean decrease in impurity (Gini importance) across all decision trees in the ensemble. In this case, rainfall contributes approximately 35%, lithology 30%, elevation 25%, and land use 10% to the classification of flood risk.
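As an illustration, such percentages can be read directly from a fitted scikit-learn forest. The snippet below assumes a trained model rf whose input features are the four raw variables in the order given; both the model name and the feature order are placeholders.

```python
# Illustrative: reading Gini importances from a fitted Random Forest ('rf'),
# assuming it was trained on the four raw features in this order.
feature_names = ["rainfall", "lithology", "elevation", "land_use"]
for name, importance in zip(feature_names, rf.feature_importances_):
    print(f"{name}: {importance:.0%}")  # mean decrease in impurity; sums to 1
```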
Along with feature importance plots, a sensitivity analysis was performed to assess how changes in input variables affect model predictions. The results of this analysis are illustrated in Figure 9, which displays the model’s response to changes in key variables.
While lithology and elevation are static variables, both rainfall and land use can vary over time and space. In particular, land use changes due to urbanization, deforestation, or agricultural practices can significantly alter surface runoff and infiltration capacity.
Figure 9 reflects this distinction by showing simulated sensitivity curves for rainfall and land use, and theoretical curves for lithology and elevation. This approach enables us to capture both the dynamic and structural components of flood risk, providing a more realistic and comprehensive understanding of how each factor contributes to flood hazard.
Sensitivity analysis is essential for understanding and managing flood risk. Figure 9 illustrates how standardized changes in four key variables, Rainfall, Land Use, Lithology, and Elevation, affect predicted flood risk. Rainfall (solid blue line) and Land Use (dotted purple line) are treated as dynamic variables, with curves derived from model-based simulations. Lithology (dashed green line) and Elevation (red dash–dot line) are considered static variables, and their curves represent theoretical relationships based on hydrological reasoning and their contribution to the model's classification. All curves are plotted on a uniform y-axis scale to enable the direct comparison of their influence on flood risk.
These variables interact in complex ways, making an integrated analysis of all of them essential in flood risk assessment. The assignment of weights to environmental factors in our model was determined through a sensitivity analysis, which identified the variables with the greatest impact on flood susceptibility. Rather than directly applying these weights, we employed Principal Component Analysis (PCA) to synthesize the information from the most influential variables into a reduced set of uncorrelated components. PCA allows for dimensionality reduction by eliminating redundancies and generating principal components that represent optimal linear combinations of the original variables.
To classify flood susceptibility levels, we employed a synthetic index derived from the first principal component (PC1) of the PCA. The classification into four susceptibility classes (Low, Moderate, High, and Very High) was based on quartile thresholds of the PC1 scores. This statistical approach was adopted to ensure objectivity and comparability across spatial units in the absence of consistent hydrological benchmarks or flood incidence records. Importantly, this classification was used solely for spatial interpretation and visualization purposes, and not as a target variable in model training.
PCA loadings were carefully interpreted to retain physical meaning, and a sensitivity analysis was conducted to assess the influence of individual variables (see Figure 9). The resulting susceptibility map represents a relative environmental predisposition to flooding, not an absolute hazard measure. Limitations of this approach are acknowledged, and future work will focus on calibrating the classification using hydrodynamic models, historical flood data, and, where feasible, ground-truthing.
This approach makes it possible to identify the most vulnerable areas with objective criteria, avoiding distortions resulting from an arbitrary choice of classification parameters.
Through this strategy, the model is able to integrate the relative importance of features with a robust aggregation method, improving the accuracy of flood risk prediction and facilitating the interpretation of results.
The risk classes identified are as follows:
  • Low: R < T1;
  • Moderate: T1 ≤ R < T2;
  • High: T2 ≤ R < T3;
  • Very High: R ≥ T3.
Risk class thresholds (quartile-based):

Class      | Risk index range
Low        | R ≤ −0.34
Moderate   | −0.34 < R ≤ 0.01
High       | 0.01 < R ≤ 0.34
Very High  | R > 0.34
Negative values indicate environmental conditions that are less favorable to flooding than average. Positive values indicate more risk-prone conditions.
The classification into quartiles does not imply that 50% of the territory is at absolute risk, but that in relation to the sample, half of the areas have more favorable conditions for flooding.
The classification of flood susceptibility was performed independently from the model training phase. A synthetic index was computed for each area using a weighted combination of environmental variables (rainfall, lithology, land use, and altitude), based on their relative importance. This index was then divided into quartiles to identify areas with different levels of susceptibility: Low, Moderate, High, and Very High.
This classification was not used as a target variable for model training. The Random Forest model was trained separately using 80% of the dataset, with meteorological data and environmental variables.
The quartile-based classification serves as a complementary tool for spatial interpretation, allowing for the identification of areas with similar environmental conditions that are more or less favorable to water accumulation and runoff. This approach ensures objectivity and reproducibility in the absence of historical flood records, and is grounded in real, observed data.
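A minimal sketch of this quartile-based classification is shown below, assuming the first-principal-component scores are available as a one-dimensional array pc1 (a placeholder name).

```python
# Sketch of the quartile-based classification; 'pc1' holds the first
# principal component scores for all spatial units (placeholder name).
import numpy as np

t1, t2, t3 = np.quantile(pc1, [0.25, 0.50, 0.75])  # here: -0.34, 0.01, 0.34
classes = np.select(
    [pc1 < t1, pc1 < t2, pc1 < t3],  # conditions evaluated in order
    ["Low", "Moderate", "High"],
    default="Very High",             # everything at or above t3
)
```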
The predictions of the model are used to construct the transition matrix.
Figure 10 presents the transition probability matrix derived from the full time series of historical flood risk classifications. The matrix captures the stochastic dynamics of flood risk evolution across all available monthly observations. Each cell represents the normalized probability of transitioning from one risk state to another (Low, Moderate, High, Very High) based on observed transitions over time. This matrix is spatially aggregated and reflects the average behavior across the entire study area.
The transition matrix is a square matrix in which each element (i, j) represents the number of transitions observed from state i to state j in the model’s predictions.
The predictions generated with the Markov chain are obtained starting from the initial state (the first prediction of the model); at each subsequent step, the transition matrix is used to determine the probabilities of transition to the subsequent states, and the next state is chosen according to these probabilities. The transition matrix obtained represents the probabilities of transition between different states of flood risk: low, moderate, high, and very high. Numeric values are displayed within cells for clarity, and the values are represented with different colors to identify the level of risk.
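This construction can be sketched as follows, where sequence is a placeholder for the series of 132 monthly risk-class labels produced by the model.

```python
# Sketch: estimating the transition matrix from a sequence of predicted
# monthly risk classes and sampling the next state. 'sequence' is a
# placeholder for the 132 monthly labels produced by the model.
import numpy as np

states = ["Low", "Moderate", "High", "Very High"]
idx = {s: k for k, s in enumerate(states)}

counts = np.zeros((4, 4))
for a, b in zip(sequence[:-1], sequence[1:]):
    counts[idx[a], idx[b]] += 1                 # count observed transitions
P = counts / counts.sum(axis=1, keepdims=True)  # normalize rows to probabilities

rng = np.random.default_rng(0)
current = idx[sequence[-1]]                     # start from the last state
nxt = rng.choice(4, p=P[current])               # draw the next state from its row
```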
Analyzing the data reveals that low-risk areas have a 22.2% chance of remaining in their current state. However, they also have a 22.2% probability of transitioning to a moderate-risk state, and a slightly higher probability, 27.8% each, of moving to a high-risk or a very high-risk state.
Moderate-risk areas have a 10.7% chance of improving and moving to a low-risk state. They also have a 28.6% probability of remaining in the moderate-risk category. However, the likelihood of worsening and transitioning to a high-risk state is 35.7%, while the probability of escalating to a very high-risk state is 24.0%.
It is important to note that while the probability of worsening from moderate to high is significant, the reverse transition, from high to moderate, has an even higher likelihood (46.4%). This suggests that high-risk areas are statistically more likely to improve than moderate-risk areas are to deteriorate, highlighting the dynamic and potentially reversible nature of flood risk in certain zones.
High-risk areas show a 14.3% chance of improving and transitioning to a low-risk state. They have a significant 46.4% probability of moving to a moderate-risk state. The probability of staying in the high-risk category is relatively low, at 17.9%, while there is a notable 22% chance of moving to a very high-risk state.
Lastly, areas categorized as very high-risk have a 28.0% chance of improving and transitioning to a low-risk state and a 16.0% chance of moving to a moderate-risk state. The probability of moving to a high-risk state or remaining in the very high-risk category is 32.0% and 24.0%, respectively.
For instance, the probability that a moderate-risk area may escalate to a high-risk state (35.7%) indicates the need for prioritized monitoring and preventive measures. Thus, the transition matrix serves not only as a descriptive tool but also as a valuable resource for informed decision-making in proactive risk management.
This analysis helps us better understand the transition dynamics between different flood risk states, allowing for more targeted interventions to mitigate risks in the most vulnerable areas.
To verify the validity of the Random Forest model, standard performance metrics (e.g., accuracy, precision, recall) were calculated by comparing the predicted flood risk categories with the expected outcomes derived from the model’s structure and input data.
These expected outcomes are not based on historical flood event records, which are not available in a consistent and georeferenced form for the study area, but rather on the anticipated risk levels associated with specific combinations of environmental and meteorological conditions.
The units of measurement used in the evaluation are consistent with the risk categories and reflect the degree of agreement between the model’s predictions and the expected classification under given input conditions.
The model achieved an RMSE of 0.118 and an MSE of 0.014, indicating a low average prediction error. The R2 value of 0.80 suggests that the model explains 80% of the variance in the observed flood risk levels, confirming its strong predictive performance on the ordinal classification task.
A sensitivity analysis was conducted to evaluate how the R2 value changes with the number of variables in the Random Forest model; the results are shown in Figure 11. A radar diagram (Figure 12) was also produced, providing a clear and immediate view of the relative importance of the variables in the model. Together, these diagrams give a comprehensive picture of the model’s performance, both in terms of individual variable contributions and overall improvement.
Figure 11 illustrates the trend of model accuracy (measured by R2) as a function of the number of explanatory variables. This analysis is theoretical and simulates the progressive inclusion of additional variables, some of which are hypothetical and not explicitly listed in the manuscript, in order to evaluate the model’s sensitivity to increasing complexity. The curve shows that accuracy improves as relevant variables are added up to a certain point (around eight variables), after which further additions yield marginal gains or even slight decreases, owing to the introduction of noise or redundancy.
It is important to note that this analysis is not based on data from individual monitoring stations, but on spatially distributed variables applied across the study area. The radar diagram (Figure 12) highlights the relative importance of the variables, clearly showing which factors have the greatest impact on flood risk forecasts, and also shows that the R2 value remains stable when all variables are considered together. This suggests that the model is effective in predicting flood risk from these variables.
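An analysis of this kind can be sketched by refitting the model on progressively larger subsets of predictors and recording R2 at each step; the data below are synthetic placeholders, not the study's actual layers:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Synthetic stand-ins for the spatially distributed predictors, ordered by importance
X = pd.DataFrame(rng.random((500, 10)), columns=[f"var_{i}" for i in range(10)])
y = X.iloc[:, :4].sum(axis=1) + 0.1 * rng.random(500)   # synthetic risk target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
scores = []
for n in range(1, X.shape[1] + 1):
    rf = RandomForestRegressor(n_estimators=100, random_state=42)
    rf.fit(X_train.iloc[:, :n], y_train)                # use the first n predictors
    scores.append(r2_score(y_test, rf.predict(X_test.iloc[:, :n])))
# 'scores' traces a curve analogous to Figure 11 (R2 vs. number of variables)
```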
A flood susceptibility map (FSM) is a crucial tool for spatial flash flood risk assessment. This map was generated using the Random Forest (RF) model, which represents a systematic approach for flood risk assessment within the study area. The risks were classified into four distinct categories: “low”, “moderate”, “high”, and “very high”, which were associated with distinct colors to identify the level of risk (Figure 13).
Figure 13 shows the spatial distribution of flood susceptibility across the study area, as predicted by the Random Forest model. The model assigns a risk level to each cell of a regular grid based on the values of multiple spatial predictors (e.g., slope, land use, precipitation, drainage density). These predictions are then visualized in QGIS, where the classified risk levels are mapped using a color scale. The contour lines (isohypses) are included for topographic reference only and do not influence the classification. The resulting map highlights areas with varying degrees of flood susceptibility, not limited to the main river corridor. This spatially continuous output provides a valuable tool for identifying critical zones and supporting regional flood risk management strategies.
The green areas, being elevated or well-drained, have a low flood risk, while the yellow zone, characterized by moderate drainage, poses a moderate risk. The orange areas, typically near rivers or in flat, low-lying regions, face a high flood risk, and the red areas are very vulnerable, often requiring preventive measures. This scheme indicates that regions with low slopes in low-lying areas, especially those near river banks, are more susceptible to flash flooding [41]. Given the number of high-risk zones, urgent flood risk management is essential, including infrastructure improvements, flood barriers, advanced warning systems, defined evacuation procedures, and potential restrictions on new construction.
Flood susceptibility plans are fundamental for spatial planning and disaster mitigation because they accurately map flood risk levels. This detailed information supports informed land management decisions such as building defense infrastructure, creating containment basins, and implementing effective drainage measures to protect people, property, and infrastructure while enhancing public safety. Moreover, informing local authorities and the public about flood risks leads to better resource allocation and coordinated interventions, which, when integrated with other planning tools like water management and urban planning, improve community resilience and reduce the impact of environmental disasters.

Performance Comparison of Random Forest and LSTM

In flood risk forecasting, it is crucial to utilize machine learning models that deliver accurate predictions while also capturing the underlying dynamics of the data. The model employed in this analysis was Random Forest (RF), validated through comparison with Long Short-Term Memory (LSTM), a model widely recognized for time series forecasting.
To ensure a comprehensive evaluation, the model performance was assessed using multiple metrics tailored to both regression and classification approaches. Specifically, Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and R2 were employed to quantify the prediction error and explanatory power of the model, ensuring robust numerical evaluation of risk levels.
Additionally, given the ordinal nature of flood risk categories (low, moderate, high, very high), classification-oriented metrics were integrated, including accuracy, precision, recall, and F1-score. These indicators provide further insights into the model’s ability to correctly classify risk levels, addressing potential misclassifications between adjacent categories and reinforcing the reliability of the predictions.
This approach ensures that the continuous and categorical aspects of flood risk estimation are effectively captured, enhancing model interpretability and robustness.
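For illustration, both families of metrics can be computed with scikit-learn, assuming risk levels are encoded as ordinal integers (0 = low to 3 = very high); the arrays and the weighted averaging below are assumptions for the sketch, since the manuscript does not state the averaging scheme used:

```python
import numpy as np
from sklearn.metrics import (mean_squared_error, r2_score, accuracy_score,
                             precision_score, recall_score, f1_score)

# Illustrative arrays: expected vs. predicted risk levels (0=low ... 3=very high)
y_true = np.array([0, 1, 2, 3, 2, 1, 0, 3])
y_pred = np.array([0, 1, 2, 2, 2, 1, 1, 3])

mse = mean_squared_error(y_true, y_pred)     # regression-style error on ordinal levels
rmse = np.sqrt(mse)
r2 = r2_score(y_true, y_pred)

acc = accuracy_score(y_true, y_pred)         # classification-style agreement
prec = precision_score(y_true, y_pred, average="weighted", zero_division=0)
rec = recall_score(y_true, y_pred, average="weighted", zero_division=0)
f1 = f1_score(y_true, y_pred, average="weighted", zero_division=0)
```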
Evaluation metrics such as MSE, RMSE, and R2 provide important insights into forecasting model performance. Comparing the metric values of the two models, shown in Figure 14, revealed the following:
  • Mean Squared Error (MSE): RF = 0.014; LSTM = 0.1502
  • Root Mean Squared Error (RMSE): RF = 0.118; LSTM = 0.3876
  • R-squared (R2): RF = 0.80; LSTM = 0.757
The RF model’s mean squared error (MSE) is notably lower than that of the LSTM, indicating that RF makes fewer prediction errors. Similarly, the root mean squared error (RMSE) for RF is lower, confirming its higher accuracy. Moreover, the R-squared (R2) value for the RF model is slightly higher than that of the LSTM, suggesting that RF better explains the variance in the data.
Since additional metrics were computed, a more detailed comparison between the Random Forest and LSTM models was possible; the results are summarized in Figure 15.
Accuracy: The Random Forest model (0.89) outperforms the LSTM (0.85), demonstrating a greater overall ability to correctly classify risk levels. This is particularly relevant in real-world scenarios, where any errors can have significant operational consequences.
Precision: Random Forest (0.91) exhibits higher precision than LSTM (0.88), reducing the likelihood of false alarms and helping to avoid unnecessary interventions.
Recall: In terms of sensitivity, too, Random Forest (0.87) proves more effective than LSTM (0.83), making it better suited to risk management, where it is essential not to underestimate situations of real danger.
F1-score: Random Forest (0.89) maintains a better balance between precision and recall than LSTM (0.84), making it more reliable in contexts where false positives and false negatives must be correctly balanced.
Comparing the Random Forest (RF) and Long Short-Term Memory (LSTM) models, it becomes clear that the RF model significantly outperforms the LSTM regarding predictive accuracy. In the context analyzed, the Random Forest model proves to be more effective in classifying flood risk than the LSTM model.
The comparison between Random Forest and LSTM was conducted with the following three objectives in mind:
  • To verify the validity of the Random Forest model against a more advanced model like LSTM.
  • To identify the strengths and weaknesses of each model.
  • To determine the most suitable model for flood risk prediction in the context of this study.
The performance differences can be attributed to several factors. As an ensemble of decision trees, the RF model effectively captures interactions between variables and handles data variability. Decision trees excel at processing heterogeneous data and managing nonlinearity, characteristics often present in flood risk data. Moreover, RF is less sensitive to overfitting than LSTM, thanks to its ensemble nature, which averages predictions from many trees.
Another significant advantage of the RF model is its robustness and stability. Decision trees, the foundation of the RF model, are adept at handling noise and outliers in data. By combining predictions from multiple trees, the RF model further mitigates the impact of anomalies, making it particularly suitable for complex and noisy datasets like those used in flood risk forecasting.
RF’s versatility in handling different variables, be they categorical or numeric, without the need for complex transformations simplifies data preprocessing. This flexibility allows for a broader range of variables to be included in the model, enhancing its interpretability. The decisions made by the trees in the RF model are easily understandable, facilitating the comprehension of the relationships between variables and their impacts on predictions, thereby increasing the model’s transparency.
From a computational standpoint, the RF model is generally more efficient than the LSTM. Despite its complexity, implementing RF typically requires fewer computational resources and less training time than LSTM, which demands larger datasets and more resources to train effectively. This makes RF a more practical and cost-effective choice for numerous applications.
On the other hand, while the LSTM model is robust in capturing time dependencies in data, its limitations can hinder its performance. LSTM may be more prone to overfitting, particularly with smaller datasets or highly correlated variables. Furthermore, the complexity of LSTM can complicate hyperparameter optimization, adversely affecting its ability to generalize well on test data.
In conclusion, the RF model is the superior choice for flood risk forecasting. Its ability to handle variability, robustness, ease of interpretation, and computational efficiency contribute to its superior performance over the LSTM model. This underscores the importance of selecting the right model based on the data’s characteristics and the analysis’s objectives.
A combined model could leverage Random Forest’s robustness and interpretability while utilizing LSTM’s ability to manage long-term dependencies, leading to more accurate predictions.
This combined approach could improve forecast accuracy and provide a more comprehensive understanding of flood risk dynamics. The integration of the two models could be achieved through ensemble learning techniques, where the predictions of both models are combined for a more robust final result. Additionally, employing stacking methods could enable predictions from one model to serve as input for the other, enhancing overall performance. This perspective represents an exciting opportunity for research and development in flood risk forecasting, with the potential to deliver more effective and reliable tools for risk management and mitigation.
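As an illustrative sketch of such a stacked ensemble, assuming scikit-learn as the framework (a gradient-boosting model stands in for the LSTM here, since wiring a Keras LSTM into scikit-learn would require an additional wrapper not shown):

```python
from sklearn.ensemble import (RandomForestRegressor, GradientBoostingRegressor,
                              StackingRegressor)
from sklearn.linear_model import Ridge

# Base learners: RF plus a second model whose out-of-fold predictions feed a
# meta-learner; an LSTM could take the second slot via a scikit-learn wrapper.
stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=300, random_state=42)),
        ("gb", GradientBoostingRegressor(random_state=42)),
    ],
    final_estimator=Ridge(),   # combines the base predictions
    cv=3,                      # out-of-fold predictions avoid information leakage
)
# stack.fit(X_train, y_train); stack.predict(X_test)
```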

6. Discussion

The results obtained demonstrate that the proposed approach, based on the integration between Random Forest and Markov chains, represents a significant step forward in flood risk prediction. The model showed high predictive performance, with an R2 of 0.80 and a very low mean squared error (MSE) (0.014), even surpassing an LSTM model used as a reference. These results confirm the validity of the initial objective of the research: to develop a predictive system that was accurate, interpretable, and capable of modeling the temporal dynamics of risk. A distinctive element of our work is the use of the Markov chain to simulate the evolution of risk over time, as illustrated in Figure 16. This figure, entitled “Simulated Flood Risk Fluctuations Over Time”, represents the temporal evolution of the simulated flood risk through the integration of the Random Forest model with a Markov chain. The simulation was generated starting from an initial state of low risk and iteratively applying the transition matrix calculated from the model predictions. Each time step represents one month, and the entire simulation covers a span of 11 years (132 months), offering a dynamic view of the evolution of risk over time.
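A simulation of this kind can be reproduced by iteratively sampling each month's state from the row of the transition matrix corresponding to the current state; a minimal sketch follows, in which the uniform matrix is a placeholder rather than the fitted one:

```python
import numpy as np

STATES = ["Low", "Moderate", "High", "Very High"]

def simulate(P, start="Low", steps=132, seed=42):
    """Sample a monthly trajectory of risk states: each step draws the next
    state from the transition-matrix row of the current state (132 months = 11 years)."""
    rng = np.random.default_rng(seed)
    path = [start]
    for _ in range(steps - 1):
        i = STATES.index(path[-1])
        path.append(str(rng.choice(STATES, p=P[i])))
    return path

# Placeholder row-stochastic matrix (the study uses the fitted matrix of Figure 10)
P = np.full((4, 4), 0.25)
trajectory = simulate(P)
```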
Analyzing the results, we observe that the “High” and “Very High” risk states are the most common, accounting for 28.57% and 25.56% of the time, respectively. This indicates a higher overall frequency of elevated flood risk states in the simulation, though no clear temporal trend is visually evident in the time series. In contrast, the “Low” risk status is the least common, representing only 20.30% of the observations. A transition matrix governs the transitions between the risk states. For instance, from the “Low” state, there is a 27.78% chance of moving to either the “High” or “Very High” state. This highlights that even when the risk is low, there is a significant possibility of a rapid increase in risk.
The graph also illustrates monthly fluctuations among the different risk states, indicating considerable variability in flood risk every month. This variability suggests that the risk can change quickly from one month to the next, emphasizing the importance of continuous monitoring.
The figure does not serve forecasting purposes; rather, it provides a probabilistic representation useful for strategic planning of risk mitigation interventions. The time series shows high monthly variability, with frequent transitions between all four risk states (“Low”, “Moderate”, “High”, and “Very High”). Although no clear visual trend emerges, the aggregate analysis shows that the “High” and “Very High” states become predominant over time, suggesting that, in the absence of interventions, the territory could be subject to an escalation of hydrogeological risk.
The original contribution of this study is divided into three main aspects: (1) the synergistic integration between machine learning models and stochastic models, (2) the use of high-resolution LIDAR data for the generation of digital terrain models, and (3) the ability of the model to provide both spatial and temporal predictions of risk. In particular, the use of LIDAR data with altimetric accuracy of less than 15 cm has made it possible to obtain an extremely detailed topographic representation, superior to that offered by low-resolution digital terrain models commonly used in the literature. This significantly improved the quality of spatial risk classification. In addition, direct comparison with an LSTM model highlighted the higher accuracy of our approach, which resulted in superior predictive metrics with a simpler and more interpretable computational structure. Finally, temporal modeling using Markov chains added a dynamic dimension to the analysis, making it possible to simulate the evolution of risk over time and to support long-term strategic decisions.
This approach not only improves the accuracy of forecasts but also expands the possibilities of operational use of the model, making it a useful tool for early warning systems, spatial planning, and emergency management.
In conclusion, our model does not limit itself to classifying areas at risk, but provides a dynamic and adaptive view of the flood phenomenon, contributing significantly to the evolution of hydrogeological risk forecasting and management tools.

6.1. Comparison with Other Methodologies

The findings underscore the importance of a modeling approach that not only achieves high predictive accuracy but also fosters strong interpretability and actionable insights. In this context, the comparison with other hydrogeological risk prediction methodologies demonstrates how the proposed model effectively balances precision, transparency, and flexibility. It successfully addresses the limitations of more complex approaches, such as LSTM, SVM, and CNN, as discussed below.
In terms of performance, the proposed model achieves excellent results in both classification and regression metrics. Comparisons with algorithms commonly used in the literature further validate the effectiveness of this approach. For instance, while LSTM neural networks are designed to capture time dependencies in sequential data, they exhibit lower performance compared to our model. Similarly, Support Vector Machines (SVMs) and Convolutional Neural Networks (CNNs) show significant limitations.
SVMs can be helpful in linear scenarios or when a well-chosen kernel is applied, but they struggle with heterogeneous and complex datasets, such as those arising from environmental and geospatial contexts. Furthermore, SVMs often lack interpretability, making it difficult to understand the contribution of individual variables to risk predictions. Studies, such as that by Granata et al. [42], have indicated that while SVMs can effectively model urban runoff, they require careful calibration and might not perform well in dynamic scenarios.
On the other hand, CNNs excel in image processing but require substantial amounts of data and high computational resources. They are not inherently designed to model temporal components of risk unless paired with more complex architectures, such as CNN-LSTMs, which further complicate the system. A study by Wang et al. [40] utilized CNNs for flood susceptibility mapping; however, the results demonstrated lower interpretability and flexibility compared to tree-based models, such as Random Forest.
In contrast, the proposed model maintains an optimal balance between accuracy, efficiency, and transparency. By utilizing LIDAR data to generate digital terrain models (DTMs), it ensures superior spatial accuracy. Additionally, stochastic modeling through Markov chains enables the simulation of risk evolution over time. This approach not only enhances the quality of forecasts but also serves as a valuable tool for spatial planning and emergency management.
Even when compared to hybrid approaches present in the literature, such as the one proposed by Kim et al. [43], which combines LSTM and Random Forest for urban flood risk forecasting, the model stands out due to its ease of implementation and greater transparency in decision-making. While Kim’s model necessitates complex numerical simulations to generate training data, the approach used here is grounded in observational and geospatial data, making it more replicable and adaptable to different geographical contexts.
In summary, the proposed model is a robust, advanced, and adaptable solution that effectively overcomes the limitations of existing methodologies, providing a concrete contribution to flood risk management in the face of increasing climate vulnerability.

6.2. The Limitations, Scalability, and Adaptability of the Model

The model was developed and validated using data from a single catchment, the Calopinace River basin in Reggio Calabria. This choice was guided by the availability of high-resolution LIDAR data, detailed land use and lithological maps, and a consistent 11-year meteorological dataset. The basin’s complex topographic and urban characteristics provided a suitable testbed for methodological development.
However, the use of a single basin limits the spatial generalizability of the results. The absence of spatial cross-validation across multiple watersheds or hydroclimatic regimes restricts the assessment of model robustness under varying environmental conditions.
The current study is intended as a proof-of-concept to demonstrate the methodological soundness of the RF–Markov framework. Future research will focus on extending the model to additional catchments in southern Italy, characterized by diverse geomorphological and climatic features.
Moreover, regional climate models, which use historical and current climate data to simulate future weather patterns, are essential for assessing the impacts of climate change at a local scale and for developing effective mitigation strategies.
A limitation of the present study lies in the absence of georeferenced historical flood event data for the Calopinace River basin. While several major flood events have been documented (e.g., 1951, 1953, 1972, 2000, 2015), these records are primarily qualitative or aggregated at the municipal level, lacking the spatial resolution required for pixel-level validation. Consequently, a direct comparison between model predictions and historical flood extents was not feasible.
To address this constraint, we adopted a proxy-based validation strategy grounded in real environmental predictors, such as LIDAR-derived elevation, lithology, land use, and long-term rainfall averages, and applied Principal Component Analysis (PCA) to derive a synthetic flood susceptibility index. This index, classified into four ordinal risk levels, served as a statistically robust and reproducible proxy for flood risk in the absence of empirical flood labels.
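A minimal sketch of this proxy construction, with synthetic stand-ins for the real raster layers, is given below; in practice the component's orientation must first be checked against its physical meaning before classification:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-ins for the real raster layers (one row per spatial unit)
predictors = pd.DataFrame({
    "elevation": rng.random(1000),          # LIDAR-derived
    "slope": rng.random(1000),
    "rainfall": rng.random(1000),           # long-term average
    "drainage_density": rng.random(1000),
})

scaled = StandardScaler().fit_transform(predictors)
index = PCA(n_components=1).fit_transform(scaled).ravel()  # synthetic susceptibility index

# Quartile binning into four ordinal classes; the component's sign should be
# oriented so that larger values mean higher susceptibility
risk = pd.qcut(index, q=4, labels=["Low", "Moderate", "High", "Very High"])
```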
The model’s performance metrics (RMSE = 0.118, MSE = 0.014, R2 = 0.80), obtained via 3-fold cross-validation, reflect its internal consistency and generalization capacity within the study area. While these metrics do not represent real-world predictive accuracy in the strictest sense, they provide a reliable internal benchmark given the available data.
Despite this limitation, the model offers practical value for decision-makers operating in data-scarce environments. It provides a spatially explicit, interpretable, and reproducible framework for identifying areas of high environmental susceptibility to flooding, supporting proactive land use planning and early warning systems.
Future work will focus on accessing and digitizing historical flood archives in collaboration with local authorities and civil protection agencies. This will enable more direct validation and the potential integration of event-based calibration techniques to further enhance model reliability. Finally, to further improve predictive capacity, combining the Random Forest algorithm with LSTM neural networks, which excel at processing sequential and temporal data, may capture long-term dependencies in climate and hydrological data. This combination promises to enhance flood risk forecasts, making the model more robust and adaptable to varying regional conditions.

7. Conclusions

Flooding is occurring more frequently worldwide, making effective disaster risk management essential. This management should focus on reducing a territory’s vulnerability and improving emergency responses. This study presents an innovative approach to identifying the areas most at risk of flooding using a machine learning system, with results validated through Markov chain analysis. The findings indicate that the characteristics of a territory significantly influence the occurrence of flooding.
This study emphasizes how these technologies can enhance our understanding of the effects of climate variability and aid in the development of strategies for climate change mitigation and adaptation. The combination of machine learning and Markov chains provides a powerful tool for predicting and managing emergencies, potentially reducing economic damage and saving lives.
This study presents a comprehensive comparison of the Random Forest (RF) model and the Long Short-Term Memory (LSTM) model, outlining the strengths and weaknesses of each. The RF model, in particular, demonstrated greater effectiveness in predictions within the specific context examined. Looking ahead, the study suggests a potential avenue for further research: the development of a combined RF and LSTM model that could harness the strengths of both approaches for improved prediction accuracy.
The flood susceptibility map (FSM) plays a unique and crucial role in supporting decision-makers, as highlighted by this research. By identifying high-risk areas, the FSM facilitates informed decision-making in spatial planning. This enables the implementation of resilient design strategies and prioritizes mitigation efforts to enhance community resilience and reduce property damage during flood events. The FSM has multiple applications, serving as a guide for designing resilient infrastructure and significantly aiding the development of advanced and robust systems such as drainage systems, embankments, and flood barriers. These elements can play a crucial role in mitigating the impact of flooding. Additionally, the FSM supports land use planning by helping create plans to avoid construction in high-risk areas and promote development in safer locations. It also enhances emergency management, as improved flood forecasting allows for the rapid development of effective response plans, reduced evacuation times, and better coordination among relief agencies.
One of the key components of this approach is the proactive analysis of the temporal dynamics of flood events, achieved through Markov chains. This method enables the simulation of the temporal evolution of flood risk, providing a dynamic perspective for monitoring and predicting risk conditions. Transitions between risk states (low, moderate, high, and very high) are modeled to quickly identify areas that may shift from low to high risk, thus improving the timeliness and effectiveness of emergency responses.
Integrating real-time data with the predictive model ensures continuous updates of flood risk forecasts. This proactive approach enables authorities to make informed and timely decisions, enhancing asset management and emergency response efforts. Implementing an early warning system based on these predictions can significantly reduce economic damage and save lives.
The findings lay a solid foundation for further research, which could focus on optimizing the model and integrating new data sources to improve risk management. Lastly, it is essential to emphasize that collaboration among various communities can facilitate the comparison of adopted strategies, allowing for the sharing of data and methodologies. This exchange can advance flood risk management and contribute to greater resilience in vulnerable communities.

Author Contributions

Conceptualization, V.B., L.B., G.B., G.M.M., and E.G.; methodology, V.B., L.B., G.B., G.M.M., and E.G.; software, V.B., L.B., G.B., G.M.M., and E.G.; validation, V.B., L.B., G.B., G.M.M., and E.G.; formal analysis, V.B., L.B., G.B., G.M.M., and E.G.; investigation, V.B., L.B., G.B., G.M.M., and E.G.; data curation, V.B., L.B., G.B., G.M.M., and E.G.; writing—original draft preparation, V.B., L.B., G.B., G.M.M., and E.G.; writing—review and editing, V.B., L.B., G.B., G.M.M., and E.G.; visualization, V.B., L.B., G.B., G.M.M., and E.G.; supervision, V.B., L.B., G.B., G.M.M., and E.G.; project administration, V.B., L.B., G.B., G.M.M., and E.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The developed model and the input data are available on GitHub. URL: http://github.com/Luigi2020357/Flood-Forecast (accessed on 25 April 2025).

Acknowledgments

The contribution is the result of ongoing research under the PNRR National Recovery and Resilience Plan, Mission 4 “Education and Research,” funded by Next Generation EU, within the Innovation Ecosystem project “Tech4You” Technologies for climate change adaptation and quality of life improvement, SPOKE 4-Technologies for resilient and accessible cultural and natural heritage, Goal 4.6 Planning for Climate Change to boost cultural and natural heritage: demand-oriented ecosystem services based on enabling ICT and AI technologies.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Technical Details on Hyperparameter Optimization

There are different search techniques for optimizing hyperparameters; the most common are:
Grid Search: An exhaustive method that explores all possible combinations of a predefined set of hyperparameters. This technique ensures that no potentially optimal combination is overlooked.
Random Search: This method randomly explores a set of hyperparameter combinations. While it is less exhaustive than Grid Search, it is more time-efficient.
Bayesian Optimization: This approach builds a probabilistic model of performance and uses it to select the most promising configurations to evaluate next. A related technique, Hyperband, combines Random Search with early stopping to evaluate many hyperparameter configurations quickly.
The proposed study selected Grid Search for its comprehensiveness, simplicity, and reliability. It is straightforward to set up and implement, and since the results are not reliant on randomness, it is considered a trustworthy technique. Additionally, it can be used with any machine learning model and hyperparameter type, making it highly versatile.
The specific hyperparameters considered in this study were:
- n_estimators: the number of trees in the forest. A higher number of trees enhances the model’s performance by reducing variance.
- max_features: the number of features to consider when splitting each node. Using ‘sqrt’ ensures that each tree is trained on a distinct subset of features, which enhances diversity and minimizes the risk of overfitting.
- max_depth: the maximum depth of each tree. A greater depth allows more complex patterns to be captured, but excessively deep trees can lead to overfitting, so a balance must be found.
- min_samples_leaf: the minimum number of samples required to form a leaf. It helps prevent overly specific subdivisions that may contribute to overfitting.
- min_samples_split: the minimum number of samples required to split a node. Like min_samples_leaf, this parameter aims to reduce excessive subdivisions.
In a Random Forest model, the grid can include various combinations of values for the number of trees (n_estimators), the maximum depth of the trees (max_depth), the minimum number of samples required to split a node (min_samples_split), and other related parameters.
Once the grid is defined, every possible combination of parameters is trained and validated using cross-validation. The study used the fit method to train the model on different combinations of hyperparameters as part of the Grid Search process. This method is crucial because it enables the model to learn from the data and adjust its internal parameters to minimize prediction error.
During training, the model analyzes the input features and the corresponding labels, seeking patterns and relationships that support accurate predictions on new data. The fit method drives an iterative learning process in which model parameters are adjusted to minimize a loss function measuring the error between predictions and actual labels, thereby improving the model’s ability to generalize to unseen data. This procedure identified the combination of hyperparameters that optimizes the model’s performance, leading to more accurate and reliable flood risk forecasts.
Cross-validation is a technique that divides the dataset into several parts, or ‘folds.’ The model is trained on one part of the dataset in each iteration and validated on another. This process is repeated for each combination of hyperparameters, ensuring the model is evaluated on different dataset sections. In this case, a 3-fold cross-validation is used, which means that the dataset has been divided into three parts, and the model has been trained and validated three times for each combination of hyperparameters.
After completing the Grid Search, the best-performing model is selected based on the chosen evaluation metric, which in this case was accuracy. The optimal parameters identified were: n_estimators = 300, max_depth = 30, min_samples_split = 2, min_samples_leaf = 1, and max_features = ‘sqrt’.
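For reproducibility, the search can be expressed with scikit-learn’s GridSearchCV; the grid below mirrors Table 1, while the training arrays are placeholders, so this is a sketch rather than the study’s exact script:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {                       # values taken from Table 1
    "n_estimators": [100, 200, 300],
    "max_depth": [10, 20, 30],
    "min_samples_split": [2, 5, 10],
    "min_samples_leaf": [1, 2, 4],
    "max_features": ["sqrt"],
}

search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    cv=3,                  # 3-fold cross-validation, as described above
    scoring="accuracy",    # metric used to select the best model
    n_jobs=-1,
)
# search.fit(X_train, y_train)
# search.best_params_ would then be expected to match the values reported above
```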
The optimal parameters were selected based on straightforward logic, focusing on achieving model accuracy while avoiding overfitting:
- n_estimators: increasing the number of trees improves model performance by reducing variance, but too many trees increase computation time without significantly enhancing performance. A value of 300 was chosen as a good compromise between accuracy and computational efficiency.
- max_features: setting this parameter to ‘sqrt’ (the square root of the total number of features) ensures that each tree is trained on a different subset of features. This increases the diversity among the trees, reducing their correlation and enhancing the model’s robustness.
- max_depth: a greater depth enables the model to capture more complex patterns in the data, but excessive depth can lead to overfitting, where the model fits the training data too closely and struggles to generalize. A depth of 30 was selected to balance these effects.
- min_samples_leaf and min_samples_split: these parameters prevent overly specific subdivisions that can lead to overfitting. Setting min_samples_leaf to 1 and min_samples_split to 2 ensures that each subdivision contains enough samples to be meaningful, improving the generalization of the model.
Increasing the number of trees in the Random Forest (n_estimators = 300) and optimizing the max_depth (max_depth = 30) improves model metrics. This enhancement indicates that the model is now more accurate and reliable for predicting flood risk, resulting in reduced forecast errors and better explanations of data variability.
Searching for the optimal parameters across all the combinations specified in the grid required 216 iterations.

Appendix B

Appendix B.1. Markov Chain

Markov chains are powerful tools for modeling systems that evolve probabilistically. They provide a comprehensive understanding of changes in an ecosystem due to interactions with environmental and anthropogenic factors. In this study, states are used to symbolize different flood risk classes rather than directly representing meteorological or hydrological conditions. These classes are derived from predictions made by the Random Forest model, which incorporates various environmental variables to categorize each spatial unit into one of four distinct risk levels. This method provides a clear understanding of transition processes over time by utilizing a Markov chain to model the evolution of risk. For example, in a flood-prone area, each state corresponds to a level of risk, ranging from low to high. Transitions between these states occur based on specific probabilities that depend only on the current state and not on the history of the system. This feature is known as the Markov property or ‘short memory’.

Appendix B.2. Transition Matrix

A transition matrix P is used to describe these transitions mathematically. Each element of the matrix, Pij, indicates the probability of moving from state i to state j in a single step. The probabilities of transition out of any given state must sum to one, i.e., Σj Pij = 1 for every i.
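Written out, with n_ij denoting the observed transition counts from which the matrix is estimated (as described in the main text), this reads:

```latex
P_{ij} = \Pr\left(X_{t+1} = j \mid X_t = i\right), \qquad
\hat{P}_{ij} = \frac{n_{ij}}{\sum_{j'} n_{ij'}}, \qquad
\sum_{j} P_{ij} = 1 \ \text{for every } i.
```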

Appendix B.3. Homogeneous and Non-Homogeneous Markov Chains

These considerations hold for homogeneous Markov chains, in which the transition matrix P is time-invariant and the stationary probability distribution π satisfies the equation πP = π. This indicates that the probability distribution remains unchanged after the transition matrix is applied.
In contrast, a non-homogeneous Markov chain has transition probabilities that may vary over time or be influenced by external factors. In such cases, the transition matrix Pt depends on time t or other external variables, enabling the Markov chain to be dynamic and adaptable. For example, different transition probability matrices Pt1, Pt2, …, Ptn can be generated for various time intervals t1, t2, …, tn, reflecting changes in transition probabilities in response to external conditions.
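For the homogeneous case, the stationary distribution π can be computed numerically as the left eigenvector of P associated with eigenvalue 1; a minimal sketch with a toy matrix:

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi P = pi with sum(pi) = 1 via the left eigenvector of the
    transition matrix associated with eigenvalue 1."""
    eigvals, eigvecs = np.linalg.eig(P.T)      # left eigenvectors of P
    k = np.argmin(np.abs(eigvals - 1.0))
    pi = np.real(eigvecs[:, k])
    return pi / pi.sum()

P = np.array([[0.5, 0.5],
              [0.2, 0.8]])                     # toy two-state chain
print(stationary_distribution(P))              # -> [0.2857... 0.7142...]
```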

Appendix B.4. Absorbing States

Another important concept is that of absorbing states. These are states from which, once reached, the system cannot escape. In the context of floods, an absorbing state could represent an extreme risk scenario that persists over time once reached.

Appendix B.5. Regular and Ergodic Markov Chains

Markov chains can be classified as regular or ergodic. A Markov chain is considered regular if it can reach any state from any other state in a finite number of steps. This implies that, in the long run, the system does not remain trapped in a limited subset of states. A Markov chain is ergodic if it exhibits stable behavior over time, regardless of the initial state. This means that there is a unique stationary distribution that the system converges towards, regardless of where it began.

Appendix B.6. Validation of the Transition Matrix

The transition matrix was validated using a maximum likelihood test, confirming the homogeneity of the Markovian process over the 11-year period: the matrix remained homogeneous as the input data were extended from 6 to 11 years. The G2 statistic was 2.16, lower than the critical value of the chi-square distribution (about 16.91) with 9 degrees of freedom. This result supports the null hypothesis that the observed transitions are consistent with a homogeneous Markov chain, i.e., that the transition probabilities remain stable over time, which has significant implications for understanding flood risk dynamics.
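For reference, a likelihood-ratio statistic of this kind can be computed by comparing period-specific transition counts against the pooled transition probabilities, in the spirit of Anderson and Goodman [38]; the counts below are illustrative, and the choice of 9 degrees of freedom follows the text above:

```python
import numpy as np
from scipy.stats import chi2

def g2_homogeneity(counts_by_period):
    """Likelihood-ratio (G2) statistic comparing period-specific transition
    probabilities with the pooled matrix (after Anderson and Goodman)."""
    pooled = sum(counts_by_period)
    p_pooled = pooled / pooled.sum(axis=1, keepdims=True)
    g2 = 0.0
    for n in counts_by_period:
        p_t = n / n.sum(axis=1, keepdims=True)
        mask = n > 0                           # skip empty cells
        g2 += 2.0 * np.sum(n[mask] * np.log(p_t[mask] / p_pooled[mask]))
    return g2

# Illustrative transition counts for two sub-periods (4 risk states each)
rng = np.random.default_rng(1)
periods = [rng.integers(1, 20, size=(4, 4)) for _ in range(2)]
print(g2_homogeneity(periods), chi2.ppf(0.95, df=9))   # critical value ~16.92
```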

Appendix B.7. The Benefits of Integrating into the Predictive Model

The integration of the Markov chain makes it possible to model the temporal dynamics of the risk, which cannot be captured by static models. It also quantifies the likelihood of transitions between risk states, supporting preemptive planning and emergency management. This stochastic component improves the predictive capacity of the system, making it more robust and suitable for complex and variable scenarios.

Appendix C

Predictive Model and Real-Time Data Integration

The predictive model, as previously described, is a powerful tool that utilizes machine learning techniques and Markov chains to effectively identify areas at risk by analyzing historical and geospatial data. The model’s potential is vast, as it can be expanded into a system that acquires and analyzes real-time data to fully leverage its capabilities. This implementation is a significant step towards protecting communities and minimizing the economic and social impacts of extreme events. By monitoring and predicting flood conditions in real time, informed and timely decisions can be made, enhancing emergency response and asset management. Achieving this requires a series of technical and operational actions, outlined below, aimed at creating a system that can promptly and effectively monitor and respond to flood conditions.
To collect real-time data, it is essential to implement an Internet of Things (IoT) sensor network. These strategically placed sensors monitor meteorological variables such as rainfall, temperature, and humidity, as well as hydrological variables like river levels. The selection of installation sites should be based on a thorough analysis of at-risk areas and the geographical characteristics of the territory. Once the sensors are installed, ensuring a reliable communication network for transmitting the collected data to central servers is vital. Wi-Fi and cellular networks effectively guarantee continuous, uninterrupted data transmission. The central servers, configured using cloud technologies, must be able to receive, process, and store data in real-time, ensuring the system’s scalability and reliability.
The data pipeline is the central element of the real-time forecasting system. At this stage, sensor data must be cleaned and converted into usable formats. This process is efficiently managed using tools like Apache Kafka, which helps regulate the data flow. Once transformed, the data should be stored in a real-time database like Prometheus. This open-source toolkit is ideal for monitoring and alerting, as it is designed to handle large volumes of temporal data. This ensures that information is quickly and reliably accessible for future analysis.
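As an illustration of the ingestion step (the topic name, field names, and broker address are hypothetical), a consumer based on the kafka-python client might look as follows:

```python
import json
from kafka import KafkaConsumer   # pip install kafka-python

# Subscribe to a hypothetical topic carrying JSON-encoded sensor readings
consumer = KafkaConsumer(
    "sensor-readings",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

for message in consumer:
    reading = message.value   # e.g. {"station_id": "...", "rainfall_mm": 12.4}
    # clean/convert here, then write to the time-series store (e.g., Prometheus)
```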
By implementing incremental learning techniques, the predictive model remains up to date with the latest information. These techniques, which allow the model to be updated with new data in real time, are particularly well suited to machine learning algorithms such as Long Short-Term Memory (LSTM) neural networks, which can adapt to changes in the data and continuously improve predictions.
Furthermore, the integration of LSTMs with the Random Forest model used in the original framework offers significant benefits. This hybrid approach can substantially enhance predictive capabilities by leveraging the strengths of both models: LSTM’s proficiency in handling temporal sequences and Random Forest’s robustness.
A crucial aspect of the real-time forecasting system is data visualization and early warning capabilities. Users can view flood risk forecasts in real-time by creating a dashboard, gaining a clear and immediate overview of current and future conditions. Tools such as Metabase or Grafana, known for their user-friendly interfaces and robust features, can be utilized to develop custom dashboards, ensuring a high-quality user experience.
Regular simulations and continuous feedback from users and local authorities are essential for the real-time forecasting system to work effectively. Simulations must replicate different meteorological and hydrological conditions to evaluate system performance. Simulations help test the accuracy of the system and identify areas for improvement, while user feedback helps identify any issues and improve the usability of the system.
Effective flood risk management requires predicting weather and hydrological conditions and implementing an early warning system that can send automatic notifications to both authorities and the public. These notifications should be clear and timely, especially during periods of high flood risk. Messaging services like Twilio can be utilized to deliver SMS, emails, or push notifications. These messages must be tailored to individuals’ risk levels and geographic locations, ensuring that critical information reaches the right people promptly.
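Dispatch of such notifications can be sketched with Twilio’s Python client; the credentials, numbers, and message text below are placeholders:

```python
from twilio.rest import Client   # pip install twilio

client = Client("ACCOUNT_SID", "AUTH_TOKEN")   # placeholder credentials

def send_flood_alert(phone_number, risk_level, area):
    """Send an SMS alert tailored to the recipient's area and risk level."""
    client.messages.create(
        to=phone_number,
        from_="+10000000000",                  # placeholder sender number
        body=f"Flood alert for {area}: risk level {risk_level}. "
             f"Follow local civil protection instructions.",
    )

# send_flood_alert("+39XXXXXXXXXX", "Very High", "Calopinace basin")
```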
Collaboration with local authorities is vital for integrating the real-time forecasting system into emergency protocols. This includes establishing standard operating procedures (SOPs) for flood response. The SOPs must clearly define the roles and responsibilities of each entity involved, ensuring that everyone knows their specific tasks in the event of a flood. Additionally, SOPs should be regularly updated to reflect best practices and lessons learned from past events.
Periodic audits and simulations of flood scenarios should be conducted to test the effectiveness of the SOPs and the alert system. These simulations help identify gaps in procedures and improve coordination among different agencies. Practical exercises must involve all levels of government and relief organizations to ensure that emergency response measures are coordinated and timely.
Other important initiatives include training and informational activities for both authorities and the public on using the alert system and responding effectively to emergencies. Training sessions should include practical exercises based on potential flood scenarios and be supported by informational materials and how-to guides that help users understand the warning system and the safety measures to take in the event of flooding.
Continuous feedback from users and local authorities is crucial for ongoing system improvement; it also gives everyone involved a voice in the process. Feedback should be used to identify usability issues and enhance the alert system. Establishing a dedicated communication channel for feedback can facilitate the gathering of valuable information and encourage constructive dialog among all parties involved.

References

  1. IPCC. Climate Change 2022: Impacts, Adaptation, and Vulnerability. Contribution of Working Group II to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change; Pörtner, H.-O., Roberts, D.C., Tignor, M., Poloczanska, E.S., Mintenbeck, K., Alegría, A., Craig, M., Langsdorf, S., Löschke, S., Möller, V., et al., Eds.; Cambridge University Press: Cambridge, UK, 2022. [Google Scholar] [CrossRef]
  2. Lakshmi, V. Enhancing human resilience against climate change: Assessment of hydroclimatic extremes and sea level rise impacts on the Eastern Shore of Virginia, United States. Sci. Total Environ. 2024, 947, 174289. [Google Scholar]
  3. AghaKouchak, A.; Chiang, F.; Huning, L.S.; Love, C.A.; Mallakpour, I.; Mazdiyasni, O.; Moftakhari, H.; Papalexiou, S.M.; Ragno, E.; Sadegh, M. Climate extremes and compound hazards in a warming world. Annu. Rev. Earth Planet. Sci. 2020, 48, 519–548. [Google Scholar] [CrossRef]
  4. Fu, B.; Gasser, T.; Li, B.; Tao, S.; Ciais, P.; Piao, S.; Balkanski, Y.; Li, W.; Yin, T.; Han, L.; et al. Short-lived climate forcers have long-term climate impacts via the carbon–climate feedback. Nat. Clim. Change 2020, 10, 851–855. [Google Scholar] [CrossRef]
  5. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32007L0060 (accessed on 15 November 2024).
  6. Kumar, V.; Sharma, K.; Caloiero, T.; Mehta, D.; Singh, K. Comprehensive Overview of Flood Modeling Approaches: A Review of Recent Advances. Hydrology 2023, 10, 141. [Google Scholar] [CrossRef]
  7. Rao, G.V.; Nagireddy, N.R.; Keesara, V.R.; Sridhar, V.; Srinivasan, R.; Umamahesh, N.V.; Pratap, D. Real-time flood forecasting using an integrated hydrologic and hydraulic model for the Vamsadhara and Nagavali basins, Eastern India. Nat. Hazards 2024, 120, 6011–6039. [Google Scholar] [CrossRef]
  8. Guven, D.S.; Yenigun, K.; Isinkaralar, O.; Isinkaralar, K. Modeling flood hazard impacts using GIS-based HEC-RAS technique towards climate risk in Şanlıurfa, Türkiye. Nat. Hazards 2024, 121, 3657–3675. [Google Scholar] [CrossRef]
  9. Zahura, F.T.; Goodall, J.L. Predicting combined tidal and pluvial flood inundation using a machine learning surrogate model. J. Hydrol. Reg. Stud. 2022, 41, 101087. [Google Scholar] [CrossRef]
  10. Alvir, M.; Grbčić, L.; Sikirica, A.; Kranjčević, L. OpenFOAM-ROMS nested model for coastal flow and outfall assessment. Ocean Eng. 2022, 264, 112535. [Google Scholar] [CrossRef]
  11. Guan, X.; Xia, C.; Xu, H.; Liang, Q.; Ma, C.; Xu, S. Flood risk analysis integrating of Bayesian-based time-varying model and expected annual damage considering non-stationarity and uncertainty in the coastal city. J. Hydrol. 2023, 617, 129038. [Google Scholar] [CrossRef]
  12. Yang, Y.; Gao, X.; Guo, Z.; Chen, D. Learning Bayesian networks using the constrained maximum a posteriori probability method. Pattern Recognit. 2019, 91, 123–134. [Google Scholar] [CrossRef]
  13. Abdullah, M.F.; Siraj, S.; Hodgett, R.E. An Overview of Multi-Criteria Decision Analysis (MCDA) Application in Managing Water-Related Disaster Events: Analyzing 20 Years of Literature for Flood and Drought Events. Water 2021, 13, 1358. [Google Scholar] [CrossRef]
  14. Marshall, S.R.; Tran, T.N.D.; Tapas, M.R.; Nguyen, B.Q. Integrating artificial intelligence and machine learning in hydrological modeling for sustainable resource management. Int. J. River Basin Manag. 2025, 2025, 1–17. [Google Scholar] [CrossRef]
  15. Gurney, K. An Introduction to Neural Networks; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
  16. Prieto, A.; Prieto, B.; Ortigosa, E.M.; Ros, E.; Pelayo, F.; Ortega, J.; Rojas, I. Neural networks: An overview of early research, current frameworks and new challenges. Neurocomputing 2016, 214, 242–268. [Google Scholar] [CrossRef]
  17. Hai-Min Lyu, H.M.; Yin, Z.Y. Flood susceptibility prediction using tree-based machine learning models in the GBA. Sustain. Cities Soc. 2023, 97, 104744. [Google Scholar] [CrossRef]
  18. Yan, J.; Jin, J.; Chen, F.; Yu, G.; Yin, H.; Wang, W. Urban flash flood forecast using support vector machine and numerical simulation. J. Hydroinform. 2018, 20, 221–231. [Google Scholar] [CrossRef]
  19. Zhu, Q.; Wang, C.; Jin, W.; Ren, J.; Yu, X. Deep transfer learning based on lstm model for reservoir flood forecasting. Int. J. Data Warehous. Min. 2024, 20, 1–17. [Google Scholar] [CrossRef]
  20. Ozger, M. Assessment of flood damage behaviour in connection with large-scale climate indices. J. Flood Risk Manag. 2017, 10, 79–86. [Google Scholar] [CrossRef]
  21. Coronese, M.; Lamperti, F.; Keller, K.; Chiaromonte, F.; Roventini, A. Evidence for sharp increase in the economic damages of extreme natural disasters. Proc. Natl. Acad. Sci. USA 2019, 116, 21450–21455. [Google Scholar] [CrossRef]
  22. Roohi, M.; Ghafouri, H.R.; Ashrafi, S.M. Developing an Ensemble Machine Learning Approach for Enhancing Flood Damage Assessment. Int. J. Environ. Res. 2024, 18, 90. [Google Scholar] [CrossRef]
  23. Kubendiran, I.; Ramaiah, M. Modeling, mapping and analysis of floods using optical, LIDAR and SAR datasets—A review. Water Resour. 2024, 51, 438–448. [Google Scholar] [CrossRef]
  24. Islam, A.; Ghaith, M.; Hassini, S.; El-Dakhakhni, W. A short-term flood forecasting model using Markov Chain. In Proceedings of the Canadian Society of Civil Engineering Annual Conference CSCE 2021, Niagara Falls, Ontario, Canada, 26–29 May 2021; Lecture Notes in Civil Engineering; Springer Nature: Singapore, 2022; pp. 555–563. [Google Scholar]
  25. Kader, Z.; Islam, R.; Aziz, T.; Hossain, M.; Islam, R.; Miah, M.; Jaafar, W.Z.W. GIS and AHP-based flood susceptibility mapping: A case study of Bangladesh. Sustain. Water Resour. Manag. 2024, 10, 170. [Google Scholar] [CrossRef]
  26. Liu, B.; Li, Y.; Ma, M.; Mao, B. A Comprehensive Review of Machine Learning Approaches for Flood Depth Estimation. Int. J. Disaster Risk Sci. 2025, 16, 433–445. [Google Scholar] [CrossRef]
  27. Ezziyyani, M.; Cherrat, L.; Benmessoud, Y.; Mamoune, S.E. Flood Management Through the Application of Learning Models Automatic. In International Conference on Advanced Intelligent Systems for Sustainable Development (AI2SD 2024); Ezziyyani, M., Kacprzyk, J., Balas, V.E., Eds.; AI2SD 2024. Lecture Notes in Networks and Systems; Springer: Cham, Switzerland, 2025; Volume 1403. [Google Scholar] [CrossRef]
  28. Xu, B.; Rahman, F.B.A. Intelligent Flood Prediction Method with Runoff Prediction Using LSTM Networks. In Disaster Prevention and Mitigation of Infrastructure. ICHCE 2024. Sustainable Civil Infrastructures; Zhang, P., Zhang, J., Zhang, L., Qian, H., Pang, R., Eds.; Springer: Cham, Switzerland, 2025. [Google Scholar] [CrossRef]
  29. Wahba, M.; Essam, R.; El-Rawy, M.; Al-Arifi, N.; Abdalla, F.; Elsadek, W.M. Forecasting of flash flood susceptibility mapping using random forest regression model and geographic information systems. Heliyon. 2024, 10, e33982. [Google Scholar] [CrossRef]
  30. Vaishnavi, S.; Shanmugapria, V.S.; Shirisha, K.; Yamani, G. Flood forecasting using adaptive random forest regression. J. Curr. Res. Eng. Sci. 2021, 4, 1. [Google Scholar]
  31. Kemeny, J.G.; Snell, J.L. Finite Markov Chains; Van Nostrand: Princeton, NJ, USA, 1960. [Google Scholar]
  32. Mohd Rethuan, S.F.; Rahmat, S.F. Forecasting Flood Using Markov Chain Model. Recent Trends Civ. Eng. Built Environ. 2022, 3, 536–545. [Google Scholar]
  33. Brémaud, P. Non-homogeneous Markov Chains. In Markov Chains. Texts in Applied Mathematics; Springer: Cham, Switzerland, 2020; Volume 31. [Google Scholar] [CrossRef]
  34. De Dominicis, R. Non-homogeneous Semi-Markov Processes. Decis. Econ. Financ. 1979, 2, 157–167. [Google Scholar] [CrossRef]
  35. Bodoque, J.M.; Aroca-Jiménez, E.; Eguibar, M.A.; García, J.A. Developing reliable urban flood hazard mapping from LiDAR data. J. Hydrol. 2023, 617 Pt A, 128975. [Google Scholar] [CrossRef]
  36. Chen, Z. The Application of Airborne Lidar Data in the Modelling of 3D Urban Landscape Ecology; Cambridge Scholars Publishing: Cambridge, UK, 2016. [Google Scholar]
  37. Li, J.; Wong, D.W.S. Effects of DEM sources on hydrologic applications. Comput. Environ. Urban Syst. 2010, 34, 251–261. [Google Scholar] [CrossRef]
  38. Anderson, T.W.; Goodman, L.A. Statistical inference about Markov chains. Ann. Math. Stat. 1957, 28, 89–110. [Google Scholar] [CrossRef]
  39. Available online: https://sim.mase.gov.it/portalediaccesso/mappe/grid/c98e9e410e2b4d1d8e6aabf50212d92d (accessed on 10 February 2025).
  40. Wang, Y.; Li, C.; Liu, M.; Cui, Q.; Wang, H.; Lv, J.; Li, B.; Xiong, Z.; Hu, Y. Spatial characteristics and driving factors of urban flooding in Chinese megacities. J. Hydrol. 2022, 613, 128464. [Google Scholar] [CrossRef]
  41. Opperman, J.J.; Galloway, G.; Fargione, J.; Mount, J.F.; Richter, B.D.; Secchi, S. Sustainable floodplains through large-scale reconnection to rivers. Science 2009, 326, 1487–1488. [Google Scholar] [CrossRef] [PubMed]
  42. Granata, F.; Gargano, R.; De Marinis, G. Support Vector Regression for Rainfall-Runoff Modeling in Urban Drainage: A Comparison with the EPA’s Storm Water Management Model. Water 2016, 8, 69. [Google Scholar] [CrossRef]
  43. Kim, H.I.; Kim, B.H. Flood hazard rating prediction for urban areas using random forest and LSTM. KSCE J. Civ. Eng. 2020, 24, 3884–3896. [Google Scholar] [CrossRef]
Figure 1. Outline of flood risk analysis process.
Figure 2. RF Model.
Figure 3. LSTM architecture.
Figure 4. RF-MARKOV Model.
Figure 5. Calopinace Geologic Map.
Figure 6. Calopinace Lithological Map.
Figure 7. Land Cover.
Figure 8. Feature importance.
Figure 9. Sensitivity analysis of flood risk prediction for key variables.
Figure 10. Transition Matrix.
Figure 11. R2 Sensitivity analysis.
Figure 12. Variable radar.
Figure 13. Flood risk map of Calopinace.
Figure 14. Comparative Metrics RF vs. LSTM.
Figure 15. Comparative Metrics RF vs. LSTM.
Figure 16. Simulated Flood Risk Fluctuations Over Time.
Table 1. Considered parameters.
n_estimators   max_depth   min_samples_split   min_samples_leaf
100            10          2                   1
200            20          5                   2
300            30          10                  4
Table 2. Identification of optimum parameters.
n_estimators   max_depth   min_samples_split   min_samples_leaf
300            30          2                   1
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
