Article

A New Custom Deep Learning Model Coupled with a Flood Index for Multi-Step-Ahead Flood Forecasting

1 College of Resource Environment and Tourism, Capital Normal University, Beijing 100048, China
2 Beijing Water Science and Technology Institute, Beijing 100048, China
* Author to whom correspondence should be addressed.
Hydrology 2025, 12(5), 104; https://doi.org/10.3390/hydrology12050104
Submission received: 9 March 2025 / Revised: 17 April 2025 / Accepted: 22 April 2025 / Published: 26 April 2025

Abstract

Accurate and prompt flood forecasting is essential for effective decision making in flood control to help minimize or prevent flood damage. We propose a new custom deep learning model, IF-CNN-GRU, for multi-step-ahead flood forecasting that incorporates the flood index ( I F ) to improve the prediction accuracy. The model integrates convolutional neural networks (CNNs) and gated recurrent neural networks (GRUs) to analyze the spatiotemporal characteristics of hydrological data, while using a custom recursive neural network that adjusts the neural unit output at each moment based on the flood index. The IF-CNN-GRU model was applied to forecast floods with a lead time of 1–5 d at the Baihe hydrological station in the middle reaches of the Han River, China, accompanied by an in-depth investigation of model uncertainty. The results showed that incorporating the flood index I F improved the forecast precision by up to 20%. The analysis of uncertainty revealed that the contributions of modeling factors, such as the datasets, model structures, and their interactions, varied across the forecast periods. The interaction factors contributed 17–36% of the uncertainty, while the contribution of the datasets increased with the forecast period (32–53%) and that of the model structure decreased (32–28%). The experiment also demonstrated that data samples play a critical role in improving the flood forecasting accuracy, offering actionable insights to reduce the predictive uncertainty and providing a scientific basis for flood early warning systems and water resource management.

1. Introduction

Floods are among the most destructive natural disasters worldwide because they cause severe damage and losses to property, people, and infrastructure [1,2]. Effective flood management relies on a timely and accurate flood forecasting model, which helps to reduce or prevent flood damage [3]. Advances in computational technologies have facilitated the deployment of machine learning-based flood forecasting models, achieving enhanced accuracy and robustness that surpass those of conventional, physically based modeling approaches [4,5,6]. Techniques such as artificial neural networks (ANNs), neural fuzziness, support vector machines (SVM), and support vector regression (SVR) have proven efficient for both short- and long-term flood forecasting [7,8,9,10]. Recent breakthroughs in integrating deep learning with hydrological systems have substantially enhanced the flood prediction capabilities. Recurrent neural networks (RNNs) serve as typical time-series prediction models and have been widely used to construct hydrological models [11]. Advanced recurrent neural networks, particularly long short-term memory (LSTM) and gated recurrent unit (GRU) variants, resolve gradient instability challenges through gating mechanisms, substantially improving the temporal pattern recognition fidelity in hydrological forecasting [12]. These models are now extensively applied to flood prediction across various time scales [3,13,14,15,16].
Deep learning-based flood forecasting models can be classified into two categories depending on their data sources: those that exclusively utilize historical hydrometeorological data and those that integrate predicted meteorological data. The former employ neural networks to establish complex relationships between historical hydrometeorological data and flood flows. For instance, Barino et al. [17] utilized one-dimensional convolutional neural networks (1D-CNNs) to predict river flows several days in advance. Cao et al. [18] developed a BiLSTM network coupled with Seq2Seq learning to predict 6 h flood flows in the Dongwan and Maduwang river basins in China. Ghobadi and Kang [19] introduced a probability-based prediction model (BLSTM) to predict the daily flow for scenarios spanning 1, 7, and 30 d. Lin et al. [20] proposed a similarity search-based framework for multi-step-ahead flood forecasting using hourly flood flow data, yielding forecasts up to 16 h ahead. Luo et al. [16] presented the SHG-LSTM model for multi-step-ahead forecasting, utilizing the 3 h streamflow and precipitation data over the previous 21 h to generate forecasts for up to seven horizons. The second category of models extends the forecast period by integrating meteorological forecast data. For example, Rasouli et al. [21] utilized three machine learning methods (BNN, SVR, and GP) to predict the daily flow for 1–7 d in a small watershed in British Columbia, Canada. Fuente et al. [22] employed meteorological variables from GFS and LSTM to forecast hourly flow time series for a large urban area in Chile up to 3 d ahead. Du et al. [23] developed a machine learning method supported by Google Earth Engine (GEE) to provide 24 h flood forecasts and generate 30-m-resolution flood maps. These models extend the forecast periods and improve the accuracy by using historical and forecasted data as inputs.
However, owing to error accumulation and information gaps, the forecast accuracy significantly declines as the forecast period increases.
Flood occurrence on a given day can be influenced by both current and past precipitation, and the impact of historical precipitation diminishes over time [13]. The flood index ( I F ) is a widely recognized metric for real-time flood monitoring [24,25]. It tracks daily floods using current and historical precipitation, specifically focusing on the daily effective precipitation. The flood index ( I F ) quantifies the daily imbalance between water resources and demand and is globally utilized as a reliable, data-driven method for flood monitoring. This aids in assessing the duration, severity, and intensity of floods [26,27].
In this paper, we propose a new custom deep learning model, IF-CNN-GRU, for multi-step-ahead flood forecasting, incorporating the flood index ( I F ) to improve the prediction accuracy. The model combines a CNN and GRU to capture the spatiotemporal characteristics of hydrological data. By integrating the flood index ( I F ) into a custom recursive neural network, the model dynamically adjusts the neural unit outputs at each moment, effectively preserving flood information. This approach enhances the memory capacity of the recursive neural network for flood-related data, enabling the simultaneous extension of the forecast period and improvements in accuracy. The IF-CNN-GRU model was applied to predict floods for the next 5 d at the Baihe hydrological station in the middle reaches of the Han River. Additionally, the uncertainty of the model was analyzed using variance decomposition theory.

2. Study Area and Data Acquisition

The Baihe hydrological station is located in Baihe County, Shaanxi Province, China, at 110°07′ E longitude and 32°49′ N latitude. It serves as a control station and monitors water inflow to the Danjiangkou Reservoir of the Han River (Figure 1). The watershed of the station covers the upstream Han River, spanning an area of 59,115 km2 and accounting for 37% of the entire Han River basin. The upper Han River is bordered by the Qinling Mountains to the north, the Daba and Micang Mountains to the south, and the Jialing River to the west, forming a mountainous enclosure on three sides with open and level terrain to the east. The region receives average annual precipitation of approximately 800 mm, which is unevenly distributed in space and time. Precipitation decreases from south to north, is higher in mountainous regions than in basins or river plains, and is generally greater on the right bank than on the left bank. Notably, 80% of the annual precipitation occurs from May to October. Flood prediction is particularly challenging because of frequent upstream rainstorms, which cause rapid and significant water level fluctuations.
In this study, we used the estimated daily spatial precipitation and measured daily streamflow data from the Baihe hydrological station for forecasting from 2007 to 2018. The daily spatial precipitation dataset comprised long-term spatial precipitation data estimated using the GWR-LSTM model [28] and GPM daily precipitation products. For model construction, the GWR-LSTM-based long-term estimated precipitation served as historical precipitation data, whereas the GPM daily precipitation products were applied as meteorological forecast data. Daily streamflow data for the Baihe hydrological station were sourced from the Hydrological Statistical Yearbook.
Daily time-series data for 12 years (from 2007 to 2018; 70 flood events) were included in the modeling process. The 70 flood events had peak flows ranging from 626 m3/s to 18,700 m3/s. These floods were divided into calibration and validation datasets in an 8:2 ratio, comprising 56 events for calibration and 14 for validation.
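The 8:2 event-level split can be sketched in a few lines. The paper does not specify how events were assigned to the two sets, so the shuffled random selection (and the fixed seed) below is an assumption made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)            # fixed seed so the split is repeatable
events = np.arange(70)                    # indices of the 70 flood events
rng.shuffle(events)
calib, valid = events[:56], events[56:]   # 8:2 ratio -> 56 / 14 events
```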

3. Methodology

3.1. Basic Neural Network

3.1.1. Convolutional Neural Network (CNN)

CNNs are widely applied in image recognition, classification, and prediction to address overfitting issues from limited data and to streamline multi-dimensional feature extraction by sharing weights [29,30]. A standard CNN architecture typically includes an input layer, one or more convolutional layers, pooling layers, fully connected layers, and an output layer.
  • Input layer: This layer receives structured data (e.g., vectors or matrices) while preserving inherent spatial or temporal relationships.
  • Convolutional layer: This layer extracts local features through sliding kernels with shared weights, where more kernels can capture increasingly abstract representations.
  • Pooling layer: This layer reduces the spatial dimensions of the feature maps (e.g., via max or average pooling), thereby lowering the computational complexity and mitigating overfitting.
  • Fully connected layer: This layer flattens the extracted features into 1D vectors for classification.
  • Output layer: This layer generates the final prediction outputs (e.g., class probabilities).
The convolutional layer plays a crucial role in feature extraction, warranting a detailed explanation of the convolution process. The input data elements are denoted by X, where $X_{i,j}$ represents the element in the i-th row and j-th column. The kernel weight for the m-th row and n-th column is represented by $W_{m,n}$, and the bias of the convolution operation is denoted by $W_b$. The activation function is represented as f. Convolution is performed using Equation (1) to extract spatial features from the data.
$$A_{i,j} = f\left(\sum_{m=0}^{M}\sum_{n=0}^{N} W_{m,n}\, X_{i+m,\,j+n} + W_b\right) \tag{1}$$
where $A_{i,j}$ denotes the feature of the element in the i-th row and j-th column, and M and N are the length and width of the convolution kernel, respectively.
The activation function applies a nonlinear transformation to the output of the convolutional layer by adjusting the weight magnitudes. Commonly used activation functions include “Sigmoid” and “ReLU”.
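As a concrete illustration of Equation (1), the convolution can be sketched in numpy. The unit stride, "valid" padding, and the tanh default are assumptions for this sketch, not details taken from the paper:

```python
import numpy as np

def conv2d(X, W, W_b, f=np.tanh):
    """Valid 2-D convolution of Equation (1): slide the M x N kernel W over
    the input X, add the bias W_b, and apply the activation f."""
    M, N = W.shape
    out_r, out_c = X.shape[0] - M + 1, X.shape[1] - N + 1
    A = np.empty((out_r, out_c))
    for i in range(out_r):
        for j in range(out_c):
            # Weighted sum of the kernel-sized patch anchored at (i, j)
            A[i, j] = np.sum(W * X[i:i + M, j:j + N]) + W_b
    return f(A)
```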

3.1.2. Gated Recurrent Neural Network (GRU)

As a variation of the RNN, the GRU network introduces additive computation between the current state and the historical state, ensuring that errors are retained for longer during backpropagation and mitigating the issue of gradient vanishing. Compared to LSTM networks, the GRU simplifies the architecture by combining the LSTM input and forget gates into an update gate and adding a reset gate for further optimization. This design not only addresses gradient vanishing and exploding problems in RNNs but also results in a simpler structure and faster computation speed [31].
Figure 2 illustrates the internal structure of the GRU network, where the gate units modify the computation of the hidden state using reset gates ($R_t$) and update gates ($Z_t$). The states of $R_t$ and $Z_t$ are jointly determined by the hidden state from the previous time step ($H_{t-1}$) and the current input ($X_t$), as expressed in the following equations:
$$R_t = \sigma\left(X_t W_{xr} + H_{t-1} W_{hr}\right), \qquad Z_t = \sigma\left(X_t W_{xz} + H_{t-1} W_{hz}\right) \tag{2}$$
where $W_{xr}$, $W_{hr}$, $W_{xz}$, and $W_{hz}$ are the weight matrices of $R_t$ and $Z_t$, respectively.
The GRU network computes the hidden state by first calculating a candidate hidden state ($\tilde{H}_t$):
$$\tilde{H}_t = \tanh\left(X_t W_{xh} + \left(R_t \odot H_{t-1}\right) W_{hh}\right) \tag{3}$$
where $W_{xh}$ and $W_{hh}$ represent the weight matrices of the GRU network and $\odot$ denotes element-wise multiplication.
The GRU network computes the current hidden state using the update gate ($Z_t$) and determines the final output ($O_t$):
$$H_t = Z_t \odot H_{t-1} + \left(1 - Z_t\right) \odot \tilde{H}_t, \qquad O_t = H_t W_{hq} + b_q \tag{4}$$
where $W_{hq}$ and $b_q$ are the weight and bias of the output layer, respectively.
The update gate ( Z t ) ranges from 0 to 1. A value closer to 1 indicates that the GRU network “remembers” more information, whereas a value closer to 0 signifies that more information is “forgotten”.
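Equations (2)–(4) can be traced step by step in a minimal numpy sketch. The weight shapes and the element-wise products are assumptions consistent with the standard GRU formulation, not the authors' implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W_xr, W_hr, W_xz, W_hz, W_xh, W_hh, W_hq, b_q):
    """One GRU step following Eqs. (2)-(4): reset gate, update gate,
    candidate state, blended hidden state, and output."""
    r_t = sigmoid(x_t @ W_xr + h_prev @ W_hr)              # reset gate, Eq. (2)
    z_t = sigmoid(x_t @ W_xz + h_prev @ W_hz)              # update gate, Eq. (2)
    h_tilde = np.tanh(x_t @ W_xh + (r_t * h_prev) @ W_hh)  # candidate, Eq. (3)
    h_t = z_t * h_prev + (1.0 - z_t) * h_tilde             # new state, Eq. (4)
    o_t = h_t @ W_hq + b_q                                 # output, Eq. (4)
    return h_t, o_t
```

A value of `z_t` near 1 keeps `h_prev` (the network "remembers"); a value near 0 replaces it with the candidate state.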

3.2. Flood Index (I_F)

The flood index ($I_F$) is a simple yet robust mathematical tool for assessing the flood conditions in a specific area at a given time [25]. It is derived by normalizing the daily effective precipitation against the mean and standard deviation of the maximum values recorded during the study period. The calculation begins by determining the effective precipitation ($P_E$), which accounts for the gradual depletion of water resources over time. Using a time-decay function, $P_E$ for the current day i is calculated as shown in Equation (5), where D represents the number of days considered, N represents the duration of continuous rainfall in the earlier stage, and $P_m$ represents the precipitation on the m-th day. Finally, $I_F$ is computed using Equation (6), with $\overline{P_{E\max}}_{2007\text{--}2018}$ and $\sigma(P_{E\max})_{2007\text{--}2018}$ representing the mean and standard deviation of the maximum daily effective precipitation during the study period, respectively.
$$P_{E,i} = \sum_{N=1}^{D} \frac{\sum_{m=1}^{N} P_m}{N} \tag{5}$$
$$I_F = \frac{P_E - \overline{P_{E\max}}_{2007\text{--}2018}}{\sigma\left(P_{E\max}\right)_{2007\text{--}2018}} \tag{6}$$
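A sketch of Equations (5) and (6) in Python follows. The handling of days near the start of the record and the use of calendar-year maxima for the normalization are assumptions made for illustration:

```python
import numpy as np

def effective_precip(precip, i, D):
    """Effective precipitation on day i (Eq. 5): sum over windows N = 1..D of
    the mean precipitation of the N most recent days ending at day i. Days
    before the start of the record are treated as unavailable."""
    total = 0.0
    for N in range(1, D + 1):
        window = precip[max(0, i - N + 1): i + 1]   # the N most recent days
        total += window.sum() / N
    return total

def flood_index(precip, D, year_len=365):
    """Flood index I_F (Eq. 6): P_E normalised by the mean and standard
    deviation of the annual maxima of P_E over the record."""
    pe = np.array([effective_precip(precip, i, D) for i in range(len(precip))])
    annual_max = [pe[s:s + year_len].max() for s in range(0, len(pe), year_len)]
    return (pe - np.mean(annual_max)) / np.std(annual_max)
```

Because recent days enter more windows than distant ones, a given day's rainfall contributes with a weight that decays as it recedes into the past, which is the time-decay behaviour described above.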

3.3. Proposed Model

We propose a new custom deep learning model that incorporates the flood status for multi-step-ahead flood forecasting. As illustrated in Figure 3, t denotes the current time, L represents the maximum time of concentration in the watershed, and T represents the flood forecast period. The model's input consists of two components: (1) $P_{t-L}, \ldots, P_{t-1}, P_t$, representing the historical precipitation from t − L to t, together with $Q_t$, the measured flood streamflow at time t; and (2) $P_{t+1}, \ldots, P_{t+T-1}, P_{t+T}$, representing the forecasted precipitation from t + 1 to t + T. The IF-CNN-GRU model addresses the non-uniformity of the precipitation distribution and incorporates a custom recursive neural network that adjusts the neural unit outputs at each moment using $I_F$, effectively preserving the flood fluctuation information.
The primary contribution of the IF-CNN-GRU model is its custom RNN, which integrates the flood index ( I F ) to enhance the “memory ability” of neural units for flood-related information. This improvement strengthens the generalization capabilities of the flood forecasting model. The computation method for the neural units in a standard RNN at time t + 1 is presented in Equation (7):
$$H_{t+1} = \sigma\left(U X_t + W H_t\right), \qquad O_{t+1} = \sigma\left(V H_{t+1}\right) \tag{7}$$
where H t and H t + 1 denote the hidden state of the RNN at time t and t + 1, respectively; O t + 1 represents the output of the RNN at time t + 1; U , W , and V denote the weights; and σ represents the activation function.
The computation method for the RNN cell units after integrating the flood index ( I F ) at time t + 1 is presented in Equation (8).
$$\hat{H}_{t+1} = U X_t + W H_t$$
$$I_{F,t+1} = f\left(P_{t+1}, P_t, \ldots, P_{t-m}\right)$$
$$H_{t+1} = \sigma\left(\hat{H}_{t+1} \odot \exp\left(I_{F,t+1}\right)\right)$$
$$O_{t+1} = \sigma\left(V H_{t+1}\right) \tag{8}$$
where $\hat{H}_{t+1}$ represents the candidate hidden state of the custom RNN cell units; $I_{F,t+1}$ represents the flood index at time t + 1; f represents the flood index calculation of Equations (5) and (6); and $P_{t+1}, P_t, \ldots, P_{t-m}$ represent the rainfall from time t + 1 back to time t − m.
The custom RNN integrates meteorological forecast data and historical data through the flood index ( I F ), thereby enhancing its ability to retain flood information in state variables. This approach effectively extends the flood forecast period while improving the prediction accuracy.
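The custom cell of Equation (8) can be sketched as follows. The element-wise exp($I_F$) modulation and the matrix orientations are assumptions based on the equations above, not the authors' released code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def if_rnn_step(x_t, h_t, i_f_next, U, W, V):
    """One step of the custom RNN cell (Eq. 8): the candidate hidden state
    is rescaled by exp(I_F) before the activation, so neurons retain more
    signal during flood periods (large I_F) than during dry periods."""
    h_hat = x_t @ U + h_t @ W                    # candidate hidden state
    h_next = sigmoid(h_hat * np.exp(i_f_next))   # modulate by the flood index
    o_next = sigmoid(h_next @ V)                 # cell output
    return h_next, o_next
```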
In this study, we developed the IF-CNN-GRU model using PyTorch and conducted training on an x86-based system with 32 GB RAM without GPU acceleration, where each training run required approximately two hours. The specific hyperparameters (i.e., the number of layers, number of neurons, learning rate, and epochs) of this fusion model were chosen as the optimal values based on the data size and multiple experiments.

3.4. Uncertainty Analysis Method

Evaluating the uncertainty in flood forecasting models requires the comprehensive analysis and quantification of the impacts of multiple uncertainty sources. We employed a modeling uncertainty analysis method based on ANOVA theory coupled with a subsampling approach to investigate the proposed model in terms of model input and structure [32,33]. The objective is to quantify the contributions of uncertainty from these sources and their interactions across different forecast periods. According to ANOVA theory, in each iteration i, the total variance (SSTi) is partitioned into contributions from different modeling components, such as SSAi (sample set partitioning method), SSBi (model structure), and SSIi (interaction between the two), as described below:
$$SST_i = SSA_i + SSB_i + SSI_i \tag{9}$$
SSAi, SSBi, and SSIi were estimated using the following subsampling procedure:
$$SSA_i = L\sum_{h=1}^{H}\left(Y_{g(h,i),o} - Y_{g(o,i),o}\right)^2 \tag{10}$$
$$SSB_i = H\sum_{l=1}^{L}\left(Y_{g(o,i),l} - Y_{g(o,i),o}\right)^2 \tag{11}$$
$$SSI_i = \sum_{h=1}^{H}\sum_{l=1}^{L}\left(Y_{g(h,i),l} - Y_{g(h,i),o} - Y_{g(o,i),l} + Y_{g(o,i),o}\right)^2 \tag{12}$$
where Y represents the forecast streamflow, H represents the number of sample sets in iteration i, L represents the number of model structures in iteration i, the subscript o denotes averaging over the corresponding index, and g represents the subsampling scheme obtained from the second sampling of the sample-set partitioning. Then, each variance fraction $\eta^2$ is derived as follows:
$$\eta_A^2 = \frac{1}{N}\sum_{i=1}^{N}\frac{SSA_i}{SST_i} \tag{13}$$
$$\eta_B^2 = \frac{1}{N}\sum_{i=1}^{N}\frac{SSB_i}{SST_i} \tag{14}$$
$$\eta_I^2 = \frac{1}{N}\sum_{i=1}^{N}\frac{SSI_i}{SST_i} \tag{15}$$
In Equations (13)–(15), $\eta_A^2$, $\eta_B^2$, and $\eta_I^2$ range between 0 and 1, representing the individual impacts of the sample sets, the model structure, and their interaction on the uncertainty of the flood forecast. The sum of these three variables is equal to 1.
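The subsampling ANOVA of Equations (9)–(15) reduces to a few array operations. This sketch assumes the forecast errors are arranged as an N × H × L array (iterations × sample sets × model structures):

```python
import numpy as np

def variance_fractions(Y):
    """Subsampling ANOVA of Eqs. (9)-(15). Y has shape (N, H, L): N subsample
    iterations, H sample-set partitions, L model structures. Returns the mean
    variance fractions (eta_A, eta_B, eta_I)."""
    N, H, L = Y.shape
    fracs = np.zeros((N, 3))
    for i in range(N):
        y = Y[i]                 # H x L forecast results (e.g., MAE values)
        row = y.mean(axis=1)     # average over model structures (subscript o)
        col = y.mean(axis=0)     # average over sample sets (subscript o)
        grand = y.mean()
        ssa = L * np.sum((row - grand) ** 2)                          # Eq. (10)
        ssb = H * np.sum((col - grand) ** 2)                          # Eq. (11)
        ssi = np.sum((y - row[:, None] - col[None, :] + grand) ** 2)  # Eq. (12)
        sst = ssa + ssb + ssi                                         # Eq. (9)
        fracs[i] = [ssa / sst, ssb / sst, ssi / sst]
    return fracs.mean(axis=0)    # Eqs. (13)-(15)
```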

3.4.1. Comparative Model Structure Design

To evaluate the impact of the model structure on the flood forecast uncertainty, a comparative model structure was designed. As illustrated in Figure 4, the primary difference between the comparative model and the IF-CNN-GRU structure is that the flood index ( I F ) is combined with the output of the CNN-GRU network and used as input to the cell units of a standard RNN.

3.4.2. Subsampling Approach

The ANOVA method may underestimate the variance when the sample size is small [34]. To mitigate this limitation, Bosshard et al. [32] incorporated a subsampling approach into the ANOVA method. This approach ensures a consistent sample size for each iteration. The 70 flood events were divided into five groups (G1, G2, G3, G4, and G5), with four groups serving as the calibration dataset and one group serving as the validation dataset.
To avoid underestimating the variance in small sample sizes when using variance decomposition methods, subsampling should be performed after partitioning the sample sets. This ensures that both the sample set partitioning and model structure have an equal number of samples for each variance decomposition calculation [33,34]. Accordingly, two of the five sample set partitioning strategies were selected for each iteration, yielding N = 10 sampling outcomes, as specified in Equation (16).
$$g = \begin{pmatrix} 1 & 1 & 1 & 1 & 2 & 2 & 2 & 3 & 3 & 4 \\ 2 & 3 & 4 & 5 & 3 & 4 & 5 & 4 & 5 & 5 \end{pmatrix} \tag{16}$$
The numbers in Equation (16) correspond to the numbering of the sample set partitioning method in Table 1.
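The ten iterations of Equation (16) are simply all unordered pairs of the five partitioning strategies, which can be generated directly:

```python
from itertools import combinations

# All unordered pairs of the five sample-set groups G1..G5 give the N = 10
# subsampling iterations of Eq. (16): (1,2), (1,3), ..., (4,5).
g = list(combinations(range(1, 6), 2))
```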
The modeling combination scheme derived from variance decomposition incorporates two partitioning methods and two model structures, corresponding to H = 2 and L = 2 in Equations (10)–(12).

3.5. Evaluation Metrics

To evaluate the performance of the IF-CNN-GRU model, we used the coefficient of determination (R2), Nash–Sutcliffe efficiency (NSE), root mean square error (RMSE), mean absolute error (MAE), and Kling–Gupta efficiency (KGE). The R2 ranges from 0 to 1, with higher values indicating a better fit of the model to the observed sample values. The NSE assesses the model's ability to predict variables deviating from the mean and quantifies the proportion of the initial variance explained by the model; it ranges from 1 (indicating a perfect match) down to negative values. The RMSE measures the degree of alignment between the predicted and observed values; its range depends on the scale of the data, with 0 indicating a perfect match and higher values representing greater mismatch. The MAE represents the average absolute difference between the observed and predicted results. The KGE considers different types of model errors (errors in the mean, variability, and dynamics). These indicators are defined as follows:
$$R^2 = \frac{\left[\sum_{i=1}^{N}\left(Q_{obs,i} - \bar{Q}_{obs}\right)\left(Q_{sim,i} - \bar{Q}_{sim}\right)\right]^2}{\sum_{i=1}^{N}\left(Q_{obs,i} - \bar{Q}_{obs}\right)^2 \sum_{i=1}^{N}\left(Q_{sim,i} - \bar{Q}_{sim}\right)^2}, \quad 0 < R^2 < 1 \tag{17}$$
$$NSE = 1 - \frac{\sum_{i=1}^{N}\left(Q_{obs,i} - Q_{sim,i}\right)^2}{\sum_{i=1}^{N}\left(Q_{obs,i} - \bar{Q}_{obs}\right)^2} \tag{18}$$
$$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(Q_{obs,i} - Q_{sim,i}\right)^2}, \quad RMSE \ge 0 \tag{19}$$
$$MAE = \frac{1}{N}\sum_{i=1}^{N}\left|Q_{obs,i} - Q_{sim,i}\right|, \quad MAE \ge 0 \tag{20}$$
$$KGE = 1 - \sqrt{\left(R - 1\right)^2 + \left(\alpha - 1\right)^2 + \left(\beta - 1\right)^2}, \quad \alpha = \frac{\sigma_{sim}}{\sigma_{obs}}, \ \beta = \frac{\mu_{sim}}{\mu_{obs}}, \quad 0 < KGE < 1 \tag{21}$$
where $Q_{obs,i}$ (m3/s) and $Q_{sim,i}$ (m3/s) represent the observed and simulated flows, respectively, and $\bar{Q}_{obs}$ (m3/s) and $\bar{Q}_{sim}$ (m3/s) represent the average observed and simulated flows, respectively. ($\mu_{sim}$, $\sigma_{sim}$) and ($\mu_{obs}$, $\sigma_{obs}$) represent the mean and standard deviation of the simulated and observed runoff, respectively. The KGE comprises three components: the correlation coefficient R, the relative dispersion between the predicted and observed runoff α, and the bias β, which reflects the deviation between the predicted and observed flows and constrains the water balance.
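The metrics of Equations (18)–(21) translate directly into numpy (R2 follows the same pattern and is omitted); a minimal sketch:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency (Eq. 18)."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    """Root mean square error (Eq. 19)."""
    return np.sqrt(np.mean((obs - sim) ** 2))

def mae(obs, sim):
    """Mean absolute error (Eq. 20)."""
    return np.mean(np.abs(obs - sim))

def kge(obs, sim):
    """Kling-Gupta efficiency (Eq. 21): correlation, variability, and bias."""
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```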

4. Results and Discussion

Figure 5 illustrates the distribution of the calibration and validation datasets derived from 70 flood events between 2007 and 2018, demonstrating their hydrological variability through the maximum flow (horizontal axis), minimum flow (vertical axis), and standard deviation (represented by the symbol size). The dataset encompassed extreme hydrological conditions, with peak discharges ranging from 63.9 m3/s to 18,700 m3/s. Notably, 20% of the samples exhibited peak discharges exceeding 3000 m3/s. Both the calibration and validation subsets maintained comparable hydrological ranges (calibration: 71.5–18,700 m3/s; validation: 63.9–18,700 m3/s). The spatial overlap of the red (calibration) and blue (validation) symbols across the parameter space confirms the balanced dataset partitioning, indicating that both subsets adequately represent the full spectrum of flood magnitudes and variability. This strategic data division supports robust model training while preserving sufficient independence for reliable validation under extreme flow conditions.

4.1. Performance Assessment

The CNN and GRU are state-of-the-art neural networks that effectively extract spatiotemporal features from hydrological and meteorological data and are widely used in flood forecasting [3]. Consequently, we used the CNN-GRU model as a benchmark. Table 2 compares the 5-day flood forecasting performance of the CNN-GRU and IF-CNN-GRU models at Baihe Station. During calibration, the IF-CNN-GRU model achieved NSE values of 0.89–0.81, surpassing those of the CNN-GRU model (0.87–0.75). This superiority persisted in validation, with NSE ranges of 0.90–0.82 (IF-CNN-GRU) versus 0.85–0.70 (CNN-GRU). Both models exhibited peak accuracy in T + 1 forecasts, followed by a gradual decline in performance, which stabilized at longer lead times. However, the IF-CNN-GRU model demonstrated slower NSE attenuation (an 8% reduction over 5 d versus a 17% decrease for CNN-GRU during validation) and consistently outperformed the baseline model across all metrics (R2, RMSE, MAE, and KGE), mirroring the trends observed in the NSE. Figure 6 illustrates the performance contrast. The IF-CNN-GRU model maintained higher NSE and R2 values (Figure 6a,b) and lower MAE and RMSE values (Figure 6c,d) throughout the forecast horizon, indicating minimal error accumulation. The integration of the flood index ( I F ) into the hybrid model architecture significantly enhanced the multi-step prediction robustness, underscoring its efficacy in improving deep learning-based flood forecasting under complex hydrological dynamics.
Figure 7 compares the predictive capabilities of the CNN-GRU and IF-CNN-GRU models across the t + 1 to t + 5 forecast periods using scatterplot analysis. Both models exhibited a time-dependent performance degradation, with the optimal accuracy observed at t + 1 (highest coefficient of determination), followed by a systematic decline in the predictive performance as the lead times were extended. However, the IF-CNN-GRU model consistently maintained a superior coefficient of determination across both the calibration and validation phases, demonstrating a reduced outlier frequency and a more concentrated error distribution. Although both architectures struggled to accurately predict the peak discharges, the IF-CNN-GRU model produced fewer instances of severe prediction deviations than the baseline model. This enhanced robustness, particularly evident in the t + 3 and t + 5 forecasts, confirms the effectiveness of integrating the flood index ( I F ) in mitigating error propagation during multi-step flood forecasting.
Validation is essential in evaluating the generalization capabilities of a model. Table 2 and Figure 6 and Figure 7 show a significant decline in the performance of both flood forecasting models as the lead times were extended. This decline was observed because both models utilized Q t as input, and a strong correlation existed between Q t and Q t + 1 in adjacent time periods. This highlights that the extraction of prior flood characteristics by the neural network is more effective for short-term forecasts. As the forecast period increased, the influence of these extracted characteristics on the predicted discharge diminished. However, the IF-CNN-GRU model incorporates the flood index ( I F ) through a custom RNN, enhancing the “memory ability” of the network to retain flood state information. This allows flood information to be effectively “memorized” in the state variables of the RNN. The IF-CNN-GRU model demonstrated progressively enhanced generalization performance relative to the CNN-GRU model as the forecast period was extended.
Figure 8 presents the flood hydrographs for two typical flood events forecasted on T + 1, T + 3, and T + 5 d using the IF-CNN-GRU and CNN-GRU models. Flood No. 24 was a single-peak event with a maximum discharge of 8720 m3/s, a minimum discharge of 1480 m3/s, and an average flow of 4777 m3/s. Flood No. 27 exhibited a bimodal flow pattern with the highest peak discharge of 10,200 m3/s, the lowest discharge of 743 m3/s, and an average discharge of 2362 m3/s. The results clearly indicated that the IF-CNN-GRU model predicted the flood process with significantly greater accuracy than the CNN-GRU model across all three forecast periods. As the forecast period increased, the qualifying rate of the predicted flows (within a permissible error of 20% of the observed flow) decreased for both models. However, the reduction in the accuracy was notably smaller for the IF-CNN-GRU model than for the CNN-GRU model. This study demonstrates the effectiveness of incorporating the flood index ( I F ) into a custom RNN to extend the forecast period and enhance the flood prediction accuracy.

4.2. Uncertainty Source Quantification

This section examines the flood prediction results, focusing on two key factors: the model input and model structure. It also quantifies their contributions to the uncertainty and their interactions across various forecast periods. Figure 9 presents the uncertainty contributions of these factors to flood forecasting over a 1–5 d forecast period, using the MAE as the evaluation metric. The results indicate that, at t + 1, the primary source of uncertainty was the model structure. This was because both model structures adopted Q t as the input data, and the strong correlation between Q t and Q t + 1 in neighboring periods minimized the uncertainty from the model input during short forecast periods. However, as the forecast period was extended, the influence of prior flood information on the forecast flow diminished, leading to a gradual increase in uncertainty from the model input (ranging from 32% to 53%) and a decrease in uncertainty from the model structure (ranging from 32% to 28%). Additionally, the contribution of the interactions between these two factors to the forecast uncertainty remained significant, ranging from 36% to 17%.
Figure 10 illustrates the impact of various modeling factors on the flood forecasting uncertainty across the 1–5 d prediction periods at different flow quantiles. The variance decomposition value for each flow quantile represents the average of all sample points within that segment. Figure 10a–d correspond to four quantile ranges: 0–25%, 25–50%, 50–75%, and 75–100%, respectively. The overall trend in Figure 10 indicates that the contribution of the model structure to the uncertainty decreased over time, whereas the contribution of the model input steadily increased. Additionally, the interaction between the model input and structure varied significantly depending on the magnitude of the flood. An analysis of Figure 10a,b reveals that, for low-flow flood forecasts, the modeling uncertainty was primarily influenced by the interaction between the model input and structure, rather than by either factor individually. This highlights the complexity of flood forecasting under these conditions. In contrast, Figure 10c,d demonstrate that, in the medium- and high-discharge flood scenarios, the model input became the dominant factor influencing the uncertainty, with the contributions reaching 0.68. Meanwhile, the contributions of the model structure and interaction diminished as the prediction period increased. These findings emphasize the dynamic roles of different factors in the flood forecasting uncertainty across varying discharge levels.
Figure 11 presents a box plot illustrating the statistical results of the mean absolute error (MAE) for various sample set partitioning strategies and model structures over a 1–5 d prediction period. Figure 11a depicts 140 MAE values for each sample set partitioning strategy across all prediction periods. These values were derived from the outcomes of two model architectures trained on a dataset of 70 flood events. Figure 11a shows that the MAE values of the models trained on the five sample sets exhibited minimal variations in the upper quartile, lower quartile, lower edge, and median across each prediction period. However, as the forecast period increased, the differences in the maximum, upper quartile, lower quartile, and minimum MAE values across the forecasting results became more pronounced. The MAE values of the prediction results varied across the sample sets depending on the forecasting period. However, Figure 11a indicates that no specific sample set partitioning strategy was significantly superior to the others. Figure 11b compares the MAE box plots for different model architectures over 1–5 d lead times, with each box plot representing 350 simulated results from 70 flood events across five sample sets. The results show that the IF-CNN-GRU model outperformed the comparison model in the upper quartile, median, lower quartile, and lower edge across the prediction period. During the 1–2 d prediction period, the performance difference between the two models was minimal. However, as the prediction period increased, the performance gap between the IF-CNN-GRU and comparison models increased. Figure 11b illustrates that inputting historical flood information as the initial state of the neural network units in the recurrent neural network layer enhanced the flood forecasting accuracy during short prediction periods. 
Furthermore, the IF-CNN-GRU model’s integration of the flood index ( I F ) into the RNN to adjust the neural unit output states continuously improved the model’s accuracy for longer prediction periods.
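The five sample sets in Figure 11a follow the leave-one-group-out rotation of Table 1. A short sketch of that event-level rotation and of the MAE statistic is shown below; the sequential assignment of the 70 events to groups G1–G5 is an assumption for illustration, since the actual grouping is not specified here:

```python
import numpy as np

def sample_sets(n_events=70, n_groups=5):
    """Yield (calibration, validation) event indices for the five
    leave-one-group-out sample sets of Table 1. The sequential
    grouping into G1-G5 used here is an illustrative assumption."""
    groups = np.array_split(np.arange(n_events), n_groups)
    for k in range(n_groups):
        calib = np.concatenate([g for i, g in enumerate(groups) if i != k])
        yield calib, groups[k]

def mae(obs, sim):
    """Mean absolute error, the statistic summarized in Figure 11."""
    return float(np.mean(np.abs(np.asarray(obs) - np.asarray(sim))))
```

Rotating whole flood events, rather than individual time steps, keeps each validation set hydrologically independent of its calibration set.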
The performance of machine learning-based models is governed primarily by the quality of the input datasets and the model structure [33,35,36,37]. The IF-CNN-GRU model proposed in this paper integrates the flood index ( I F ) through custom-designed RNN units, which significantly improves the prediction accuracy. To assess the relative contributions of these two factors, namely the additional input ( I F ) and the architectural modification (the customized RNN units), we conducted a comparative analysis. During calibration, the CNN-GRU model without I F achieved NSE values of 0.87, 0.78, 0.76, 0.75, and 0.75 for lead times of 1–5 d (Table 2). In contrast, the CNN-GRU model with I F (Figure 4) performed better, with NSE values of 0.89, 0.88, 0.83, 0.79, and 0.76. A similar trend was observed during validation, where the CNN-GRU model without I F yielded NSE values of 0.85, 0.79, 0.79, 0.73, and 0.70, whereas the I F -integrated variant achieved 0.87, 0.84, 0.83, 0.80, and 0.76, respectively. This improvement can be attributed to I F , a real-time flood monitoring indicator based on daily effective precipitation [26], which provides information critical for flood prediction. Figure 12 plots the I F curve alongside the hydrograph of Flood No. 42, showing a clear correlation between I F and the flood process. Training machine learning-based flood forecasting models with the flood index ( I F ) therefore mitigates common challenges in multi-step flood forecasting, such as error accumulation and insufficient information.
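The flood index is built from daily effective precipitation, which weights recent rainfall more heavily than older rainfall [26]. A sketch of one common formulation is given below; the antecedent window length and the min–max normalization are illustrative assumptions, not necessarily the exact definition used in this study:

```python
import numpy as np

def effective_precip(p, window=365):
    """Daily effective precipitation: antecedent rainfall weighted by
    decaying harmonic-tail weights (window length is an assumption)."""
    p = np.asarray(p, dtype=float)
    h = 1.0 / np.arange(1, window + 1)
    w = np.cumsum(h[::-1])[::-1]          # w[k] = sum_{n=k+1..window} 1/n
    ep = np.empty_like(p)
    for j in range(len(p)):
        past = p[max(0, j - window + 1): j + 1][::-1]  # today first
        ep[j] = np.sum(past * w[: len(past)])
    return ep

def flood_index(p):
    """Illustrative min-max normalized flood index I_F in [0, 1]."""
    ep = effective_precip(p)
    return (ep - ep.min()) / (ep.max() - ep.min() + 1e-12)
```

Because the weights decay gradually, I_F rises with a storm and recedes slowly afterwards, which is the behavior visible alongside the hydrograph in Figure 12.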
Furthermore, based on the results of quantifying the sources of uncertainty (Section 4.2), the IF-CNN-GRU model outperformed even the CNN-GRU model with I F (Figure 4). This is because conventional recurrent neural networks often struggle to accurately learn the evolving spatiotemporal flood characteristics from forecasted meteorological and historical hydrological data owing to information deficits. The IF-CNN-GRU model enhances the memory capacity of RNNs through customized recurrent units that efficiently encode flood information, thereby aligning temporal data processing with the physical mechanisms of the actual flood dynamics. This integration extended the forecast period and enhanced the prediction accuracy, offering a robust solution for improved flood management.
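The internals of the customized recurrent units are not reproduced line-by-line in this section; the sketch below shows one plausible way to couple I F into a GRU step, by modulating the updated hidden state with the index. The coupling form and the scaling factor `alpha` are assumptions for illustration:

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class IFGRUCell:
    """GRU cell whose hidden state is modulated by the flood index I_F."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        shape = (n_hidden, n_in + n_hidden)
        self.Wz = rng.normal(0.0, 0.1, shape)  # update gate weights
        self.Wr = rng.normal(0.0, 0.1, shape)  # reset gate weights
        self.Wh = rng.normal(0.0, 0.1, shape)  # candidate state weights

    def step(self, x, h, i_f, alpha=0.5):
        xh = np.concatenate([x, h])
        z = _sigmoid(self.Wz @ xh)
        r = _sigmoid(self.Wr @ xh)
        h_cand = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        h_new = (1.0 - z) * h + z * h_cand     # standard GRU update
        return h_new * (1.0 + alpha * i_f)     # assumed I_F modulation
```

Under this assumed coupling, a rising I F amplifies the state carried to the next step, which is one way that adjusting the neural unit output at each moment based on the flood index could be realized.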

5. Conclusions

In this study, a new custom deep learning model, IF-CNN-GRU, was designed for multi-step-ahead flood forecasting. The model was applied to predict floods at the Baihe hydrological station, located in the middle reaches of the Han River, with lead times of 1–5 d. Additionally, the uncertainty of the IF-CNN-GRU model was analyzed using variance decomposition theory. The primary findings are summarized as follows.
(1)
Incorporating the flood index ( I F ) enhanced the memory capacity of the RNN by efficiently storing flood information in its state variables, enabling the deep learning-based flood forecast model to extend the forecast period and improve the prediction accuracy.
(2)
The uncertainty analysis revealed that the influences of individual modeling factors on the flood forecast uncertainty varied with the forecast period, and the contribution of interactions to the uncertainty was significant. As the forecast period increased, the uncertainty arising from the model inputs gradually increased, whereas the proportion attributable to the model structure decreased.
In summary, the IF-CNN-GRU model effectively extended the flood forecast period and improved the forecast accuracy, demonstrating strong generalization abilities and reliable results for 5 d flood forecasts. Future efforts will focus on enhancing the precision of the input data, further extending the model’s forecast period, and improving its accuracy.

Author Contributions

Conceptualization, J.S. and M.Y.; methodology, J.S.; software, J.S.; validation, J.S., M.Y. and J.Z.; formal analysis, J.S.; investigation, J.S.; resources, M.Y.; data curation, J.Z.; writing—original draft preparation, J.S.; writing—review and editing, J.S., M.Y., J.Z., N.C. and B.L.; visualization, J.S.; supervision, B.L.; project administration, N.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (52209005) and the Natural Science Foundation of Beijing (8232032).

Data Availability Statement

Data subject to third party restrictions.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Khosravi, K.; Pham, B.T.; Chapi, K.; Shirzadi, A.; Shahabi, H.; Revhaug, I.; Prakash, I.; Tien Bui, D. A comparative assessment of decision trees algorithms for flash flood susceptibility modeling at Haraz watershed, northern Iran. Sci. Total Environ. 2018, 627, 744–755. [Google Scholar] [CrossRef] [PubMed]
  2. Hwang, S.; Yoon, J.; Kang, N.; Lee, D.R. Development of flood forecasting system on city·mountains·small river area in Korea and assessment of forecast accuracy. J. Korea Water Resour. Assoc. 2020, 53, 225–226. [Google Scholar]
  3. Chen, C.; Jiang, J.; Liao, Z.; Zhou, Y.; Wang, H.; Pei, Q. A short-term flood prediction based on spatial deep learning network: A case study for Xi County, China. J. Hydrol. 2022, 607, 127535. [Google Scholar] [CrossRef]
  4. Abbot, J.; Marohasy, J. Input selection and optimisation for monthly rainfall forecasting in Queensland, Australia, using artificial neural networks. Atmos. Res. 2014, 138, 166–178. [Google Scholar] [CrossRef]
  5. Mosavi, A.; Ozturk, P.; Chau, K.-w. Flood Prediction Using Machine Learning Models: Literature Review. Water 2018, 10, 1536. [Google Scholar] [CrossRef]
  6. Mosavi, A.; Rabczuk, T.; Varkonyi-Koczy, A.R. Reviewing the Novel Machine Learning Tools for Materials Design. In Recent Advances in Technology Research and Education; Springer: Cham, Switzerland, 2018; pp. 50–58. [Google Scholar]
  7. Dineva, A.; Varkonyi-Koczy, A.R.; Tar, J.K. Fuzzy expert system for automatic wavelet shrinkage procedure selection for noise suppression. In Proceedings of the 18th International Conference on Intelligent Engineering Systems, Tihany, Hungary, 3–5 July 2014; pp. 163–168. [Google Scholar]
  8. Kim, S.; Matsumi, Y.; Pan, S.; Mase, H. A real-time forecast model using artificial neural network for after runner storm surges on the Tottori coast, Japan. Ocean Eng. 2016, 122, 44–53. [Google Scholar] [CrossRef]
  9. Suykens, J.A.K.; Vandewalle, J. Least Squares Support Vector Machine Classifiers. Neural Process. Lett. 1999, 9, 293–300. [Google Scholar] [CrossRef]
  10. Taherei Ghazvinei, P.; Hassanpour Darvishi, H.; Mosavi, A.; Yusof, K.B.W.; Alizamir, M.; Shamshirband, S.; Chau, K.W. Sugarcane growth prediction based on meteorological parameters using extreme learning machine and artificial neural network. Eng. Appl. Comput. Fluid Mech. 2018, 12, 738–749. [Google Scholar] [CrossRef]
  11. Le, X.H.; Ho, H.V.; Lee, G. River streamflow prediction using a deep neural network: A case study on the Red River, Vietnam. Korean J. Agric. Sci. 2019, 46, 843–856. [Google Scholar] [CrossRef]
  12. Cho, K.; Van Merrienboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014; pp. 1724–1734. [Google Scholar] [CrossRef]
  13. Moishin, M.; Deo, R.C.; Prasad, R.; Raj, N.; Abdulla, S. Designing Deep-Based Learning Flood Forecast Model With ConvLSTM Hybrid Algorithm. IEEE Access 2021, 9, 50982–50993. [Google Scholar] [CrossRef]
  14. Granata, F.; Di Nunno, F.; de Marinis, G. Stacked machine learning algorithms and bidirectional long short-term memory networks for multi-step ahead streamflow forecasting: A comparative study. J. Hydrol. 2022, 613, 128431. [Google Scholar] [CrossRef]
  15. Cui, Z.; Guo, S.; Zhou, Y.; Wang, J. Exploration of dual-attention mechanism-based deep learning for multi-step-ahead flood probabilistic forecasting. J. Hydrol. 2023, 622, 129688. [Google Scholar] [CrossRef]
  16. Luo, Y.; Zhou, Y.; Chen, H.; Xiong, L.; Guo, S.; Chang, F.J. Exploring a spatiotemporal hetero graph-based long short-term memory model for multi-step-ahead flood forecasting. J. Hydrol. 2024, 633, 15. [Google Scholar] [CrossRef]
  17. Barino, F.O.; Silva, V.N.H.; Barbero, A.P.L.; Honório, L.d.M.; Santos, A.B.D. Correlated Time-Series in Multi-Day-Ahead Streamflow Forecasting Using Convolutional Networks. IEEE Access 2022, 8, 215748–215757. [Google Scholar] [CrossRef]
  18. Cao, Q.; Zhang, H.; Zhu, F.; Hao, Z.; Yuan, F. Multi-step-ahead flood forecasting using an improved BiLSTM-S2S model. J. Flood Risk Manag. 2022, 15, e12827. [Google Scholar] [CrossRef]
  19. Ghobadi, F.; Kang, D. Multi-Step Ahead Probabilistic Forecasting of Daily Streamflow Using Bayesian Deep Learning: A Multiple Case Study. Water 2022, 14, 3672. [Google Scholar] [CrossRef]
  20. Lin, K.; Chen, H.; Zhou, Y.; Sheng, S.; Luo, Y.; Guo, S.; Xu, C.Y. Exploring a similarity search-based data-driven framework for multi-step-ahead flood forecasting. Sci. Total Environ. 2023, 891, 164494. [Google Scholar] [CrossRef]
  21. Rasouli, K.; Hsieh, W.W.; Cannon, A.J. Daily streamflow forecasting by machine learning methods with weather and climate inputs. J. Hydrol. 2012, 414, 284–293. [Google Scholar] [CrossRef]
  22. de la Fuente, A.; Meruane, V.; Meruane, C. Hydrological Early Warning System Based on a Deep Learning Runoff Model Coupled with a Meteorological Forecast. Water 2019, 11, 1808. [Google Scholar] [CrossRef]
  23. Du, J.; Kimball, J.; Sheffield, J.; Pan, M.; Wood, E. Satellite Flood Assessment and Forecasts from SMAP and Landsat. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 6707–6715. [Google Scholar]
  24. Nosrati, K.; Saravi, M.M.; Shahbazi, A. Investigation of Flood Event Possibility over Iran Using Flood Index. Vaccine 2010, 21, 1958–1964. [Google Scholar]
  25. Deo, R.C.; Byun, H.-R.; Adamowski, J.F.; Kim, D.-W. A Real-time Flood Monitoring Index Based on Daily Effective Precipitation and its Application to Brisbane and Lockyer Valley Flood Events. Water Resour. Manag. 2015, 29, 4075–4093. [Google Scholar] [CrossRef]
  26. Deo, R.C.; Adamowski, J.F.; Begum, K.; Salcedo-Sanz, S.; Kim, D.W.; Dayal, K.S.; Byun, H.R. Quantifying flood events in Bangladesh with a daily-step flood monitoring index based on the concept of daily effective precipitation. Theor. Appl. Climatol. 2019, 137, 1201–1215. [Google Scholar] [CrossRef]
  27. Moishin, M.; Deo, R.C.; Prasad, R.; Raj, N.; Abdulla, S. Development of Flood Monitoring Index for daily flood risk evaluation: Case studies in Fiji. Stoch. Environ. Res. Risk Assess. 2021, 35, 1387–1402. [Google Scholar] [CrossRef]
  28. Shen, J.; Liu, P.; Xia, J.; Zhao, Y.; Dong, Y. Merging Multisatellite and Gauge Precipitation Based on Geographically Weighted Regression and Long Short-Term Memory Network. Remote Sens. 2022, 14, 3939. [Google Scholar] [CrossRef]
  29. Wang, Z.; Yan, W.; Oates, T. Time Series Classification from Scratch with Deep Neural Networks: A Strong Baseline. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017. [Google Scholar] [CrossRef]
  30. Fang, W.; Zhang, F.; Sheng, V.S.; Ding, Y. SCENT: A New Precipitation Nowcasting Method Based on Sparse Correspondence and Deep Neural Network. Neurocomputing 2021, 448, 10–20. [Google Scholar] [CrossRef]
  31. Li, X.; Zhuang, W.; Zhang, H. Short-term Power Load Forecasting Based on Gate Recurrent Unit Network and Cloud Computing Platform. In Proceedings of the 4th International Conference on Computer Science Application Engineering, Sanya, China, 20–22 October 2020. [Google Scholar]
  32. Bosshard, T.; Carambia, M.; Goergen, K.; Kotlarski, S.; Krahe, P.; Zappa, M.; Schär, C. Quantifying uncertainty sources in an ensemble of hydrological climate-impact projections. Water Resour. Res. 2013, 49, 1523–1536. [Google Scholar] [CrossRef]
  33. Song, T.; Ding, W.; Liu, H.; Wu, J.; Zhou, H.; Chu, J. Uncertainty Quantification in Machine Learning Modeling for Multi-Step Time Series Forecasting: Example of Recurrent Neural Networks in Discharge Simulations. Water 2020, 12, 912. [Google Scholar] [CrossRef]
  34. Déqué, M.; Rowell, D.P.; Lüthi, D.; Giorgi, F.; Hurk, B.V.D. An intercomparison of regional climate simulations for Europe: Assessing uncertainties in model projections. Clim. Change 2007, 81, 53–70. [Google Scholar] [CrossRef]
  35. Zhou, Y.; Guo, S.; Xu, C.-Y.; Chang, F.-J.; Yin, J. Improving the Reliability of Probabilistic Multi-Step-Ahead Flood Forecasting by Fusing Unscented Kalman Filter with Recurrent Neural Network. Water 2020, 12, 578. [Google Scholar] [CrossRef]
  36. Jha, A.; Chandrasekaran, A.; Kim, C.; Ramprasad, R. Impact of dataset uncertainties on machine learning model predictions: The example of polymer glass transition temperatures. Model. Simul. Mater. Sci. Eng. 2019, 27, 024002. [Google Scholar] [CrossRef]
  37. Rahmati, O.; Choubin, B.; Fathabadi, A.; Coulon, F.; Soltani, E.; Shahabi, H.; Mollaefar, E.; Tiefenbacher, J.; Cipullo, S.; Ahmad, B.B. Predicting uncertainty of machine learning models for modelling nitrate pollution of groundwater using quantile regression and UNEEC methods. Sci. Total Environ. 2019, 688, 855–866. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Location of the study area, showing hydrological and meteorological stations.
Figure 2. Structure diagram of GRU network.
Figure 3. Structure of IF-CNN-GRU model.
Figure 4. Comparative model structure.
Figure 5. Minimum, maximum, and standard deviation of 70 flood events (the size of the circle represents the standard deviation).
Figure 6. Trend plots of four evaluation indicators (NSE, R2, MAE, and RMSE) for CNN-GRU and IF-CNN-GRU models: (a) NSE, (b) R2, (c) MAE, (d) RMSE.
Figure 7. Scatterplots of flood forecast results for the calibration and validation periods of the IF-CNN-GRU and CNN-GRU models: (a) T + 1, calibration; (b) T + 1, validation; (c) T + 3, calibration; (d) T + 3, validation; (e) T + 5, calibration; (f) T + 5, validation.
Figure 8. Comparison of observed and forecasted streamflows obtained from the CNN-GRU and IF-CNN-GRU models at T + 1, T + 3, and T + 5 for flood events No. 24 and No. 27: (a) the hydrograph of Flood No. 24 by CNN-GRU, (b) the hydrograph of Flood No. 24 by IF-CNN-GRU, (c) the hydrograph of Flood No. 27 by CNN-GRU, (d) the hydrograph of Flood No. 27 by IF-CNN-GRU.
Figure 9. Ratio of contributions of uncertainty sources at 1–5 d lead time.
Figure 10. Ratio of contributions of uncertainty sources in different discharge quantiles at 1–5 d lead time: (a) 0–25% quantile, (b) 25–50% quantile, (c) 50–75% quantile, (d) 75–100% quantile.
Figure 11. Box diagram for (a) MAE for different sample sets at 1–5 d lead time and (b) MAE for different model structures at 1–5 d lead time.
Figure 12. Flood index ( I F ) and flood process diagram for Flood No. 42.
Table 1. Partition of sample sets 1–5.
Sample Set | Calibration Dataset | Validation Dataset
sample set 1 | G1, G2, G3, G4 | G5
sample set 2 | G1, G2, G3, G5 | G4
sample set 3 | G1, G2, G4, G5 | G3
sample set 4 | G1, G3, G4, G5 | G2
sample set 5 | G2, G3, G4, G5 | G1
Table 2. Performance of CNN-GRU and IF-CNN-GRU models for flood forecasting at different lead times (1–5 d).
Lead Time (d) | Phase | Model | NSE | R2 | RMSE | MAE | KGE
1 | Calibration | CNN-GRU | 0.87 | 0.87 | 521.75 | 231.49 | 0.88
1 | Calibration | IF-CNN-GRU | 0.89 | 0.89 | 486.78 | 229.76 | 0.87
1 | Validation | CNN-GRU | 0.85 | 0.85 | 594.92 | 265.83 | 0.88
1 | Validation | IF-CNN-GRU | 0.90 | 0.92 | 486.69 | 234.24 | 0.83
2 | Calibration | CNN-GRU | 0.78 | 0.78 | 683.85 | 299.22 | 0.77
2 | Calibration | IF-CNN-GRU | 0.88 | 0.88 | 497.79 | 269.46 | 0.88
2 | Validation | CNN-GRU | 0.79 | 0.79 | 714.46 | 325.23 | 0.80
2 | Validation | IF-CNN-GRU | 0.86 | 0.86 | 590.55 | 309.64 | 0.84
3 | Calibration | CNN-GRU | 0.76 | 0.78 | 711.79 | 326.93 | 0.71
3 | Calibration | IF-CNN-GRU | 0.84 | 0.85 | 576.35 | 306.27 | 0.82
3 | Validation | CNN-GRU | 0.79 | 0.82 | 708.89 | 367.25 | 0.71
3 | Validation | IF-CNN-GRU | 0.85 | 0.87 | 593.80 | 344.70 | 0.80
4 | Calibration | CNN-GRU | 0.75 | 0.78 | 726.57 | 341.59 | 0.69
4 | Calibration | IF-CNN-GRU | 0.82 | 0.83 | 604.63 | 324.52 | 0.83
4 | Validation | CNN-GRU | 0.73 | 0.78 | 805.12 | 412.43 | 0.66
4 | Validation | IF-CNN-GRU | 0.82 | 0.83 | 661.66 | 384.30 | 0.80
5 | Calibration | CNN-GRU | 0.75 | 0.78 | 718.20 | 350.96 | 0.69
5 | Calibration | IF-CNN-GRU | 0.81 | 0.82 | 626.89 | 331.97 | 0.78
5 | Validation | CNN-GRU | 0.70 | 0.73 | 848.24 | 414.02 | 0.66
5 | Validation | IF-CNN-GRU | 0.82 | 0.83 | 660.19 | 391.05 | 0.78
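Table 2 reports NSE, R2, RMSE, MAE, and KGE. For reference, the two hydrology-specific scores can be computed from their standard definitions (these are textbook formulas, not code taken from the study):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; 0 matches the mean flow."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency (2009 form): correlation, variability, bias."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()    # variability ratio
    beta = sim.mean() / obs.mean()   # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```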
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Shen, J.; Yang, M.; Zhang, J.; Chen, N.; Li, B. A New Custom Deep Learning Model Coupled with a Flood Index for Multi-Step-Ahead Flood Forecasting. Hydrology 2025, 12, 104. https://doi.org/10.3390/hydrology12050104
